
1710 Artifactory Jobs

JobPe aggregates job listings for easy access, but applications are submitted directly on the original job portal.

9.0+ years

0 Lacs

mumbai, maharashtra, india

Remote

Position Title: Lead Infrastructure Engineer - Data Platforms
Function/Group: Digital and Technology
Location: Mumbai
Shift Timing: General
Role Reports to: D&T Manager – Cloud & Data Platforms
Remote/Hybrid/In-Office: Hybrid

About General Mills
We make food the world loves: 100 brands. In 100 countries. Across six continents. With iconic brands like Cheerios, Pillsbury, Betty Crocker, Nature Valley, and Häagen-Dazs, we’ve been serving up food the world loves for 155 years (and counting). Each of our brands has a unique story to tell. How we make our food is as important as the food we make. Our values are baked into our legacy and continue to accelerate us into the future as an innovative force for good. General Mills was founded in 1866 when Cadwallader Washburn boldly bought the largest flour mill west of the Mississippi. That pioneering spirit lives on today through our leadership team, who uphold a vision of relentless innovation while being a force for good. For more details check out http://www.generalmills.com

General Mills India Center (GIC) is our global capability center in Mumbai that works as an extension of our global organization, delivering business value, service excellence and growth, while standing for good for our planet and people. With our team of 1800+ professionals, we deliver superior value across the areas of Supply Chain (SC), Digital & Technology (D&T), Innovation, Technology & Quality (ITQ), Consumer and Market Intelligence (CMI), Sales Strategy & Intelligence (SSI), Global Shared Services (GSS), Finance Shared Services (FSS) and Human Resources Shared Services (HRSS). For more details check out https://www.generalmills.co.in
We advocate for advancing equity and inclusion to create more equitable workplaces and a better tomorrow.

Job Overview
Function Overview: The Digital and Technology team at General Mills stands as the largest and foremost unit, dedicated to exploring the latest trends and innovations in technology while leading the adoption of cutting-edge technologies across the organization. Collaborating closely with global business teams, the focus is on understanding business models and identifying opportunities to leverage technology for increased efficiency and disruption. The team's expertise spans a wide range of areas, including AI/ML, Data Science, IoT, NLP, Cloud, Infrastructure, RPA and Automation, Digital Transformation, Cyber Security, Blockchain, SAP S4 HANA and Enterprise Architecture. The MillsWorks initiative embodies an agile@scale delivery model, where business and technology teams operate cohesively in pods with a unified mission to deliver value for the company. Employees working on significant technology projects are recognized as Digital Transformation change agents. The team places a strong emphasis on service partnerships and employee engagement, with a commitment to advancing equity and supporting communities. In fostering an inclusive culture, the team values individuals passionate about learning and growing with technology, exemplified by the "Work with Heart" philosophy, emphasizing results over facetime. Those intrigued by the prospect of contributing to the digital transformation journey of a Fortune 500 company are encouraged to explore more details about the function through the provided link.

Purpose of the Role: The Digital and Technology team of General Mills India Centre is looking for a MS SQL Server / Cloud database administrator (DBA) with an operations-oriented/DevOps skill set to support Cloud/On Prem databases.
This opportunity is in a fast-paced environment as we migrate and refactor enterprise-scale databases from our on-premises datacenters to a cloud environment. The ideal candidate will have experience operating in a fast-paced, complex, and multi-platform environment and will contribute to the strategic direction of our database infrastructure. Key Accountabilities Provide technical leadership for migrating existing Microsoft SQL Server and NoSQL Databases to Public Cloud, developing and owning migration processes, documentation, and tooling. This includes defining strategies and roadmaps for database migrations, ensuring alignment with overall IT strategy and leveraging automation wherever possible to streamline the process. Expertly administer and manage mission-critical, complex, and high-volume Database Platforms in a 24/7 environment, implementing and maintaining automated monitoring and alerting systems to proactively identify and address potential issues. Proactively identify and address potential database performance bottlenecks, proposing and implementing automated solutions to optimize database efficiency and scalability. This includes developing and deploying automated scripts for performance tuning and optimization. Lead the design and implementation of highly available and scalable database solutions in the cloud (GCP preferred), ensuring compliance with security and governance standards and utilizing Infrastructure as Code (IaC) for automated provisioning and management of database infrastructure. Administer and troubleshoot SQL Server, PostgreSQL, MySQL, and NoSQL DBs (MongoDB), implementing best practices and resolving complex performance issues through automated scripting and tooling. Develop and maintain comprehensive documentation for database systems, processes, and procedures. Champion the adoption of DevOps practices, including CI/CD, infrastructure as code (Terraform), and automation (Ansible, Python, PowerShell) to streamline database management and deployment. Actively participate in the development and improvement of automated deployment pipelines. Collaborate with application stakeholders to understand their database requirements and provide technical guidance on database design and optimization, emphasizing automation opportunities to improve development workflows. Contribute to the development and implementation of database security policies and procedures, ensuring compliance with industry best practices and leveraging automation for security auditing and vulnerability management. Actively participate in Agile development processes, contributing to sprint planning, daily stand-ups, and retrospectives, focusing on automation opportunities to improve team efficiency and reduce manual effort. Mentor and guide junior team members, fostering a culture of knowledge sharing and continuous learning, particularly around automation and DevOps practices. Stay abreast of emerging technologies and trends in database administration and cloud computing, recommending and implementing innovative solutions to improve database performance and reliability, with a focus on automation and efficiency gains. Minimum Qualifications 9+ years of hands-on experience with leading design, refactoring, or migration of databases in cloud infrastructures and services for at least one of Microsoft Azure, Amazon Web Services, and Google GCP (preferred). Experience with automating database migration processes is highly desirable. 
Experience maintaining and administering with CI/CD tools such as Ansible, GitHub, and Artifactory in a cloud environment and developing/writing scripts using advanced DevOps languages such as Python, PowerShell. Demonstrated ability to design and implement automated workflows. Experience working with infrastructure as code such as Terraform, or equivalent. Proven ability to automate infrastructure provisioning and management. Experience with concepts, processes & tools required for cloud adoption including cloud security, governance, and integration. Experience with automating security and compliance tasks is a plus. Experience with SQL Server AlwaysOn and Windows clustering. Experience with automating the management of high-availability clusters is preferred. Experience with agile techniques and methods. Familiarity with user expectations for cloud Databases (to be able to design user-centric engineering services – e.g., provisioning self-service workflow). Experience with automating user provisioning and access management is a plus. Working knowledge of DevOps, Agile development processes, exploration, and POCs. Ability to work collaboratively across functional team boundaries. Preferred Qualifications Good understanding & hands-on experience of Linux OS and Windows Server (2012+). Experience working with high availability, disaster recovery, backup strategies, and server tuning strategies including parameters, resources, contention, etc. Excellent interpersonal and communication skills. Ability to work in a fast-paced team environment. Flexibility, reliability, initiative, responsibility, and a "can-do" mentality.
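
The posting above centers on automated monitoring of mission-critical SQL Server platforms, including AlwaysOn availability groups. Below is a minimal, hedged Python sketch of such a health check, assuming the pyodbc package, an installed ODBC driver, and a monitoring login with VIEW SERVER STATE permission; the server name and credentials are placeholders, and the DMV names are standard SQL Server views, but the thresholds and alerting wiring are illustrative only, not this employer's tooling.

```python
"""Minimal AlwaysOn availability-group health check (illustrative sketch).

Assumes: pyodbc installed, ODBC Driver 18 for SQL Server available, and a
monitoring login with VIEW SERVER STATE. Server name and credentials are
placeholders.
"""
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=sql-listener.example.internal;"   # placeholder listener name
    "DATABASE=master;"
    "UID=monitor_user;PWD=change-me;"         # placeholder credentials
    "Encrypt=yes;TrustServerCertificate=yes;"
)

HEALTH_QUERY = """
SELECT ag.name                           AS ag_name,
       ars.role_desc                     AS replica_role,
       ars.synchronization_health_desc   AS sync_health
FROM sys.dm_hadr_availability_replica_states AS ars
JOIN sys.availability_groups AS ag
  ON ars.group_id = ag.group_id;
"""

def check_availability_groups() -> list[dict]:
    """Return one record per replica with its synchronization health."""
    with pyodbc.connect(CONN_STR, timeout=10) as conn:
        rows = conn.cursor().execute(HEALTH_QUERY).fetchall()
    return [
        {"ag": r.ag_name, "role": r.replica_role, "health": r.sync_health}
        for r in rows
    ]

if __name__ == "__main__":
    for replica in check_availability_groups():
        status = "OK" if replica["health"] == "HEALTHY" else "ALERT"
        # In a real setup this would feed the monitoring/alerting system.
        print(f"[{status}] {replica['ag']} ({replica['role']}): {replica['health']}")
```

A script like this could run on a schedule and raise alerts whenever any replica reports a non-healthy synchronization state.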

Posted 1 day ago

Apply

5.0 years

0 Lacs

hyderabad, telangana, india

On-site

hackajob is collaborating with J.P. Morgan to connect them with exceptional tech professionals for this role.

Join our Infrastructure Engineering Enablement Team as a Lead Engineer, where you’ll drive innovation with AI/ML, automation, and cloud technologies. As the Lead Infrastructure Engineer at JPMorgan Chase within the Infrastructure Engineering Enablement Team, you will play a crucial role in guiding application teams to effectively perform infrastructure deployment and management functions. You will leverage your expertise in public cloud technologies, AI/ML, and automation to enhance developer capabilities and promote robust cloud-based solutions. This position offers the opportunity to collaborate with product and engineering teams, manage platform issues, and implement best practices for public cloud processes, ensuring minimal downtime and optimal performance. You will create/train LLM models that provide developers with the knowledge they need to perform their tasks, develop automation scripts to make day-to-day work easier, collaborate with product and engineering teams to deliver robust cloud-based solutions that drive enhanced customer experiences, own end-to-end platform issues and problem management, help provide solutions to platform production issues on the AWS Cloud and ensure the applications are available as expected, and guide various product teams on the standards and best practices related to the Public Cloud process, helping them mitigate issues in production cloud with minimal downtime.

Job Responsibilities:
- Promote self-service and deliver on a strategy to build broad use of Amazon's utility computing web services (e.g., AWS EC2, AWS S3, AWS RDS, AWS CloudFront, AWS EFS, CloudWatch, EKS)
- Utilize programming languages like Java, Python, SQL, Node, Go, and Scala, open-source RDBMS and NoSQL databases, container orchestration services including Docker and Kubernetes, and a variety of AWS tools and services
- Develop/enhance LLM models using AI/ML skills to enable self-service for developers or other teams requiring infrastructure information
- Identify opportunities to improve the resiliency, availability, security, and performance of platforms in Public Cloud using JPMC best practices
- Improve reliability and quality, and reduce the time to resolve production incidents on software applications
- Implement continuous process improvement, including but not limited to policy, procedures, and production monitoring, and reduce time to resolve
- Identify, coordinate, and implement initiatives/projects and activities that create efficiencies and optimize technical processing
- Provide primary operational support and engineering for the public cloud platform
- Show leadership for any production issue, manage all the corresponding teams in working towards a fix, and ensure minimal customer impact
- Promote work streams to ensure applications meet strict operational readiness for Public Cloud on-boarding
- Monitor metrics and program health, anticipate and clear blockers, manage escalations
Required Qualifications Formal training or certification on Infrastructure concepts and 5+ years applied experience A strong understanding of business technology drivers and their impact on architecture design, performance and monitoring, best practices A dynamic individual with excellent communication skills, who can adapt verbiage and style to the audience at hand and deliver critical information in a clear and concise message. The candidate must be a strong analytical thinker, with business acumen and the ability to assimilate information quickly, with a solution-based focus on incident and problem management. 10+ years’ experience across the SDLC process - Design and/or Development and/or support 5+ years’ experience/knowledge building or supporting environments on AWS using Terraform Experience using DevOps tools in a cloud environment, such as Ansible, Artifactory, Docker, GitHub, Jenkins Experience/Knowledge using monitoring solutions like CloudWatch, Prometheus, Datadog Experience/Knowledge of writing Infrastructure-as-Code (IaC), using tools like CloudFormation or Terraform Experience with one or more public cloud platforms like AWS, GCP, Azure Preferred Qualifications, Capabilities And Skills Experience with one or more AI/ML languages such as Python, Scala and have developed, trained, LLM models. Ability to leverage Splunk and Dynatrace to identify and troubleshoot issues. Experience with high volume, mission critical applications, and building upon messaging and or event-driven architectures. Knowledge of container platforms such as Docker and Kubernetes. Strong understanding of architecture, design, and business processes Keen understanding of financial and budget management, control and optimization of Public Cloud expenses Strong communication and collaboration skills
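
The role above calls for operating and monitoring AWS workloads (EC2, CloudWatch) with automation. Here is a hedged Python sketch, not JPMorgan's actual tooling, that uses boto3 to report average CPU utilization for running EC2 instances; it assumes AWS credentials and a region are already configured in the environment, and the time window and output format are illustrative choices.

```python
"""Illustrative sketch: report average CPU for running EC2 instances.

Assumes boto3 is installed and AWS credentials/region are configured in the
environment; the lookback window and thresholds are placeholders.
"""
import datetime as dt
import boto3

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")

def running_instance_ids() -> list[str]:
    """Collect IDs of all running EC2 instances in the current account/region."""
    ids: list[str] = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            ids.extend(i["InstanceId"] for i in reservation["Instances"])
    return ids

def avg_cpu(instance_id: str, minutes: int = 30) -> float | None:
    """Average CPUUtilization over the last `minutes`, or None if no datapoints."""
    end = dt.datetime.now(dt.timezone.utc)
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        StartTime=end - dt.timedelta(minutes=minutes),
        EndTime=end,
        Period=300,
        Statistics=["Average"],
    )
    points = stats["Datapoints"]
    if not points:
        return None
    return sum(p["Average"] for p in points) / len(points)

if __name__ == "__main__":
    for iid in running_instance_ids():
        cpu = avg_cpu(iid)
        print(f"{iid}: {'n/a' if cpu is None else f'{cpu:.1f}% avg CPU'}")
```

The same pattern extends to other CloudWatch namespaces, and the output could feed a dashboard or an automated escalation workflow.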

Posted 1 day ago

Apply

2.0 - 6.0 years

0 Lacs

maharashtra

On-site

Role Overview: As a member of the team, your primary responsibility will be to maintain and manage our production infrastructure hosted on AWS. You will be involved in creating pipelines to deploy and manage global infrastructure efficiently. Your role will also include analyzing complex system behavior, performance, and application issues, as well as developing observability, alerts, and runbooks. Linux systems administration, configuration, troubleshooting, and automation will be part of your daily tasks. Additionally, you will be required to perform capacity analysis and planning, traffic routing, and implementation of security policies. Proactively identifying and resolving technology-related bottlenecks will be crucial, showcasing your high level of technical expertise within one or more technical domains.

Key Responsibilities:
- Maintain production infrastructure on AWS
- Create pipelines for deploying and managing global infrastructure
- Analyze complex system behavior and performance issues
- Develop observability, alerts, and runbooks
- Perform Linux systems administration, configuration, troubleshooting, and automation
- Conduct capacity analysis, traffic routing, and implement security policies
- Identify and solve technology-related bottlenecks
- Demonstrate technical expertise in one or more technical domains

Qualifications Required:
- Hands-on experience with Linode and handling bare-metal servers
- Experience in Linux/UNIX systems administration
- Solid experience provisioning public cloud resources using Terraform
- Familiarity with technologies like Redis, ElastiCache, etc.
- Hands-on experience with containers/orchestration (Docker/Kubernetes)
- Experience with CI/CD tools such as Argo CD
- Proficiency in automation platforms like Jenkins and Artifactory
- Networking experience in large cloud environments
- Experience in a high-volume or critical production service environment
- Knowledge of observability tooling like Datadog, Grafana, and CloudWatch

Please note that the job is full-time and permanent, and the work location is in person. Feel free to apply if you are an immediate joiner and meet the qualifications mentioned above.
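
The listing above stresses observability and runbooks for Kubernetes workloads. Below is a minimal, hedged Python sketch of one such runbook check that lists pods stuck outside the Running/Succeeded phases; it assumes the official `kubernetes` Python client and a local kubeconfig, and none of the cluster details come from the posting itself.

```python
"""Illustrative sketch: list pods that are not Running/Succeeded across a cluster.

Assumes the official `kubernetes` Python client is installed and a valid
kubeconfig is available locally.
"""
from kubernetes import client, config

def unhealthy_pods() -> list[tuple[str, str, str]]:
    """Return (namespace, pod name, phase) for pods in an unexpected phase."""
    config.load_kube_config()   # inside a cluster, load_incluster_config() would be used instead
    v1 = client.CoreV1Api()
    bad = []
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        phase = pod.status.phase or "Unknown"
        if phase not in ("Running", "Succeeded"):
            bad.append((pod.metadata.namespace, pod.metadata.name, phase))
    return bad

if __name__ == "__main__":
    problems = unhealthy_pods()
    if not problems:
        print("All pods healthy.")
    for ns, name, phase in problems:
        # A real runbook/alert would page on-call or annotate the incident here.
        print(f"[{phase}] {ns}/{name}")
```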

Posted 1 day ago

Apply

10.0 - 14.0 years

0 Lacs

pune, maharashtra

On-site

As a senior software engineer on the Infrastructure, Planning and Process Team (IPP) at NVIDIA, you will play a crucial role in accelerating AI adoption across various engineering workflows within the company. Your primary responsibility will be to design and implement AI-driven solutions that enhance developer productivity, accelerate feedback loops, and improve the reliability of releases. You will work with a global team to cater to the infrastructure and software development workflow needs of various departments within NVIDIA.

**Key Responsibilities:**
- Design and implement AI-driven solutions across software development lifecycles to enhance developer productivity, accelerate feedback loops, and improve the reliability of releases.
- Develop and deploy AI agents to automate software development workflows and processes.
- Measure and report on the impact of AI interventions, demonstrating improvements in key metrics like cycle time, change failure rate, and mean time to recovery (MTTR).
- Build and deploy predictive models to identify high-risk commits, forecast potential build failures, and flag changes with a high probability of failure.
- Research emerging AI technologies and engineering best practices to evolve the development ecosystem and maintain a competitive edge.

**Qualifications Required:**
- BE (MS preferred) or equivalent experience in EE/CS with 10+ years of work experience.
- Deep practical knowledge of Large Language Models (LLMs), Machine Learning (ML), and agent development.
- Hands-on experience with Python, Java, and Go, with extensive Python scripting experience.
- Experience working with SQL/NoSQL database systems such as MySQL, MongoDB, or Elasticsearch.
- Full-stack, end-to-end development expertise, including proficiency in building and integrating solutions from front-end (e.g., React, Angular) to back-end (Python, Go, Java) and managing data infrastructure (SQL/NoSQL).
- Familiarity with CI/CD setup tools such as Jenkins, GitLab CI, Packer, Terraform, Artifactory, Ansible, Chef, or similar tools.
- Understanding of distributed systems, microservice architecture, and REST APIs.
- Ability to collaborate effectively across organizational boundaries to enhance alignment and productivity between teams.

As an NVIDIAN, you will be part of a diverse and encouraging environment where innovation and excellence are valued. If you are passionate about new technologies and software quality and want to contribute to the future of transportation and AI, join us to be a part of a team that is shaping the world.
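
The posting above mentions predictive models that flag high-risk commits. The following is a toy Python sketch of that idea, explicitly not NVIDIA's actual approach: it trains a logistic-regression classifier on synthetic, randomly generated features (lines changed, files touched, author's recent failure rate) purely to illustrate risk scoring, and assumes scikit-learn and NumPy are installed.

```python
"""Toy commit-risk model on synthetic data (illustrative only).

Requires numpy and scikit-learn. All data and feature choices are fabricated
placeholders for demonstration, not real project history.
"""
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic training data: each row is one historical commit.
n = 2000
lines_changed = rng.exponential(200, n)
files_touched = rng.poisson(4, n)
recent_failure_rate = rng.uniform(0, 0.5, n)
X = np.column_stack([lines_changed, files_touched, recent_failure_rate])

# Synthetic label: bigger/riskier commits fail more often (toy assumption).
logit = 0.004 * lines_changed + 0.15 * files_touched + 3.0 * recent_failure_rate - 3.0
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Holdout accuracy on synthetic data: {model.score(X_test, y_test):.2f}")

# Score an incoming (hypothetical) commit and flag it if predicted risk is high.
candidate = np.array([[850.0, 12, 0.30]])   # lines changed, files touched, recent failure rate
risk = model.predict_proba(candidate)[0, 1]
print(f"Predicted failure risk: {risk:.1%} -> "
      f"{'flag for extra review' if risk > 0.5 else 'ok'}")
```

In practice such a model would be trained on real repository and CI history, and its score would gate extra review or targeted testing rather than block merges outright.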

Posted 1 day ago

Apply

0 years

4 - 9 Lacs

surat, gujarat, india

On-site

Job Summary: We are looking for a DevOps Engineer to help us build functional systems that improve customer experience. DevOps Engineer responsibilities include deploying product updates, identifying production issues, and implementing integrations that meet customer needs. If you have a solid background in software engineering and are familiar with Ruby or Python, we’d like to meet you. Ultimately, you will execute and automate operational processes quickly, accurately, and securely.

Roles & Responsibilities:
- Strong experience with essential DevOps tools and technologies, including Kubernetes, Terraform, Azure DevOps, Jenkins, Maven, Git, GitHub, and Docker.
- Hands-on experience with Azure cloud services, including: Virtual Machines (VMs), Blob Storage, Virtual Network (VNet), Load Balancer & Application Gateway, Azure Resource Manager (ARM), Azure Key Vault, Azure Functions, Azure Kubernetes Service (AKS), Azure Monitor, Log Analytics and Application Insights, Azure Container Registry (ACR) and Azure Container Instances (ACI), Azure Active Directory (AAD) and RBAC.
- Creative in automating, configuring, and deploying infrastructure and applications across Azure environments and hybrid cloud data centers.
- Build and maintain CI/CD pipelines using Azure DevOps, Jenkins, and scripting for scalable SaaS deployments.
- Develop automation and infrastructure-as-code (IaC) using Terraform, ARM Templates, or Bicep for managing and provisioning cloud resources.
- Expert in managing containerized applications using Docker and orchestrating them via Kubernetes (AKS).
- Proficient in setting up monitoring, logging, and alerting systems using Azure-native tools and integrating with third-party observability stacks.
- Experience implementing auto-scaling, load balancing, and high-availability strategies for cloud-native SaaS applications.
- Configure and maintain CI/CD pipelines and integrate with quality and security tools for automated testing, compliance, and secure deployments.
- Deep knowledge in writing Ansible playbooks and ad hoc commands for automating provisioning and deployment tasks across environments.
- Experience integrating Ansible with Azure DevOps/Jenkins for configuration management and workflow automation.
- Proficient in using Maven and Artifactory for build management and writing POM.xml scripts for Java-based applications.
- Skilled in GitHub repository management, including setting up project-specific access, enforcing code quality standards, and managing pull requests.
- Experience with web and application servers such as Apache Tomcat for deploying and troubleshooting enterprise-grade Java applications.
- Ability to design and maintain scalable, resilient, and secure infrastructure to support rapid growth of SaaS applications.

Qualifications & Requirements:
- Proven experience as a DevOps Engineer, Site Reliability Engineer, or in a similar software engineering role.
- Strong experience working in SaaS environments with a focus on scalability, availability, and performance.
- Proficiency in Python or Ruby for scripting and automation.
- Working knowledge of SQL and database management tools.
- Strong analytical and problem-solving skills with a collaborative and proactive mindset.
- Familiarity with Agile methodologies and ability to work in cross-functional teams.

Skills: Azure, DevOps, Jenkins, Kubernetes, Customer Service
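
Because the role above is Azure-heavy and asks for Python automation, here is a hedged sketch of a small inventory script using the Azure SDK for Python; it assumes the azure-identity and azure-mgmt-compute packages, that DefaultAzureCredential can authenticate (for example via `az login` or a managed identity), and that the subscription ID placeholder is replaced with a real value.

```python
"""Illustrative sketch: inventory Azure VMs with the azure-mgmt-compute SDK.

Assumes azure-identity and azure-mgmt-compute are installed and that
DefaultAzureCredential can authenticate. The subscription ID is a placeholder.
"""
import os
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = os.environ.get("AZURE_SUBSCRIPTION_ID", "<subscription-id>")

def list_vms() -> None:
    """Print name, region, and size for every VM in the subscription."""
    credential = DefaultAzureCredential()
    compute = ComputeManagementClient(credential, SUBSCRIPTION_ID)
    for vm in compute.virtual_machines.list_all():
        size = vm.hardware_profile.vm_size if vm.hardware_profile else "unknown"
        print(f"{vm.name:30s} {vm.location:15s} {size}")

if __name__ == "__main__":
    list_vms()
```

The same client exposes start/deallocate operations, so a script like this is often the seed of larger cost-control or compliance automation.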

Posted 1 day ago

Apply

2.0 years

0 Lacs

chennai

On-site

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities:
- Administer and optimize DevOps tools including GitHub, JFrog Artifactory, SonarQube, and others
- Ensure high availability, performance, and security of these platforms
- Develop automation scripts and pipelines to streamline platform operations and developer workflows
- Implement Infrastructure as Code (IaC) using tools like Terraform, Ansible, or similar
- Manage and monitor cloud environments (AWS, Azure, GCP) used for hosting DevOps tools and services
- Optimize resource usage and cost efficiency
- Apply SRE principles to ensure platform reliability, scalability, and observability
- Define and monitor SLIs/SLOs/SLAs for platform services
- Establish and enforce standards for tool usage, access control, and integration
- Collaborate with development teams to promote DevOps culture and practices
- Troubleshoot and resolve technical issues promptly and participate in incident and problem management, including on-call support
- Guide and mentor junior developers to foster team growth
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications:
- Bachelor's degree in Computer Science, Engineering, or related field
- Certifications in cloud technologies or DevOps practices
- 2+ years of experience in DevOps, Platform Engineering, or SRE roles
- Hands-on experience with GitHub, JFrog Artifactory, SonarQube, Jenkins, and similar tools
- Experience with cloud platforms (AWS, Azure, GCP)
- Experience with Kubernetes and container orchestration
- Exposure to Agile and DevSecOps methodologies
- Knowledge of monitoring and logging tools (Prometheus, Grafana, ELK, etc.)
- Solid scripting skills (Python, Bash, Groovy, etc.)
- Familiarity with CI/CD pipelines and Infrastructure as Code
- Proven excellent problem-solving and communication skills

Future-Looking Skills:
- Hands-on experience with APIs such as OpenAI, Google Gemini, and Hugging Face
- Good understanding of LLMs and generative models
- Proficiency with AI-assisted development tools like GitHub Copilot, Cursor, Windsurf, or Amazon CodeWhisperer
- Ability to craft effective prompts to get the desired output from generative models

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life.
Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
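
The role above asks for defining and monitoring SLIs/SLOs/SLAs. Here is a small, hedged Python sketch of the underlying arithmetic of an availability error budget; the request counts and the 99.9% target are placeholder values for illustration, not figures from the posting.

```python
"""Minimal SLO error-budget calculation (illustrative sketch).

Given a target availability SLO and observed request counts over a window,
compute how much of the error budget has been consumed. Counts below are
placeholder values.
"""
def error_budget_report(total_requests: int, failed_requests: int, slo: float) -> dict:
    """Return error-budget usage for an availability SLO (e.g. slo=0.999)."""
    allowed_failures = total_requests * (1.0 - slo)
    observed_rate = failed_requests / total_requests if total_requests else 0.0
    consumed = failed_requests / allowed_failures if allowed_failures else float("inf")
    return {
        "availability": 1.0 - observed_rate,
        "allowed_failures": allowed_failures,
        "budget_consumed": consumed,   # 1.0 means the budget is fully spent
    }

if __name__ == "__main__":
    report = error_budget_report(total_requests=2_500_000, failed_requests=1_800, slo=0.999)
    print(f"Availability:      {report['availability']:.4%}")
    print(f"Allowed failures:  {report['allowed_failures']:.0f}")
    print(f"Error budget used: {report['budget_consumed']:.0%}")
```

In a real SRE workflow the request counts would come from a monitoring system such as Prometheus, and budget consumption above an agreed threshold would pause risky releases.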

Posted 1 day ago

Apply

2.0 years

0 Lacs

noida

On-site

Job Type: Full Time
Job Location: Noida
Job Role: DevOps Engineer – Admin JFrog
Client: Domestic (Delhi-based client)
Experience required: A minimum of 2-3 years of experience in Linux Administration.
Project duration: 1 year. Renewal: Not known.
Client Onsite: Noida, Okhla Phase 2, New Delhi.
Employment Type: Full-Time / Payroll (project-based clause) / Contractual (1 year). Notice period: 45 days.

Job Overview
We are looking for a DevOps Engineer with 2–3 years of experience in building, managing, and automating DevOps pipelines and deployments on self-managed infrastructure. This role demands hands-on experience with at least 2 tools from the following stack: JFrog Artifactory, SonarQube, GitHub Enterprise. The candidate should be comfortable working in Linux environments, automating tasks with scripts, and configuring the DevOps ecosystem at an infrastructure and pipeline level.

Key Responsibilities & Expected Configuration Knowledge

JFrog Artifactory, Xray and Advanced Security:
- Implement and manage vulnerability scanning for code, dependencies, containers, and binaries.
- Configure and enforce open-source license compliance policies across development workflows.
- Integrate security scans into CI/CD pipelines (GitLab CI, Jenkins, Azure DevOps, etc.).
- Use SAST, DAST, dependency scanning, and container scanning to detect risks early.
- Analyze and prioritize vulnerabilities using contextual risk and exploitability insights.
- Maintain and monitor compliance dashboards, audit logs, and governance controls.
- Collaborate with development teams to shift security left and improve secure coding practices.
- Automate build and release blocking policies when critical issues are detected.
- Manage artifact scanning and security integration with repositories (Artifactory, GitLab).
- Provide reporting, remediation guidance, and security awareness to cross-functional teams.

SonarQube:
- Configure SonarQube for Java/Maven (or .NET) projects
- Generate and analyze reports on code smells, vulnerabilities, and bugs
- Enforce quality gates in Jenkins using the SonarScanner CLI or plugin
- Set up project-level and global rulesets
- Manage access control and authentication

GitHub Enterprise:
- Manage repositories, create branches, handle pull requests
- Configure branch protection rules and merge checks
- Implement webhook triggers to integrate with Jenkins
- Resolve merge conflicts and apply GitFlow or trunk-based workflows

Linux & Scripting:
- Navigate and manage Linux file systems
- Write Bash, Python, or PowerShell scripts for automation
- Configure log rotation and cleanup for Jenkins, SonarQube, Artifactory
- Set up reverse proxies (Nginx/Apache) if needed
- Review and troubleshoot logs in /var/log, /opt/jenkins, or containers

Tools & Technologies (Hands-on Expectation):
- SCM: GitHub Enterprise
- Quality: SonarQube
- Artifacts: JFrog Artifactory and Advanced Security
- Scripting: Bash, Python, PowerShell
- OS: Linux (Ubuntu/CentOS), Windows (for .NET if applicable)
- Build Tools: Maven, Gradle, dotnet CLI

Minimum Requirements:
- 2–3 years total experience
- 2+ years hands-on with SonarQube, JFrog Artifactory, and GitHub Enterprise
- Clear understanding of DevOps workflows, not just tool usage
- Must be able to explain what they have configured and automated in each tool

Preferred Skills (Nice to Have):
- Exposure to infrastructure-as-code tools (e.g., Ansible, Terraform)
- Awareness of DevSecOps practices
- Experience with monitoring tools (Grafana, Prometheus, Nagios)
- Experience integrating .NET Core apps (IIS or Kestrel hosting)

Candidate Submission Instructions: To apply, candidates must include a detailed CV that lists the DevOps tools used and clearly explains what configurations/implementations were done by them (not their team) during the project, either in an extended CV or in a separate email/document that describes tool-by-tool hands-on experience. Applications without actual hands-on configuration details will not be shortlisted.
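
Since the listing above expects hands-on quality-gate enforcement, here is a hedged Python sketch of a CI step that fails a build when SonarQube's quality gate is red. It uses SonarQube's documented Web API endpoint `/api/qualitygates/project_status`; the server URL, token, and project key are placeholders read from environment variables, and exact response fields can vary by SonarQube version.

```python
"""Illustrative sketch: fail a CI job when the SonarQube quality gate is red.

Server URL, token, and project key are placeholders supplied via environment
variables; requires the requests package.
"""
import os
import sys
import requests

SONAR_URL = os.environ.get("SONAR_URL", "https://sonarqube.example.internal")
SONAR_TOKEN = os.environ.get("SONAR_TOKEN", "")       # user token, sent as the basic-auth username
PROJECT_KEY = os.environ.get("SONAR_PROJECT_KEY", "my-service")

def quality_gate_status(project_key: str) -> dict:
    """Return the projectStatus payload for the given project."""
    resp = requests.get(
        f"{SONAR_URL}/api/qualitygates/project_status",
        params={"projectKey": project_key},
        auth=(SONAR_TOKEN, ""),
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["projectStatus"]

if __name__ == "__main__":
    status = quality_gate_status(PROJECT_KEY)
    print(f"Quality gate for {PROJECT_KEY}: {status['status']}")
    for cond in status.get("conditions", []):
        print(f"  {cond.get('metricKey')}: {cond.get('status')} (actual {cond.get('actualValue')})")
    if status["status"] != "OK":
        sys.exit(1)   # a non-zero exit code fails the pipeline stage
```

In Jenkins this would typically sit behind the SonarQube webhook/waitForQualityGate step, but a direct API check like this is a common fallback for self-managed setups.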

Posted 1 day ago

Apply

2.0 - 6.0 years

0 Lacs

pune, maharashtra

On-site

Role Overview: As an HR IT Engineer in Application Development at Deutsche Bank's technology organization, you will be part of the global technology group responsible for providing IT services for the Global HR Function. Your role will involve working in the Talent Value Stream as a Java Developer within an Agile philosophy. You will collaborate with a cross-functional IT delivery team comprising business analysts, developers, and testers.

Key Responsibilities:
- Develop source code for all software components based on the detailed software requirements specification, functional design, and technical design document.
- Verify the developed source code through reviews following the 4-eyes principle.
- Contribute to quality assurance by writing unit and functional tests.
- Implement architectural changes defined by Architects.
- Provide Level 3 support for technical infrastructure components such as databases, middleware, and user interfaces.
- Contribute to problem and root cause analysis.
- Verify integrated software components through unit and integrated software testing as per the software test plan, resolving software test findings.
- Ensure all code changes are documented in Change Items (CIs).
- Develop routines for deploying CIs to target environments where applicable.
- Support release deployments on non-Production-Management-controlled environments.
- Create software product training materials, user guides, and deployment instructions.
- Check consistency of documents with the respective software product release.
- Manage maintenance of applications and perform technical change requests according to Release Management processes.
- Fix software defects/bugs, analyze code for quality, and collaborate with colleagues in different stages of the Software Development Lifecycle.
- Identify dependencies between software product components, technical components, applications, and interfaces.

Qualifications Required:
- Experience with Agile methodologies within the Software Development Lifecycle (SDLC).
- Designing, developing, and maintaining complex enterprise applications.
- Core Java experience including data structures, algorithms, and design patterns.
- Understanding of cloud and Platform-as-a-Service offerings.
- Experience with modern SDLC tools like Git, JIRA, Bitbucket, Artifactory, Jenkins/TeamCity, and OpenShift is a plus.
- Test-Driven Development experience.
- Experience with SOAP or REST services.
- Cloud deployment experience with OpenShift; exposure to Docker and Kubernetes.
- Strong analytical and communication skills.
- Ability to keep pace with technical innovation.
- Minimum 2 years of professional experience.

Additional Company Details: Deutsche Bank fosters a culture of empowerment, responsibility, commercial thinking, initiative, and collaboration. The organization values continuous learning, progression, and a positive, fair, and inclusive work environment. Applications from all individuals are welcome to contribute to the success of Deutsche Bank Group.

Posted 2 days ago

Apply

2.0 years

0 Lacs

chennai, tamil nadu, india

On-site

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities:
- Administer and optimize DevOps tools including GitHub, JFrog Artifactory, SonarQube, and others
- Ensure high availability, performance, and security of these platforms
- Develop automation scripts and pipelines to streamline platform operations and developer workflows
- Implement Infrastructure as Code (IaC) using tools like Terraform, Ansible, or similar
- Manage and monitor cloud environments (AWS, Azure, GCP) used for hosting DevOps tools and services
- Optimize resource usage and cost efficiency
- Apply SRE principles to ensure platform reliability, scalability, and observability
- Define and monitor SLIs/SLOs/SLAs for platform services
- Establish and enforce standards for tool usage, access control, and integration
- Collaborate with development teams to promote DevOps culture and practices
- Troubleshoot and resolve technical issues promptly and participate in incident and problem management, including on-call support
- Guide and mentor junior developers to foster team growth
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications:
- Bachelor's degree in Computer Science, Engineering, or related field
- Certifications in cloud technologies or DevOps practices
- 2+ years of experience in DevOps, Platform Engineering, or SRE roles
- Hands-on experience with GitHub, JFrog Artifactory, SonarQube, Jenkins, and similar tools
- Experience with cloud platforms (AWS, Azure, GCP)
- Experience with Kubernetes and container orchestration
- Exposure to Agile and DevSecOps methodologies
- Knowledge of monitoring and logging tools (Prometheus, Grafana, ELK, etc.)
- Solid scripting skills (Python, Bash, Groovy, etc.)
- Familiarity with CI/CD pipelines and Infrastructure as Code
- Proven excellent problem-solving and communication skills

Future-Looking Skills:
- Hands-on experience with APIs such as OpenAI, Google Gemini, and Hugging Face
- Good understanding of LLMs and generative models
- Proficiency with AI-assisted development tools like GitHub Copilot, Cursor, Windsurf, or Amazon CodeWhisperer
- Ability to craft effective prompts to get the desired output from generative models

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life.
Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
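
The listing above also covers administering GitHub at the platform level. As a hedged illustration, the Python sketch below audits an organization's default branches for branch protection using the public GitHub REST API (the base URL can point at a GitHub Enterprise instance instead); the org name and token are placeholders read from the environment.

```python
"""Illustrative sketch: audit an org's default branches for branch protection.

Requires the requests package; GITHUB_TOKEN, GITHUB_ORG, and GITHUB_API are
placeholders supplied via environment variables.
"""
import os
import requests

API = os.environ.get("GITHUB_API", "https://api.github.com")
ORG = os.environ.get("GITHUB_ORG", "my-org")
HEADERS = {
    "Authorization": f"Bearer {os.environ.get('GITHUB_TOKEN', '')}",
    "Accept": "application/vnd.github+json",
}

def org_repos(org: str) -> list[dict]:
    """Fetch all repositories for the org, following pagination."""
    repos, page = [], 1
    while True:
        resp = requests.get(f"{API}/orgs/{org}/repos", headers=HEADERS,
                            params={"per_page": 100, "page": page}, timeout=30)
        resp.raise_for_status()
        batch = resp.json()
        if not batch:
            return repos
        repos.extend(batch)
        page += 1

def is_protected(owner: str, repo: str, branch: str) -> bool:
    """True if the branch has a protection rule (GitHub returns 404 otherwise)."""
    resp = requests.get(f"{API}/repos/{owner}/{repo}/branches/{branch}/protection",
                        headers=HEADERS, timeout=30)
    return resp.status_code == 200

if __name__ == "__main__":
    for repo in org_repos(ORG):
        branch = repo.get("default_branch", "main")
        flag = "protected" if is_protected(ORG, repo["name"], branch) else "UNPROTECTED"
        print(f"{repo['name']:40s} {branch:10s} {flag}")
```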

Posted 2 days ago

Apply

3.0 years

0 Lacs

bengaluru, karnataka, india

On-site

Job Description: An IT DevOps Engineer is an IT generalist and partly a specialist who should have wide-ranging knowledge of both development and operations, including coding, infrastructure management, system administration, and DevOps toolchains. The working style is "going beyond a single role". You will collaborate with development teams to design and implement efficient CI/CD pipelines based on Artifactory that enable the rapid and reliable delivery of software applications and infrastructure platforms with high quality and in a secure way. You need to foster effective collaboration between development, operations, and QA teams, and act as a bridge between development and operations, ensuring smooth handover and knowledge sharing. Make sure to understand customer needs, processes, and value; the customer and DevOps need to collaborate to design solutions and reach the same target (the value of co-creation). Write software and scripts to support business needs.

Qualifications:
- Strong knowledge of software development practices, including version control, build systems, and automated testing, especially in the area of CI/CD and Artifactory
- Experience with ITSM tools like BMC for Incident Management, Problem Management, and Change Management
- Proficiency in scripting and/or coding languages such as Python, Bash, or PowerShell
- Experience with configuration management tools and Infrastructure-as-Code (IaC) concepts
- Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and containerization technologies (e.g., Docker, Kubernetes)
- Experience with infrastructure provisioning and configuration management tools (e.g., Terraform, Ansible)
- Analytical mindset with the ability to identify and resolve complex technical issues
- Proficiency in troubleshooting and debugging software applications, infrastructure components, and deployment processes
- Excellent interpersonal and communication skills, with the ability to collaborate effectively with cross-functional teams
- Strong documentation skills to create clear and concise technical documentation
- Knowledge of information and cybersecurity operations, tools, and applications and how they fit into the DevOps cycle
- Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent work experience)
- At least 3 years of professional experience as an IT DevOps Engineer
- A valid DevOps certification from a certified institute is a plus
- Good English language skills, spoken and written (German is a plus)

Additional Information: Ready to take your career to the next level? The future of mobility isn’t just anyone’s job. Make it yours! Join AUMOVIO. Own What’s Next.
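
The posting above centers on CI/CD pipelines built around Artifactory. Below is a hedged Python sketch of the publish step of such a pipeline: Artifactory accepts artifact deployment via an HTTP PUT to `<base>/<repo>/<path>`. The base URL, repository name, credentials, and file paths are placeholders, not values from the posting, and a production pipeline would usually use the JFrog CLI or a build-tool plugin instead.

```python
"""Illustrative sketch: publish a build artifact to Artifactory over its REST API.

Requires the requests package; URL, repo, credentials, and paths are placeholders.
"""
import hashlib
import os
import requests

BASE = os.environ.get("ARTIFACTORY_URL", "https://artifactory.example.internal/artifactory")
REPO = "libs-release-local"                      # placeholder repository
LOCAL_FILE = "dist/my-service-1.2.3.tar.gz"      # placeholder artifact
TARGET_PATH = "my-service/1.2.3/my-service-1.2.3.tar.gz"
AUTH = (os.environ.get("ART_USER", "ci-bot"), os.environ.get("ART_TOKEN", ""))

def sha256_of(path: str) -> str:
    """Compute the file's SHA-256 so Artifactory can verify the upload."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def deploy() -> None:
    checksum = sha256_of(LOCAL_FILE)
    with open(LOCAL_FILE, "rb") as fh:
        resp = requests.put(
            f"{BASE}/{REPO}/{TARGET_PATH}",
            data=fh,
            auth=AUTH,
            headers={"X-Checksum-Sha256": checksum},
            timeout=120,
        )
    resp.raise_for_status()
    print(f"Deployed {LOCAL_FILE} -> {resp.json().get('downloadUri', TARGET_PATH)}")

if __name__ == "__main__":
    deploy()
```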

Posted 2 days ago

Apply

5.0 - 6.0 years

5 - 9 Lacs

bengaluru

Work from Office

Senior Developer role in the team working on Smart Grid projects. Understand, analyze, and derive software requirements/software functional specifications. Design, development/implementation, unit testing, testing, and delivery of work packages on time with high quality. Investigate and fix software defects found by the test/review team to ensure product quality. Understanding of power systems and distributed network applications is an added advantage. Use your skills to move the world forward! Overall 5-6 years of experience as an Ansible Developer. B.E/B.Tech/M.Tech (Computer Science, Information Science, Electronics). Tools: Git, Jira. Strong knowledge of Ansible (Playbooks, Roles, Inventory, Modules). Good knowledge/working experience with Python and Shell scripting (an advantage). Strong debugging and troubleshooting skills for automation scripts and infrastructure issues. Good knowledge/working experience with Git and JFrog Artifactory. Flexibility and the ability to learn new technologies quickly. Sound problem-solving skills. Good interpersonal and communication skills; a good team player. Can drive topics independently with minimal guidance and self-exploration.
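
Because the posting above asks for knowledge of Ansible modules alongside Python, here is a hedged sketch of a minimal custom Ansible module written in Python. It is a toy stand-in for the kind of module work the role implies (Ansible's built-in lineinfile module already covers this case properly); the module name and arguments are assumptions, and the file would live under a role's or playbook's library/ directory.

```python
#!/usr/bin/python
"""Minimal custom Ansible module (illustrative): ensure a line exists in a file."""
import os
from ansible.module_utils.basic import AnsibleModule

def run_module():
    module = AnsibleModule(
        argument_spec=dict(
            path=dict(type="str", required=True),
            line=dict(type="str", required=True),
        ),
        supports_check_mode=True,
    )
    path, line = module.params["path"], module.params["line"]

    existing = ""
    if os.path.exists(path):
        with open(path, "r", encoding="utf-8") as fh:
            existing = fh.read()

    if line in existing.splitlines():
        module.exit_json(changed=False, msg="line already present")

    if module.check_mode:               # report what would change without changing it
        module.exit_json(changed=True, msg="line would be added")

    with open(path, "a", encoding="utf-8") as fh:
        fh.write(line + "\n")
    module.exit_json(changed=True, msg="line added")

if __name__ == "__main__":
    run_module()
```

In a playbook the module would then be invoked like any other task, passing the `path` and `line` arguments it declares.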

Posted 2 days ago

Apply

6.0 - 9.0 years

6 - 10 Lacs

bengaluru

Work from Office

Python, ETL and DevOps - Developer

Must have: very heavy on Python, ETL and DevOps.

Requirements:
- Proven understanding/experience of cloud networking, virtualization, storage, containers, serverless architectures/frameworks, and cloud IAM controls and policies.
- Experience working with database virtualization products like Delphix and database replication using Oracle GoldenGate.
- Knowledge of AWS cloud services and technologies.
- App deployments in containers (Docker, EKS).
- Troubleshoot build and deploy failures, and facilitate resolution.

Good to have:
- Development tools (Git/Stash, Jenkins, Maven, Artifactory and UDeploy).
- Experience in Agile SCRUM practices.
- Understanding of versioning and release practices (SCM) and code collaboration practices (Git).
- Proficiency with CI/CD tools, especially Jenkins.
- Hands-on experience building and deploying applications in a public cloud environment, preferably AWS.

Minimum experience on key skills: 6 to 9 years.

General expectations:
1) Must have good communication.
2) Must be ready to work a 10:30 AM to 8:30 PM shift.
3) Flexible to work at the client location, Manyata or EGL, Bangalore.
4) Must be ready to work from the office in a hybrid work environment; fully remote work is not an option.
5) Expect full return to office from Feb/Mar '25.

Pre-requisites before submitting profiles:
1) Must have genuine and digitally signed Form 16 for ALL employments.
2) All employment history/details must be present in UAN/PPF statements.
3) Candidates must be screened via video to ensure they are genuine and have a proper work setup.
4) Candidates must have real work experience in the mandatory skills mentioned in the JD.
5) Profiles must list the companies with which candidates are on payroll, not the client names, as their employers.
6) As these are competitive positions and the client will not wait 60 days or carry the risk of drop-outs, candidates must have a notice period of 0 to 3 weeks.
7) Candidates must be screened for any gaps after education and during employment, and for the genuineness of the reasons.
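
Since the role above is explicitly Python-and-ETL heavy on AWS, here is a hedged sketch of a small extract-transform-load step; the file names, bucket, and column names are placeholders, and it assumes pandas, pyarrow (for Parquet), and boto3 with configured AWS credentials.

```python
"""Illustrative Python ETL sketch: extract a CSV, clean it, load it to S3.

File names, bucket, and column names are placeholders.
"""
import pandas as pd
import boto3

SOURCE_CSV = "exports/daily_trades.csv"          # placeholder input
BUCKET = "example-etl-bucket"                    # placeholder bucket
KEY = "curated/daily_trades.parquet"

def extract(path: str) -> pd.DataFrame:
    return pd.read_csv(path, parse_dates=["trade_date"])

def transform(df: pd.DataFrame) -> pd.DataFrame:
    """Basic cleanup: drop duplicates, normalize column names, filter bad rows."""
    df = df.drop_duplicates()
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
    return df[df["notional"] > 0]                # placeholder business rule

def load(df: pd.DataFrame, bucket: str, key: str) -> None:
    local_path = "/tmp/daily_trades.parquet"
    df.to_parquet(local_path, index=False)       # columnar output for downstream Spark/Athena
    boto3.client("s3").upload_file(local_path, bucket, key)

if __name__ == "__main__":
    frame = transform(extract(SOURCE_CSV))
    load(frame, BUCKET, KEY)
    print(f"Loaded {len(frame)} rows to s3://{BUCKET}/{KEY}")
```

In a Jenkins-driven setup each stage (extract, transform, load) would typically be a separate pipeline step with its own logging and retries.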

Posted 2 days ago

Apply

3.0 - 8.0 years

9 - 19 Lacs

coimbatore

Work from Office

Responsibilities: Extensive expertise in mainframe technologies such as PL/I, JCL, VSAM, and CICS. Advanced understanding of database systems, particularly IDMS & DB2, and related tools. Proficient with CI/CD tools like Jenkins, Git, Artifactory, and Jira. Expertise in developing pipelines for CI/CD processes. Experience in working with different IDEs, particularly IDz. Skilled in conducting 'Impact Analysis' and 'Work Package Estimation' for complex PL/I applications. Should be well-versed in SOAP and REST concepts and have experience in developing RESTful services. In-depth knowledge of software development life cycle (SDLC) principles. Working knowledge of Traditional & Agile/Scrum methodologies.

Posted 2 days ago

Apply

3.0 - 5.0 years

5 - 9 Lacs

bengaluru

Work from Office

Job Title: DevOps Engineer Location: Bangalore, KA Mode of Work: Work From Office (5 Days a Week) Job Type: Full-Time Department: Engineering/Operations About The Role : We are looking for a skilled DevOps Engineer to join our team in Bangalore . The ideal candidate will have hands-on experience with a range of technologies including Docker , Kubernetes (K8s) , JFrog Artifactory , SonarQube , CI/CD tools , monitoring tools , Ansible , and auto-scaling strategies. This role is key to driving automation, improving the deployment pipeline, and optimizing infrastructure for seamless development and production operations. You will collaborate with development teams to design, implement, and manage systems that improve the software development lifecycle and ensure a high level of reliability, scalability, and performance. Responsibilities: Containerization & Orchestration: Design, deploy, and manage containerized applications using Docker . Manage, scale, and optimize Kubernetes (K8s) clusters for container orchestration. Troubleshoot and resolve issues related to Kubernetes clusters, ensuring high availability and fault tolerance. Collaborate with the development team to containerize new applications and microservices. CI/CD Pipeline Development & Maintenance: Implement and optimize CI/CD pipelines using tools such as Jenkins , GitLab CI , or similar. Integrate SonarQube for continuous code quality checks within the pipeline. Ensure seamless integration of JFrog Artifactory for managing build artifacts and repositories. Automate and streamline build, test, and deployment processes to support continuous delivery. Monitoring & Alerts: Implement and maintain monitoring solutions using tools like Prometheus , Grafana , or others. Set up real-time monitoring, logging, and alerting systems to proactively identify and address issues. Create and manage dashboards for operational insights into application health, performance, and system metrics. Automation & Infrastructure as Code: Automate infrastructure provisioning and management using Ansible or similar tools. Implement Auto-Scaling solutions to ensure the infrastructure dynamically adjusts to workload demands, ensuring optimal performance and cost efficiency. Define, deploy, and maintain infrastructure-as-code practices for consistent and reproducible environments. Collaboration & Best Practices: Work closely with development and QA teams to integrate DevOps best practices into the software development lifecycle. Ensure a high standard of security and compliance within the CI/CD pipelines. Provide technical leadership and mentorship for junior team members on DevOps practices and tools. Participate in cross-functional teams to define, design, and deliver scalable software solutions. Debugging & Issue Resolution: Troubleshoot complex application and infrastructure issues across development, staging, and production environments. Apply root cause analysis to incidents and implement long-term fixes to prevent recurrence. Continuously improve monitoring and debugging tools for faster issue resolution.
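
The posting above pairs Prometheus/Grafana monitoring with auto-scaling decisions. As a hedged illustration, the Python sketch below queries Prometheus's HTTP API for per-instance CPU utilization and flags candidates for scale-out; the server URL, the node_exporter metric, and the 80% threshold are placeholder assumptions rather than details from the posting.

```python
"""Illustrative sketch: query Prometheus for CPU usage and suggest scaling.

Requires the requests package and a reachable Prometheus server exposing the
standard node_cpu_seconds_total metric from node_exporter.
"""
import os
import requests

PROM_URL = os.environ.get("PROM_URL", "http://prometheus.example.internal:9090")
# Average CPU utilisation per instance over 5 minutes (1.0 = 100% of one core).
QUERY = '1 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m]))'
SCALE_OUT_THRESHOLD = 0.80

def instance_cpu() -> dict[str, float]:
    """Return {instance: cpu_fraction} from an instant query."""
    resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": QUERY}, timeout=15)
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    return {r["metric"].get("instance", "unknown"): float(r["value"][1]) for r in result}

if __name__ == "__main__":
    usage = instance_cpu()
    for instance, cpu in sorted(usage.items()):
        note = "consider scale-out" if cpu > SCALE_OUT_THRESHOLD else "ok"
        print(f"{instance:30s} {cpu:6.1%}  {note}")
```

In Kubernetes the actual scaling would normally be delegated to a HorizontalPodAutoscaler; a script like this is more useful for reporting or for scaling resources the autoscaler does not cover.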

Posted 2 days ago

Apply

3.0 - 5.0 years

0 Lacs

pune, maharashtra, india

On-site

About Citco: Since the 1940s Citco has provided specialist financial services to alternative investment funds, investors, multinationals and private clients worldwide. With over 6,000 employees in 45 countries we pioneer innovative solutions that meet our clients’ evolving needs, and deliver exceptional service. Our continuous investment in learning means our people are among the best in the industry. And our corporate social responsibility programs provide meaningful and fulfilling work in the community. A career at Citco isn’t just a job – it’s an opportunity to excel in an environment that genuinely supports your personal and professional development.

About The Role: As a Cloud DevOps Engineer, you will be working in a cross-functional team responsible for designing and implementing re-usable frameworks, APIs, CI/CD pipelines, infrastructure automation, and test automation leveraging modern cloud-native designs/patterns and AWS services. You will be part of a culture of innovation where you’ll use AWS/Azure services to help the team solve business challenges such as rapidly releasing products/services to the market or building an elastic, scalable, cost-optimized application. You will have the opportunity to shape and execute a strategy to build knowledge and broaden use of public cloud in a dynamic professional environment.

Education, Experience and Skills:
- Bachelor’s degree in Engineering, Computer Science, or equivalent.
- 3 to 5 years in IT or Software Engineering, including 1 to 2 years in a cloud environment (AWS preferred).
- Minimum 2 years of DevOps experience.
- Experience with AWS services: CloudFormation, Terraform, EC2, Fargate, ECS, Docker, Autoscaling, ELB, Jenkins, CodePipeline, CodeDeploy, CodeBuild, CodeCommit/Git, RDS, S3, CloudWatch, Lambda, IAM, Artifactory, ECR.
- Highly proficient in Python.
- Experience in setting up and troubleshooting AWS production environments.
- Experience in implementing end-to-end CI/CD delivery pipelines.
- Experience working in an agile environment.
- Hands-on skills operating in Linux and Windows.
- Proven knowledge of application architecture, networking, security, reliability and scalability concepts; software design principles and patterns.
- Must be self-motivated and driven.

Job Duties In Brief:
- Implement end-to-end, highly scalable, available and resilient cloud engineering solutions for infrastructure and application components using AWS.
- Implement CI/CD pipelines for infrastructure and applications.
- Write infrastructure automation scripts and templates and integrate them with DevOps tools.
- Automate smoke tests and integrate test automation scripts such as unit tests, integration tests, and performance tests into the CI/CD process.
- Troubleshoot AWS environments.

A challenging and rewarding role in an award-winning global business. The above statements are intended to describe the general nature and level of work being performed. They are not intended to be an exhaustive list of all duties, responsibilities and skills.

About You: The position reports to the Development Lead under the Hedge Fund Accounting IT (HFAIT) department. The HFAIT department manages the core accounting platform (Æxeo®) and data warehouse within Citco. The platform is used by clients globally and is the first true straight-through, proprietary front-to-back solution for hedge funds that uses a single database for all activities including order capture, position and P&L reporting, and accounting.

What We Offer: Opportunities for personal and professional career development.
Great working environment, competitive salary and benefits, and opportunities for educational support. Be part of an industry leading global team, renowned for excellence. Confidentiality Assured. Citco welcomes and encourages applications from people with disabilities. Accommodations are available on request for candidates taking part in all aspects of the selection process.
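
The role above calls for automating smoke tests inside CI/CD pipelines. Here is a hedged Python sketch of a post-deployment smoke test that a pipeline stage could run; the endpoint list and base URL are placeholders, and a real pipeline would read them from configuration and report results to the pipeline runner.

```python
"""Illustrative post-deployment smoke test, runnable from a CI/CD stage.

Requires the requests package; endpoints and base URL are placeholders.
"""
import sys
import requests

BASE_URL = "https://my-service.example.internal"     # placeholder environment URL
CHECKS = [
    ("/health", 200),
    ("/api/v1/version", 200),
]

def run_smoke_tests(base_url: str) -> bool:
    """Hit each endpoint and compare the status code; return overall pass/fail."""
    ok = True
    for path, expected_status in CHECKS:
        try:
            resp = requests.get(base_url + path, timeout=10)
            passed = resp.status_code == expected_status
            print(f"{'PASS' if passed else 'FAIL'} {path} -> {resp.status_code}")
        except requests.RequestException as exc:
            passed = False
            print(f"FAIL {path}: {exc}")
        ok = ok and passed
    return ok

if __name__ == "__main__":
    sys.exit(0 if run_smoke_tests(BASE_URL) else 1)   # non-zero exit fails the pipeline stage
```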

Posted 2 days ago

Apply

7.0 - 10.0 years

10 - 20 Lacs

gurugram

Work from Office

The S&P WSO (Wall Street Office) is the industry leader in leveraged loan and high-yield solutions, providing comprehensive products and services for portfolio management, servicing, reporting, and analysis. Our portfolio management solution gives immediate access to real-time reference and transactional data on various asset classes. Its functionality is tailored to handle the unique behavior of bank loans and the compliance reporting required for complex structures. We have offices in different locations (India, Dallas, Manchester) that work towards the common goal of developing WSO software. We work closely with the Business to align the product with business requirements.

The Impact: This is a great time to be joining a truly global team on a great technology journey. This is a great product covering multi-asset classes in both a pre/post-trade capacity with real-time reporting functionality. If you want to be an integral part of this forward-thinking team with a drive to succeed and the opportunity to enhance your development career and expand your technical skill sets, then this is the role for you. Your challenge will be reducing the time to market for products without compromising quality, by using innovation and technical skills.

What's in it for you:
- Be a part of an industry-leading, Fortune 500 company
- Be a part of a GREAT PLACE TO WORK certified firm
- Be a part of a People First organization that values Partnership, Integrity, and Discovery to Accelerate Progress
- Develop and deliver industry-leading software solutions using cutting-edge technologies and the latest toolsets
- Plenty of training and development programs that support continuous learning and skill enhancement
- Build a fulfilling career with a truly global and leading provider of financial market intelligence, data, and analytics

Responsibilities:
- Play a key role in the development team to build high-quality, high-performance, scalable code
- Participate in the complete SDLC process (design, development, and support) of tech solutions
- Actively participate in all scrum ceremonies and follow AGILE best practices effectively
- Produce technical design documents and conduct technical walkthroughs
- Document and demonstrate solutions using technical design docs, diagrams and stubbed code
- Work collaboratively with business partners to understand and clarify requirements
- Collaborate effectively with technical and non-technical stakeholders
- Design and develop industry-leading applications
- Actively participate in production issue resolution

What We're Looking For:
- 7-10 years of application development experience in one or many of: C#, .NET Core, ASP.NET Core, SQL Server/PostgreSQL, Windows/Linux environments, REST APIs
- Ability to work in a team-oriented environment and to work independently
- Experience with automated testing platforms and unit tests
- Experience working in an Agile development environment
- Strong written and verbal communication and presentation skills
- Advanced knowledge of OOP/OOD and design patterns
- Understanding of Dependency Injection

The following experience would be advantageous:
- Integration technologies, especially Web Services and Microservices
- Experience in creating and updating NuGet packages
- Experience using tools such as GitHub, Jira, Confluence, TFS, Artifactory, Splunk
- Experience in Python, Redis, RabbitMQ, Unity, SimpleInjector
- Experience working with global teams
- Knowledge of the financial domain

Basic Qualifications: Bachelor's/Master's degree in Computer Science and/or a Certified Development Program.

Posted 2 days ago

Apply

4.0 years

0 Lacs

hyderabad, telangana, india

On-site

About This Role Wells Fargo is seeking a Senior Software Engineer In This Role, You Will Lead moderately complex initiatives and deliverables within technical domain environments Contribute to large scale planning of strategies Design, code, test, debug, and document for projects and programs associated with technology domain, including upgrades and deployments Review moderately complex technical challenges that require an in-depth evaluation of technologies and procedures Resolve moderately complex issues and lead a team to meet existing client needs or potential new clients needs while leveraging solid understanding of the function, policies, procedures, or compliance requirements Collaborate and consult with peers, colleagues, and mid-level managers to resolve technical challenges and achieve goals Lead projects and act as an escalation point, provide guidance and direction to less experienced staff Required Qualifications: 4+ years of Software Engineering experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education Desired Qualifications: 5+ years of hands-on experience in building application using Java, Spring framework and Spring Boot Strong experience in designing and building microservices/ web services. Experience in Front end development experience with ReactJS or Angular JavaScript, NodeJS Experience working in Capital Markets / Investment banking. Knowledge on messaging service like Kafka, Solace, etc. Experience in working on relation database like Oracle, MS SQL Server, etc Knowledge on caching solutions like Redis, Ignite, Coherence Strong programming skills Working experience in cloud environment like PCF/OCP/Azure/ GCP Experience with CI/CD technologies such as Jenkins, GitHub, Artifactory, Sonar etc. Experience with Agile development methodologies such as SCRUM Excellent organizational, verbal, and written communication skill Job Expectations: Location: Bengaluru / Hyderabad Comfortable working in an Agile software delivery environment and desire to collaborate and work closely with cross-functional teams Experience of cloud migration of applications and Azure/GCP certification will be an added advantage. Evaluation of latest tools and technology and onboarding them to improve productivity. Posting End Date: 15 Sep 2025 Job posting may come down early due to volume of applicants. We Value Equal Opportunity Wells Fargo is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other legally protected characteristic. Employees support our focus on building strong customer relationships balanced with a strong risk mitigating and compliance-driven culture which firmly establishes those disciplines as critical to the success of our customers and company. They are accountable for execution of all applicable risk programs (Credit, Market, Financial Crimes, Operational, Regulatory Compliance), which includes effectively following and adhering to applicable Wells Fargo policies and procedures, appropriately fulfilling risk and compliance obligations, timely and effective escalation and remediation of issues, and making sound risk decisions. 
There is emphasis on proactive monitoring, governance, risk identification and escalation, as well as making sound risk decisions commensurate with the business unit's risk appetite and all risk and compliance program requirements. Candidates applying to job openings posted in Canada: Applications for employment are encouraged from all qualified candidates, including women, persons with disabilities, aboriginal peoples and visible minorities. Accommodation for applicants with disabilities is available upon request in connection with the recruitment process. Applicants With Disabilities To request a medical accommodation during the application or interview process, visit Disability Inclusion at Wells Fargo . Drug and Alcohol Policy Wells Fargo maintains a drug free workplace. Please see our Drug and Alcohol Policy to learn more. Wells Fargo Recruitment And Hiring Requirements Third-Party recordings are prohibited unless authorized by Wells Fargo. Wells Fargo requires you to directly represent your own experiences during the recruiting and hiring process. Reference Number R-484648-2

Posted 2 days ago

Apply

4.0 - 7.0 years

0 Lacs

india

On-site

Role Overview: Big Data Engineer responsible for analyzing, coding, testing, and implementing Big Data Applications (API & Batch), along with managing and mitigating incidents. Experience: 4–7 Years | Employment Type: Full-time (FTE), Hybrid. Required Qualifications: 4–7 years of experience in analysis, coding, implementation, and testing of Big Data Applications (API & Batch) on AWS Cloud using Java/Scala/Kotlin, Cassandra, Python, Hadoop, HDFS, Spark, MapReduce. Strong hands-on coding experience in Scala, PySpark, and Java for REST API and SOAP web services. Proficient in AWS (Batch & API microservices) using EMR, EC2, ECS, S3, Airflow, Step Functions, API Gateway. Skilled in batch application performance tuning (Spark/Hadoop/EMR, Java, Python). Solid Linux experience (shell scripting/Python). Preferred Qualifications: Exposure to DevOps tools: Git, GitHub, Bitbucket, GitLab, IntelliJ IDEA, PyCharm. Experience with CI/CD pipelines (Maven, Gradle, Jenkins, Artifactory, AWS CodeCommit, CloudFormation). Key Skills: PySpark | Scala / Java | AWS | Jenkins. Job Type: Full-time. Work Location: In person
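For candidates preparing for roles like this, the following is a minimal, illustrative PySpark batch job of the kind such positions typically involve. It is a sketch only: the bucket names, paths, and column names are hypothetical placeholders, not details from the posting.

```python
# Minimal PySpark batch job sketch: read raw events from S3, aggregate, write results.
# All S3 paths and column names below are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

def main():
    spark = (
        SparkSession.builder
        .appName("daily-event-aggregation")  # hypothetical job name
        .getOrCreate()
    )

    # Read partitioned Parquet input (e.g., produced by an upstream API service).
    events = spark.read.parquet("s3://example-raw-bucket/events/date=2025-09-01/")

    # Simple aggregation: event counts per type.
    daily_counts = (
        events
        .groupBy("event_type")
        .agg(F.count("*").alias("event_count"))
    )

    # Write results back to S3 for downstream consumers (e.g., reporting).
    daily_counts.write.mode("overwrite").parquet(
        "s3://example-curated-bucket/daily_event_counts/date=2025-09-01/"
    )

    spark.stop()

if __name__ == "__main__":
    main()
```

On EMR, a job like this would typically be submitted with spark-submit and orchestrated by Airflow or Step Functions, in line with the tooling the posting mentions.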

Posted 2 days ago

Apply

4.0 years

4 - 6 Lacs

gurgaon

On-site

The Sr Operations Analyst role will support the infrastructure environment by proactively monitoring infrastructure events, effectively responding to and coordinating resolution of issues, and managing change within the VMware, AWS, server, and storage environments. Perform tasks related to daily VMware & AWS operations. Familiarity with supporting VMware, AWS, and Windows/Linux production environments is expected. Should have experience with automation tooling such as Ansible, Terraform, CloudFormation, PowerShell, etc. Perform new VM provisioning, decommissioning, Windows upgrades, etc. Proactively monitor the stability and performance of various technologies within the area of expertise and drive appropriate corrective action prior to an incident or problem occurring. Actively collaborate with fellow members of the team and contractors/vendors on bridge calls to prevent or resolve incidents/problems in an expeditious manner. Should have experience or knowledge of Windows patching tools like SCCM, SCOM & SCVMM. Independently identify key issues, patterns, and deviations during analysis. Participate in and provide input to the continual refinement of processes, policies, and best practices to ensure the highest possible performance and availability of technologies. Create, maintain, and update documentation including troubleshooting guides, procedure/support manuals, and communication plans. Continuously develop specialized knowledge and technical subject matter expertise by remaining apprised of industry trends, the direction of emerging technologies, and their potential value to the business. Perform ITIL operations like Change, Incident & Problem Management within ServiceNow. Contribute to the development and execution of operational reporting, including daily health check reports, capacity/performance reports, and incident/problem reports.

Data Collection, Tracking & Analysis: Use a variety of data collection techniques and systems to collect technology operations performance data. Analyze data to draw accurate conclusions regarding performance, trends, and issues (current and/or potential). Monitor compliance with defined SLAs/OLAs. Monitor consumption/usage metrics to understand trends and assist in the effective management of vendor partners (as applicable). Perform trend analysis to identify the cause of performance and/or usage issues.

Continuous Improvement: Work with application teams to determine the impact of application changes on the monitors configured for an application and determine if any changes or additions are required. Assist teams in identifying monitoring requirements and implementing the appropriate monitors to achieve the desired results. Use experience, expertise, and data analysis to collaborate with the manager and team members in identifying corrective actions to increase efficiency, improve performance, and meet or exceed targets.

Degree in Computer Science, Engineering, or equivalent academic qualification.
- Mandatory: Should have 4-7 years of professional experience in the administration, configuration, and support of VMware vSphere 7.x & 8.x environments (clusters/farms).
- Mandatory: Should have 4+ years of professional experience in the administration, configuration, and support of Cisco HyperFlex, Cisco UCS & Fabric Interconnects.
- Mandatory: Candidate should have worked in a Level 1 (L1) or L2 Server Support team for a minimum of 4 years.
- Mandatory: Should have adequate experience and skill in managing AWS cloud and hybrid environments.
- Should be familiar with activities like patching, upgrades, migrations, refreshes, etc.
- Should have experience working with DevOps tools like Terraform, CloudFormation, Jenkins, Artifactory, Git/Bitbucket, etc.
- Should have hands-on experience writing automation in Ansible, PowerCLI, or PowerShell.
- Should have experience working with different vendors like VMware, Cisco, Dell & HP.
- Should have experience with team and project management.
- Should have worked in an Agile environment and have knowledge of Scrum, sprints, etc.
- Exposure to Microsoft Power BI is a plus.
- Knowledge of databases like SQL, Oracle, etc. is a plus.
- Should have ample exposure to IT environments governed by the ITIL framework; Change, Incident (RCA) & Problem Management activities should be a component of the candidate's responsibilities.
- Prior exposure to ServiceNow is desirable.
- Strong attention to detail with the ability to focus on quality and efficiency.
- Ability to communicate and articulate technical information across various organizational levels.
- Highly innovative problem solver with strong analytical and customer service abilities.
- High reasoning aptitude and ability to quickly understand complex operating environments.
- Strong thought leadership and motivation with the ability to work independently.

About Our Company: Ameriprise India LLP has been providing client-based financial solutions to help clients plan and achieve their financial objectives for 125 years. We are a U.S.-based financial planning company headquartered in Minneapolis with a global presence. The firm's focus areas include Asset Management and Advice, Retirement Planning, and Insurance Protection. Be part of an inclusive, collaborative culture that rewards you for your contributions and work with other talented individuals who share your passion for doing great work. You'll also have plenty of opportunities to make your mark at the office and a difference in your community. So if you're talented, driven, and want to work for a strong ethical company that cares, take the next step and create a career at Ameriprise India LLP. Ameriprise India LLP is an equal opportunity employer. We consider all qualified applicants without regard to race, color, religion, sex, genetic information, age, sexual orientation, gender identity, disability, veteran status, marital status, family status, or any other basis prohibited by law.

Full-Time/Part-Time: Full time. Timings: 2:00p-10:30p. India Business Unit: AWMPO AWMP&S President's Office. Job Family Group: Technology
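As a rough illustration of the AWS operations automation this role describes, here is a small Python/boto3 sketch that inventories EC2 instances by tag for a daily report. The region, tag key, and tag value are assumed placeholders, and credentials are expected to come from the standard AWS credential chain.

```python
# Sketch: list EC2 instances carrying a given tag, for a daily health/inventory report.
# Region, tag key, and tag value are hypothetical; credentials come from the
# standard AWS credential chain (environment, shared config, or instance role).
import boto3

def list_tagged_instances(region="ap-south-1", tag_key="Environment", tag_value="prod"):
    ec2 = boto3.client("ec2", region_name=region)
    paginator = ec2.get_paginator("describe_instances")
    filters = [{"Name": f"tag:{tag_key}", "Values": [tag_value]}]

    instances = []
    for page in paginator.paginate(Filters=filters):
        for reservation in page["Reservations"]:
            for inst in reservation["Instances"]:
                instances.append(
                    {
                        "id": inst["InstanceId"],
                        "type": inst["InstanceType"],
                        "state": inst["State"]["Name"],
                    }
                )
    return instances

if __name__ == "__main__":
    for item in list_tagged_instances():
        print(f'{item["id"]}  {item["type"]}  {item["state"]}')
```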

Posted 2 days ago

Apply

5.0 years

0 Lacs

bengaluru, karnataka, india

On-site

Teamwork makes the stream work. Roku is changing how the world watches TV. Roku is the #1 TV streaming platform in the U.S., Canada, and Mexico, and we've set our sights on powering every television in the world. Roku pioneered streaming to the TV. Our mission is to be the TV streaming platform that connects the entire TV ecosystem. We connect consumers to the content they love, enable content publishers to build and monetize large audiences, and provide advertisers unique capabilities to engage consumers. From your first day at Roku, you'll make a valuable - and valued - contribution. We're a fast-growing public company where no one is a bystander. We offer you the opportunity to delight millions of TV streamers around the world while gaining meaningful experience across a variety of disciplines.

About the Team: The Development Tools team runs software development tools for the globally distributed software engineering teams at Roku, with the goal of increasing developer productivity. As a centralized team, our goal is to unify how engineers get their jobs done. Solving the challenges brought about by this unification is at the heart of what we do. We level the playing field for engineers of different experience levels by creating development and deployment standards which serve to delight Roku's engineers and drive productivity across Engineering!

About the Role: Are you excited when developers move fast and efficiently? If so, Roku is looking for a candidate to join the Cloud Technology Infrastructure's Development Tooling team. As a member of this team, you will be part of the team responsible for operating and scaling our development tools, as well as adding features and functionality by extending existing tools or creating new tools that support our charter to improve developer productivity. These tools are mission-critical to supporting the internal developer teams and their goals of continuing Roku's success in the streaming industry.

What you will do: As a Sr. Software Engineer on the Developer Tools team, you will automate, monitor, maintain, and support critical CI/CD infrastructure behind services such as Jenkins, GitLab, Artifactory, and Backstage through your advanced knowledge of cloud infrastructure deployments, and use those skills to adapt to the needs of the organization. You will find patterns in problems and propose solutions across areas of the SDLC, and develop CI components and reusable workflows to support a "fast prototyping" development/testing cycle. You will also identify key feature gaps, bugs, scalability issues, and other problems with our tooling while working with our internal customers. You will use that feedback to collaborate with the team internally and with our stakeholders and partners to implement solutions. Additionally, your ability to demonstrate great communication skills and a support-oriented mentality in working with technical and non-technical audiences will allow you to thrive within the team.

We're excited if you have: 5+ years of experience supporting and maintaining CI/CD tools in a DevOps or Release Engineer role. Solid experience supporting multiple SCM, VCS, CI, Continuous Delivery, and Continuous Deployment tools. Strong understanding of version control management styles; Release Engineering experience is a plus. Advanced coding skills with one or more of the following: Bash, Python, Go. Ability to learn existing systems and code bases quickly. Demonstrated skills in scaling systems in the cloud.
AWS experience preferred; GCP or Azure experience is also valuable. Experience administering and deploying with one or more of the following is highly desired: GitLab (including Runners), GitHub, Artifactory, Jenkins. Experience deploying applications and maintaining infrastructure in the cloud with Terraform and Kubernetes. B.S. or M.S. degree in Computer Science, Engineering, or equivalent.

Benefits: Roku is committed to offering a diverse range of benefits as part of our compensation package to support our employees and their families. Our comprehensive benefits include global access to mental health and financial wellness support and resources. Local benefits include statutory and voluntary benefits which may include healthcare (medical, dental, and vision), life, accident, disability, commuter, and retirement options (401(k)/pension). Our employees can take time off work for vacation and other personal reasons to balance their evolving work and life needs. It's important to note that not every benefit is available in all locations or for every role. For details specific to your location, please consult with your recruiter.

The Roku Culture: Roku is a great place for people who want to work in a fast-paced environment where everyone is focused on the company's success rather than their own. We try to surround ourselves with people who are great at their jobs, who are easy to work with, and who keep their egos in check. We appreciate a sense of humor. We believe a fewer number of very talented folks can do more for less cost than a larger number of less talented teams. We're independent thinkers with big ideas who act boldly, move fast and accomplish extraordinary things through collaboration and trust. In short, at Roku you'll be part of a company that's changing how the world watches TV. We have a unique culture that we are proud of. We think of ourselves primarily as problem-solvers, which itself is a two-part idea. We come up with the solution, but the solution isn't real until it is built and delivered to the customer. That penchant for action gives us a pragmatic approach to innovation, one that has served us well since 2002. To learn more about Roku, our global footprint, and how we've grown, visit https://www.weareroku.com/factsheet.

By providing your information, you acknowledge that you want Roku to contact you about job roles, that you have read Roku's Applicant Privacy Notice, and understand that Roku will use your information as described in that notice. If you do not wish to receive any communications from Roku regarding this role or similar roles in the future, you may unsubscribe here at any time.
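To make the day-to-day of a developer-tools role like this concrete, here is a small Python sketch that health-checks a few CI/CD services over HTTP. Treat the hostnames and endpoints as assumptions: Artifactory commonly exposes /api/system/ping and Jenkins exposes /api/json, but the exact URLs and auth requirements depend on the deployment.

```python
# Sketch: poll a handful of CI/CD endpoints and report which ones respond.
# Hostnames are placeholders; verify endpoints and auth against your own deployment.
import requests

SERVICES = {
    "artifactory": "https://artifactory.example.com/artifactory/api/system/ping",
    "jenkins": "https://jenkins.example.com/api/json",
    "gitlab": "https://gitlab.example.com/-/readiness",  # assumption: readiness probe reachable
}

def check_services(timeout=5):
    results = {}
    for name, url in SERVICES.items():
        try:
            resp = requests.get(url, timeout=timeout)
            results[name] = f"HTTP {resp.status_code}"
        except requests.RequestException as exc:
            results[name] = f"unreachable ({exc.__class__.__name__})"
    return results

if __name__ == "__main__":
    for service, status in check_services().items():
        print(f"{service:12s} {status}")
```

A check like this would typically run on a schedule (cron, Jenkins job, or Kubernetes CronJob) and feed an alerting channel rather than printing to stdout.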

Posted 3 days ago

Apply

4.0 - 8.0 years

0 Lacs

karnataka

On-site

As a Senior Software Engineer at Wells Fargo, your role will involve leading moderately complex initiatives and deliverables within technical domain environments. You will contribute to large-scale planning of strategies; design, code, test, debug, and document projects and programs associated with the technology domain, including upgrades and deployments. Your responsibilities will also include reviewing moderately complex technical challenges that require an in-depth evaluation of technologies and procedures, resolving issues, and leading a team to meet existing client needs or potential new clients' needs while leveraging a solid understanding of the function, policies, procedures, or compliance requirements. Additionally, you will collaborate and consult with peers, colleagues, and mid-level managers to resolve technical challenges and achieve goals, lead projects, act as an escalation point, and provide guidance and direction to less experienced staff.

Qualifications Required:
- 4+ years of Software Engineering experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education

Desired Qualifications:
- 5+ years of hands-on experience in building applications using Java, the Spring framework, and Spring Boot
- Strong experience in designing and building microservices/web services
- Front-end development experience with ReactJS or Angular, JavaScript, NodeJS
- Experience working in Capital Markets / Investment Banking
- Knowledge of messaging services like Kafka, Solace, etc.
- Experience working on relational databases like Oracle, MS SQL Server, etc.
- Knowledge of caching solutions like Redis, Ignite, Coherence
- Strong programming skills
- Working experience in a cloud environment like PCF/OCP/Azure/GCP
- Experience with CI/CD technologies such as Jenkins, GitHub, Artifactory, Sonar, etc.
- Experience with Agile development methodologies such as SCRUM
- Excellent organizational, verbal, and written communication skills

Job Expectations:
- Location: Bengaluru / Hyderabad
- Comfortable working in an Agile software delivery environment with a desire to collaborate and work closely with cross-functional teams
- Experience with cloud migration of applications and Azure/GCP certification will be an added advantage
- Evaluation of the latest tools and technologies and onboarding them to improve productivity

In addition, Wells Fargo values Equal Opportunity and supports building strong customer relationships balanced with a strong risk-mitigating and compliance-driven culture. Applications for job openings in Canada are encouraged from all qualified candidates, including women, persons with disabilities, Aboriginal peoples, and visible minorities. Accommodation for applicants with disabilities is available upon request in connection with the recruitment process. If you require a medical accommodation during the application or interview process, you can visit Disability Inclusion at Wells Fargo. Furthermore, Wells Fargo maintains a drug-free workplace and prohibits third-party recordings unless authorized. Wells Fargo also requires you to directly represent your own experiences during the recruiting and hiring process.

Posted 3 days ago

Apply

0 years

0 Lacs

pune, maharashtra, india

On-site

We at Innovecture are hiring a DevSecOps Engineer to expand our team; this role will be based in Pune/Mumbai. You will work across various Innovecture and client teams and apply your technical expertise to some of the most complex and challenging technology problems.

About Innovecture: Founded in 2007 under the leadership of CEO Shreyas Kamat, Innovecture LLC began as a U.S.-based Information Technology and Management Consulting company focusing on technology consulting and services. With international development centers located in Salt Lake City, USA, and Pune, India, Innovecture leverages its Global Agile Delivery Model to effectively deliver client projects within budget, scope, and project deadlines. The primary focus of Innovecture is to provide a unique wealth of expertise and experience to the IT and Management Consulting realm by utilizing various technologies across multiple industry domains. Innovecture uses best-in-class design processes and top-quality talent to ensure the highest quality deliverables. With innovation embedded in its consulting and services approach, Innovecture will continue to deliver outstanding results for its Fortune 500 clients and employees.

Job Description: Architect, design, and implement robust and scalable CI/CD pipelines using Jenkins and ArgoCD for continuous integration, delivery, and deployment. Drive the adoption and implementation of GitOps principles using ArgoCD for managing Kubernetes deployments and infrastructure as code. Develop and maintain automation scripts (e.g., Python, Bash, Shell) for various DevOps tasks, including infrastructure provisioning, configuration management, and deployment. Manage and optimize Kubernetes clusters, including deployment, monitoring, troubleshooting, and resource management (pods, services, deployments, etc.). Work closely with development, QA, and operations teams to ensure smooth and efficient software delivery, troubleshoot issues, and enhance system reliability. Ensure CI/CD pipelines and infrastructure adhere to security standards and compliance requirements. Proven experience in designing, implementing, and managing complex CI/CD pipelines and DevOps practices. In-depth knowledge and hands-on experience with Jenkins, including pipeline creation (Declarative Pipelines), plugin management, and job configuration. Strong understanding and practical experience with ArgoCD for GitOps-based continuous delivery to Kubernetes. Experience with IaC tools like Terraform or Ansible. Proficiency in scripting languages such as Python, Bash, or Shell for automation. Familiarity with major cloud platforms (e.g., AWS, Azure) and their relevant services (e.g., Rancher, EKS, API Gateway, AWS KMS, ConfigMaps, etc.). Experience with monitoring and logging tools (e.g., Fluent Bit, CloudWatch, Prometheus, Grafana, ELK stack). Strong experience with Git and platforms like GitHub, GitLab, or Azure DevOps. Excellent problem-solving, communication, and collaboration skills. Experience configuring and using SAST and DAST tools like SonarQube, Snyk, GitGuardian, Burp Suite. Experience with automated test scripts in pipelines. Artifactory integrations with the JFrog DevOps Platform, including JFrog Xray, for comprehensive security scanning, vulnerability detection, and policy enforcement. Familiarity with GitHub Actions.
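As an illustration of the cluster-monitoring side of a role like this, the sketch below shells out to kubectl to flag pods that are not in a Running or Succeeded state. It assumes kubectl is installed and already pointed at the target cluster, and the namespace name is a placeholder.

```python
# Sketch: report pods in a namespace that are not Running/Succeeded.
# Assumes kubectl is installed and the current context targets the right cluster;
# the namespace name is a placeholder.
import json
import subprocess

HEALTHY_PHASES = {"Running", "Succeeded"}

def unhealthy_pods(namespace="demo-apps"):
    raw = subprocess.run(
        ["kubectl", "get", "pods", "-n", namespace, "-o", "json"],
        check=True,
        capture_output=True,
        text=True,
    ).stdout
    pods = json.loads(raw)["items"]
    return [
        (pod["metadata"]["name"], pod["status"].get("phase", "Unknown"))
        for pod in pods
        if pod["status"].get("phase") not in HEALTHY_PHASES
    ]

if __name__ == "__main__":
    for name, phase in unhealthy_pods():
        print(f"{name}: {phase}")
```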

Posted 3 days ago

Apply

5.0 - 10.0 years

11 - 15 Lacs

bengaluru

Work from Office

Location: Bangalore, IND

About Applied Materials: Applied Materials is the leader in materials engineering solutions used to produce virtually every new chip and advanced display in the world. Our expertise in modifying materials at atomic levels and on an industrial scale enables customers to transform possibilities into reality. At Applied Materials, our innovations make technology possible, shaping the future.

About the Role: Join our dynamic Delivery Pipeline DevOps group at Applied Materials, where we are dedicated to developing the DevOps infrastructure. Our team is responsible for maintaining the reliability, support, and high availability of our operational environment. We enable the Research & Development group to leverage CI/CD tools, infrastructure, and automation to drive innovation and efficiency.

Role Overview: We are looking for a motivated Lead DevOps Engineer to join our team. In this role, you will be part of a DevOps group responsible for handling on-prem initiatives: supporting, planning, executing, and reporting on various infrastructure and code projects. You will play a crucial role in ensuring the smooth operation of our DevOps processes and systems.

Roles and Responsibilities: Develop and maintain the next-generation DevOps infrastructure. Ensure the reliability, support, and high availability of the operational environment. Enable the Research & Development group to use CI/CD tools, infrastructure, and automation. Handle on-prem initiatives, including support, planning, execution, and reporting. Collaborate with cross-functional teams to drive continuous improvement and innovation. Troubleshoot and resolve issues related to DevOps processes and systems. Implement best practices for DevOps and infrastructure management.

Technical Requirements: 5+ years of experience with Gradle & Jenkins. In-depth knowledge of the Gradle build lifecycle, plugins, lazy configuration, lazy properties, and sharing data between projects. Proficiency in Groovy scripting for Gradle build customization. 5+ years of experience with Java, including solid mid-level programming skills and proficiency in Java 21 or later. Experience with Maven publication mechanisms and familiarity with artifact repositories such as Artifactory or Nexus. Hands-on experience with CI/CD systems (e.g., Azure DevOps, Jenkins, GitHub Actions). Basic knowledge of Docker and Kubernetes. Basic proficiency in Bash scripting. Basic proficiency in Python.

Additional Technical Skills: Familiarity with networking, Linux administration, and Windows. Proficiency with virtualized and containerized environments. Exposure to Agile methodologies and the respective toolchain. Familiarity with configuration management tools (e.g., Ansible). Familiarity with system monitoring and centralized logging platforms (e.g., Prometheus, Loki).

Soft Skills: Ability to design and present software architecture. Strong collaboration and communication skills as a team member. Ability to learn new technologies quickly. Self-motivated, adaptable, and able to prioritize in a fast-paced environment.

Qualifications: At least 10 years of experience in the DevOps domain. Bachelor's/Master's degree in Engineering (CS or IT) or equivalent. Proven track record of setting up scalable and secure CI/CD infrastructure from scratch, including integration with version control, automated testing, and deployment workflows. Strong knowledge of CI/CD tools, infrastructure, and automation. Experience with on-prem and cloud environments.
Excellent problem-solving and troubleshooting skills. Why Join Us Be part of a global team driving innovation and efficiency in DevOps. Work on cutting-edge projects and technologies. Collaborate with talented professionals from diverse backgrounds. Opportunity for growth and career development. Impact Impacts a range of customer, operational, project, or service activities within own team and other related teams; works within broad guidelines and policies. Applied Materials is committed to diversity in its workforce, including Equal Employment Opportunity for Minorities, Females, Protected Veterans, and Individuals with Disabilities. Additional Information Time Type: Full time Employee Type: Assignee / Regular Travel: Yes, 10% of the Time Relocation Eligible: Yes
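For a flavor of the CI tooling work described above, here is a small Python sketch that checks the result of a Jenkins job's last build via Jenkins' JSON remote-access API. The server URL, job name, and credentials are assumptions, not details from the posting.

```python
# Sketch: query the last build of a Jenkins job via the JSON API.
# The Jenkins URL, job name, user, and API token below are placeholders;
# standard Jenkins installations expose .../lastBuild/api/json.
import requests

JENKINS_URL = "https://jenkins.example.com"
JOB_NAME = "gradle-library-build"          # hypothetical job
AUTH = ("ci-bot", "api-token-goes-here")   # user + API token (use a secret store in real use)

def last_build_status(job_name=JOB_NAME):
    url = f"{JENKINS_URL}/job/{job_name}/lastBuild/api/json"
    resp = requests.get(url, auth=AUTH, timeout=10)
    resp.raise_for_status()
    data = resp.json()
    return {
        "number": data.get("number"),
        "result": data.get("result"),      # e.g. SUCCESS, FAILURE, or None while building
        "building": data.get("building"),
    }

if __name__ == "__main__":
    print(last_build_status())
```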

Posted 3 days ago

Apply

3.0 - 5.0 years

7 - 11 Lacs

pune

Work from Office

Sr DevOps Engineer. Primary skills: Jenkins, Chef, SaltStack, Habitat, Ansible, Artifactory, Groovy scripts, Bitbucket, Git, Maven, XLR, Apmate, Docker, Kubernetes. Secondary skills: Java/JEE.

Job Description: Extensive experience with various infrastructure automation tools, including Ansible, Chef, SaltStack, and Habitat. Extensive experience building cookbooks/templates/plans to automate provisioning of physical and virtual hosts, as well as operational run activities. Strong background and understanding of various PaaS and container orchestration systems such as Cloud Foundry and Kubernetes. Good experience with deployment automation concepts and tooling, including Jenkins, Artifactory, Groovy scripts, Bitbucket, Git, Maven, XLR, Apmate, Dev Cloud(s) setup, and advanced branching strategies. Proven track record of leading and executing enterprise-level infrastructure automation initiatives with demonstrated business results. Mandatory Skills: DevOps. Experience: 3-5 Years.

Posted 3 days ago

Apply

10.0 - 15.0 years

4 - 8 Lacs

bengaluru

Work from Office

Education Qualification: Bachelor's degree in Computer Science or a related field, or higher, with a minimum of 9 years of relevant experience. Looking for an Infrastructure Architect with over 10 years of experience in the following skills: strong Red Hat Linux OS and Windows OS administration and upgrade experience. Server clean-ups across the estate for unused/old JARs that create violations flagged by Flexera/Tenable scans. DB, JBoss (Red Hat), IIS, TSS fixes, WWF (Windows Workflow Foundation), TCCM (Transfer Case Control Module), TSS (Trusted Security Services). Working experience with Red Hat JWS and JBoss middleware technology. Middleware/DB upgrades such as MQ, Mongo, PostgreSQL. Working experience with database, middleware, and programming language upgrades. Experience with firewall and SAN/NAS storage technologies. Working knowledge of DevOps methodologies, tools, and automation (pipelines) (Nexus, Jenkins, Artifactory) is a valuable addition. Skills: English, JBoss, Linux, Middleware, Network Administration, PostgreSQL, Wintel/Windows Server

Posted 3 days ago

Apply

Exploring Artifactory Jobs in India

The Artifactory job market in India is experiencing significant growth as more companies adopt DevOps practices. Artifactory professionals are in high demand across various industries, including IT, software development, and e-commerce. With the increasing adoption of automation and continuous integration/continuous deployment (CI/CD) pipelines, the need for skilled Artifactory professionals is on the rise.

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Chennai
  5. Mumbai

Average Salary Range

The average salary range for Artifactory professionals in India varies based on experience level. Entry-level positions can expect to earn around INR 4-6 lakhs per annum, while experienced professionals can earn upwards of INR 12-15 lakhs per annum.

Career Path

In the Artifactory job market, a typical career path may involve starting as a Junior DevOps Engineer, then progressing to a DevOps Engineer, followed by roles such as Senior DevOps Engineer, DevOps Architect, and eventually Tech Lead or DevOps Manager.

Related Skills

In addition to proficiency in Artifactory itself, professionals in this field are often expected to have knowledge and experience in the following areas (a brief containerization sketch follows this list):

  • DevOps tools and practices
  • CI/CD pipelines
  • Cloud platforms (e.g., AWS, Azure, GCP)
  • Containerization technologies (e.g., Docker, Kubernetes)
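The sketch below illustrates the containerization piece: building a Docker image and pushing it to a private registry, for which an Artifactory-hosted Docker repository is a common choice. The registry hostname, repository, and tag are placeholders, and a prior `docker login` against the registry is assumed.

```python
# Sketch: build a Docker image and push it to a private registry.
# Registry host, repo, and tag are placeholders; run `docker login <registry>` first.
import subprocess

REGISTRY = "docker.example-artifactory.com"   # hypothetical Artifactory Docker registry
IMAGE = f"{REGISTRY}/demo-team/sample-service:1.0.0"

def build_and_push(context_dir="."):
    # Build the image from the local Dockerfile.
    subprocess.run(["docker", "build", "-t", IMAGE, context_dir], check=True)
    # Push it to the registry (requires prior authentication).
    subprocess.run(["docker", "push", IMAGE], check=True)

if __name__ == "__main__":
    build_and_push()
```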

Interview Questions

  • What is Artifactory and how does it differ from other repository managers? (basic)
  • Can you explain the difference between a local repository and a remote repository in Artifactory? (basic)
  • How do you manage dependencies in Artifactory? (basic)
  • What is the purpose of checksums in Artifactory and how do they work? (basic)
  • How do you ensure security in Artifactory repositories? (medium)
  • Can you explain the difference between Artifactory Pro and Artifactory OSS? (medium)
  • How do you troubleshoot Artifactory performance issues? (medium)
  • What are some best practices for optimizing storage usage in Artifactory? (medium)
  • How do you integrate Artifactory with CI/CD pipelines? (medium) (see the upload sketch after this list)
  • Can you explain the concept of repository replication in Artifactory? (medium)
  • How do you handle artifact versioning in Artifactory? (advanced)
  • What are the key differences between Artifactory and Nexus repository managers? (advanced)
  • How do you configure high availability for Artifactory instances? (advanced)
  • Can you explain the role of Xray in Artifactory and how it enhances security? (advanced)
  • How do you manage access control and permissions in Artifactory? (advanced)
  • What are some common challenges faced when scaling Artifactory for large organizations? (advanced)
  • How do you monitor and analyze usage patterns in Artifactory repositories? (advanced)
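
Several of the questions above (checksums, CI/CD integration) can be demonstrated with a small script. The sketch below computes a SHA-256 checksum and deploys a file to an Artifactory repository over the REST API. The base URL, repository key, target path, and credentials are placeholders, and the checksum header shown is the one Artifactory conventionally accepts on deploy; confirm the details against your own instance's documentation.

```python
# Sketch: upload a build artifact to Artifactory with a SHA-256 checksum header.
# Base URL, repository key, target path, and credentials are placeholders.
import hashlib
import requests

ARTIFACTORY_URL = "https://artifactory.example.com/artifactory"
REPO_KEY = "libs-release-local"                  # hypothetical repository
AUTH = ("ci-user", "api-token-goes-here")        # use a token/secret store in real pipelines

def sha256_of(path):
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def deploy(local_path, target_path):
    checksum = sha256_of(local_path)
    url = f"{ARTIFACTORY_URL}/{REPO_KEY}/{target_path}"
    with open(local_path, "rb") as fh:
        resp = requests.put(
            url,
            data=fh,
            auth=AUTH,
            headers={"X-Checksum-Sha256": checksum},
            timeout=60,
        )
    resp.raise_for_status()
    return resp.json()  # Artifactory returns deployment metadata as JSON

if __name__ == "__main__":
    print(deploy("build/libs/app-1.0.0.jar", "com/example/app/1.0.0/app-1.0.0.jar"))
```

In a Jenkins or GitLab CI pipeline, a step like this (or the equivalent JFrog CLI command) would typically run right after the build stage.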

Closing Remark

As you explore opportunities in the Artifactory job market in India, make sure to brush up on your skills, prepare thoroughly for interviews, and showcase your expertise confidently. With the right skills and mindset, you can build a successful career in this growing field. Good luck!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies