
1120 Artifactory Jobs - Page 24

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 - 9.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

The Opportunity: Publicis Sapient is looking for a Cloud & DevOps Engineer to join our team of bright thinkers and enablers. You will use your problem-solving skills, craft, and creativity to design and develop infrastructure interfaces for complex business applications, and contribute ideas for improving Cloud and DevOps practices, delivering innovation through automation. We are on a mission to transform the world, and you will be instrumental in shaping how we do it with your ideas, thoughts, and solutions.

Your Impact and Responsibilities:
- Combine your technical expertise and problem-solving passion to work closely with clients, turning complex ideas into end-to-end solutions that transform our clients’ business.
- Lead and support the engineering side of digital business transformations, with cloud, multi-cloud, security, observability, and DevOps as technology enablers.
- Build immutable infrastructure and maintain highly scalable, secure, and reliable cloud infrastructure that is optimized for performance and cost and compliant with security standards to prevent security breaches.
- Enable our customers to accelerate their software development lifecycle and reduce time-to-market for their products and services.
Your Skills & Experience:
- 5 to 9 years of experience in Cloud & DevOps with a full-time Bachelor’s/Master’s degree (Science or Engineering preferred)
- Expertise in the DevOps and cloud tools below:
- Azure (Virtual Machines, Azure Active Directory, Virtual Network, Blob Storage, Functions, Database, Azure Service Bus, Azure Monitor)
- Configuring and monitoring DNS, app servers, load balancers, and firewalls for high-volume traffic
- Extensive experience designing, implementing, and maintaining infrastructure as code, preferably with Terraform, or with CloudFormation/ARM Templates/Deployment Manager/Pulumi
- Experience managing container infrastructure, on-prem and managed (e.g., AWS ECS, EKS, or GKE): design, implement, and upgrade container infrastructure such as Kubernetes clusters and node pools
- Create and maintain deployment manifests for microservices using Helm
- Use the Istio service mesh to create gateways, virtual services, traffic routing, and fault injection
- Troubleshoot and resolve container infrastructure and deployment issues
- Continuous Integration & Continuous Deployment: develop and maintain CI/CD pipelines for software delivery using Git and tools such as Jenkins, GitLab, CircleCI, Bamboo, and Travis CI
- Automate build, test, and deployment processes to ensure efficient release cycles and enforce software development best practices, e.g., quality gates and vulnerability scans
- Automate build and deployment processes using Groovy, Go, Python, Shell, or PowerShell
- Implement DevSecOps practices and tools to integrate security into the software development and deployment lifecycle
- Manage artifact repositories such as Nexus and JFrog Artifactory for version control and release management
- Design, implement, and maintain observability, monitoring, logging, and alerting using the tools below:
  - Observability: Jaeger, Kiali, CloudTrail, OpenTelemetry, Dynatrace
  - Logging: Elastic Stack (Elasticsearch, Logstash, Kibana), Fluentd, Splunk
  - Monitoring: Prometheus, Grafana, Datadog, New Relic
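The release-management concern named above (picking the newest build out of an artifact repository such as Nexus or Artifactory) comes down to ordering versions semantically rather than lexically. A minimal sketch with hypothetical version tags, not Artifactory's actual API:

```python
# Hypothetical sketch: choose the newest release from an artifact
# listing by semantic-version order, not string order (string order
# would rank "1.10.0" below "1.9.0").

def parse_version(tag: str) -> tuple:
    """Split a dotted version tag like '1.10.3' into comparable integers."""
    return tuple(int(part) for part in tag.split("."))

def latest_release(tags: list) -> str:
    """Return the highest semantic version among the listed tags."""
    return max(tags, key=parse_version)

tags = ["1.9.0", "1.10.0", "1.2.7"]
print(latest_release(tags))  # -> 1.10.0
```

A plain string `max()` over the same list would return "1.9.0", which is exactly the bug the numeric key avoids.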

Posted 1 month ago

Apply

5.0 - 8.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Who we are
Sun Life is a leading financial services company with a history of 150+ years that helps our clients achieve lifetime financial security and live healthier lives. We serve millions in Canada, the U.S., Asia, the U.K., and other parts of the world. We have a network of Sun Life advisors, third-party partners, and other distributors. Through them, we’re helping set our clients free to live their lives their way, from now through retirement. We’re working hard to support their wellness and health management goals, too. That way, they can enjoy what matters most to them. And that’s anything from running a marathon to helping their grandchildren learn to ride a bike.

To do this, we offer a broad range of protection and wealth products and services to individuals, businesses, and institutions, including:
- Insurance: life, health, wellness, disability, critical illness, stop-loss, and long-term care insurance
- Investments: mutual funds, segregated funds, annuities, and guaranteed investment products
- Advice: financial planning and retirement planning services
- Asset management: pooled funds, institutional portfolios, and pension funds

With innovative technology, a strong distribution network, and long-standing relationships with some of the world’s largest employers, we today provide financial security to millions of people globally, backed by strong insurance, asset management, investments, and financial advice portfolios. At Sun Life, our asset management business draws on the talent and experience of professionals from around the globe.

Sun Life Global Solutions (SLGS)
Established in the Philippines in 1991 and in India in 2006, Sun Life Global Solutions (formerly Asia Service Centres), a microcosm of Sun Life, is poised to harness the regions’ potential in a significant way - from India and the Philippines to the world.
We are architecting and executing a BOLDER vision: being a Digital and Innovation Hub, shaping the business, driving transformation and superior client experience by providing expert technology, business, and knowledge services and advanced solutions. We help our clients achieve lifetime financial security and live healthier lives - our core purpose and mission. Drawing on our collaborative and inclusive culture, we are recognized as a ‘Great Place to Work’, among the ‘Top 100 Best Places to Work for Women’, and among the ‘Top 11 Global Business Services Companies’ across India and the Philippines.

The technology function at Sun Life Global Solutions is geared towards growing our existing business, deepening our client understanding, managing new-age technology systems, and demonstrating thought leadership. We are committed to building greater domain expertise and engineering ability, delivering end-to-end solutions for our clients, and taking a lead in intelligent automation. Tech services at Sun Life Global Solutions have evolved in areas such as application development and management, support, testing, digital, data engineering and analytics, infrastructure services, and project management. We are constantly expanding our strength in information technology and are looking for fresh talent who can bring ideas and values aligned with our digital strategy.

Our Client Impact strategy is motivated by the need to create an inclusive culture, empowered by highly engaged people. We are entering a new world that focuses on doing purpose-driven work - the kind that fills your day with excitement and determination, because when you love what you do, it never feels like work. We want to create an environment where you feel empowered to act and are surrounded by people who challenge you, support you, and inspire you to become the best version of yourself. As an employer, we not only want to attract top talent, but we want you to have the best Sun Life experience.
We strive to Shine Together, Make Life Brighter & Shape the Future!

Role & responsibilities:
- Effectively utilize multiple software development and deployment methodologies, e.g., TDD and DDD, along with secure coding practices, development best practices, and Agile/DevOps/DevSecOps principles
- Identify continuous improvement opportunities in existing solutions and drive implementation
- Act as a code reviewer and apply best practices for optimal design solutions
- Act as the first point of contact and resolve technical issues/impediments for SDE I & II
- Bring a foundation in basic engineering, system administration/provisioning, software development (programming), support, testing, and system infrastructure provisioning

Preferred candidate profile:
- A Bachelor’s or Master’s degree in Computer Science or a related field
- A minimum of 5 to 8 years of working experience with DevOps consulting and assessment, implementing CI/CD pipelines, and handling end-to-end DevOps activities in a project
- Good experience with CI servers like Jenkins, Artifactory, SonarQube, and others, and their application to create CI/CD pipelines
- Good knowledge of scripting languages like Groovy, Bash, and PowerShell
- Proficiency in writing Ansible/Chef playbooks and using Ansible Tower
- Experience with at least one private cloud such as VMware or OpenStack
- Hands-on experience and expertise with containerization tools like Docker
- Hands-on experience and expertise with container orchestration tools like Kubernetes and its ecosystem, e.g., Rancher, OpenShift
- Good knowledge of Terraform
- Hands-on experience with Automated Environment Provisioning (AEP)
- Good knowledge of build tools like Maven, Ant, and Gradle
- Good knowledge of WebSphere Liberty servers
- Extensive knowledge of microservices and their pipelines
- Proficiency with Java-based applications using Maven/Gradle
- Good knowledge of cloud platforms, mainly AWS (EC2, VPC, Route 53, load balancers, etc.)
- Good to have: knowledge of Node/React-based CI/CD pipelines
- Experience designing automated CI/CD pipelines for new software projects, from project kickoff to production deployment and maintenance
- Experience with and a deep understanding of the DevSecOps ecosystem
- Experience with Burp Suite and TFS
- Experience with Agile, Jira, and ServiceNow
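The CI/CD pipeline work this posting describes usually hinges on simple pass/fail gate decisions over build metrics. A minimal sketch, assuming a hypothetical report dictionary rather than any real SonarQube or Jenkins API:

```python
# Illustrative quality-gate check of the kind a CI pipeline stage might
# run before promoting a build. The report fields and thresholds here
# are hypothetical, not a real tool's schema.

def quality_gate(report: dict, min_coverage: float = 80.0,
                 max_critical_vulns: int = 0) -> bool:
    """Return True only when the build meets both gate conditions."""
    coverage_ok = report.get("coverage", 0.0) >= min_coverage
    vulns_ok = report.get("critical_vulns", 0) <= max_critical_vulns
    return coverage_ok and vulns_ok

report = {"coverage": 84.5, "critical_vulns": 0}
print("PASS" if quality_gate(report) else "FAIL")  # -> PASS
```

In a real pipeline the same boolean would decide whether the stage exits non-zero and stops the promotion.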

Posted 1 month ago

Apply

7.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Marsh McLennan is seeking candidates for the following position based in the Mumbai/Pune office.

Associate Director / Director - Applications Development (Grade F/G)

We will count on you to:
- Own the application design as Lead Applications Developer, implement it with the development team, and ensure the design is properly incorporated into the deliverables
- Act as technical design authority, collaborating with the solution architect to implement the solution
- Provide application design feedback to the project team on an ongoing basis
- Review detailed design and code deliverables and ensure quality
- Assist the enterprise architects in defining, implementing, and overseeing technical standards, policies, and tools
- Provide technical oversight to development teams; work with technical staff to understand problems with software and develop specifications to resolve them
- Be responsible for effective application development, including integration with other Marsh systems
- Be responsible for the operational effectiveness of the environment and adhere to the Marsh Process Framework
- Proactively contribute to our system architecture, stack design, toolset, and Agile and DevOps approach
- Collaborate closely with Agile dev team members, Product Owners, and business users to deliver business value
- Automate build, test (unit, functional, security vulnerability, and performance), and deployment of code as part of the CI/CD pipeline, working closely with the Platform Engineer
- Mentor developers on technical topics and on Agile/DevOps adoption
- Estimate development effort for new initiatives; help define and build development teams by working with stakeholders

What you need to have:
- Min. 7 years of business analysis experience
- BE/BTech/MCA/MTech
- Active team player with the ability to work across geographies
- Openness to new challenges and willingness to undertake additional responsibilities
What makes you stand out:
- A degree in Computer Science or a related field, or relevant experience
- 12+ years of development experience in applications/product development using Java and/or the MEAN stack
- Experience or working knowledge of: JavaScript; Java (Spring, Hibernate, Maven, Gradle, REST APIs); JSON; XML; web platforms such as Node, React, and the MERN stack
- Experience in microservices and API development using an API gateway (Apigee/Hybrid)
- Agile: TDD (JUnit, Mockito, Jasmine, Karma), BDD (Cucumber), pair programming, Scrum/Kanban
- DevOps: Bitbucket/GitHub, Jira, Confluence, CI/CD pipelines (Jenkins), Selenium, Artifactory, Docker/Kubernetes, Datadog/Splunk, JMeter, WhiteHat
- Databases: MongoDB, PostgreSQL, Oracle
- Cloud: AWS/Azure and private cloud
- OS: Linux, VM architectures
- Self-starter; insurance industry experience is desirable but not required

Why join our team: We help you be your best through professional development opportunities, interesting work and supportive leaders. We foster a vibrant and inclusive culture where you can work with talented colleagues to create new solutions and have impact for colleagues, clients and communities. Our scale enables us to provide a range of career opportunities, as well as benefits and rewards to enhance your well-being.

Marsh McLennan (NYSE: MMC) is the world’s leading professional services firm in the areas of risk, strategy and people. The Company’s more than 85,000 colleagues advise clients in over 130 countries. With annual revenue of $23 billion, Marsh McLennan helps clients navigate an increasingly dynamic and complex environment through four market-leading businesses. Marsh provides data-driven risk advisory services and insurance solutions to commercial and consumer clients. Guy Carpenter develops advanced risk, reinsurance and capital strategies that help clients grow profitably and pursue emerging opportunities.
Mercer delivers advice and technology-driven solutions that help organizations redefine the world of work, reshape retirement and investment outcomes, and unlock health and well-being for a changing workforce. Oliver Wyman serves as a critical strategic, economic and brand advisor to private sector and governmental clients. For more information, visit marshmclennan.com, or follow us on LinkedIn and X.

Marsh McLennan is committed to embracing a diverse, inclusive and flexible work environment. We aim to attract and retain the best people and embrace diversity of age, background, caste, disability, ethnic origin, family duties, gender orientation or expression, gender reassignment, marital status, nationality, parental status, personal or social status, political affiliation, race, religion and beliefs, sex/gender, sexual orientation or expression, skin color, or any other characteristic protected by applicable law.

Marsh McLennan is committed to hybrid work, which includes the flexibility of working remotely and the collaboration, connections and professional development benefits of working together in the office. All Marsh McLennan colleagues are expected to be in their local office or working onsite with clients at least three days per week. Office-based teams will identify at least one “anchor day” per week on which their full team will be together in person.

Posted 1 month ago

Apply

8.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Position: DevSecOps Engineer
Job Type: Permanent
Location: Greater Noida
Experience: 8 to 12+ years

Key Responsibilities:
- Participate across the full software development lifecycle, from requirements gathering to production deployment, triaging, and proactive monitoring
- Define and enforce standards for continuous, seamless backend and frontend build and deployment
- Design and implement reusable CI/CD pipelines and infrastructure frameworks
- Develop specifications, write code, and own unit, performance, and functional testing
- Lead debugging, rollout, and production launch activities
- Implement robust monitoring and incident response solutions for cloud infrastructure
- Conduct regular security audits and vulnerability assessments

Technical Expertise:
- Strong hands-on experience with Azure cloud services (certification preferred)
- Proficiency in DevOps tools: Ansible, Jenkins, Artifactory, Jira, Git/version control systems
- Deep understanding of infrastructure provisioning and optimization
- Solid grasp of networking concepts and configurations

Posted 1 month ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Position Overview
Job Title: Senior Data Engineer
Corporate Title: AVP
Location: Pune, India

Role Description
Technology Management is responsible for improving the technological aspects of operations to reduce infrastructure costs, improve functional performance, and help deliver divisional business goals. To achieve this, the organization needs to be engineering-focused, and we are looking for technologists who demonstrate a passion for building the right thing in the right way. We are looking for an experienced SQL developer to help build our data integration layer utilizing the latest tools and technologies. In this critical role you will become part of a motivated and talented team operating within a creative environment. You should have a passion for writing and designing complex data models, stored procedures, and tuned queries that push the boundaries of what is possible and exists within the bank today.

What We’ll Offer You
As part of our flexible scheme, here are just some of the benefits that you’ll enjoy:
- Best-in-class leave policy
- Gender-neutral parental leave
- 100% reimbursement under the childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Employee Assistance Program for you and your family members
- Comprehensive hospitalization insurance for you and your dependents
- Accident and term life insurance
- Complimentary health screening for those aged 35 and above

Your Key Responsibilities
- As part of a global team, forge strong relationships with geographically diverse teams, colleagues, and businesses to formulate and execute technology strategy
- Produce code-based assets within the context of agile delivery (helping define and meet epics, stories, and acceptance criteria)
- Be responsible for the design, development, and QA of those assets and outputs
- Perform review of component integration testing, unit testing, and code review
- Write high-performance, highly resilient queries in Oracle PL/SQL and Microsoft SQL Server T-SQL
- Work with agile/continuous-integration/test technologies such as Git/Stash, Jenkins, and Artifactory
- Work in a fast-paced, high-energy team environment
- Develop scalable applications using ETL technology such as StreamSets, Pentaho, or Informatica
- Design and develop dashboards for business and reporting using the preferred BI tools, e.g., Power BI or QlikView
- Demonstrate a thorough understanding of relational databases and knowledge of different data models
- Be well versed in SQL and able to understand and debug database objects like stored procedures and functions
- Manage a data modelling tool like PowerDesigner or MySQL Workbench
- Follow Agile (Scrum) based delivery practices, test-driven development, test automation, and continuous delivery
- Show a passion for learning new technologies

Your Skills and Experience
- Education/certification: Bachelor’s degree from an accredited college or university with a concentration in Science, Engineering, or an IT-related discipline (or equivalent)
- Fluent English (written/verbal)
- Excellent communication and influencing skills
- Ability to work in a fast-paced environment
- Passion for sharing knowledge and best practice
- Ability to work in virtual teams and in matrixed organizations

How We’ll Support You
- Training and development to help you excel in your career
- Coaching and support from experts in your team
- A culture of continuous learning to aid progression
- A range of flexible benefits that you can tailor to suit your needs

About Us and Our Teams
Please visit our company website for further information: https://www.db.com/company/company.htm
We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative, and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
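As a toy illustration of the query discipline expected when writing and debugging database objects, here is a parameterized query in Python's stdlib `sqlite3` (SQLite stands in for the Oracle and SQL Server engines the posting actually names; table and data are invented for the example):

```python
# Sketch: parameter binding keeps values out of the SQL text entirely,
# which is the habit that prevents injection and keeps query plans
# cacheable. SQLite here is only a stand-in engine for illustration.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE trades (id INTEGER PRIMARY KEY, symbol TEXT, qty INTEGER)"
)
conn.executemany(
    "INSERT INTO trades (symbol, qty) VALUES (?, ?)",
    [("DB", 100), ("MMC", 50), ("DB", 25)],
)

# The '?' placeholder is filled by the driver, not by string formatting.
total = conn.execute(
    "SELECT SUM(qty) FROM trades WHERE symbol = ?", ("DB",)
).fetchone()[0]
print(total)  # -> 125
```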

Posted 1 month ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Position Overview
Job Title: Java Microservices Engineer
Location: Pune, India

Role Description
Our agile development team is looking for an experienced Java-based senior developer to help build solution designs, steer the team technically, work with architects, and build a data integration layer utilizing the latest tools and technologies. In this critical role you will become part of a motivated and talented team operating within a creative environment. You should have a passion for writing and designing cutting-edge server-side applications that push the boundaries of what is possible and exists within the bank today.

What We’ll Offer You
As part of our flexible scheme, here are just some of the benefits that you’ll enjoy:
- Best-in-class leave policy
- Gender-neutral parental leave
- 100% reimbursement under the childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Employee Assistance Program for you and your family members
- Comprehensive hospitalization insurance for you and your dependents
- Accident and term life insurance
- Complimentary health screening for those aged 35 and above

Your Key Responsibilities
- Propose end-to-end technical solutions for complex business problems and create solution design documents
- Liaise with solution architects and data architects
- Groom and lead junior developers
- Produce code-based assets within the context of agile delivery (helping define and meet epics, stories, and acceptance criteria)
- Be responsible for the design, development, and QA of those assets and outputs
- Ensure compliance with coding guidelines and standards
- Perform review of component integration testing, unit testing, and code review
- Write high-performance, highly resilient Java-based middle-tier microservices (using the Spring Cloud framework)
- Experience with server-side development, data processing, networks, and protocols
- Experience working with agile/continuous-integration/test technologies such as Git/Stash, Jenkins, Artifactory, Appium, Selenium, and SonarQube
- Experience with data modelling and SQL
- Ability to work in a fast-paced, high-energy team environment
- Experience with SOA (SOAP/REST/OData)
- Experience developing scalable applications using Kafka
- Good understanding of relational databases and knowledge of different data models
- Well versed in SQL and able to understand and debug database objects like stored procedures and functions
- API-based services (Java, RESTful services, API management, microservices, using open-source libraries, frameworks, and platforms), and NFR engineering practices in agile delivery
- Agile (Scrum) based delivery practices, test-driven development, test automation, and continuous delivery
- Proven experience developing high-performance, highly resilient Java-based middle-tier microservices (use of the Spring Cloud framework preferable)
- Passion for learning new technologies

Your Skills and Experience
- Excellent communication and influencing skills; open-minded
- Ability to work in a fast-paced environment
- Passion for sharing knowledge and best practice
- Ability to work in virtual teams and in matrixed organisations
- Proven project management and people management skills
- Fluent English (written/verbal)
- Education/certification: Bachelor’s degree from an accredited college or university with a concentration in Science, Engineering, or an IT-related discipline (or equivalent)

How We’ll Support You
- Training and development to help you excel in your career
- Coaching and support from experts in your team
- A culture of continuous learning to aid progression
- A range of flexible benefits that you can tailor to suit your needs

About Us and Our Teams
Please visit our company website for further information: https://www.db.com/company/company.htm
We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative, and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
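The ordering guarantee behind "scalable applications using Kafka" rests on keyed partitioning: records with the same key always land on the same partition. A simplified sketch of the idea (real Kafka clients hash keys with murmur2; SHA-1 is used here purely for illustration):

```python
# Sketch of Kafka-style key-to-partition assignment. Hashing the record
# key and taking it modulo the partition count routes every record with
# the same key to the same partition, preserving per-key ordering.

import hashlib

def partition_for(key: str, num_partitions: int) -> int:
    """Map a record key to a stable partition index in [0, num_partitions)."""
    digest = hashlib.sha1(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# All events for one account map to one partition:
p1 = partition_for("account-42", 12)
p2 = partition_for("account-42", 12)
assert p1 == p2
```

The same reasoning explains why repartitioning a topic (changing `num_partitions`) reshuffles keys and temporarily breaks the per-key ordering guarantee.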

Posted 1 month ago

Apply

7.0 - 12.0 years

15 - 27 Lacs

Indore, Pune

Work from Office

Greetings of the day! We have a job opening for a DevOps Lead (SRE) for one of our clients. If your profile matches the requirement, please share an updated resume.

Lead DevOps Engineer
Note: only immediate joiners. We are looking for a resource with hands-on experience rather than purely lead work.
- Should have 7-9 years of hands-on experience with the technology mentioned in the JD (specifically Google Cloud and GitHub Actions)
- Should be flexible in working hours; as this is an American project, the client may ask you to work until 10-11 PM (IST)

Detailed JD - Senior DevOps Engineer
Location: Indore or Pune, work from office.

Job Summary: We are seeking an experienced and enthusiastic Senior DevOps Engineer with 7+ years of dedicated experience to join our growing team. In this pivotal role, you will be instrumental in designing, implementing, and maintaining our continuous integration and continuous delivery (CI/CD) pipelines and infrastructure automation. You will champion DevOps best practices, optimize our cloud-native environments, and ensure the reliability, scalability, and security of our systems. This role demands deep technical expertise, an initiative-taking mindset, and a strong commitment to operational excellence.

Key Responsibilities:
- CI/CD Pipeline Management: Design, build, and maintain robust, automated CI/CD pipelines using GitHub Actions to ensure efficient and reliable software delivery from code commit to production deployment.
- Infrastructure Automation: Develop and manage infrastructure as code (IaC) using shell scripting and the gcloud CLI to provision, configure, and manage resources within Google Cloud Platform (GCP).
- Deployment Orchestration: Implement and optimize deployment strategies, leveraging GitHub for version control of deployment scripts and configurations, ensuring repeatable and consistent releases.
- Containerization & Orchestration: Work extensively with Docker for containerizing applications, including building, optimizing, and managing Docker images.
- Artifact Management: Administer and optimize artifact repositories, specifically Artifactory in GCP, to manage dependencies and build artifacts efficiently.
- System Reliability & Performance: Monitor, troubleshoot, and optimize the performance, scalability, and reliability of our cloud infrastructure and applications.
- Collaboration & Documentation: Work closely with development, QA, and operations teams. Utilize Jira for task tracking and Confluence for comprehensive documentation of systems, processes, and best practices.
- Security & Compliance: Implement and enforce security best practices within the CI/CD pipelines and cloud infrastructure, ensuring compliance with relevant standards.
- Mentorship & Leadership: Provide technical guidance and mentorship to junior engineers, fostering a culture of learning and continuous improvement within the team.
- Incident Response: Participate in on-call rotations and provide rapid response to production incidents, perform root cause analysis, and implement preventative measures.

Required Skills & Experience (Mandatory - 7+ Years):
- Proven experience (7+ years) in a DevOps, Site Reliability Engineering (SRE), or similar role.
- Expert-level proficiency with Git and GitHub, including advanced branching strategies, pull requests, and code reviews.
- Experience designing and implementing CI/CD pipelines using GitHub Actions.
- Deep expertise in Google Cloud Platform (GCP), including compute, networking, storage, and identity services.
- Advanced proficiency in shell scripting for automation, system administration, and deployment tasks.
- Strong firsthand experience with Docker for containerization, image optimization, and container lifecycle management.
- Solid understanding and practical experience with Artifactory (or similar artifact management tools) in a cloud environment.
- Expertise in using the gcloud CLI for automating GCP resource management and deployments.
- Demonstrable experience with continuous integration (CI) principles and practices.
- Proficiency with Jira for agile project management and Confluence for knowledge sharing.
- Strong understanding of networking concepts, security best practices, and system monitoring.
- Excellent critical-thinking skills and an initiative-taking approach to identifying and resolving issues.

Nice-to-Have Skills:
- Experience with Kubernetes (GKE) for container orchestration.
- Familiarity with other Infrastructure as Code (IaC) tools like Terraform.
- Experience with monitoring and logging tools such as Prometheus, Grafana, or GCP's Cloud Monitoring/Logging.
- Proficiency in other scripting or programming languages (e.g., Python, Go) for automation and tool development.
- Experience with database management in a cloud environment (e.g., Cloud SQL, Firestore).
- Knowledge of DevSecOps principles and tools for integrating security into the CI/CD pipeline.
- GCP Professional Cloud DevOps Engineer or other relevant GCP certifications.
- Experience with large-scale distributed systems and microservices architectures.
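Incident response and reliable deployment automation of the kind described above often lean on retry with exponential backoff around flaky steps. A minimal, hypothetical sketch of the pattern (the function names are illustrative and not part of any GCP SDK or GitHub Actions API):

```python
# Sketch: retry a transient-failure-prone operation with exponentially
# growing, capped delays. Illustrative only; production versions usually
# add jitter and retry only on specific exception types.

import time

def retry_with_backoff(operation, max_attempts=5, base_delay=1.0, cap=30.0):
    """Run `operation`, retrying on failure with delays 1, 2, 4, ... (capped)."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the failure
            time.sleep(min(cap, base_delay * (2 ** attempt)))

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "deployed"

print(retry_with_backoff(flaky, base_delay=0.01))  # -> deployed
```

Capping the delay keeps the worst-case wait bounded; adding random jitter on top would prevent many retrying clients from hammering a recovering service in lockstep.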

Posted 1 month ago

Apply

6.0 - 9.0 years

5 - 15 Lacs

Chennai

Hybrid

Key Responsibilities
1. Provide technical guidance to a team of developers, enhancing their technical capabilities and increasing productivity.
2. Conduct comprehensive code reviews; establish and oversee quality assurance processes, performance optimization, and the implementation of best practices and coding standards to ensure successful delivery of complex projects.
3. Ensure process compliance in the assigned module, and participate in technical discussions/reviews as a technical consultant for feasibility studies (technical alternatives, best packages, supporting architecture best practices, technical risks, breakdown into components, estimations).
4. Collaborate with stakeholders to define project scope, objectives, and deliverables, and accordingly prepare and submit status reports to minimize exposure and close escalations.

Posted 1 month ago

Apply

9.0 - 11.0 years

37 - 40 Lacs

Ahmedabad, Bengaluru, Mumbai (All Areas)

Work from Office

Dear Candidate,

We are seeking a DevOps Engineer to streamline our development and deployment processes. Ideal for professionals passionate about automation and infrastructure.

Key Responsibilities:
- Implement and manage CI/CD pipelines
- Monitor system performance and troubleshoot issues
- Automate infrastructure provisioning and configuration
- Ensure system security and compliance

Required Skills & Qualifications:
- Experience with tools like Jenkins, Docker, and Kubernetes
- Proficiency in scripting languages like Bash or Python
- Familiarity with cloud platforms (AWS, Azure, or GCP)
- Bonus: knowledge of Infrastructure as Code (Terraform, Ansible)

Soft Skills:
- Strong troubleshooting and problem-solving skills
- Ability to work independently and in a team
- Excellent communication and documentation skills

Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you.

Kandi Srinivasa Reddy
Delivery Manager
Integra Technologies

Posted 1 month ago

Apply

0 years

0 Lacs

Kochi, Kerala, India

On-site

About the role In this pivotal role, you will be responsible for providing technical leadership, guidance, and hands-on coding expertise to drive the success of our product engineering teams. As a Senior Engineering Technical Lead, you will be accountable for shaping the technical design and direction of our products, mentoring team members, and ensuring the successful delivery of high-quality software solutions. Key to this role is taking accountability for the non-functional requirements of the product under development, including scalability, security, performance, and usability. Distinct from a solution or enterprise architecture role, this role is accountable for delivering working code as the starting point for production-grade code, ensuring the technologies in use are appropriate for the skills of the developers - working with line managers to provide clear development and training plans so engineers can effectively engage with those technologies - and being the ‘single throat to choke’ for adherence to coding and engineering standards. Skills and experience required: Technical Expertise: Strong hands-on experience in Java and Spring core technologies such as Spring Boot, Spring Security, Hibernate, REST templates, and microservices. Experience with UI technologies such as HTML, CSS, and JavaScript, and frameworks such as React Native, Angular, and React JS. Experience working with data streaming tools like Kafka. Experience in designing and implementing integration solutions, and an understanding of how to manage and integrate data across different systems. CI/CD experience, preferably with Azure DevOps, and TDD. Experience working with a SQL RDBMS. Experience building microservices and micro-frontends. Experience in the release and artifact management lifecycle. Experience using common package management and build tooling such as Yarn, webpack, and Gradle.
Experience writing unit test cases using JUnit or Mockito. Experience developing cross-platform solutions for native and web platforms. Experience using Docker and Kubernetes or similar containerization tools. Experience implementing performance and security improvements. Strong knowledge of design principles. Competency using Azure DevOps (ADO). Proficiency in modern software development practices and methodologies. • Leadership: Proven experience leading and mentoring software development teams, with the ability to inspire, guide, and support team members to achieve product development goals. • Architectural Design: Demonstrated expertise in designing and implementing scalable and maintainable software architectures. Experience with microservices architecture is a plus. • Problem Solving: Strong analytical and problem-solving skills with the ability to make sound technical decisions. • Communication: Excellent verbal and written communication skills, with the ability to effectively convey complex technical concepts to both technical and non-technical stakeholders. • Collaboration: A collaborative mindset with the ability to work effectively in cross-functional teams. • Agile Methodologies: Experience working in an Agile/Scrum development environment. • Continuous Learning: A commitment to continuous learning and staying updated on industry trends and technologies.

Posted 1 month ago

Apply

0 years

0 Lacs

Ghaziabad, Uttar Pradesh, India

On-site

Our Context To support its rapid growth and continuously improve the quality of its software and processes, NeoXam is seeking a DevOps Engineer (M/F) to join its R&D team. As part of a cross-functional R&D team of around ten people, you will be responsible for implementing software production and quality assurance methods and tools. Within the R&D department, the DevOps Build team provides development teams with tools and methods on a 100% virtualized infrastructure to build, test, deliver, and manage the lifecycle of NeoXam products in a standardized, automated way. This automated system, known as the Software Factory, uses tools such as BitBucket, Bamboo, Sonar, VeraCode, and Artifactory. RESPONSIBILITIES Your main responsibilities will include: Setting up a replica of our Software Factory to test tool updates. Managing version upgrades for the Software Factory tools. Proposing improvements to the development, acceptance, and delivery processes, given your holistic view of the production chain. Training R&D teams on new tools and processes you introduce. Assisting in setting up deployment tools for our solutions on the Cloud (AWS, Azure, etc.). PROFILE Proficiency in configuration management languages and tools such as Ansible, SaltStack, and Terraform. Prior experience in setting up and managing a continuous deployment pipeline (e.g., Atlassian Bamboo, AWS CodePipeline, GitLab CI). Knowledgeable about cloud computing principles and experienced with key AWS services. Initial experience with Docker or Kubernetes. Proficient in English. Bonus: Proficiency in Python. Familiarity with CI/CD languages and tools like SVN/Git, Ant, Maven, BitBucket, Bamboo, Jira, Sonar, Fisheye, Crucible, or similar alternatives (TFS, Hudson/Jenkins, Gradle). Familiarity with Agile methodologies, especially Scrum. Experience with Atlassian products (Jira, BitBucket, Bamboo, etc.). At NeoXam, we value curiosity, commitment, and autonomy.
We seek individuals who are solution-oriented and see continuous learning as essential to their development. The role requires rigor, perseverance, and the ability to communicate effectively and promote best practices. Main responsibilities of the Cloud DevOps Engineer: Streamline, standardize, and improve the cloud offering. Work closely with internal departments (software R&D teams, Cloud R&D teams, Products, As a Service team) to establish standards and best practices for design and development. Evaluate and recommend tools and technologies to improve development and deployment processes: managed services, containerization (Docker), orchestration (Kubernetes), and consideration of scalability/flexibility mechanisms (EKS, AKS, Lambda, EventBridge, etc.). Collaborate with technical teams to ensure compliance with security standards. Support pre-sales teams with their demos and POCs. Participate in and support project teams (consulting, clients) on the Cloud. Keep watch on technology trends and experiment with new services offered by Cloud providers (managed services, Machine Learning, AI, etc.).
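As an illustration of what the Software Factory's artifact tooling works with, the sketch below maps Maven coordinates to the standard Maven repository path layout that repositories such as Artifactory follow for Maven artifacts. The coordinates used here are made up:

```python
# Illustrative sketch of the standard Maven repository layout:
# groupId dots become path segments, then artifactId/version, then the
# file name artifactId-version.packaging. Coordinates are invented.
def maven_artifact_path(group_id, artifact_id, version, packaging="jar"):
    group_path = group_id.replace(".", "/")
    return f"{group_path}/{artifact_id}/{version}/{artifact_id}-{version}.{packaging}"

print(maven_artifact_path("com.example.app", "core", "1.4.2"))
# com/example/app/core/1.4.2/core-1.4.2.jar
```

Build tools and artifact repositories compute this same path in both directions, which is why coordinate conventions matter when automating promotion between repositories.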

Posted 1 month ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Chennai

Work from Office

Shared Data Ecosystem (SDE) is an ITG-FRESH department hosting various applications relating to the Filière Unique program, which is in charge of collecting accounting and risk data from local entities in a single stream. The aim is to share the data in a data warehouse named SRS (Shared Reporting Space) to provide regulatory reporting. The SRS data warehouse application meets the regulatory requirements of the reporting streams for financial accounting, credit risk, and liquidity risk. The Using Shared Data domain works on multiple projects (IFRS9, FINREP, Bâle 4, ESG, Loan Tape Standard, etc.) ranging from the construction of reporting solutions to access to different data for users. It forms the basis of the Filière Unique and is therefore at the heart of the IT environment of the Finance & Strategy and Risk functions. The domain consists of about 35 people, internal staff or external assistants, with both functional and technical expertise. The candidate will join ITG-FRESH-SDE, participate actively in the implementation of new controls and new features, and provide level 2 support in case of incident. The activities will mainly consist of analyzing, documenting, and developing using BNPP standard technologies (around Teradata and ksh shell scripting). The candidate will be fully integrated in the project team located in Paris and Lisbon and involved in roadmap construction and project instruction during technical phases. The position will require high reactivity and reporting skills to follow the IT activities. The candidate will also use IT tools in the DevOps toolchain and guarantee the high quality of developments and compliance with IT standards. The position will lead to developing knowledge of various financial processes around control activities as well as the BNPP information system within a motivating environment.
As a member of the team, you will contribute to an ambitious programme whose objective is to rely on a common, shared, and unified sourcing of financial and risk data, the Filière Unique, to cover existing and new reporting processes, increase agility within the scope, and reduce reconciliations. Responsibilities Analyze and interpret requirement specifications received from analysts. Design and develop IT solutions based on the specifications received. Liaise with BAs to ensure correct understanding and implementation of specifications. Propose technical solutions adapted to the business needs (contribute to technical requirements writing). Work closely in a one-team approach with all stakeholders, jointly providing high-quality deliverables. Participate in the testing phases (system, user acceptance, regression) as required, coordinating with BA and QA teams. Provide support to operations from a technical perspective. Implement best practices and coding standards. Implement DevOps tools to ensure a high quality standard. Contributing Responsibilities Contribute to overall FRESH and ISPL vision goals as directed by team and department management. Technical & Behavioral Competencies Technical: - Technical knowledge proven in practice of SQL and shell scripting - Technical knowledge linked to the IBM cloud Essential: GitLab CI or Jenkins, Artifactory, Teradata (Vantage Certified Developer), Unix shell - Knowledge of DevOps toolchain processing - Practice in quality approaches (e.g., test strategy with ALM QC, quality of code) Additional skills: database request optimization Skills & Behavioral: - Rigorous, serious, and disciplined - Excellent analytical and problem-solving skills - Excellent communication, motivational, and interpersonal skills - Ability to work as part of a team in a remote setup - Ability to synthesize information - Good documentation skills - Good communication and presentation skills - Previous experience in finance and banking would be advantageous for the role
Specific Qualifications (if required) - Teradata certification (Vantage Certified Developer) Skills Referential Behavioural Skills: Ability to collaborate / Teamwork; Attention to detail / rigor; Client focused; Ability to deliver / Results driven Transversal Skills: Ability to develop and adapt a process; Ability to manage a project; Ability to understand, explain and support change Education Level: Bachelor Degree or equivalent Experience Level: At least 5 years AGILE SCRUM
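A minimal sketch of the kind of control work described above: reconciling balances between a source extract and the warehouse. The field names and tolerance are assumptions for illustration; a real control of this sort would more likely run as SQL on Teradata than in Python:

```python
# Toy reconciliation control: flag accounts whose balances differ between
# a source extract and the warehouse beyond a tolerance. Field names,
# sample data, and the 0.01 tolerance are invented for illustration.
def reconcile(source_rows, warehouse_rows, tolerance=0.01):
    src = {r["account"]: r["balance"] for r in source_rows}
    dwh = {r["account"]: r["balance"] for r in warehouse_rows}
    breaks = []
    # Union of keys catches accounts missing from either side (treated as 0.0).
    for account in sorted(set(src) | set(dwh)):
        s, w = src.get(account, 0.0), dwh.get(account, 0.0)
        if abs(s - w) > tolerance:
            breaks.append((account, s, w))
    return breaks

source = [{"account": "A1", "balance": 100.0}, {"account": "A2", "balance": 50.0}]
warehouse = [{"account": "A1", "balance": 100.0}, {"account": "A2", "balance": 49.5}]
print(reconcile(source, warehouse))  # [('A2', 50.0, 49.5)]
```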

Posted 1 month ago

Apply

12.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Experience: 12-19 Years Work Location: Kolkata (1st preference) / Mumbai / Pune / Chennai / Hyderabad / Bangalore / Delhi / Noida / Coimbatore Job Description: The AMS Service Delivery Manager will have a deep understanding of the SDLC, distributed and open-source technologies, and maintenance & support project features, coupled with strong leadership and project management capabilities. This role involves overseeing end-to-end service delivery, ensuring high quality standards, and maintaining customer satisfaction. Responsibilities: • Oversee the delivery of multiple projects, ensuring they are completed on time, within budget, and to the highest quality standards. • Develop and implement service delivery strategies to enhance efficiency and customer satisfaction. • Ensure compliance with service level agreements (SLAs) and manage service performance metrics. • Provide technical leadership and guidance to the development team, ensuring best practices in coding, architecture, and design. • Collaborate with stakeholders to define project scope, objectives, and deliverables. • Monitor and report on service delivery performance, identifying areas for improvement. • Adhere to Agile ways of working and ensure the team is aligned in the same mindset. • Guide and mentor the team on SRE guidelines and continuous improvement. • Encourage the team toward continuous learning (ensuring cross-skilling and upskilling) and adoption of AI as a culture. • Ensure customer feedback is collected, analyzed, and acted upon to improve service quality. Mandatory Skills: • Maintenance & support project experience and good knowledge of SDLC phases on any one of the below distributed/open-source technologies: o .Net full stack (+ React or Angular + PL/SQL) OR o Java full stack (+ React or Angular + PL/SQL) OR o Mainframe (COBOL, DB2, AS/400, etc.) • Basic knowledge of DevOps (CI/CD, SAST/DAST, branching strategy, Artifactory and packages, YAML, etc.)
• End-to-end incident management • Review shift and roster plans • Good working experience in support & maintenance projects • Project management: project planning, timelines, resources, budgets, risks, mitigation plans, escalation management, change request creation, transition planning and management, etc. • Excellent communication and persuasion skills • Excellent team collaboration Good-to-have skills: • SRE concepts such as observability, resiliency, SLA-SLI-SLO, and MTTx • Knowledge of observability tools (Splunk/AppDynamics/Dynatrace/Prometheus/Grafana/ELK stack) • Basic knowledge of chaos engineering, self-healing, and auto-scaling • Basic knowledge of one of the clouds (Azure OR AWS OR GCP) • Building use cases for automation and AI/agentic workflows • Knowledge of any one scripting language (Python/Bash/Shell)
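The SLA-SLI-SLO ideas mentioned above come down to simple arithmetic: an SLO implies an error budget, the unavailability a service is allowed over a window. A quick sketch:

```python
# Error-budget arithmetic: a 99.9% availability SLO over a 30-day window
# leaves 0.1% of the window's minutes as the allowed downtime budget.
def error_budget_minutes(slo_pct, window_days=30):
    total_minutes = window_days * 24 * 60  # 43,200 minutes in 30 days
    return total_minutes * (1 - slo_pct / 100)

print(round(error_budget_minutes(99.9), 1))   # 43.2 minutes per 30 days
print(round(error_budget_minutes(99.99), 2))  # 4.32 minutes per 30 days
```

Teams typically track how much of this budget an incident consumed (tying into the MTTx metrics above) and slow feature releases once the budget is spent.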

Posted 1 month ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

As a DevOps Engineer, a self-motivated individual who possesses: 1. A minimum of a Bachelor's in CS, CE, or EE with relevant work experience. 2. Ability to deploy, maintain, and support CI/CD pipelines across multiple environments. 3. Prior experience with DevOps automation tools and processes. 4. Ability to adapt quickly to the environment and deliver on time with quality. 5. Self-sufficiency in software development concepts and methods, coding, and debugging. 6. Ability to critically assess product requirements in the assigned area of responsibility with respect to feasibility and schedule. 7. Enjoyment of working with developers to educate and provide guidance in support of their productivity. 8. Flexible adaptability in working with maturing, generation-dependent software development and testing methods. Technical Skills: 1. Hands-on Gradle, Groovy, and Python programming. 2. Understanding of build scripts such as CMake, Makefiles, Gradle, and Maven. 3. Advanced working knowledge of the Linux platform. 4. Experience using tools like Jenkins, CircleCI, and JFrog Artifactory, and SCM tools like GitHub/Perforce. Added advantage (in addition to the must-have skills above): 1. Experience with containerization technologies such as Docker and Kubernetes. 2. Programming experience in C++ and software componentization.
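One routine task in artifact-repository workflows like those above is integrity checking. The sketch below verifies data against a SHA-256 checksum of the kind repositories publish alongside each artifact; the payload here is a stand-in for a real file's bytes:

```python
# Hedged example: verify a downloaded artifact against its published
# SHA-256 checksum. The payload and "published" value are stand-ins;
# in practice the checksum comes from the repository's metadata.
import hashlib

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

artifact = b"example build output"       # stand-in for a real file's bytes
published = sha256_of(artifact)          # would come from the repo's metadata
assert sha256_of(artifact) == published  # promotion proceeds only on a match
print("checksum ok")
```

For large files the same idea applies, but the hash is fed in chunks rather than from one in-memory bytes object.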

Posted 1 month ago

Apply

7.0 - 10.0 years

0 Lacs

Karnataka, India

On-site

Who You’ll Work With You’ll be joining a dynamic, fast-paced Global EADP (Enterprise Architecture & Developer Platforms) team within Nike. Our team is responsible for building innovative cloud-native platforms that scale with the growing demands of the business. Collaboration and creativity are at the core of our culture, and we’re passionate about pushing boundaries and setting new standards in platform development. Who We Are Looking For We are looking for an ambitious Lead Software Engineer – Platforms with a passion for cloud-native development and platform ownership. You are someone who thrives in a collaborative environment, is excited by cutting-edge technology, and excels at problem-solving. You have a strong understanding of AWS cloud services, Kubernetes, DevOps, Databricks, Python, and other cloud-native platforms. You should be an excellent communicator, able to explain technical details to both technical and non-technical stakeholders, and operate with urgency and integrity. Key Skills & Traits Deep expertise in Kubernetes, AWS services, and full-stack development. Working experience in designing and building production-grade microservices in any programming language, preferably Python. Experience building end-to-end CI/CD pipelines to build, test, and deploy to different AWS environments such as Lambda, EC2, ECS, and EKS. Experience in AI/ML, with proven knowledge of building chatbots using LLMs. Familiarity with software engineering best practices, including unit tests, code review, version control, and production monitoring. Strong experience with React and Node.js. Proficient in managing cloud-native platforms, with a strong PaaS (Platform as a Service) focus. Knowledge of software engineering best practices including version control, code reviews, and unit testing. A proactive approach with the ability to work independently in a fast-paced, agile environment. Strong collaboration and problem-solving skills.
Mentoring the team through complex technical problems. What You’ll Work On You will play a key role in shaping and delivering Nike’s next-generation platforms. As a Lead Software Engineer, you’ll leverage your technical expertise to build resilient, scalable solutions, manage platform performance, and ensure high standards of code quality. You’ll also be responsible for leading the adoption of open-source and agile methodologies within the organization. Day-to-Day Activities: Deep working experience with Kubernetes, AWS services, Databricks, AI/ML, etc. Working experience with infrastructure-as-code tools such as Helm, Kustomize, or Terraform. Implementation of open-source projects in K8s. Ability to set up monitoring, logging, and alerting for Kubernetes clusters. Implementation of Kubernetes security best practices such as RBAC, network policies, and pod security policies. Experience with container runtimes like Docker. Automate infrastructure provisioning and configuration using Infrastructure as Code (IaC) tools such as Terraform or CloudFormation. Design, implement, and maintain robust CI/CD pipelines using Jenkins for efficient software delivery. Manage and optimize Artifactory repositories for efficient artifact storage and distribution. Architect, deploy, and manage AWS EC2 instances, Lambda functions, Auto Scaling Groups (ASG), and Elastic Block Store (EBS) volumes. Collaborate with cross-functional teams to ensure seamless integration of DevOps practices into the software development lifecycle. Monitor, troubleshoot, and optimize AWS resources to ensure high availability, scalability, and performance. Implement security best practices and compliance standards in the AWS environment. Develop and maintain scripts in Python, Groovy, and Shell for automation and core engineering tasks.
Deep expertise in at least one of these technologies: Python, React, Node.js. Good knowledge of CI/CD pipelines and DevOps skills: Jenkins, Docker, Kubernetes, etc. Collaborate with product managers to scope new features and capabilities. Strong collaboration and problem-solving skills. 7-10 years of experience in designing and building production-grade platforms. Technical expertise in Kubernetes, AWS cloud services, and cloud-native architectures. Proficiency in Python, Node.js, React, SQL, and AWS. Strong understanding of PaaS architecture and DevOps tools like Kubernetes, Jenkins, Terraform, and Docker. Familiarity with governance, security features, and performance optimization. Keen attention to detail with a growth mindset and the desire to explore new technologies.

Posted 1 month ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

At PDI Technologies, we empower some of the world's leading convenience retail and petroleum brands with cutting-edge technology solutions that drive growth and operational efficiency. By “Connecting Convenience” across the globe, we empower businesses to increase productivity, make more informed decisions, and engage faster with customers through loyalty programs, shopper insights, and unmatched real-time market intelligence via mobile applications, such as GasBuddy. We’re a global team committed to excellence, collaboration, and driving real impact. Explore our opportunities and become part of a company that values diversity, integrity, and growth. Role Overview If you love to design scalable fault-tolerant systems that can run efficiently with high performance and are eager to learn new technologies and develop new skills, then we have a great opportunity for you: join our PDI family and work closely with other talented PDI engineers to deliver solutions that delight our customers every day! As a DevOps Engineer III, you will design, develop & maintain E2E automated provisioning & deployment systems for PDI solutions. You will also partner with your engineering team to ensure these automation pipelines are integrated into our standard PDI CI/CD system. You will also partner with the Solution Automation team, collaborating to bring test automation to the deployment automation pipeline. With the variety of environments, platforms, technologies & languages, you must be comfortable working in both Windows & Linux environments, including PowerShell scripting & bash, database administration, as well as bare-metal virtualization technologies and public cloud environments in AWS. Key Responsibilities Promote and evangelize Infrastructure-as-Code (IaC) design thinking every day. Designing, building, and managing cloud infrastructure using AWS services.
Implementing infrastructure-as-code practices with tools like Terraform or Ansible to automate the provisioning and configuration of resources. Working with container technologies like Docker and container orchestration platforms like Amazon Elastic Container Service (ECS) or Amazon Elastic Kubernetes Service (EKS). Managing and scaling containerized applications using AWS services like Amazon ECR and AWS Fargate. Employing IaC tools like Terraform and AWS CloudFormation to define and deploy infrastructure resources in a declarative and version-controlled manner. Automating the creation and configuration of AWS resources using infrastructure templates. Implementing monitoring and logging solutions using Grafana or the ELK Stack to gain visibility into system performance, resource utilization, and application logs. Configuring alarms and alerts to proactively detect and respond to issues. Implementing strategies for disaster recovery and high availability using AWS services like AWS Backup, AWS Disaster Recovery, or multi-region deployments. Qualifications 7-9 years’ experience in a DevOps role 1+ years leading DevOps initiatives AWS Services: In-depth understanding and hands-on experience with various AWS services, including but not limited to: o Compute: EC2, Lambda, ECS, EKS, Fargate, ELB o Networking: VPC, Route 53, CloudFront, Transit Gateway, Direct Connect o Storage: S3, EBS, EFS o Database: RDS, MSSQL o Monitoring: CloudWatch, CloudTrail o Security: IAM, Security Groups, KMS, WAF Familiar with some cross-platform provisioning technologies and IaC tools: Terraform, Ansible Experience with container technologies like Docker and container orchestration platforms like Kubernetes.
Ability to build and manage containerized applications and deploy them to production environments. Familiar with containerization (Docker) and cloud orchestration (Kubernetes or Swarm). Preferred Qualifications Working experience in Windows and Linux systems, CLI, and scripting. Familiar with build automation in Windows and Linux and with the various build tools (MSBuild, Make), package managers (NuGet, NPM, Maven), and artifact repositories (Artifactory, Nexus). Familiarity with version control systems: Git, Azure DevOps. Knowledge of branching strategies, merging, and resolving conflicts. Behavioral Competencies: Ensures Accountability Manages Complexity Communicates Effectively Balances Stakeholders Collaborates Effectively PDI is committed to offering a well-rounded benefits program, designed to support and care for you and your family throughout your life and career. This includes a competitive salary, market-competitive benefits, and a quarterly perks program. We encourage a good work-life balance with ample time off and, where appropriate, hybrid working arrangements. Employees have access to continuous learning, professional certifications, and leadership development opportunities. Our global culture fosters diversity and inclusion and values authenticity, trust, curiosity, and diversity of thought, ensuring a supportive environment for all.

Posted 1 month ago

Apply

8.0 years

4 - 8 Lacs

Bengaluru

On-site

Organization: At CommBank, we never lose sight of the role we play in other people’s financial wellbeing. Our focus is to help people and businesses move forward to progress. To make the right financial decisions and achieve their dreams, targets, and aspirations. Regardless of where you work within our organisation, your initiative, talent, ideas, and energy all contribute to the impact that we can make with our work. Together we can achieve great things. Job Title: Senior Platform Engineer PCCS Location: Manyata Tech Park, Bangalore Business & Team: CTO-Cloud Integration Impact & contribution: The cloud movement at CommBank is going strong and continues to grow. We are looking for out-of-the-box thinkers who want to use technology to work on real-world problems that have the potential to change the lives of our 17 million+ customers. The successful applicant will join a team tasked with building and operating 50+ Kubernetes clusters for the bank. You will be expected to continuously improve the platform through a DevSecOps model. The team has very technical individuals who love AWS and Kubernetes. Roles & responsibilities: Ensure the platform is running 24/7 and respond to incidents as needed. Design and build all aspects of an enterprise platform, e.g. tooling, CI/CD, security, observability. Coach other engineers to uplift engineering quality and velocity. Share engineering knowledge through presentations, blogs, and videos with the broader engineering community. Collaborate with the Product Owner and team to create relevant engineering roadmaps. Essential skills: Looking for 8+ years of experience. Highly proficient with Calico for Kubernetes. Technical hands-on experience in AWS. Experience in AWS container/serverless solution design and architecture, more specifically, web applications with a DB.
Experience with various scripting/programming languages such as Python, TypeScript, and Node.js. Highly proficient with versioning systems and CI/CD tools such as GitHub, GitHub Actions, Artifactory, and ArgoCD. Highly proficient with Infrastructure as Code tools such as AWS CloudFormation and AWS CDK. A very good understanding of API development and integrations, e.g. using the AWS SDK, GraphQL, and eventing with Kafka. Knowledge of frontend technologies such as HTML, CSS, and ReactJS. AWS certifications (desirable). Experience with the ServiceNow API and/or Tableau API (desirable). Familiar with data and ETL (Extract, Transform, Load) processes and data pipeline management. Educational Qualifications: Bachelor’s or Master’s degree in Engineering in Computer Science/Information Technology. If you're already part of the Commonwealth Bank Group (including Bankwest, x15ventures), you'll need to apply through Sidekick to submit a valid application. We’re keen to support you with the next step in your career. We're aware of some accessibility issues on this site, particularly for screen reader users. We want to make finding your dream job as easy as possible, so if you require additional support please contact HR Direct on 1800 989 696. Advertising End Date: 17/07/2025

Posted 1 month ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Job Description Senior Manager, Batch Integration Software Engineering The Opportunity Based in Hyderabad, join a global healthcare biopharma company and be part of a 130-year legacy of success backed by ethical integrity, forward momentum, and an inspiring mission to achieve new milestones in global healthcare. Be part of an organisation driven by digital technology and data-backed approaches that support a diversified portfolio of prescription medicines, vaccines, and animal health products. Drive innovation and execution excellence. Be a part of a team with a passion for using data, analytics, and insights to drive decision-making, allowing us to tackle some of the world's greatest health threats. Our Technology Centers focus on creating a space where teams can come together to deliver business solutions that save and improve lives. An integral part of our company’s IT operating model, Tech Centers are globally distributed locations where each IT division has employees to enable our digital transformation journey and drive business outcomes. These locations, in addition to the other sites, are essential to supporting our business and strategy. A focused group of leaders in each Tech Center helps to ensure we can manage and improve each location, from investing in the growth, success, and well-being of our people, to making sure colleagues from each IT division feel a sense of belonging, to managing critical emergencies. Together, we leverage the strength of our team to collaborate globally, optimize connections, and share best practices across the Tech Centers. Role Overview Our Data and Analytics team plays a pivotal role in transforming data into actionable insights that drive strategic decisions and enhance business performance.
We are dedicated to harnessing the power of data and integration capabilities to ensure seamless data flow, providing a deeper understanding of our operations, supporting innovation, and fostering a data-driven culture across the organization. What Will You Do In This Role Develop comprehensive High-Level Technical Design and Data Mapping documents to meet specific business integration requirements. Own the data integration and ingestion solutions throughout the project lifecycle, delivering key artifacts such as data flow diagrams and source system inventories. Provide end-to-end delivery ownership for assigned data pipelines, performing cleansing, processing, and validation on the data to ensure its quality. Define and implement robust Test Strategies and Test Plans, ensuring end-to-end accountability for middleware testing and evidence management. Collaborate with the Solutions Architecture and Business analyst teams to analyze system requirements and prototype innovative integration methods. Exhibit a hands-on leadership approach, ready to engage in coding, debugging, and all necessary actions to ensure the delivery of high-quality, scalable products. Influence and drive cross-product teams and collaboration while coordinating the execution of complex, technology-driven initiatives within distributed and remote teams. Work closely with various platforms and competencies to enrich the purpose of Enterprise Integration and guide their roadmaps to address current and emerging data integration and ingestion capabilities. Design ETL/ELT solutions, lead comprehensive system and integration testing, and outline standards and architectural toolkits to underpin our data integration efforts. Analyze data requirements and translate them into technical specifications for ETL processes. Develop and maintain ETL workflows, ensuring optimal performance and error handling mechanisms are in place. 
Monitor and troubleshoot ETL processes to ensure timely and successful data delivery.
Collaborate with data analysts and other stakeholders to ensure alignment between data architecture and integration strategies.
Document integration processes, data mappings, and ETL workflows to maintain clear communication and ensure knowledge transfer.

What Should You Have
Bachelor's degree in Information Technology, Computer Science, or any technology stream.
8+ years of working experience with enterprise data integration technologies: Informatica Intelligent Data Management Cloud Services (CDI, CAI, Mass Ingest, Orchestration) and PowerCenter.
5+ years of integration experience utilizing REST and custom API integration.
8+ years of working experience in relational database technologies and cloud data stores from AWS & Azure.
2+ years of work experience utilizing the AWS Well-Architected Framework, deployment & integration, and data engineering.
Preferred experience with CI/CD processes and related tools, including Terraform, GitHub Actions, Artifactory, etc.
Proven expertise in Python and Shell scripting, with a strong focus on leveraging these languages for data integration and orchestration to optimize workflows and enhance data processing efficiency.
Extensive experience in the design of reusable integration patterns using cloud-native technologies.
Extensive experience in process orchestration and scheduling integration jobs in AutoSys and Airflow.
Experience in Agile development methodologies and release management techniques.
Excellent analytical and problem-solving skills.
Good understanding of data modeling and data architecture principles.

Our technology teams operate as business partners, proposing ideas and innovative solutions that enable new organizational capabilities. We collaborate internationally to deliver services and solutions that help everyone be more productive and enable innovation.
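The cleansing and validation duties above can be sketched as a small, self-contained Python step. This is an illustrative sketch only: the field names (`id`, `source`, `amount`) and the rules are assumptions for demonstration, not taken from any actual pipeline in this role.

```python
def cleanse_and_validate(rows, required_fields=("id", "source", "amount")):
    """Cleanse raw records and split them into valid and rejected rows.

    Trims string whitespace, drops empty/None values, and rejects any row
    missing a required field -- a minimal stand-in for an ETL quality gate.
    Field names here are hypothetical examples.
    """
    valid, rejected = [], []
    for row in rows:
        # Drop empty values first, then normalize the surviving strings.
        cleaned = {k: (v.strip() if isinstance(v, str) else v)
                   for k, v in row.items() if v not in ("", None)}
        if all(f in cleaned for f in required_fields):
            valid.append(cleaned)
        else:
            rejected.append(row)  # keep the raw row for evidence/triage
    return valid, rejected
```

In practice a step like this would feed rejected rows to an error-handling sink (a reject table or dead-letter queue) so delivery can be monitored, in line with the error-handling responsibility above.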
Who We Are
We are known as Merck & Co., Inc., Rahway, New Jersey, USA in the United States and Canada, and MSD everywhere else. For more than a century, we have been inventing for life, bringing forward medicines and vaccines for many of the world's most challenging diseases. Today, our company continues to be at the forefront of research to deliver innovative health solutions and advance the prevention and treatment of diseases that threaten people and animals around the world.

What We Look For
Imagine getting up in the morning for a job as important as helping to save and improve lives around the world. Here, you have that opportunity. You can put your empathy, creativity, digital mastery, or scientific genius to work in collaboration with a diverse group of colleagues who pursue and bring hope to countless people who are battling some of the most challenging diseases of our time. Our team is constantly evolving, so if you are among the intellectually curious, join us—and start making your impact today. #HYDIT2025

Current Employees apply HERE
Current Contingent Workers apply HERE

Search Firm Representatives: Please Read Carefully
Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs / resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails.
Employee Status: Regular
Relocation:
VISA Sponsorship:
Travel Requirements:
Flexible Work Arrangements: Hybrid
Shift:
Valid Driving License:
Hazardous Material(s):
Required Skills: Data Engineering, Data Visualization, Design Applications, Software Configurations, Software Development, Software Development Life Cycle (SDLC), Solution Architecture, System Designs, Systems Integration, Testing
Preferred Skills:
Job Posting End Date: 05/03/2025
A job posting is effective until 11:59:59 PM on the day BEFORE the listed job posting end date. Please ensure you apply to a job posting no later than the day BEFORE the job posting end date.
Requisition ID: R341169

Posted 1 month ago

Apply

130.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Job Description
Senior Manager, Batch Integration Software Engineering

The Opportunity
Based in Hyderabad, join a global healthcare biopharma company and be part of a 130-year legacy of success backed by ethical integrity, forward momentum, and an inspiring mission to achieve new milestones in global healthcare. Be part of an organisation driven by digital technology and data-backed approaches that support a diversified portfolio of prescription medicines, vaccines, and animal health products. Drive innovation and execution excellence. Be part of a team with a passion for using data, analytics, and insights to drive decision-making, allowing us to tackle some of the world's greatest health threats.

Our Technology Centers focus on creating a space where teams can come together to deliver business solutions that save and improve lives. An integral part of our company’s IT operating model, Tech Centers are globally distributed locations where each IT division has employees to enable our digital transformation journey and drive business outcomes. These locations, in addition to the other sites, are essential to supporting our business and strategy. A focused group of leaders in each Tech Center helps ensure we can manage and improve each location, from investing in the growth, success, and well-being of our people, to making sure colleagues from each IT division feel a sense of belonging, to managing critical emergencies. Together, we leverage the strength of our team to collaborate globally, optimize connections, and share best practices across the Tech Centers.

Role Overview
Our Data and Analytics team plays a pivotal role in transforming data into actionable insights that drive strategic decisions and enhance business performance.
We are dedicated to harnessing the power of data and integration capabilities to ensure seamless data flow, providing a deeper understanding of our operations, supporting innovation, and fostering a data-driven culture across the organization.

What Will You Do In This Role
Develop comprehensive High-Level Technical Design and Data Mapping documents to meet specific business integration requirements.
Own the data integration and ingestion solutions throughout the project lifecycle, delivering key artifacts such as data flow diagrams and source system inventories.
Provide end-to-end delivery ownership for assigned data pipelines, performing cleansing, processing, and validation on the data to ensure its quality.
Define and implement robust Test Strategies and Test Plans, ensuring end-to-end accountability for middleware testing and evidence management.
Collaborate with the Solutions Architecture and Business Analyst teams to analyze system requirements and prototype innovative integration methods.
Exhibit a hands-on leadership approach, ready to engage in coding, debugging, and all actions necessary to ensure the delivery of high-quality, scalable products.
Influence and drive cross-product teams and collaboration while coordinating the execution of complex, technology-driven initiatives within distributed and remote teams.
Work closely with various platforms and competencies to enrich the purpose of Enterprise Integration and guide their roadmaps to address current and emerging data integration and ingestion capabilities.
Design ETL/ELT solutions, lead comprehensive system and integration testing, and outline standards and architectural toolkits to underpin our data integration efforts.
Analyze data requirements and translate them into technical specifications for ETL processes.
Develop and maintain ETL workflows, ensuring optimal performance and error-handling mechanisms are in place.
Monitor and troubleshoot ETL processes to ensure timely and successful data delivery.
Collaborate with data analysts and other stakeholders to ensure alignment between data architecture and integration strategies.
Document integration processes, data mappings, and ETL workflows to maintain clear communication and ensure knowledge transfer.

What Should You Have
Bachelor's degree in Information Technology, Computer Science, or any technology stream.
8+ years of working experience with enterprise data integration technologies: Informatica Intelligent Data Management Cloud Services (CDI, CAI, Mass Ingest, Orchestration) and PowerCenter.
5+ years of integration experience utilizing REST and custom API integration.
8+ years of working experience in relational database technologies and cloud data stores from AWS & Azure.
2+ years of work experience utilizing the AWS Well-Architected Framework, deployment & integration, and data engineering.
Preferred experience with CI/CD processes and related tools, including Terraform, GitHub Actions, Artifactory, etc.
Proven expertise in Python and Shell scripting, with a strong focus on leveraging these languages for data integration and orchestration to optimize workflows and enhance data processing efficiency.
Extensive experience in the design of reusable integration patterns using cloud-native technologies.
Extensive experience in process orchestration and scheduling integration jobs in AutoSys and Airflow.
Experience in Agile development methodologies and release management techniques.
Excellent analytical and problem-solving skills.
Good understanding of data modeling and data architecture principles.

Our technology teams operate as business partners, proposing ideas and innovative solutions that enable new organizational capabilities. We collaborate internationally to deliver services and solutions that help everyone be more productive and enable innovation.
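The REST and custom API integration experience called for above usually implies handling transient failures. A minimal, hypothetical retry helper is sketched below in Python; the callable, error type, and delay values are placeholders, not part of any specific integration stack named in this posting.

```python
import time


def call_with_retry(fn, retries=3, base_delay=0.01, retryable=(ConnectionError,)):
    """Invoke an integration call, retrying transient failures with
    exponential backoff.

    fn is any zero-argument callable (for example, a wrapped HTTP request).
    Non-retryable errors propagate immediately; the last retryable error
    is re-raised once the retry budget is exhausted.
    """
    for attempt in range(retries):
        try:
            return fn()
        except retryable:
            if attempt == retries - 1:
                raise
            # Back off: base_delay, 2*base_delay, 4*base_delay, ...
            time.sleep(base_delay * (2 ** attempt))
```

In an orchestration context (for example, an Airflow task), the same idea is usually delegated to the scheduler's own retry settings rather than hand-rolled per call.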
Who We Are
We are known as Merck & Co., Inc., Rahway, New Jersey, USA in the United States and Canada, and MSD everywhere else. For more than a century, we have been inventing for life, bringing forward medicines and vaccines for many of the world's most challenging diseases. Today, our company continues to be at the forefront of research to deliver innovative health solutions and advance the prevention and treatment of diseases that threaten people and animals around the world.

What We Look For
Imagine getting up in the morning for a job as important as helping to save and improve lives around the world. Here, you have that opportunity. You can put your empathy, creativity, digital mastery, or scientific genius to work in collaboration with a diverse group of colleagues who pursue and bring hope to countless people who are battling some of the most challenging diseases of our time. Our team is constantly evolving, so if you are among the intellectually curious, join us—and start making your impact today. #HYDIT2025

Current Employees apply HERE
Current Contingent Workers apply HERE

Search Firm Representatives: Please Read Carefully
Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs / resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails.
Employee Status: Regular
Relocation:
VISA Sponsorship:
Travel Requirements:
Flexible Work Arrangements: Hybrid
Shift:
Valid Driving License:
Hazardous Material(s):
Required Skills: Data Engineering, Data Visualization, Design Applications, Software Configurations, Software Development, Software Development Life Cycle (SDLC), Solution Architecture, System Designs, Systems Integration, Testing
Preferred Skills:
Job Posting End Date: 05/03/2025
A job posting is effective until 11:59:59 PM on the day BEFORE the listed job posting end date. Please ensure you apply to a job posting no later than the day BEFORE the job posting end date.
Requisition ID: R341168

Posted 1 month ago

Apply

7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

At PDI Technologies, we empower some of the world's leading convenience retail and petroleum brands with cutting-edge technology solutions that drive growth and operational efficiency. By “Connecting Convenience” across the globe, we empower businesses to increase productivity, make more informed decisions, and engage faster with customers through loyalty programs, shopper insights, and unmatched real-time market intelligence via mobile applications, such as GasBuddy. We’re a global team committed to excellence, collaboration, and driving real impact. Explore our opportunities and become part of a company that values diversity, integrity, and growth.

Role Overview
If you love to design scalable, fault-tolerant systems that can run efficiently with high performance, and are eager to learn new technologies and develop new skills, then we have a great opportunity for you: join our PDI family and work closely with other talented PDI engineers to deliver solutions that delight our customers every day! As a DevOps Engineer III, you will design, develop & maintain E2E automated provisioning & deployment systems for PDI solutions. You will also partner with your engineering team to ensure these automation pipelines are integrated into our standard PDI CI/CD system. You will also partner with the Solution Automation team, collaborating to bring test automation to the deployment automation pipeline. With the variety of environments, platforms, technologies & languages, you must be comfortable working in both Windows & Linux environments, including PowerShell scripting & bash, database administration, as well as bare-metal virtualization technologies and public cloud environments in AWS.

Key Responsibilities
Promote and evangelize Infrastructure-as-Code (IaC) design thinking every day
Designing, building, and managing cloud infrastructure using AWS services.
Implementing infrastructure-as-code practices with tools like Terraform or Ansible to automate the provisioning and configuration of resources.
Working with container technologies like Docker and container orchestration platforms like Amazon Elastic Container Service (ECS) or Amazon Elastic Kubernetes Service (EKS).
Managing and scaling containerized applications using AWS services like Amazon ECR and AWS Fargate.
Employing IaC tools like Terraform and AWS CloudFormation to define and deploy infrastructure resources in a declarative and version-controlled manner.
Automating the creation and configuration of AWS resources using infrastructure templates.
Implementing monitoring and logging solutions using Grafana or the ELK Stack to gain visibility into system performance, resource utilization, and application logs.
Configuring alarms and alerts to proactively detect and respond to issues.
Implementing strategies for disaster recovery and high availability using AWS services like AWS Backup, AWS Elastic Disaster Recovery, or multi-region deployments.

Qualifications
7-9 years' experience in a DevOps role
1+ years leading DevOps initiatives
AWS Services: in-depth understanding and hands-on experience with various AWS services, including but not limited to:
Compute: EC2, Lambda, ECS, EKS, Fargate, ELB
Networking: VPC, Route 53, CloudFront, Transit Gateway, Direct Connect
Storage: S3, EBS, EFS
Database: RDS, MSSQL
Monitoring: CloudWatch, CloudTrail
Security: IAM, Security Groups, KMS, WAF
Familiar with some cross-platform provisioning technologies and IaC tools: Terraform, Ansible
Experience with container technologies like Docker and container orchestration platforms like Kubernetes.
Ability to build and manage containerized applications and deploy them to production environments
Familiar with containerization (Docker) and cloud orchestration (Kubernetes or Swarm)

Preferred Qualifications
Working experience in Windows and Linux systems, CLI, and scripting
Familiar with build automation in Windows and Linux, and with the various build tools (MSBuild, Make), package managers (NuGet, NPM, Maven), and artifact repositories (Artifactory, Nexus)
Familiarity with version control systems: Git, Azure DevOps. Knowledge of branching strategies, merging, and resolving conflicts.

Behavioral Competencies:
Ensures Accountability
Manages Complexity
Communicates Effectively
Balances Stakeholders
Collaborates Effectively

PDI is committed to offering a well-rounded benefits program, designed to support and care for you and your family throughout your life and career. This includes a competitive salary, market-competitive benefits, and a quarterly perks program. We encourage a good work-life balance with ample time off and, where appropriate, hybrid working arrangements. Employees have access to continuous learning, professional certifications, and leadership development opportunities. Our global culture fosters diversity and inclusion, and values authenticity, trust, curiosity, and diversity of thought, ensuring a supportive environment for all.
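The declarative, version-controlled resource definitions mentioned above (Terraform, CloudFormation) can be illustrated with a small Python sketch that renders a CloudFormation-style template as data. This is a toy example under stated assumptions: the logical resource name `ArtifactBucket` and the bucket name are invented for illustration, though the template keys themselves follow CloudFormation's documented structure.

```python
import json


def render_bucket_template(bucket_name, versioned=True):
    """Render a minimal CloudFormation-style template as JSON.

    Illustrates the IaC idea: resources are declared as data that can be
    diffed, reviewed, and version-controlled, rather than created by hand.
    """
    template = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "ArtifactBucket": {  # logical name is a made-up example
                "Type": "AWS::S3::Bucket",
                "Properties": {"BucketName": bucket_name},
            }
        },
    }
    if versioned:
        template["Resources"]["ArtifactBucket"]["Properties"][
            "VersioningConfiguration"] = {"Status": "Enabled"}
    return json.dumps(template, indent=2)
```

The design point is that the rendered JSON is an artifact you commit and review, and the deployment tool reconciles actual cloud state against it.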

Posted 1 month ago

Apply

7.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

At PDI Technologies, we empower some of the world's leading convenience retail and petroleum brands with cutting-edge technology solutions that drive growth and operational efficiency. By “Connecting Convenience” across the globe, we empower businesses to increase productivity, make more informed decisions, and engage faster with customers through loyalty programs, shopper insights, and unmatched real-time market intelligence via mobile applications, such as GasBuddy. We’re a global team committed to excellence, collaboration, and driving real impact. Explore our opportunities and become part of a company that values diversity, integrity, and growth.

Role Overview
If you love to design scalable, fault-tolerant systems that can run efficiently with high performance, and are eager to learn new technologies and develop new skills, then we have a great opportunity for you: join our PDI family and work closely with other talented PDI engineers to deliver solutions that delight our customers every day! As a DevOps Engineer III, you will design, develop & maintain E2E automated provisioning & deployment systems for PDI solutions. You will also partner with your engineering team to ensure these automation pipelines are integrated into our standard PDI CI/CD system. You will also partner with the Solution Automation team, collaborating to bring test automation to the deployment automation pipeline. With the variety of environments, platforms, technologies & languages, you must be comfortable working in both Windows & Linux environments, including PowerShell scripting & bash, database administration, as well as bare-metal virtualization technologies and public cloud environments in AWS.

Key Responsibilities
Promote and evangelize Infrastructure-as-Code (IaC) design thinking every day
Designing, building, and managing cloud infrastructure using AWS services.
Implementing infrastructure-as-code practices with tools like Terraform or Ansible to automate the provisioning and configuration of resources.
Working with container technologies like Docker and container orchestration platforms like Amazon Elastic Container Service (ECS) or Amazon Elastic Kubernetes Service (EKS).
Managing and scaling containerized applications using AWS services like Amazon ECR and AWS Fargate.
Employing IaC tools like Terraform and AWS CloudFormation to define and deploy infrastructure resources in a declarative and version-controlled manner.
Automating the creation and configuration of AWS resources using infrastructure templates.
Implementing monitoring and logging solutions using Grafana or the ELK Stack to gain visibility into system performance, resource utilization, and application logs.
Configuring alarms and alerts to proactively detect and respond to issues.
Implementing strategies for disaster recovery and high availability using AWS services like AWS Backup, AWS Elastic Disaster Recovery, or multi-region deployments.

Qualifications
7-9 years' experience in a DevOps role
1+ years leading DevOps initiatives
AWS Services: in-depth understanding and hands-on experience with various AWS services, including but not limited to:
Compute: EC2, Lambda, ECS, EKS, Fargate, ELB
Networking: VPC, Route 53, CloudFront, Transit Gateway, Direct Connect
Storage: S3, EBS, EFS
Database: RDS, MSSQL
Monitoring: CloudWatch, CloudTrail
Security: IAM, Security Groups, KMS, WAF
Familiar with some cross-platform provisioning technologies and IaC tools: Terraform, Ansible
Experience with container technologies like Docker and container orchestration platforms like Kubernetes.
Ability to build and manage containerized applications and deploy them to production environments
Familiar with containerization (Docker) and cloud orchestration (Kubernetes or Swarm)

Preferred Qualifications
Working experience in Windows and Linux systems, CLI, and scripting
Familiar with build automation in Windows and Linux, and with the various build tools (MSBuild, Make), package managers (NuGet, NPM, Maven), and artifact repositories (Artifactory, Nexus)
Familiarity with version control systems: Git, Azure DevOps. Knowledge of branching strategies, merging, and resolving conflicts.

Behavioral Competencies:
Ensures Accountability
Manages Complexity
Communicates Effectively
Balances Stakeholders
Collaborates Effectively

PDI is committed to offering a well-rounded benefits program, designed to support and care for you and your family throughout your life and career. This includes a competitive salary, market-competitive benefits, and a quarterly perks program. We encourage a good work-life balance with ample time off and, where appropriate, hybrid working arrangements. Employees have access to continuous learning, professional certifications, and leadership development opportunities. Our global culture fosters diversity and inclusion, and values authenticity, trust, curiosity, and diversity of thought, ensuring a supportive environment for all.
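Release management against artifact repositories like Artifactory or Nexus, mentioned above, ultimately comes down to ordering version tags correctly. A minimal semantic-version sketch follows; it deliberately ignores pre-release and build metadata, and the tags are invented examples.

```python
def parse_semver(tag):
    """Parse a 'MAJOR.MINOR.PATCH' tag (optionally 'v'-prefixed) into a
    tuple that compares numerically, so '1.10.0' sorts above '1.9.9'.

    Pre-release and build metadata are out of scope for this sketch.
    """
    return tuple(int(part) for part in tag.lstrip("v").split("."))


def latest_release(tags):
    """Pick the highest release tag, as a promotion pipeline might when
    selecting which artifact to deploy."""
    return max(tags, key=parse_semver)
```

The tuple comparison is the key trick: naive string comparison would rank "1.9.9" above "1.10.0", a classic release-pipeline bug.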

Posted 1 month ago

Apply

5.0 years

0 - 0 Lacs

Hyderābād

On-site

Our mission, your future
As a global community of trusted advisors, we create cutting-edge technological solutions to overcome today’s challenges and anticipate tomorrow’s needs. It all starts with the collaboration of a diverse team of passionate intrapreneurs, each driven to make a difference. Together, we challenge the status quo and push each other to new heights. Ready to make a significant impact on mission-critical projects and shape the future through digital transformation and strategic consulting? Take your ambitions to the next level and discover your next exciting challenge with us!

Your role, your impact
As a Lead Azure DevOps Engineer, you will bring your expertise in CI/CD pipelines and infrastructure-as-code to support and enhance Azure-based environments. You will play a key role in building, maintaining, and optimizing deployment pipelines, automation frameworks, and cloud infrastructure, ensuring the reliability, scalability, and security of our enterprise systems.

Your day-to-day
Design, build, and maintain automated CI/CD pipelines using Azure DevOps and related tools;
Deploy and manage infrastructure using ARM templates, Bicep, and Azure DevOps Pipelines;
Leverage version control systems such as GitHub for collaboration and change tracking;
Maintain and support release pipelines using tools such as VSTS, Artifactory, GitLab, and Maven;
Develop and maintain PowerShell scripts and other automation tools to streamline deployments;
Provision and manage Azure services, including virtual machines, networking, and PaaS components;
Implement and support infrastructure security measures and compliance standards;
Follow change management procedures using tools such as ServiceNow;
Troubleshoot deployment and environment-related issues, offering efficient solutions;
Collaborate with development and QA teams to ensure DevOps best practices are consistently applied.
#LI-Hybrid

Keys to your success
Degree in computer science, engineering, or a related field;
Minimum 5 years of relevant professional experience in DevOps, with a focus on Azure;
Solid hands-on experience with CI/CD pipelines and release automation;
Experience with enterprise change management and ticketing systems;
Proven ability to deploy Azure resources through code (ARM/Bicep);
Strong working knowledge of Azure DevOps, GitHub, ARM, and Bicep;
Ability to evaluate and contribute to infrastructure architecture and deployment designs;
Self-driven, with the ability to produce quality results independently.

Extra edge
Hands-on experience with PowerShell scripting;
Familiarity with security best practices in Azure environments;
Exposure to containerization or infrastructure monitoring tools.

Language skills
English: Advanced

Our authenticity is our strength
The diversity of our backgrounds, experiences, thoughts and insights is our competitive advantage. We foster a collaborative environment rooted in our core values of respect, well-being, passion, trust, integrity and creativity. For us, diversity, equity and inclusion aren’t just buzzwords; they’re essential drivers of innovation and excellence, and powerful catalysts for inspiration and evolutionary ideas. The empowerment of our people is fundamental to being the trusted advisor to our clients. Join us in embracing our authenticity and in leveraging our unique perspectives to collectively build the future we all envision.

An inclusive path to success
Fostering an environment where you can thrive starts with ensuring an accessible recruitment process. If you require any accommodations, we welcome you to contact us. For more information, please visit our accessibility page at https://www.alithya.com/en/accessibility.
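Deploying Azure resources through code, as this role requires, usually includes a pre-flight sanity check on the ARM template before it reaches a pipeline. The sketch below checks only the documented top-level keys of an ARM deployment template; it is an illustrative helper, not a replacement for Azure's own validation (for example, `az deployment group validate`).

```python
def validate_arm_template(template):
    """Check the top-level shape of an ARM deployment template dict.

    Returns a list of problems; an empty list means the basic structure
    is present. ARM templates require '$schema', 'contentVersion', and
    'resources' at the top level.
    """
    problems = []
    for key in ("$schema", "contentVersion", "resources"):
        if key not in template:
            problems.append(f"missing required key: {key}")
    if not isinstance(template.get("resources", []), list):
        problems.append("'resources' must be a list")
    return problems
```

A check like this fits naturally as an early pipeline stage: fail fast on malformed templates before spending time on a full what-if deployment.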

Posted 1 month ago

Apply

0 years

0 Lacs

Pune

On-site

Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.

Title and Summary
BizOps Engineer II

Overview
The BizOps team is looking for a Site Reliability Engineer who can help us solve problems, build our CI/CD pipeline, and lead Mastercard in DevOps automation and best practices. Are you a born problem solver who loves to figure out how something works? Are you a CI/CD geek who loves all things automation? Do you have a low tolerance for manual work and look to automate everything you can? Business Operations is leading the DevOps transformation at Mastercard through our tooling and by being an advocate for change and standards throughout the development, quality, release, and product organizations. We need team members with an appetite for change and for pushing the boundaries of what can be done with automation. Experience in working across development, operations, and product teams to prioritize needs and to build relationships is a must.

Role
The role of Business Operations is to be the production readiness steward for the platform. This is accomplished by closely partnering with developers to design, build, implement, and support technology services. A Business Operations engineer will ensure operational criteria like system availability, capacity, performance, monitoring, self-healing, and deployment automation are implemented throughout the delivery process.
Business Operations plays a key role in leading the DevOps transformation at Mastercard through our tooling and by being an advocate for change and standards throughout the development, quality, release, and product organizations. We accomplish this transformation by supporting daily operations with a hyper-focus on triage and then root cause, understanding the business impact of our products. The goal of every BizOps team is to shift left, becoming more proactive and upfront in the development process, and to proactively manage production and change activities to maximize customer experience and increase the overall value of supported applications. BizOps teams also focus on risk management by tying all our activities together with an overarching responsibility for compliance and risk mitigation across all our environments. BizOps also focuses on streamlining and standardizing traditional application-specific support activities, and on centralizing points of interaction for both internal and external partners by communicating effectively with all key stakeholders. Ultimately, the role of BizOps is to align product- and customer-focused priorities with operational needs. We regularly review our run state not only from an internal perspective, but also by understanding and providing the feedback loop to our development partners on how we can improve the customer experience of our applications.

All About You
Engage in and improve the whole lifecycle of services—from inception and design, through deployment, operation and refinement.
Analyze ITSM activities of the platform and provide a feedback loop to development teams on operational gaps or resiliency concerns.
Support services before they go live through activities such as system design consulting, capacity planning and launch reviews.
Maintain services once they are live by measuring and monitoring availability, latency and overall system health.
Scale systems sustainably through mechanisms like automation, and evolve systems by pushing for changes that improve reliability and velocity.
Support the application CI/CD pipeline for promoting software into higher environments through validation and operational gating, and lead Mastercard in DevOps automation and best practices.
Practice sustainable incident response and blameless postmortems.
Take a holistic approach to problem solving by connecting the dots during a production event through the various technology stacks that make up the platform, to optimize mean time to recovery.
Work with a global team spread across tech hubs in multiple geographies and time zones.
Share knowledge and mentor junior resources.

Qualifications
BS degree in Computer Science or a related technical field involving coding (e.g., physics or mathematics), or equivalent practical experience.
Experience with algorithms, data structures, scripting, pipeline management, and software design.
Systematic problem-solving approach, coupled with strong communication skills and a sense of ownership and drive.
Ability to help debug and optimize code and automate routine tasks.
We support many different stakeholders; experience in dealing with difficult situations and making decisions with a sense of urgency is needed.
Experience in one or more of the following is preferred: C, C++, Java, Python, Go, Perl or Ruby.
Interest in designing, analyzing and troubleshooting large-scale distributed systems.
We need team members with an appetite for change and for pushing the boundaries of what can be done with automation. Experience in working across development, operations, and product teams to prioritize needs and to build relationships is a must.
Experience in industry-standard CI/CD tools like Git/Bitbucket, Jenkins, Maven, Artifactory, and Chef.
Experience designing and implementing an effective and efficient CI/CD flow that gets code from dev to prod with high quality and minimal manual effort is desired.
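Measuring and monitoring availability, as the SRE duties above describe, is often framed as an error budget against an SLO. A minimal sketch follows; the 99.9% target in the test is a generic example, not a figure from this posting.

```python
def error_budget_remaining(slo_target, total_requests, failed_requests):
    """Fraction of the error budget left for a request-based SLO.

    With a 99.9% target, the budget is 0.1% of total requests. Returns
    1.0 when no budget is spent, and 0.0 or below once it is exhausted
    (a common trigger for freezing risky releases).
    """
    budget = (1.0 - slo_target) * total_requests
    if budget == 0:
        # A 100% SLO has no error budget at all.
        return 0.0
    return 1.0 - failed_requests / budget
```

An operational gate in a promotion pipeline might block deployments whenever this value drops below some threshold, which is one concrete way "validation and operational gating" can be automated.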
Corporate Security Responsibility All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization and, therefore, it is expected that every person working for, or on behalf of, Mastercard is responsible for information security and must: Abide by Mastercard’s security policies and practices; Ensure the confidentiality and integrity of the information being accessed; Report any suspected information security violation or breach; and Complete all periodic mandatory security training in accordance with Mastercard’s guidelines.

Posted 1 month ago

Apply

0 years

4 - 10 Lacs

Pune

On-site

Infra and DevOps Engineer, AS Job ID: R0391182 Full/Part-Time: Full-time Regular/Temporary: Regular Listed: 2025-06-20 Location: Pune Position Overview Job Title: Infra and DevOps Engineer Location: Pune, India Corporate Title: AS Role Description The Infra & DevOps team within DWS India sits horizontally across project delivery, committed to providing best-in-class shared services across the build, release, and QA automation space. Its main functional areas encompass environment build, integration of the QA automation suite, release and deployment automation management, technology management, and compliance management. This role will be key to our programme delivery and includes working closely with stakeholders, including client Product Owners, the Digital Design Organisation, Business Analysts, Developers, and QA, to advise and contribute from an Infra and DevOps capability perspective by building and maintaining non-prod and prod environments, setting up end-to-end alerting and monitoring for ease of operation, and overseeing transition of the project to L2 support teams as part of Go Live. What we’ll offer you As part of our flexible scheme, here are just some of the benefits that you’ll enjoy Best-in-class leave policy Gender-neutral parental leave 100% reimbursement under childcare assistance benefit (gender neutral) Sponsorship for industry-relevant certifications and education Employee Assistance Program for you and your family members Comprehensive hospitalization insurance for you and your dependents Accident and term life insurance Complimentary health screening for 35 yrs. and above Your key responsibilities Drives automation (incl. automated build, test, and deploy) Supports and manages Data / Digital systems architecture (underlying platforms, APIs, UI, datasets …) in line with the architectural vision set by the Digital Design Organisation across environments.
Drives integration across systems, working to ensure the service layer integrates with the core technology stack whilst ensuring that services integrate to form a service ecosystem Monitors digital architecture to ensure health and identify required corrective action Serves as a technical authority, working with developers to drive architectural standards on the specific platforms that they are developing upon Builds security into the overall architecture, ensuring adherence to security principles set within IT and to any required industry standards Liaises with IaaS and PaaS service providers within the Bank to enhance their offerings Liaises with other technical areas, conducting technology research and evaluating software required for maintaining the development environment Works with the wider QA function within the business to drive Continuous Testing by integrating QA automation suites with available toolsets Your skills and experience Proven hands-on technical experience in Linux/Unix is a must-have. Proven experience in infrastructure architecture: clustering, high availability, performance and tuning, backup and recovery. Hands-on experience with DevOps build and deploy tools like TeamCity, Git/Bitbucket/Artifactory, and knowledge of automation/configuration management using tools such as Ansible or similar. A working understanding of code and scripting languages such as Python, Perl, Ruby, or JavaScript. In-depth knowledge of and experience in Docker technology, OpenShift, and Kubernetes containerisation. Ability to deploy complex solutions based on IaaS, PaaS, and public and private cloud-based infrastructures. Basic understanding of networking and firewalls.
Knowledge of best practices and IT operations in an agile environment Ability to deliver independently: confidently able to translate requirements into technical solutions with minimal supervision Collaborative by nature: able to work with scrum teams, technical teams, the wider business, and IT&S to provide platform-related knowledge Flexible: finds a way to say yes and to make things happen, only exercising authority as needed to prevent the architecture from breaking Coding and scripting: able to develop in multiple languages in order to mobilise, configure, and maintain digital platforms and architecture Automation and tooling: strong knowledge of the automation landscape, with the ability to rapidly identify and mobilise appropriate tools to support testing, deployment, etc. Security: understands security requirements and can independently drive compliance Education / Certification Any relevant DevOps certification. Bachelor’s degree from an accredited college or university with a concentration in Science, Engineering, or an IT-related discipline (or equivalent). How we’ll support you Training and development to help you excel in your career Coaching and support from experts in your team A culture of continuous learning to aid progression A range of flexible benefits that you can tailor to suit your needs About us and our teams Please visit our company website for further information: https://www.db.com/company/company.htm We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
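The "end-to-end alerting and monitoring" responsibility in the listing above often starts with a simple HTTP health-check probe over service endpoints. A minimal sketch using only the Python standard library; the URLs and the "alert on anything but HTTP 200" policy are assumptions for the example, not DWS tooling.

```python
import urllib.error
import urllib.request

def check_endpoints(urls, timeout=5.0):
    """Probe each health-check URL; map it to the HTTP status code on
    success, or to a "DOWN: ..." string on connection failure."""
    results = {}
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                results[url] = resp.status
        except (urllib.error.URLError, OSError) as exc:
            results[url] = f"DOWN: {exc}"
    return results

def unhealthy(results):
    """Endpoints that should trigger an alert (anything not HTTP 200)."""
    return [url for url, status in results.items() if status != 200]
```

A scheduler (cron, a Kubernetes CronJob, or a monitoring agent) would run this periodically and forward the `unhealthy` list to an alerting channel.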

Posted 1 month ago

Apply

0 years

0 Lacs

Bengaluru

On-site

Get to know Okta Okta is The World’s Identity Company. We free everyone to safely use any technology, anywhere, on any device or app. Our flexible and neutral products, Okta Platform and Auth0 Platform, provide secure access, authentication, and automation, placing identity at the core of business security and growth. At Okta, we celebrate a variety of perspectives and experiences. We are not looking for someone who checks every single box - we’re looking for lifelong learners and people who can make us better with their unique experiences. Join our team! We’re building a world where Identity belongs to you. We are looking for a Staff Software Engineer who is passionate about writing the tools that integrate and support the building of large-scale, high-demand software in a fast-paced agile environment. You will share our passion for test-driven development, continuous integration, and automation to produce frequent high-quality releases. Our engineering team is fast, innovative, and flexible, with a weekly release cycle and individual ownership. We expect great things from our engineers and reward them with stimulating new projects, emerging technologies, and the chance to be part of a company that is changing the cloud computing landscape forever. You will get an opportunity to work on scaling our infrastructure to the next generation. Our scale is already huge: we automatically run tens of thousands of tests for every commit, which brings speed challenges such as reducing compute time from days to a few minutes. Responsibilities: Major areas of responsibility include: You will be part of the team that builds, maintains, and improves our highly automated build, release, and testing infrastructure. Scripting, tools-building, and automation are paramount to Okta Engineering; everybody automates. You will be creating and coding tools for internal use to support continuous delivery.
Team up with Development, QA, and Ops to continuously innovate and enhance our build and automation infrastructure Collaborate with peers and stakeholders to create new tools/processes/technology. We use the latest technology from AWS, and you can experiment, recommend, and implement new technologies in our build and CI system. Work with internal customers to roll out projects and processes, monitor adoption, collect feedback, and fine-tune the project to respond to internal customers’ needs REQUIRED Knowledge, Skills, and Abilities: Experience in developing Continuous Delivery pipelines for a diverse set of projects using Java, Jenkins, AWS, Docker, Python, Ruby, Bash, and more Solid understanding of CI/CD release pipelines Exposure to cloud infrastructures such as AWS, GCP, or Azure Experience working with Gradle, Bazel, Artifactory, Docker registry, npm registry Experience with AWS, its services, and its supporting tools (cost control, reporting, environment management) Ability to coordinate cross-functional work toward task completion Experience in Kubernetes is a plus Education and Training: B.S. in CS or equivalent Okta is an Equal Opportunity Employer. #LI-Hybrid What you can look forward to as a Full-Time Okta employee! Amazing Benefits Making Social Impact Developing Talent and Fostering Connection + Community at Okta Okta cultivates a dynamic work environment, providing the best tools, technology, and benefits to empower our employees to work productively in a setting that best and uniquely suits their needs. Each organization is unique in the degree of flexibility and mobility in which they work so that all employees are enabled to be their most creative and successful versions of themselves, regardless of where they live. Find your place at Okta today! https://www.okta.com/company/careers/. Some roles may require travel to one of our office locations for in-person onboarding. Okta is an Equal Opportunity Employer.
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, marital status, age, physical or mental disability, or status as a protected veteran. We also consider for employment qualified applicants with arrest and conviction records, consistent with applicable laws. If reasonable accommodation is needed to complete any part of the job application, interview process, or onboarding, please use this Form to request an accommodation. Okta is committed to complying with applicable data privacy and security laws and regulations. For more information, please see our Privacy Policy at https://www.okta.com/privacy-policy/. Okta The foundation for secure connections between people and technology Okta is the leading independent provider of identity for the enterprise. The Okta Identity Cloud enables organizations to securely connect the right people to the right technologies at the right time. With over 7,000 pre-built integrations to applications and infrastructure providers, Okta customers can easily and securely use the best technologies for their business. More than 19,300 organizations, including JetBlue, Nordstrom, Slack, T-Mobile, Takeda, Teach for America, and Twilio, trust Okta to help protect the identities of their workforces and customers.
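The per-commit test fan-out this listing describes (tens of thousands of tests, with compute time cut from days to minutes) is commonly achieved by sharding the test suite across parallel CI workers. A minimal sketch assuming a deterministic round-robin split; this is illustrative only, not Okta's actual tooling.

```python
def shard_tests(tests, num_shards):
    """Split a test list into num_shards roughly equal groups so each CI
    worker runs shard i of n in parallel. Sorting first makes the split
    deterministic across workers; round-robin keeps the shards balanced."""
    shards = [[] for _ in range(num_shards)]
    for i, test in enumerate(sorted(tests)):
        shards[i % num_shards].append(test)
    return shards
```

Each worker would call `shard_tests(all_tests, n)[i]` with its own index `i`, so the full suite runs once with wall-clock time divided roughly by `n`. Production systems often weight shards by historical test duration instead of test count.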

Posted 1 month ago

Apply