1.0 - 3.0 years
0 Lacs
Noida, Uttar Pradesh, India
Remote
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

The opportunity
The objective of our Digital Risk Consulting service is to support clients with the development, implementation, improvement, and modernization of their technology risk and compliance programs to address the constantly changing risk and technology landscape. Our solutions can be used by our clients to build confidence and trust with their customers, the overall market, and when required by regulation or contract.

Your Key Responsibilities
You will operate as a team leader for engagements to help our clients develop and strengthen their IT risk and compliance programs. You will work directly with clients to review their IT processes and controls, remediate and implement controls, onboard new tools and services into risk and compliance frameworks, and assist with readiness for and adherence to new compliance regulations. Your responsibilities include both in-person and remote oversight and coaching of engagement team members, reporting to both senior engagement team members and client leadership, as well as partnering with our key client contacts to complete the engagement work.
What You'll Do
- Designing and implementing solutions to various data-related technical/compliance challenges such as DevSecOps, data strategy, data governance, data risks and relevant controls, data testing, data architecture, data platforms, data solution implementation, data quality and data security to manage and mitigate risk.
- Leveraging data analytics tools/software to build robust and scalable solutions through data analysis and data visualization using SQL, Python and visualization tools.
- Designing and implementing comprehensive data analytics strategies to support business decision-making.
- Collecting, cleaning, and interpreting large datasets from multiple sources, ensuring completeness, accuracy and integrity of data.
- Integrating and/or piloting next-generation technologies such as cloud platforms, machine learning and Generative AI (GenAI).
- Developing custom scripts and algorithms to automate data processing and analysis and generate insights.
- Applying business/domain knowledge, including regulatory requirements and industry standards, to solve complex data-related challenges.
- Analyzing data to uncover trends and generate insights that can inform business decisions.
- Building and maintaining relationships across Engineering, Product, Operations, Internal Audit, external audit and other external stakeholders to drive effective financial risk management.
- Working with DevSecOps, Security Assurance, Engineering, and Product teams to improve the efficiency of control environments and provide risk management through automation and process improvement.
- Bridging gaps between IT controls and business controls, including ITGCs and automated business controls.
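The "collect, clean, and interpret" responsibility above can be made concrete with a small sketch. This is a hedged illustration only, not EY tooling; the field names ("account_id", "balance") and validation rules are hypothetical assumptions:

```python
# Minimal sketch: validate and clean records pulled from multiple sources,
# keeping only rows that are complete and internally consistent.
# Field names and rules are illustrative assumptions.

REQUIRED_FIELDS = ("account_id", "balance")

def clean_records(records):
    """Drop rows with missing required fields or non-numeric balances,
    and deduplicate on account_id (first occurrence wins)."""
    seen, cleaned = set(), []
    for row in records:
        if any(not row.get(f) for f in REQUIRED_FIELDS):
            continue  # incomplete row fails the completeness check
        try:
            balance = float(row["balance"])
        except (TypeError, ValueError):
            continue  # non-numeric balance fails the integrity check
        if row["account_id"] in seen:
            continue  # duplicate key
        seen.add(row["account_id"])
        cleaned.append({"account_id": row["account_id"], "balance": balance})
    return cleaned

raw = [
    {"account_id": "A1", "balance": "100.50"},
    {"account_id": "", "balance": "42"},        # missing key -> dropped
    {"account_id": "A2", "balance": "oops"},    # bad number  -> dropped
    {"account_id": "A1", "balance": "999"},     # duplicate   -> dropped
]
print(clean_records(raw))
```

In practice the same checks would run in SQL or a dataframe library over far larger datasets; the point is that each rule (completeness, type integrity, uniqueness) is explicit and auditable.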
- Working with Internal Audit to ensure the complete control environment is managed.
- Working with emerging products to understand their risk profile and ensure an appropriate control environment is established.
- Implementing new processes and controls in response to changes in the business environment, such as new product introductions, changes in accounting standards, internal process changes or reorganizations.

What You'll Need
- Experience in data architecture, data management, data engineering, data science or data analytics.
- Experience building analytical queries and dashboards using SQL, NoSQL, Python, etc.
- Proficiency in SQL and quantitative analysis: you can dive deep into large amounts of data, draw meaningful insights, dissect business issues and reach actionable conclusions.
- Knowledge of tools in the following areas:
  - Scripting and programming (e.g., Python, SQL, R, Java, Scala)
  - Big data tools (e.g., Hadoop, Hive, Pig, Impala, Mahout)
  - Data management (e.g., Informatica, Collibra, SAP, Oracle, IBM)
  - Predictive analytics (e.g., Python, IBM SPSS, SAS Enterprise Miner, RPL, MATLAB)
  - Data visualization (e.g., Tableau, Power BI, TIBCO Spotfire, QlikView, SPSS)
  - Data mining (e.g., Microsoft SQL Server)
  - Cloud platforms (e.g., AWS, Azure, or Google Cloud)
- Ability to analyze complex processes to identify potential financial, operational, systems and compliance risks across major finance cycles.
- Ability to assist management with the integration of security practices in the product development lifecycle (DevSecOps).
- Experience with homegrown applications in a microservices/DevOps environment.
- Experience identifying potential security risks in platform environments and developing strategies to mitigate them.
- Experience with SOX readiness assessments and control implementation.
- Knowledge of DevOps practices, CI/CD pipelines, code management and automation tools (e.g., Jenkins, Git, Phabricator, Artifactory, SonarQube, Selenium, Fortify, Acunetix, Prisma Cloud).

Preferred experience in:
- Managing technical data projects
- Leveraging data analytics tools/software to develop solutions and scripts
- Developing statistical model tools and techniques
- Developing and executing data governance frameworks or operating models
- Identifying data risks and designing and/or implementing appropriate controls
- Implementing data quality processes
- Developing data services and solutions in a cloud environment
- Designing data architecture
- Analyzing complex data sets and communicating findings effectively
- Process management, including process redesign and optimization
- Scripting languages (e.g., Python, Bash)
- Cloud platforms (e.g., AWS, Azure, GCP) and securing cloud-based applications/services

To qualify for the role, you must have
- A bachelor's or master's degree.
- 1-3 years of experience working as an IT risk consultant or in data analytics.
- Experience applying relevant technical knowledge in at least one of the following engagement types: (a) risk consulting, (b) financial statement audits, (c) internal or operational audits, (d) IT compliance, and/or (e) Service Organization Controls Reporting engagements.
- Availability to travel outside of your assigned office location at least 50% of the time, plus commute within the region (where public transportation often is not available). Successful candidates must work in excess of standard hours when necessary. A valid passport is required.

Ideally, you'll also have
- A bachelor's or master's degree in business, computer science, information systems, informatics, computer engineering, accounting, or a related discipline.
- CISA, CISSP, CISM, CPA or CA certification; non-certified hires are required to become certified to be eligible for promotion to Manager.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets.
Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 2 months ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Position Overview
Job Title: DevOps Engineer, AVP
Location: Pune, India

Role Description
As a DevOps Engineer you will work as part of a multi-skilled agile team dedicated to improved automation and tooling to support continuous delivery. Your team will work hard to foster increased collaboration and create a DevOps culture. You will make a crucial contribution to our efforts to release our software more frequently, more efficiently and with less risk.

What We’ll Offer You
As part of our flexible scheme, here are just some of the benefits that you’ll enjoy:
- Best-in-class leave policy
- Gender-neutral parental leave
- 100% reimbursement under the childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Employee Assistance Program for you and your family members
- Comprehensive hospitalization insurance for you and your dependents
- Accident and term life insurance
- Complimentary health screening for ages 35 and above

Your Key Responsibilities
- Work with other engineers to support our adoption of continuous delivery, automating the building, packaging, testing and deployment of applications.
- Create the tools required to deploy and manage applications effectively in production, with minimal manual effort.
- Help teams adopt modern delivery practices, such as extensive use of automated testing, continuous integration, more frequent releases, blue/green deployment, canary releases, etc.
- Configure and manage code repositories, continuous builds, artifact repositories, cloud platforms and other tools.
- Contribute to a culture of learning and continuous improvement within your team and beyond. Share skills and knowledge in a wide range of topics relating to DevOps and software delivery.

Your Skills And Experience
- Good knowledge of Spring Boot application build and deployment.
- Experience setting up applications on any cloud environment (GCP is a plus).
- Extensive experience with configuration management tools: Ansible, Terraform, Docker, Helm or similar.
- Hands-on experience with deployment automation tools like uDeploy.
- Extensive experience with networking concepts, e.g. firewalls, load balancing, data transfer.
- Extensive experience building CI/CD pipelines using TeamCity or similar.
- Experience with a range of tools and techniques that make software delivery faster and more reliable, such as creating and maintaining automated builds using tools like TeamCity, Jenkins and Bamboo, and using repositories such as Nexus and Artifactory to manage and distribute binary artefacts.
- Good knowledge of build and scripting tools such as Maven, shell or Python.
- Good understanding of Git version control, branching and merging, etc.
- Good understanding of release management and change management concepts.
- Experience working in an agile team practicing Scrum, Kanban, XP or SAFe.

How We’ll Support You
- Training and development to help you excel in your career
- Coaching and support from experts in your team
- A culture of continuous learning to aid progression
- A range of flexible benefits that you can tailor to suit your needs

About Us And Our Teams
Please visit our company website for further information: https://www.db.com/company/company.htm
We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
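The canary-release practice mentioned under Key Responsibilities reduces to two small decisions: which requests hit the new version, and whether observed errors justify promotion. A hedged, tool-agnostic sketch follows; the thresholds and function names are illustrative assumptions, not Deutsche Bank tooling:

```python
# Minimal canary-rollout sketch: route a fixed fraction of traffic to the
# new version, then promote or roll back based on observed error rates.

def choose_version(request_id: int, canary_percent: int) -> str:
    """Deterministically send canary_percent% of requests to the canary."""
    return "canary" if request_id % 100 < canary_percent else "stable"

def promotion_decision(canary_errors: int, canary_requests: int,
                       max_error_rate: float = 0.01) -> str:
    """Promote the canary only if its error rate stays under the budget."""
    if canary_requests == 0:
        return "hold"  # not enough signal yet
    rate = canary_errors / canary_requests
    return "promote" if rate <= max_error_rate else "rollback"

# With a 5% canary, requests 0-4 of every 100 hit the new version.
routed = [choose_version(i, canary_percent=5) for i in range(100)]
print(routed.count("canary"))          # 5
print(promotion_decision(2, 1000))     # error rate 0.2% -> "promote"
print(promotion_decision(50, 1000))    # error rate 5%   -> "rollback"
```

Real deployments delegate the routing to a load balancer or service mesh and the decision to pipeline gates, but the control logic is essentially this.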
Posted 2 months ago
10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
We are looking for a Senior Java Developer with proven experience building robust, high-performance, large-scale Capital Markets applications.

Team Background
The Enterprise Risk Technology Team is responsible for delivering solutions for credit risk management. The tools enable risk analysts and risk managers to easily perform tasks that relate to credit financial analysis for various business groups. Key areas of focus are to extend financial reporting and analytics capabilities and build a self-service portal, build a service-oriented architecture for credit assessments for loans, and extend the credit analysis process to ensure consistency with regulatory guidelines.

** Replacement Hire - C level upgrade (C10 to C12) **
This role has transitioned from a C10 individual contributor to an expanded C12 role with greater responsibilities and complexity. Previously, this role was focused on delivering sophisticated software requirements specifically for BOW, consent orders, and regulatory submission platforms. Now the role has grown to support a wider range of critical module deliverables aligned with high-priority business needs, such as Sybase retirement, customer coverage roles migration, multiple CAP requirements, enhanced regulatory controls, regular BAU deliverables, and extended support to Data Services.

This C12 role comes with greater responsibilities that include:
- Delivering Advanced Technical Solutions: Tackling increasingly complex technical challenges to meet business goals across multiple modules.
- Coordinating Across Systems: Ensuring efficient collaboration between all upstream and downstream systems to maintain seamless operations and integration.
- Engaging Stakeholders: Facilitating clear, effective communication among stakeholders for timely and successful business deliverables.
- Risk and Control Focus: Maintaining a proactive approach to risk management and ensuring strict adherence to controls to safeguard business operations.
- User Communication and Support: Managing direct user interactions, addressing critical business-related queries, and offering technical guidance on project requirements.

Contributions encompass leadership across technical, operational, and risk management areas, reflecting a higher level of responsibility and strategic impact within the organization.

Who you are:
- You’ve got positive energy. You are optimistic about the future and determined to get there.
- You appreciate open and direct communication. You are both an active communicator and an eager listener.
- You can switch context and pivot on the fly. Our group is a horizontal organization and regulations are constantly changing. What you worked on yesterday may not be what you work on today. You should be flexible enough to accommodate the constantly changing regulatory landscape in your projects by translating business requirements into technical specifications.
- You want to be part of a winning team. We build and grow with one another, and you’re a person who does not shy away from being pushed out of your comfort zone. You are often cited as an inspiration for junior colleagues, they feel they can learn something from you, and you have a “can do” attitude.
- We ensure that data of pristine quality is available to business users for filing various regulatory reports. Owning a problem does not scare you but empowers you to take 100% ownership.

Qualifications:
- 10+ years of Core Java experience developing robust, scalable, and maintainable applications applying Object Oriented Design principles.
- Strong knowledge and hands-on experience in Java (version 1.8 or above).
- Integration with SSO such as OAuth; expertise in Spring Batch, Spring IoC, Spring annotations, Spring Security.
- Hands-on experience with REST APIs and backends using Java/J2EE technologies.
- Unix/Linux knowledge sufficient to write and understand shell scripts and commands.
- Experience with data preparation tools.
- Experience with CI/CD build pipelines and toolchain: Git, Bitbucket, TeamCity, Artifactory, Jira.
- Strong knowledge of CI/CD pipelines and experience with tools such as JIRA, Black Duck, SonarQube, etc.
- Distributed caching frameworks such as Ignite, Hazelcast, Redis or equivalent.
- Cloud computing technologies with practical experience working with containers, microservices and large datasets (Docker, Kubernetes).
- Demonstrated capacity to build sophisticated tooling for development and production team use.
- Experience re-engineering large monolithic applications to microservices.
- Certifications: Sun Certified Java Developer, AWS, Kubernetes.
------------------------------------------------------
Job Family Group: Technology
------------------------------------------------------
Job Family: Applications Development
------------------------------------------------------
Time Type: Full time
------------------------------------------------------
Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
Posted 2 months ago
8.0 - 12.0 years
0 Lacs
India
On-site
Job description
Role: DevOps Lead
Experience: 8-12 years
Location: Pan India (Hybrid)

Act under the guidance of DevOps leadership, leading more than one Agile team.

Outcomes:
- Interprets the DevOps tool/feature/component design to develop/support it in accordance with specifications
- Adapts existing DevOps solutions and creates relevant DevOps solutions for new contexts
- Codes, debugs, tests, documents and communicates DevOps development stages and the status of develop/support issues
- Selects appropriate technical options for development, such as reusing, improving or reconfiguring existing components
- Optimises efficiency, cost and quality of DevOps process, tools and technology development
- Validates results with user representatives; integrates and commissions the overall solution
- Helps engineers troubleshoot issues that are novel/complex and not covered by SOPs
- Designs, installs and troubleshoots CI/CD pipelines and software
- Automates infrastructure provisioning on cloud/on-premises with the guidance of architects
- Provides guidance to DevOps engineers so that they can support existing components
- Good understanding of Agile methodologies; able to work with diverse teams
- Knowledge of more than one DevOps tool stack (AWS, Azure, GCP, open source)

Measures of Outcomes:
- Quality of deliverables
- Error rate/completion rate at various stages of SDLC/PDLC
- Number of components reused
- Number of domain/technology/product certifications obtained
- SLA/KPI for onboarding projects or applications
- Stakeholder management
- Percentage achievement of specification/completeness/on-time delivery

Outputs Expected:
- Automated components: Deliver components that automate installation/configuration of software/tools on premises and in the cloud, and components that automate parts of the build/deploy for applications
- Configured components: Configure tools and automation frameworks into the overall DevOps design
- Scripts: Develop/support scripts (e.g., PowerShell/shell/Python) that automate installation/configuration/build/deployment tasks
- Training/SOPs: Create training plans/SOPs to help DevOps engineers with DevOps activities and with onboarding users
- Measure process efficiency/effectiveness: Deployment frequency, innovation and technology changes
- Operations: Change lead time/volume, failed deployments, defect volume and escape rate, mean time to detection and recovery

Skill Examples:
- Experience in design, installation and configuration, and in troubleshooting CI/CD pipelines and software using Jenkins/Bamboo/Ansible/Puppet/Chef/PowerShell/Docker/Kubernetes
- Experience integrating with code quality/test analysis tools like SonarQube/Cobertura/Clover
- Experience integrating build/deploy pipelines with test automation tools like Selenium/JUnit/NUnit
- Scripting skills (Python, Linux shell, Perl, Groovy, PowerShell)
- Infrastructure automation skills (Ansible/Puppet/Chef/PowerShell)
- Repository management/migration automation: Git, Bitbucket, GitHub, ClearCase
- Build automation scripts: Maven, Ant
- Artefact repository management: Nexus/Artifactory
- Dashboard management and automation: ELK/Splunk
- Configuration of cloud infrastructure (AWS, Azure, Google)
- Migration of applications from on-premises to cloud infrastructure
- Working with Azure DevOps, ARM (Azure Resource Manager) and DSC (Desired State Configuration), plus strong debugging skills in C# and .NET
- Setting up and managing Jira projects and Git/Bitbucket repositories
- Skilled in containerization tools like Docker and Kubernetes

Knowledge Examples:
- Installation/config/build/deploy processes and tools
- IaaS cloud providers (AWS, Azure, Google, etc.) and their tool sets
- The application development lifecycle
- Quality assurance processes
- Quality automation processes and tools
- Multiple tool stacks, not just one
- Build and release; branching/merging
- Containerization
- Agile methodologies
- Software security compliance (GDPR/OWASP) and tools (Black Duck/Veracode/Checkmarx)

Additional Comments:
We are looking for a candidate with 8 to 12 years of experience and strong knowledge of the following:
• Terraform, including the use of Terraform modules
• Deploying AWS infrastructure as code (IaC), especially EKS, ECS, AWS API Gateway, ALB, NLB, Route 53, S3, etc.
• Build and deploy experience, including setting up CI/CD pipelines
• Artifactory
• Branching strategy
• Harness (optional)

Skills: IaC, Jenkins, AWS Cloud
Mandatory skills: DevOps, AWS, IaC, Terraform, Kubernetes, Jenkins, CI/CD, Docker.

Technical Lead with experience leading teams and hands-on experience with AWS cloud and DevOps. Develops and manages the project and its communication. Strong problem-solving skills and the ability to think critically under pressure. Excellent communication and collaboration skills, with the ability to work effectively across teams. Ability to mentor and coach junior team members, fostering growth and continuous learning. Detail-oriented with a strong sense of ownership and accountability.
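The operations measures listed above (deployment frequency, failed deployments, mean time to detection and recovery) are simple to compute once deployment records are collected. A hedged sketch; the record format is an illustrative assumption, not part of the job description:

```python
# Sketch: compute DORA-style operations measures from deployment records.
# Each record is (succeeded: bool, minutes_to_recover: int or None);
# minutes_to_recover is set only for failed deployments. The format is
# an illustrative assumption.

def change_failure_rate(deploys):
    """Fraction of deployments that failed."""
    failures = sum(1 for ok, _ in deploys if not ok)
    return failures / len(deploys)

def mean_time_to_recovery(deploys):
    """Average minutes to recover across failed deployments."""
    recoveries = [mins for ok, mins in deploys if not ok and mins is not None]
    return sum(recoveries) / len(recoveries) if recoveries else 0.0

week = [(True, None), (True, None), (False, 30), (True, None), (False, 90)]
print(len(week))                      # deployment frequency: 5 this week
print(change_failure_rate(week))      # 0.4
print(mean_time_to_recovery(week))    # 60.0
```

In a real pipeline these records would come from the CI/CD tool's API or deployment events rather than a hard-coded list.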
Posted 2 months ago
3.0 - 6.0 years
5 - 9 Lacs
Pune
Hybrid
Role & Responsibilities
Job Description: We are seeking a highly skilled DevOps Engineer to join our dynamic team. The ideal candidate will bring deep expertise in DevOps practices, toolchains, and cloud infrastructure, with a passion for automation and continuous improvement.

Key Responsibilities:
- Design, implement, and manage CI/CD pipelines using Jenkins, integrating with tools like Bitbucket, SonarQube, and JFrog Artifactory.
- Develop and maintain shell scripts for automation and system management.
- Manage and optimize the ELK Stack (Elasticsearch, Logstash, Kibana) for monitoring and logging.
- Use Git and GitHub for version control and collaboration.
- Deploy and manage infrastructure using Ansible, Docker, Kubernetes, Terraform, and AWS cloud services.
- Communicate effectively across cross-functional teams to understand and meet infrastructure needs.

Required Skills & Experience:
- Strong hands-on experience in shell scripting.
- Proven experience building and managing CI/CD pipelines.
- Expertise with Jenkins, Artifactory (JFrog), SonarQube, and Bitbucket.
- Proficiency in Git and GitHub.
- Experience with the ELK Stack for system monitoring and log management.
- Solid working knowledge of Ansible, Docker, Kubernetes, Terraform, and AWS cloud.
- Excellent verbal and written communication skills.
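A typical pipeline integration like the one described above ends with a quality gate: scan the build output before the artifact is published. The sketch below is a hedged illustration; the failure markers and log format are assumptions, not a specific Jenkins or SonarQube contract:

```python
# Sketch: a promotion gate that scans a build log for failure markers
# before an artifact is pushed to the repository. The markers and log
# format are illustrative assumptions.

FAILURE_MARKERS = ("ERROR", "BUILD FAILED", "Tests failed")

def gate(log_text: str) -> bool:
    """Return True (promote) only if no failure marker appears in the log."""
    return not any(marker in log_text for marker in FAILURE_MARKERS)

clean_log = "INFO compiling\nINFO tests passed\nINFO artifact created"
bad_log = "INFO compiling\nERROR: symbol not found\nBUILD FAILED"

print(gate(clean_log))  # True  -> safe to publish the artifact
print(gate(bad_log))    # False -> block promotion
```

In practice the gate would usually be a pipeline stage consuming the build tool's exit code and the quality scanner's API verdict, but the shape of the decision is the same.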
Posted 2 months ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.

Title and Summary
BizOps Engineer II

Overview
The Financial Solutions BizOps team is looking for a Site Reliability Engineer who can help us solve problems, build our CI/CD pipeline and lead Mastercard in DevOps automation and best practices. Are you a born problem solver who loves to figure out how something works? Are you a CI/CD geek who loves all things automation? Do you have a low tolerance for manual work and look to automate everything you can?

Business Operations is leading the DevOps transformation at Mastercard through our tooling and by being an advocate for change and standards throughout the development, quality, release, and product organizations. We need team members with an appetite for change and for pushing the boundaries of what can be done with automation. Experience in working across development, operations, and product teams to prioritize needs and to build relationships is a must.

Role
The role of Business Operations is to be the production readiness steward for the platform. This is accomplished by closely partnering with developers to design, build, implement, and support technology services. A Business Operations engineer will ensure operational criteria like system availability, capacity, performance, monitoring, self-healing, and deployment automation are implemented throughout the delivery process.
Business Operations plays a key role in leading the DevOps transformation at Mastercard through our tooling and by being an advocate for change and standards throughout the development, quality, release, and product organizations. We accomplish this transformation by supporting daily operations with a hyper focus on triage and then root cause, understanding the business impact of our products. The goal of every BizOps team is to shift left: to be more proactive and upfront in the development process, and to proactively manage production and change activities to maximize customer experience and increase the overall value of supported applications. BizOps teams also focus on risk management by tying all our activities together with an overarching responsibility for compliance and risk mitigation across all our environments. BizOps also streamlines and standardizes traditional application-specific support activities and centralizes points of interaction for both internal and external partners by communicating effectively with all key stakeholders. Ultimately, the role of BizOps is to align product- and customer-focused priorities with operational needs. We regularly review our run state, not only from an internal perspective, but also by understanding and providing a feedback loop to our development partners on how we can improve the customer experience of our applications.

All About You
For all team members:
- Engage in and improve the whole lifecycle of services, from inception and design through deployment, operation and refinement.
- Analyze ITSM activities of the platform and provide a feedback loop to development teams on operational gaps or resiliency concerns.
- Support services before they go live through activities such as system design consulting, capacity planning and launch reviews.
- Maintain services once they are live by measuring and monitoring availability, latency and overall system health.
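Measuring availability against a target of the kind described above usually reduces to simple error-budget arithmetic. A hedged sketch follows; the 99.9% SLO target is an illustrative assumption, not a Mastercard figure:

```python
# Sketch: availability and error-budget arithmetic against an SLO target.
# The 99.9% target is an illustrative assumption.

def availability(good_requests: int, total_requests: int) -> float:
    """Fraction of requests served successfully."""
    return good_requests / total_requests

def error_budget_remaining(good: int, total: int, slo: float = 0.999) -> float:
    """Fraction of the error budget still unspent (negative = SLO breached)."""
    allowed_failures = (1 - slo) * total
    actual_failures = total - good
    return (allowed_failures - actual_failures) / allowed_failures

# One million requests with 400 failures, against a ~1,000-failure budget:
print(availability(999_600, 1_000_000))            # 0.9996
print(error_budget_remaining(999_600, 1_000_000))  # ~0.6 of the budget left
```

A burn-rate alert is then just a comparison of how fast the remaining budget is shrinking against the time left in the SLO window.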
- Scale systems sustainably through mechanisms like automation, and evolve systems by pushing for changes that improve reliability and velocity.
- Support the application CI/CD pipeline for promoting software into higher environments through validation and operational gating, and lead Mastercard in DevOps automation and best practices.
- Practice sustainable incident response and blameless postmortems.
- Take a holistic approach to problem solving: connect the dots during a production event through the various layers of the technology stack that make up the platform, to optimize mean time to recover.
- Work with a global team spread across tech hubs in multiple geographies and time zones.
- Share knowledge and mentor junior resources.

For team members supporting the DevOps pipeline:
- Design, implement, and enhance our deployment automation based on Chef. We need proven experience writing Chef recipes/cookbooks as well as designing and implementing an overall Chef-based release and deployment process.
- Use Jenkins to orchestrate builds and link to Sonar, Chef, Maven, Artifactory, etc. to build out the CI/CD pipeline.
- Support deployments of code into multiple lower environments, maintaining current processes with an emphasis on automating everything as soon as possible.
- Design and implement a Git-based code management strategy that supports multiple environment deployments in parallel. Experience with automation for branch management, code promotions, and version management is a plus.

Qualifications
- BS degree in Computer Science or a related technical field involving coding (e.g., physics or mathematics), or equivalent practical experience.
- Experience with algorithms, data structures, scripting, pipeline management, and software design.
- A systematic problem-solving approach, coupled with strong communication skills and a sense of ownership and drive.
- Ability to help debug and optimize code and automate routine tasks.
We support many different stakeholders.
- Experience in dealing with difficult situations and making decisions with a sense of urgency.
- Experience in one or more of the following is preferred: C, C++, Java, Python, Go, Perl or Ruby.
- Interest in designing, analyzing and troubleshooting large-scale distributed systems.
- An appetite for change and for pushing the boundaries of what can be done with automation.
- Experience working across development, operations, and product teams to prioritize needs and build relationships.
- For work on our DevOps team: experience with industry-standard CI/CD tools like Git/Bitbucket, Jenkins, Maven, Artifactory, and Chef. Experience designing and implementing an effective and efficient CI/CD flow that gets code from dev to prod with high quality and minimal manual effort is required.

Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization. It is therefore expected that every person working for, or on behalf of, Mastercard is responsible for information security and must:
- Abide by Mastercard’s security policies and practices;
- Ensure the confidentiality and integrity of the information being accessed;
- Report any suspected information security violation or breach; and
- Complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines.

R-247717
Posted 2 months ago
3.0 - 7.0 years
0 Lacs
Mumbai Metropolitan Region
Remote
At Protegrity, we lead innovation by using AI and quantum-resistant cryptography to transform data protection across cloud-native, hybrid, on-premises, and open source environments. We leverage advanced cryptographic methods such as tokenization, format-preserving encryption, and quantum-resilient techniques to protect sensitive data. As a global leader in data security, our mission is to ensure that data isn’t just valuable but also usable, trusted, and safe. Protegrity offers the opportunity to work at the intersection of innovation and collaboration, with the ability to make a meaningful impact on the industry while working alongside some of the brightest minds. Together, we are redefining how the world safeguards data, enabling organizations to thrive in a GenAI era where data is the ultimate currency. If you're ready to shape the future of data security, Protegrity is the place for you.

As a Software Engineer on the DevOps team, you will be responsible for implementing and managing the DevOps approach within the organization. This role is crucial in bridging the gap between development and operations teams, ensuring seamless collaboration and efficient delivery of high-quality software products. You should know how to design, implement, and maintain tools and processes for continuous integration, delivery, and deployment (CI/CD) of software using various cloud platforms. The primary goal is to automate repetitive tasks, reduce manual intervention, and improve the overall user experience, quality, and reliability of software products.

Responsibilities:
- Design, develop, test, and maintain CI/CD pipelines, and maintain continuous integration, delivery, and deployment processes using tools like Jenkins, Git, Artifactory, Docker, etc.
- Define and set development, test, release, update, and support processes for DevOps operations.
- Automate, manage, and optimize the deployment and infrastructure of applications on public clouds like AWS, Azure, and GCP.
- Collaborate with development and operations teams to identify and address bottlenecks in the software development lifecycle.
- Troubleshoot and resolve issues related to application development, deployment, and operations.
- Monitor and manage infrastructure, ensuring optimal performance, security, and scalability.
- Choose the tools and technologies that best fit the business needs.

Qualifications:
- BE/ME/MCA or another degree in a related field.
- 3-7 years of relevant DevOps experience.
- Hands-on experience with DevOps concepts and tools such as continuous integration (CI) and continuous delivery (CD), Git, Jenkins, SonarQube, and Artifactory/Nexus.
- Strong knowledge of programming languages such as Python, Ruby, or Java.
- Experience building, designing, and maintaining cloud-based applications on AWS, Azure, GCP, etc.
- Knowledge of container and container-orchestration tools like Docker, AWS ECS, and Kubernetes.
- Experience working on Linux-based infrastructure.
- Excellent communication and collaboration skills, and the ability to work effectively in cross-functional teams.
- Knowledge of the data protection, privacy, and security domains.

Why Choose Protegrity:
- Become a member of a leading data protection, privacy, and security company during one of the best market opportunities to come along in a generation.
- Competitive compensation and total-reward packages.
- Paid time off (PTO).
- Work on global projects with diverse, energetic team members who respect each other and celebrate differences.
- Remote workforce.

We offer a competitive salary and comprehensive benefits with generous vacation and holiday time off. All employees are also provided access to ongoing learning and development. Ensuring a diverse and inclusive workplace is our priority. We are committed to an environment of acceptance where you are free to bring your full self to work.
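CI/CD pipelines of the kind described in this role typically include automated quality gates. As a hedged sketch (the report format and threshold are illustrative, not Protegrity's actual pipeline), a Python step that fails a build when a JUnit-style test report contains failures might look like:

```python
import xml.etree.ElementTree as ET

def passes_gate(report_xml: str, max_failures: int = 0) -> bool:
    """Return True when a JUnit-style report has no more than
    max_failures failed or errored tests."""
    suite = ET.fromstring(report_xml)
    broken = int(suite.get("failures", "0")) + int(suite.get("errors", "0"))
    return broken <= max_failures

# A Jenkins pipeline stage could call this and abort the deploy on False.
print(passes_gate('<testsuite tests="3" failures="0" errors="0"/>'))  # True
```

Keeping the gate as a small, testable function makes it reusable across Jenkins, Git hooks, and local developer runs.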
All qualified applicants and current employees will not be discriminated against on the basis of race, color, religion, sex, sexual orientation, gender identity, age, national origin, disability or veteran status. Please reference Section 12: Supplemental Notice for Job Applicants in our Privacy Policy to inform you of the categories of personal information that we collect from individuals who inquire about and/or apply to work for Protegrity USA, Inc., or its parent company, subsidiaries or affiliates, and the purposes for which we use such personal information.
Posted 2 months ago
5.0 - 8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Our Company
We’re Hitachi Vantara, the data foundation trusted by the world’s innovators. Our resilient, high-performance data infrastructure means that customers – from banks to theme parks – can focus on achieving the incredible with data. If you’ve seen the Las Vegas Sphere, you’ve seen just one example of how we empower businesses to automate, optimize, innovate – and wow their customers. Right now, we’re laying the foundation for our next wave of growth. We’re looking for people who love being part of a diverse, global team – and who get excited about making a real-world impact with data.

The Role
We are looking for a skilled DevOps Engineer with a strong background in Python, GitHub administration, and Artifactory management. The ideal candidate has solid knowledge of CI/CD pipeline best practices and artifact storage management, along with proficient coding skills in Python.

What You’ll Bring
- Understand business and product needs and manage our global GitHub instance serving all our product engineering teams.
- Design, build, and execute an artifact storage strategy in a scalable and efficient manner.
- Communicate and collaborate with engineering and cross-functional teams to implement a feedback mechanism that optimizes Artifactory usage.
- Design, build, and maintain complex Python applications.
- Work closely with data engineers, software developers, and other stakeholders to integrate solutions into existing systems, with systemic feedback and continuous training and optimization.

What You Will Need
- Bachelor's degree in Engineering or equivalent, with 5-8 years of experience managing CI/CD pipelines, source code repositories, artifact storage, and software development.
- An enthusiastic learner, skilled in both the theory and practice of building and maintaining Python applications in the Django framework.
- Experience managing Artifactory and GitHub Enterprise Cloud across multiple large engineering teams.
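An artifact storage strategy like the one described usually needs a retention sweep. A minimal Python sketch, assuming a simplified view of the metadata an Artifactory AQL query could return (the artifact paths and 90-day cutoff below are hypothetical):

```python
from datetime import datetime, timedelta, timezone

def stale_artifacts(last_downloaded, max_age_days=90, now=None):
    """Return artifact paths not downloaded within max_age_days.
    `last_downloaded` maps artifact path -> last download time."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_age_days)
    return sorted(path for path, seen in last_downloaded.items() if seen < cutoff)

now = datetime(2025, 6, 1, tzinfo=timezone.utc)
print(stale_artifacts(
    {"libs/app-1.0.jar": datetime(2025, 1, 1, tzinfo=timezone.utc),
     "libs/app-2.0.jar": datetime(2025, 5, 20, tzinfo=timezone.utc)},
    now=now))  # ['libs/app-1.0.jar']
```

Feeding the selection back to engineering teams before deletion is the kind of feedback mechanism the role calls for.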
Technical Skills
- Mastery of the Python programming language.
- Proficient Linux administration skills, especially at the command line.
- Proficient with containerization technologies.
- Proficient with Grafana.

About Us
We’re a global team of innovators. Together, we harness engineering excellence and passion for insight to co-create meaningful solutions to complex challenges. We turn organizations into data-driven leaders that can make a positive impact on their industries and society. If you believe that innovation can inspire the future, this is the place to fulfil your purpose and achieve your potential.

Championing diversity, equity, and inclusion
Diversity, equity, and inclusion (DEI) are integral to our culture and identity. Diverse thinking, a commitment to allyship, and a culture of empowerment help us achieve powerful results. We want you to be you, with all the ideas, lived experience, and fresh perspective that brings. We support your uniqueness and encourage people from all backgrounds to apply and realize their full potential as part of our team.

How We Look After You
We help take care of your today and tomorrow with industry-leading benefits, support, and services that look after your holistic health and wellbeing. We’re also champions of life balance and offer flexible arrangements that work for you (role and location dependent). We’re always looking for new ways of working that bring out our best, which leads to unexpected ideas. So here, you’ll experience a sense of belonging and discover autonomy, freedom, and ownership as you work alongside talented people you enjoy sharing knowledge with. We’re proud to say we’re an equal opportunity employer and welcome all applicants for employment without attention to race, colour, religion, sex, sexual orientation, gender identity, national origin, veteran, age, disability status or any other protected characteristic.
Should you need reasonable accommodations during the recruitment process, please let us know so that we can do our best to set you up for success.
Posted 2 months ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Key Technical Competencies:
- Experience designing and building web environments on AWS, including hands-on working experience with services like EC2, ECS, ELB, RDS, S3, containers (Docker, etc.), and AWS Transfer Family.
- Experience building and maintaining cloud-native applications: application performance monitoring, DynamoDB, Route 53, Lambda, etc.
- A solid background in Linux/Unix and Windows server system administration.
- Experience using DevOps tools in a cloud environment, such as Ansible, Artifactory, Docker, GitHub, Jenkins, Kubernetes, Maven, and SonarQube.
- Experience installing and configuring application servers such as JBoss, Tomcat, and WebLogic.
- Experience using monitoring solutions like CloudWatch, the ELK Stack, and Prometheus.
- An understanding of writing Infrastructure-as-Code (IaC) using tools like CloudFormation or Terraform.
- Knowledge of one or more of the languages most used in today's cloud computing, e.g., SQL and XML for data, R and Clojure for math, Haskell and Erlang for functional programming, and Python and Go for procedural programming.
- Experience troubleshooting distributed systems.
- Exposure to AWS file-transfer services for the SFTP, FTPS, and AS2 protocols.
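Troubleshooting distributed systems, as called for above, often starts with handling transient faults gracefully. A hedged Python sketch of retry-with-exponential-backoff (the attempt count and delays are illustrative defaults, not a prescribed policy):

```python
import random
import time

def with_retries(call, attempts=4, base_delay=0.5, sleep=time.sleep):
    """Invoke `call`, retrying on ConnectionError with exponential
    backoff plus jitter; re-raise after the final attempt."""
    for attempt in range(attempts):
        try:
            return call()
        except ConnectionError:
            if attempt == attempts - 1:
                raise
            # Double the delay each attempt and add jitter so that many
            # retrying clients do not hammer a recovering service in lockstep.
            sleep(base_delay * (2 ** attempt) * (1 + random.random()))
```

Injecting `sleep` as a parameter keeps the helper unit-testable without real delays.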
Posted 2 months ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Key Technical Competencies:
- Experience designing and building web environments on AWS, including hands-on working experience with services like EC2, ECS, ELB, RDS, S3, containers (Docker, etc.), and AWS Transfer Family.
- Experience building and maintaining cloud-native applications: application performance monitoring, DynamoDB, Route 53, Lambda, etc.
- A solid background in Linux/Unix and Windows server system administration.
- Experience using DevOps tools in a cloud environment, such as Ansible, Artifactory, Docker, GitHub, Jenkins, Kubernetes, Maven, and SonarQube.
- Experience installing and configuring application servers such as JBoss, Tomcat, and WebLogic.
- Experience using monitoring solutions like CloudWatch, the ELK Stack, and Prometheus.
- An understanding of writing Infrastructure-as-Code (IaC) using tools like CloudFormation or Terraform.
- Knowledge of one or more of the languages most used in today's cloud computing, e.g., SQL and XML for data, R and Clojure for math, Haskell and Erlang for functional programming, and Python and Go for procedural programming.
- Experience troubleshooting distributed systems.
- Exposure to AWS file-transfer services for the SFTP, FTPS, and AS2 protocols.
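Monitoring solutions like those listed above usually summarize request latencies with percentiles. A small Python sketch of the nearest-rank percentile (one common definition; CloudWatch and Prometheus each use their own estimation methods):

```python
def percentile(samples, p):
    """Nearest-rank percentile: the smallest value such that at least
    p percent of the samples are <= it."""
    xs = sorted(samples)
    rank = -(-p * len(xs) // 100)  # ceil(p * n / 100) without floats
    return xs[max(rank - 1, 0)]

# With 100 evenly spread latency samples, the p95 is the 95th value.
print(percentile(range(1, 101), 95))  # 95
```

Alerting on p95 or p99 rather than the mean surfaces tail latency, which is what users of a distributed system actually feel.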
Posted 2 months ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Position Overview
Job Title: Core Technology Abacus Engineer
Location: Pune, India
Corporate Title: AVP

Role Description
DB Technology is a global team of tech specialists spread across multiple trading hubs and tech centres. We have a strong focus on promoting technical excellence – our engineers work at the forefront of financial services innovation using cutting-edge technologies. IB Core Technology was formed to engineer the best common platforms and services for the Investment Bank, supporting the demands of the business. We also lead and consult on data and architecture strategies for the division and across Technology. We are looking for an engineer to join this exciting and innovative department.

ABACUS
We are looking for an engineer to join our Abacus application team. Abacus is a critical application used across the Investment Bank to implement and enforce regulatory-driven preventative controls in the trading businesses. Abacus is also used for user authorisation. As a Java Engineer, you will work alongside other talented engineers and bring modern technologies and approaches to supporting multiple business domains across Deutsche Bank. It is a fast-paced team with huge business impact.

What We’ll Offer You
As part of our flexible scheme, here are just some of the benefits that you’ll enjoy:
- Best-in-class leave policy
- Gender-neutral parental leave
- 100% reimbursement under the childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Employee Assistance Program for you and your family members
- Comprehensive hospitalization insurance for you and your dependents
- Accident and term life insurance
- Complimentary health screening for those aged 35 and above

Your Key Responsibilities:
- Design and develop new functionality in Java and, optionally, Kotlin
- Work with the business and BAs on functional and non-functional requirements
- Co-work with the QA team on test automation of new and existing functionality
- Investigate and fix production incidents; drive stability and monitoring improvements
- Design and implement subsystems and intersystem protocols

Your Skills and Experience:
- Several years of experience with, and excellent knowledge of, Java
- Kotlin is a nice to have
- Proficiency in high-load distributed systems design: knowledge of consistency models, API types, and their tradeoffs
- Good knowledge of algorithms, complexity, and data structures
- Strong communication and problem-solving skills
- Basic knowledge of SQL and relational databases
- Basic knowledge of modern cloud technologies, e.g., Kubernetes, ArgoCD, Grafana Tanka
- Basic knowledge of Linux
- Experience with modern SDLC tools (Git, JIRA, Artifactory, Jenkins) is a plus

How We’ll Support You
- Training and development to help you excel in your career
- Coaching and support from experts in your team
- A culture of continuous learning to aid progression
- A range of flexible benefits that you can tailor to suit your needs

About Us And Our Teams
Please visit our company website for further information: https://www.db.com/company/company.htm
We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
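Preventative controls of the kind Abacus enforces are often limit- or rate-based. As an illustrative sketch only (not Abacus internals, and in Python rather than the Java the role uses), a token-bucket check that caps request bursts:

```python
class TokenBucket:
    """Allow at most `capacity` burst requests, refilled at
    `refill_per_sec` tokens per second (parameters are illustrative)."""

    def __init__(self, capacity, refill_per_sec):
        self.capacity = float(capacity)
        self.tokens = float(capacity)
        self.refill = float(refill_per_sec)
        self.last = 0.0

    def allow(self, now):
        """Return True if a request at time `now` (seconds) may proceed."""
        # Credit tokens for the time elapsed since the last request,
        # never exceeding the bucket's capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.refill)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

Passing the clock in explicitly (rather than reading it inside `allow`) makes the control deterministic under test, a useful property when the control itself is subject to regulatory scrutiny.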
Posted 2 months ago
8.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About The Role
Grade Level (for internal use): 11

We are looking for a Cloud/DevOps Engineer to join the KY3P team to manage and automate custodian policies and systems administration in the AWS Cloud environment. The role offers extensive technical challenges in a highly dynamic and collaborative work environment. A passion for quality and a sense of pride in your work are an absolute must for the role. You will build solutions to migrate services and automate resource provisioning and administration of infrastructure in AWS Cloud for KY3P applications.

What You'll Work On:
- Create DevOps pipelines to deliver Infrastructure as Code.
- Build workflows to create immutable infrastructure in AWS using Terraform.
- Develop automation for provisioning compute instances and storage.
- Provision resources in AWS using CloudFormation templates and orchestrate container deployment.
- Configure security groups, roles, and IAM policies in AWS.
- Monitor infrastructure and develop utilization reports.
- Implement and maintain version control systems, configuration management tools, and other DevOps-related technologies.
- Design and implement automation tools and frameworks for continuous integration, delivery, and deployment.
- Develop and write scripts for pipeline automation using relevant languages such as Groovy and YAML.
- Configure continuous delivery workflows for various environments, e.g., development, staging, production.
- Evaluate new AWS services and solutions.
- Integrate application build and deployment scripts with GitHub.
- Create comprehensive documentation and provide technical guidance.
- Effectively interact with global customers, business users, and IT employees.

What We Look For:
- B.Tech./M.Tech./MCA degree in IT, Computer Science, or a related course is a prerequisite.
- 8+ years of hands-on professional experience in infrastructure engineering and automation.
- Experience in AWS Cloud systems administration.
- Excellent communication skills and the ability to thrive in both team-based and independent environments.

What You Need To Get The Job Done
- A minimum of 8+ years of industry experience in cloud and infrastructure.
- Expertise with DevOps tools such as Terraform, GitHub, and Artifactory.
- Cloud engineering certifications (AWS, Terraform) are desirable.
- Deep understanding of networking and application-architecture needs for system migrations.
- Proficiency in scripting languages: Python, PowerShell, Bash.
- Ability to evaluate new AWS services and solutions.
- Experience working with customers to diagnose a problem and work toward resolution.
- Excellent verbal and written communication skills.

About S&P Global Market Intelligence
At S&P Global Market Intelligence, a division of S&P Global, we understand the importance of accurate, deep and insightful information. Our team of experts delivers unrivaled insights and leading data and technology solutions, partnering with customers to expand their perspective, operate with confidence, and make decisions with conviction. For more information, visit www.spglobal.com/marketintelligence.

What’s In It For You?

Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology – the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: We're more than 35,000 strong worldwide – so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all.
From finding new ways to measure sustainability, to analyzing energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it, we are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership
At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you – and your career – need to thrive at S&P Global.

Our Benefits Include:
- Health & Wellness: Health care coverage designed for the mind and body.
- Flexible Downtime: Generous time off helps keep you energized for your time on.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards – small perks can make a big difference.
For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring And Opportunity At S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf IFTECH202.2 - Middle Professional Tier II (EEO Job Group) Job ID: 315707 Posted On: 2025-05-27 Location: Noida, Uttar Pradesh, India
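Configuring security groups and IAM policies, as this role's "What You'll Work On" section describes, pairs naturally with automated policy checks. A hedged Python sketch (the rule dictionaries are a simplified stand-in for what a `describe-security-groups` call returns, and the sensitive-port policy is invented for illustration):

```python
import ipaddress

SENSITIVE_PORTS = {22, 3389}  # SSH and RDP; an illustrative policy choice

def world_open_rules(rules):
    """Flag ingress rules exposing sensitive ports to the entire
    internet (a 0.0.0.0/0 or ::/0 source)."""
    flagged = []
    for rule in rules:
        net = ipaddress.ip_network(rule["cidr"])
        if rule["port"] in SENSITIVE_PORTS and net.prefixlen == 0:
            flagged.append(rule)
    return flagged

print(world_open_rules([
    {"port": 22, "cidr": "0.0.0.0/0"},   # flagged: SSH open to the world
    {"port": 443, "cidr": "0.0.0.0/0"},  # public web traffic, allowed here
    {"port": 22, "cidr": "10.0.0.0/8"},  # internal-only SSH
]))  # [{'port': 22, 'cidr': '0.0.0.0/0'}]
```

Running a check like this inside a CI pipeline turns security-group review from a periodic audit into a continuous gate.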
Posted 2 months ago
8.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
About The Role
Grade Level (for internal use): 11

We are looking for a Cloud/DevOps Engineer to join the KY3P team to manage and automate custodian policies and systems administration in the AWS Cloud environment. The role offers extensive technical challenges in a highly dynamic and collaborative work environment. A passion for quality and a sense of pride in your work are an absolute must for the role. You will build solutions to migrate services and automate resource provisioning and administration of infrastructure in AWS Cloud for KY3P applications.

What You'll Work On:
- Create DevOps pipelines to deliver Infrastructure as Code.
- Build workflows to create immutable infrastructure in AWS using Terraform.
- Develop automation for provisioning compute instances and storage.
- Provision resources in AWS using CloudFormation templates and orchestrate container deployment.
- Configure security groups, roles, and IAM policies in AWS.
- Monitor infrastructure and develop utilization reports.
- Implement and maintain version control systems, configuration management tools, and other DevOps-related technologies.
- Design and implement automation tools and frameworks for continuous integration, delivery, and deployment.
- Develop and write scripts for pipeline automation using relevant languages such as Groovy and YAML.
- Configure continuous delivery workflows for various environments, e.g., development, staging, production.
- Evaluate new AWS services and solutions.
- Integrate application build and deployment scripts with GitHub.
- Create comprehensive documentation and provide technical guidance.
- Effectively interact with global customers, business users, and IT employees.

What We Look For:
- B.Tech./M.Tech./MCA degree in IT, Computer Science, or a related course is a prerequisite.
- 8+ years of hands-on professional experience in infrastructure engineering and automation.
- Experience in AWS Cloud systems administration.
- Excellent communication skills and the ability to thrive in both team-based and independent environments.

What You Need To Get The Job Done
- A minimum of 8+ years of industry experience in cloud and infrastructure.
- Expertise with DevOps tools such as Terraform, GitHub, and Artifactory.
- Cloud engineering certifications (AWS, Terraform) are desirable.
- Deep understanding of networking and application-architecture needs for system migrations.
- Proficiency in scripting languages: Python, PowerShell, Bash.
- Ability to evaluate new AWS services and solutions.
- Experience working with customers to diagnose a problem and work toward resolution.
- Excellent verbal and written communication skills.

About S&P Global Market Intelligence
At S&P Global Market Intelligence, a division of S&P Global, we understand the importance of accurate, deep and insightful information. Our team of experts delivers unrivaled insights and leading data and technology solutions, partnering with customers to expand their perspective, operate with confidence, and make decisions with conviction. For more information, visit www.spglobal.com/marketintelligence.

What’s In It For You?

Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology – the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: We're more than 35,000 strong worldwide – so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all.
From finding new ways to measure sustainability, to analyzing energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it, we are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership
At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you – and your career – need to thrive at S&P Global.

Our Benefits Include:
- Health & Wellness: Health care coverage designed for the mind and body.
- Flexible Downtime: Generous time off helps keep you energized for your time on.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards – small perks can make a big difference.
For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring And Opportunity At S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf IFTECH202.2 - Middle Professional Tier II (EEO Job Group) Job ID: 315707 Posted On: 2025-05-27 Location: Noida, Uttar Pradesh, India
Posted 2 months ago
5.0 years
0 Lacs
Mauganj, Madhya Pradesh, India
On-site
Join us as we pursue our ground-breaking vision to make machine data accessible, usable, and valuable to everyone. We are a company filled with people who are passionate about our product and seek to deliver the best experience for our customers. At Splunk, we are committed to our work, customers, having fun, and most significantly to each other’s success. The Splunk Observability Cloud provides full-fidelity monitoring and troubleshooting across infrastructure, applications, and user interfaces, in real time and at any scale, to help our customers keep their services reliable, innovate faster, and deliver great customer experiences. Infrastructure Software Engineers at Splunk are cloud-native systems engineers who use infrastructure-as-code, microservices, automation, and efficient design to build, operate, and scale our products.

Role
You will help us run one of the largest and most sophisticated cloud-scale, big data, and microservices platforms in the world. You will be responsible for maintaining and provisioning the base infrastructure that runs Splunk Observability Cloud, including cloud compute platforms, managed services, Kubernetes, and required tooling. You will be a key part of the team that strives to offer a fully automated, self-serve, secure, performant, compliant, and cost-efficient Platform-as-a-Service that follows cloud-native best practices and empowers product teams to easily and quickly deploy and operate workloads. You are passionate about automation, infrastructure-as-code, microservices, and getting rid of tedious, manual tasks.

You Will
- Design new services, tools, and monitoring to be implemented by the entire team.
- Analyze the tradeoffs of a proposed design and make recommendations based on them.
- Mentor new engineers to achieve more than they thought possible. You enjoy making other teams successful and are fulfilled through the success of others.
You Will Work On Infrastructure Projects, Including:
- Adopting new cloud-native frameworks and services
- Automating cloud provider infrastructure via Terraform, Kubernetes, and Helm
- Developing code for tools and automation to reduce manual tasks and human error
- Establishing and documenting runbooks and guidelines for using the multi-cloud infrastructure and microservices platform
- Improving the resiliency of the multi-cloud microservices platform
- Networking, routing, and load balancing
- Security vulnerability remediation and patching automation
- Automating deployment of our services in new provider zones/regions
- Designing and productionizing access tiers to provide appropriate permissions across roles

Qualifications

Must-Have:
- 5+ years of solid hands-on cloud infrastructure experience on public cloud platforms, specifically AWS or GCP.
- 3+ years of strong hands-on experience deploying, managing, and monitoring large-scale Kubernetes clusters in the public cloud.
- Experience with Infrastructure-as-Code using Terraform and/or Helm.
- Experience with infrastructure automation and scripting using Python and/or Bash.
- Knowledge of microservices fundamentals, including service mesh using Istio, service discovery, deployment strategies, monitoring, scheduling, and load balancing.
- Excellent problem-solving, triaging, and debugging skills in large-scale distributed systems.

Preferred:
- AWS Solutions Architect certification.
- CKA and HashiCorp Certified: Terraform Associate certifications.
- Experience with Infrastructure-as-Code tools such as Terraform, CloudFormation, Google Deployment Manager, Packer, Pulumi, ARM, etc.
- Experience developing infrastructure or platform services using Go or Python.
- Experience with CI/CD frameworks and Pipeline-as-Code tools such as Jenkins, Spinnaker, GitLab, Argo, and Artifactory.
- Exposure to monitoring tools such as Splunk, Prometheus, Grafana, and the ELK stack, in order to build observability for large-scale microservice deployments.
- Proven ability to work effectively across teams and functions to influence the design, operations, and deployment of highly available software.
- Bachelor's/Master's in Computer Science, Engineering, or a related technical field, or equivalent practical experience.

We value diversity, equity, and inclusion at Splunk and are an equal employment opportunity employer. Qualified applicants receive consideration for employment without regard to race, religion, color, national origin, ancestry, sex, gender, gender identity, gender expression, sexual orientation, marital status, age, physical or mental disability or medical condition, genetic information, veteran status, or any other consideration made unlawful by federal, state, or local laws. We consider qualified applicants with criminal histories, consistent with legal requirements.
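Runbooks for large Kubernetes fleets, as described in this role, often automate routine health checks. A minimal Python sketch that scans `kubectl get nodes -o json`-style output for nodes that are not Ready (field paths follow the Kubernetes API; the cluster data in the example is fabricated):

```python
import json

def not_ready_nodes(nodes_json):
    """Return names of nodes whose Ready condition is not 'True'."""
    doc = json.loads(nodes_json)
    bad = []
    for item in doc.get("items", []):
        # Each node reports a list of conditions; index them by type.
        conditions = {c["type"]: c["status"] for c in item["status"]["conditions"]}
        if conditions.get("Ready") != "True":
            bad.append(item["metadata"]["name"])
    return bad
```

A check like this can feed an alerting pipeline or gate an automated remediation step, replacing a manual `kubectl` triage loop.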
Posted 2 months ago
7.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description Job Purpose ICE Mortgage Technology is the foundation of our success as we streamline, revitalize, and transform industries. Our cutting-edge technology creates opportunity for our customers - and for our people. As the largest mortgage ecosystem, we've digitized and streamlined the entire mortgage process for lenders and home buyers, reducing the cost of home ownership. The ICE PPE Engineering team is seeking a senior engineer with extraordinary technical skills and a real passion for innovation to help us develop the next-generation mortgage automation solution. This position involves building highly scalable and robust software in a polyglot environment, working cross-functionally to define, evaluate, and carry out technical solutions, as well as designing and implementing technical solutions to meet business and market requirements. Responsibilities The Senior Software Engineer acts as a technical lead to develop robust, best-in-class software for the Enterprise Product and Pricing System, working on software development projects from initial design through testing, with attention to detail. This role requires solid experience in emerging and traditional technologies including Node.js, React, .NET, C#, REST, JSON, XML, HTML/HTML5, CSS, relational databases, and AWS/cloud infrastructure. As a senior engineer you will play an integral role in ensuring that ICE designs, implements, and maintains secure coding practices to the highest security standards. In addition, this role includes: Product Development - Support the Software Development Lifecycle from design review through testing. Agile Methodology - Lead software enhancements, defect corrections, and integration of features through incremental releases using agile principles.
Secure Design - Work with the team to establish security requirements early in the SDLC and contribute security subject-matter expertise during the development of new projects and releases. Tools Management - Focus on automation while implementing, maintaining, and integrating cutting-edge technologies to ensure software is scalable with optimal performance. Developer Growth - Write sustainable software by ensuring all functionality and features have detailed documentation. Design innovative software solutions to improve performance and scalability. Work effectively in a team environment, as well as cross-functionally. Knowledge And Experience Experience designing and developing enterprise software, including microservices. Knowledge of mortgage pricing processes and principles. Experience with REST architectural patterns and building RESTful services. Experience with ORM frameworks and relational databases, including writing complex SQL queries. Deep knowledge of industry standards and best practices for large, complex platforms and software. Experience with Git version control systems, with some familiarity with TFS. Experience supporting CI/CD pipelines utilizing Jenkins, Artifactory, and similar toolsets. Master's degree in Computer Science, Engineering, MIS, CIS, or equivalent experience. 7+ years of enterprise software development experience.
Posted 2 months ago
4.0 - 8.0 years
10 - 14 Lacs
Noida
Work from Office
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. Primary Responsibility Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). 
The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications
- B.E / MCA / B.Tech / M.Tech / MS graduation (minimum 16 years of formal education; correspondence courses are not relevant)
- 2+ years of experience with Azure database offerings like SQL DB and Postgres DB, constructing data pipelines using Azure Data Factory, and design and development of analytics using Azure Databricks and Snowpark
- 2+ years of experience with cloud-based DWs: Snowflake, Azure SQL DW
- 2+ years of experience in data engineering and working on large data warehouses, including design and development of ETL / ELT
- 3+ years of experience constructing large and complex SQL queries on terabytes of warehouse database systems
- Good knowledge of Agile practices - Scrum, Kanban
- Knowledge of Kubernetes, Jenkins, CI/CD pipelines, SonarQube, Artifactory, Git, unit testing
- Main tech experience: Docker, Kubernetes, and Kafka
- Database: Azure SQL databases
- Knowledge of Apache Kafka and data streaming
- Main tech experience: Terraform and Azure
- Ability to identify system changes and verify that technical system specifications meet the business requirements
- Solid problem-solving and analytical skills
- Proven good communication and presentation skills
- Proven good attitude and self-motivated

Preferred Qualifications
- 2+ years of experience working with cloud-native monitoring and logging tools like Log Analytics
- 2+ years of experience with scheduling tools on cloud, using Apache Airflow, Logic Apps, or any native/third-party cloud scheduling tool
- Exposure to ATDD, Fortify, SonarQube
- Unix scripting, DW concepts, ETL frameworks: Scala / Spark, DataStage

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone.
We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
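Several of the qualifications above center on complex aggregate SQL over warehouse tables. As a small, self-contained illustration (the table, columns, and data are invented, and sqlite3 stands in for a warehouse engine), the same GROUP BY / HAVING shape applies at terabyte scale in Snowflake or Azure SQL DW:

```python
import sqlite3

# In-memory stand-in for a warehouse table (names are illustrative only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE claims (member_id TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO claims VALUES (?, ?)",
    [("m1", 100.0), ("m1", 250.0), ("m2", 40.0), ("m3", 300.0), ("m3", 60.0)],
)

# Aggregate spend per member, keeping only members above a threshold.
rows = conn.execute(
    """
    SELECT member_id, SUM(amount) AS total
    FROM claims
    GROUP BY member_id
    HAVING total > 100
    ORDER BY total DESC
    """
).fetchall()
```

The filtering happens in HAVING rather than WHERE because the threshold applies to the aggregate, not to individual rows.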
Posted 2 months ago
2.0 - 6.0 years
6 - 11 Lacs
Noida
Work from Office
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. Primary Responsibility Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). 
The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications
- B.E / MCA / B.Tech / M.Tech / MS graduation (minimum 16 years of formal education; correspondence courses are not relevant)
- 2+ years of experience with Azure database offerings like SQL DB and Postgres DB, constructing data pipelines using Azure Data Factory, and design and development of analytics using Azure Databricks and Snowpark
- 3+ years of experience constructing large and complex SQL queries on terabytes of warehouse database systems
- 2+ years of experience with cloud-based DWs: Snowflake, Azure SQL DW
- 2+ years of experience in data engineering and working on large data warehouses, including design and development of ETL / ELT
- Good knowledge of Agile practices - Scrum, Kanban
- Knowledge of Kubernetes, Jenkins, CI/CD pipelines, SonarQube, Artifactory, Git, unit testing
- Main tech experience: Docker, Kubernetes, and Kafka
- Database: Azure SQL databases
- Knowledge of Apache Kafka and data streaming
- Main tech experience: Terraform and Azure
- Ability to identify system changes and verify that technical system specifications meet the business requirements
- Solid problem-solving and analytical skills
- Proven good communication and presentation skills
- Proven good attitude and self-motivated

Preferred Qualifications
- 2+ years of experience working with cloud-native monitoring and logging tools like Log Analytics
- 2+ years of experience with scheduling tools on cloud, using Apache Airflow, Logic Apps, or any native/third-party cloud scheduling tool
- Exposure to ATDD, Fortify, SonarQube
- Unix scripting, DW concepts, ETL frameworks: Scala / Spark, DataStage

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone.
We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
Posted 2 months ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Join us as a Software Engineer

This is an opportunity for a driven Software Engineer to take on an exciting new career challenge. Day-to-day, you'll build a wide network of stakeholders of varying levels of seniority. It's a chance to hone your existing technical skills and advance your career. We're offering this role at associate vice president level.

What you'll do
In your new role, you'll engineer and maintain innovative, customer-centric, high-performance, secure and robust solutions. You'll be working within a feature team and using your extensive experience to engineer software, scripts and tools that are often complex, as well as liaising with other engineers, architects and business analysts across the platform.

You'll also be:
- Producing complex and critical software rapidly and of high quality which adds value to the business
- Working in permanent teams who are responsible for the full life cycle, from initial development, through enhancement and maintenance to replacement or decommissioning
- Collaborating to optimise our software engineering capability
- Designing, producing, testing and implementing our working code
- Working across the life cycle, from requirements analysis and design, through coding to testing, deployment and operations

The skills you'll need
You'll need a background in software engineering, software design, architecture, and an understanding of how your area of expertise supports our customers. You'll also need:
- At least 8 years' experience in software development, with any BPM tool (preferably Camunda)
- Experience in multiple programming languages and frameworks such as Java, Spring Boot (Data, Integration, Web), JPA, Camunda, jBPM, Activiti
- Experience of single-page applications, microservice development, and cloud development - Cloud Foundry, AWS
- A background in Git command line and Bitbucket or Stash UI, Artifactory, JIRA, Confluence
- Experience of Domain-Driven Design and proficiency in JavaScript and React
Posted 2 months ago
4.0 - 9.0 years
17 - 22 Lacs
Bengaluru
Work from Office
Job Summary: We are seeking a highly skilled Site Reliability Engineer (SRE) to join our team in Bangalore. The ideal candidate will excel in implementing SRE principles to foster a culture of reliability, automation, and monitoring across our software engineering projects. This role is pivotal in ensuring the effective design, development, testing, and support of applications and systems, particularly within cloud environments.

Software Requirements:
Required Proficiency:
- Programming Languages: TypeScript, Node.js
- Cloud Environments: AWS (ECS Fargate, Vault, Lambda services, Artifactory)
- CI/CD Tools: GitHub Actions, JFrog Artifactory, Sysdig, Octopus, Terraform
- Observability Tools: ObStack, Prometheus, Grafana, PagerDuty, Observe
- Infrastructure as Code (IaC) Tools: CloudFormation, Terraform
Preferred Proficiency:
- Familiarity with additional programming languages or frameworks
- Experience with cloud platforms other than AWS

Overall Responsibilities:
- Partner with senior stakeholders to lead a culture focused on data-driven reliability, monitoring, and automation in alignment with SRE principles.
- Design, develop, test, and support applications and systems, emphasizing managing and scaling distributed systems across cloud environments.
- Create and develop tools essential for the operational management and security of software applications and systems.
- Identify technology limitations and deficiencies in existing systems and implement scalable improvements.
- Drive automation efforts and enhance application monitoring capabilities.
- Review code developed by other engineers to ensure adherence to best practices.
- Thrive in incident response environments, conducting post-mortem analyses and designing secure solutions.
- Measure and optimize system performance, addressing customer needs and innovating for continuous improvement.
Technical Skills (By Category):
- Programming Languages - Required: TypeScript, Node.js
- Cloud Technologies - Required: AWS (ECS Fargate, Lambda, Vault, Artifactory)
- Development Tools and Methodologies - Required: GitHub Actions, JFrog Artifactory, Sysdig, Octopus, Terraform
- Observability Tools - Required: ObStack, Prometheus, Grafana, PagerDuty, Observe
- Infrastructure as Code (IaC) - Required: CloudFormation, Terraform

Experience Requirements:
- 7 to 10 years of experience in software engineering and SRE practices
- Experience applying SRE practices in large organizations
- Familiarity with modern software development practices and DevSecOps environments

Day-to-Day Activities:
- Collaborate with stakeholders to understand business needs and implement SRE practices.
- Lead cross-functional teams in enhancing system reliability and performance.
- Develop and maintain operational management tools for applications.
- Conduct regular code reviews and ensure adherence to best practices.
- Participate in incident response and post-mortem analysis to improve system resilience.

Qualifications:
Required:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field
- Commitment to continuous professional development through industry certifications and training

Professional Competencies:
- Strong critical thinking and problem-solving skills
- Excellent leadership and teamwork abilities
- Effective communication and stakeholder management skills
- Adaptability and a learning-oriented mindset
- Innovative thinking to drive continuous improvement
- Strong time and priority management skills
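The data-driven reliability focus described above is typically quantified with SLOs and error budgets. A minimal, tool-agnostic sketch in Python (the arithmetic is the standard SLO formulation; the function name and inputs are illustrative, not tied to any of the tools listed):

```python
def error_budget_remaining(total_requests: int, failed_requests: int, slo: float) -> float:
    """Fraction of the error budget still unspent for an availability SLO.

    With an SLO of 99.9%, the budget is 0.1% of requests; spending is the
    observed failure ratio divided by that budget. 1.0 means the budget is
    untouched, 0.0 means exhausted, and negative means the SLO is violated.
    """
    budget = 1.0 - slo
    if budget <= 0:
        raise ValueError("SLO must be below 1.0")
    spent = (failed_requests / total_requests) / budget
    return 1.0 - spent


# 500 failures out of 1,000,000 requests against a 99.9% SLO
# leaves roughly half the error budget.
remaining = error_budget_remaining(1_000_000, 500, 0.999)
```

In practice the failure counts would come from a metrics backend such as Prometheus, and the remaining budget would drive release-gating or paging decisions.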
Posted 2 months ago
170.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About Birlasoft: Birlasoft, a global leader at the forefront of Cloud, AI, and Digital technologies, seamlessly blends domain expertise with enterprise solutions. The company's consultative and design-thinking approach empowers societies worldwide, enhancing the efficiency and productivity of businesses. As part of the multibillion-dollar diversified CKA Birla Group, Birlasoft, with its 12,000+ professionals, is committed to continuing the Group's 170-year heritage of building sustainable communities.

1. Job Title - Platform Engineer
2. Location: Pune/Bangalore
3. Educational Background - BE/BTech
4. Key Responsibilities - Good knowledge of:
- Kubernetes platforms (Red Hat OpenShift, ARO, AKS, EKS, etc.)
- Red Hat/Istio Service Mesh
- Jenkins, GitHub, ArgoCD, Azure DevOps
- Artifactory
- Azure services
- Application servers (JBoss, JWS)
- Web servers (Apache, Nginx)
- Good experience in Docker and Podman
- Good knowledge and experience in Terraform, Ansible, and Python
Posted 2 months ago
170.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About Birlasoft: Birlasoft, a global leader at the forefront of Cloud, AI, and Digital technologies, seamlessly blends domain expertise with enterprise solutions. The company's consultative and design-thinking approach empowers societies worldwide, enhancing the efficiency and productivity of businesses. As part of the multibillion-dollar diversified CKA Birla Group, Birlasoft, with its 12,000+ professionals, is committed to continuing the Group's 170-year heritage of building sustainable communities.

1. Job Title - DevOps Engineer
2. Location: Pune/Bangalore
3. Educational Background - BE/BTech
4. Key Responsibilities -
- Experience writing scripts in Helm, Groovy, Python, PowerShell, MSBuild, and Terraform.
- Extensive hands-on experience integrating Maven, Gradle, and package managers like NPM and JSPM into application builds, and troubleshooting build issues.
- Prior experience with various technical container stacks: Apache, JBoss, Node modules, Tomcat, Spring Boot, Docker, and Red Hat container tools like Skopeo, Buildah, and Podman.
- Experience administering and working with Azure DevOps, GitHub/Bitbucket/GitLab, Jenkins, SonarQube, JFrog Artifactory/Nexus, and SecOps tools like Snyk is required.
- Strong knowledge of container orchestration platforms like Kubernetes, OpenShift, AKS.
- Knowledge of Dynatrace, Prometheus, Grafana, and Kiali is a plus.
- Experience working with the Ansible Automation Platform would be a plus.
- Ability to work in a fast-paced environment, frequently collaborating with multiple teams for solution support, debugging, and troubleshooting.
Posted 2 months ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Responsibilities:
• Design, implement, and maintain secure CI/CD pipelines using Jenkins, GitHub Actions, and other automation tools.
• Integrate DevSecOps best practices into the software development process, focusing on secure code delivery and deployment.
• Implement and manage security tools such as SonarQube, Fortify, Prisma Cloud, or similar for static and dynamic code analysis.
• Administer artifact repositories like Nexus or JFrog Artifactory to manage build dependencies and artifacts.
• Collaborate with development, security, and operations teams to streamline secure software delivery processes.
• Develop custom integrations between CI/CD tools and security scanning tools, ensuring compliance with enterprise security policies.
• Leverage AWS services (EC2, S3, IAM, Lambda, CloudFormation, etc.) to support infrastructure automation and deployment.
• Monitor pipeline health and automate feedback loops to improve quality, security, and efficiency.
• Champion infrastructure-as-code and configuration management using tools like Terraform, Ansible, or CloudFormation.
• Act as a key contributor within the Enterprise CI/CD Engineering team, contributing to platform evolution, best practices, and governance.

Required Skills:
• Strong experience in a DevOps/DevSecOps engineering role.
• Strong expertise with Jenkins, GitHub, SonarQube, Nexus/JFrog, and CI/CD pipeline automation.
• Hands-on experience with AWS cloud services and infrastructure as code (IaC).
• Proficiency with integrating and managing security tools like Fortify, Prisma Cloud, Snyk, or similar.
• Experience creating custom scripts and integrations using Python, Bash, Groovy, or equivalent languages.
• Solid understanding of containerization and orchestration tools (Docker; Kubernetes is a plus).
• Strong problem-solving skills and experience in troubleshooting complex CI/CD or security integration issues.
• Prior experience working in or supporting an Enterprise CI/CD Engineering team is highly desirable.
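Pipelines like the ones described often finish with a policy gate that fails the build when scanner findings exceed agreed thresholds. A hedged, tool-agnostic sketch in Python follows; the severity names and limits are assumptions, not any particular scanner's schema:

```python
def evaluate_gate(findings: dict[str, int], limits: dict[str, int]) -> tuple[bool, list[str]]:
    """Return (passed, violations) for a security quality gate.

    findings maps severity -> count from a scanner report; limits maps
    severity -> maximum allowed count. Severities absent from limits are
    not enforced.
    """
    violations = [
        f"{sev}: {findings.get(sev, 0)} found, limit {cap}"
        for sev, cap in limits.items()
        if findings.get(sev, 0) > cap
    ]
    return (not violations, violations)


# Block on any critical finding, tolerate up to five highs (illustrative policy).
passed, why = evaluate_gate(
    {"critical": 1, "high": 3, "medium": 12},
    {"critical": 0, "high": 5},
)
```

In a real pipeline the findings dict would be parsed from a scanner's report output, and a False result would set a non-zero exit code so the CI stage fails.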
Posted 2 months ago
8.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
About the Role: Grade Level (for internal use): 11

Lead Cloud/DevOps Engineer - We are looking for a Cloud/DevOps Engineer to join the KY3P team to manage and automate custodian policies and systems administration in the AWS Cloud environment. The role offers extensive technical challenges in a highly dynamic and collaborative work environment. A passion for quality and a sense of pride in your work are an absolute must for the role. You will build solutions to migrate services and automate resource provisioning and administration of infrastructure in AWS Cloud for KY3P applications.

What you'll work on:
- Create DevOps pipelines to deliver Infrastructure as Code.
- Build workflows to create immutable infrastructure in AWS using Terraform.
- Develop automation for provisioning compute instances and storage.
- Provision resources in AWS using CloudFormation templates and orchestrate container deployment.
- Configure Security Groups, Roles & IAM Policy in AWS.
- Monitor infrastructure and develop utilization reports.
- Implement and maintain version control systems, configuration management tools, and other DevOps-related technologies.
- Design and implement automation tools and frameworks for continuous integration, delivery, and deployment.
- Develop and write scripts for pipeline automation using relevant scripting languages like Groovy and YAML.
- Configure continuous delivery workflows for various environments, e.g., development, staging, production.
- Evaluate new AWS services and solutions.
- Integrate application build & deployment scripts with GitHub.
- Create comprehensive documentation and provide technical guidance.
- Effectively interact with global customers, business users and IT employees.

What we look for:
- B.Tech / M.Tech / MCA degree in IT / Computer Science or a related course is a prerequisite.
- 8+ years of hands-on professional experience in infrastructure engineering and automation.
- Experience in AWS Cloud systems administration.
- Excellent communication skills and ability to thrive in both team-based and independent environments.

What You Need to Get the Job Done:
- A minimum of 8+ years of industry experience in cloud and infrastructure.
- Expertise in using DevOps tools such as Terraform, GitHub, Artifactory, etc.
- Cloud engineering certifications (AWS, Terraform) are desirable.
- Deep understanding of networking and application architecture needs for system migrations.
- Proficiency in scripting languages: Python, PowerShell, Bash.
- Experience evaluating new AWS services and solutions.
- Experience working with customers to diagnose a problem and work toward resolution.
- Excellent verbal and written communication skills.

About S&P Global Market Intelligence
At S&P Global Market Intelligence, a division of S&P Global, we understand the importance of accurate, deep and insightful information. Our team of experts delivers unrivaled insights and leading data and technology solutions, partnering with customers to expand their perspective, operate with confidence, and make decisions with conviction. For more information, visit www.spglobal.com/marketintelligence.

What's In It For You?
Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology - the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: We're more than 35,000 strong worldwide - so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all.
From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values: Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you - and your career - need to thrive at S&P Global. Our benefits include: Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. ----------------------------------------------------------- Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf ----------------------------------------------------------- IFTECH202.2 - Middle Professional Tier II (EEO Job Group) Job ID: 315707 Posted On: 2025-05-27 Location: Noida, Uttar Pradesh, India
Posted 2 months ago
8.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
Lead Cloud/DevOps Engineer Noida, India; Gurgaon, India Information Technology 315707 Job Description About The Role: Grade Level (for internal use): 11 About the Role: We are looking for a Cloud/DevOps Engineer to join the KY3P team, to manage and automate custodian policies and systems administration in the AWS Cloud environment. The role offers extensive technical challenges in a highly dynamic and collaborative work environment. A passion for quality and a sense of pride in your work are an absolute must for the role. You will build solutions to migrate services, automate resource provisioning and administration of infrastructure in AWS Cloud for KY3P applications. What you'll work on: Create DevOps pipelines to deliver Infrastructure as Code. Build workflows to create immutable Infrastructure in AWS using Terraform. Develop automation for provisioning compute instances and storage. Provision resources in AWS using Cloud Formation Templates and Orchestrate container deployment. Configure Security Groups, Roles & IAM Policy in AWS. Monitor infrastructure and develop utilization reports. Implementing and maintaining version control systems, configuration management tools, and other DevOps-related technologies. Designing and implementing automation tools and frameworks for continuous integration, delivery, and deployment. Develop and write scripts for pipeline automation using relevant scripting languages like Groovy, YAML. Configure continuous delivery workflows for various environments e.g., development, staging, production. Evaluate new AWS services and solutions. Integrate application build & deployments scripts with GitHub. Create comprehensive documentation and provide technical guidance. Effectively interact with global customers, business users and IT employees What we look for : B Tech./ M Tech / MCA degree in an IT/ Computer Science or related course is a prerequisite. 
8+ years of hands-on professional experience in Infrastructure Engineering and automation Experience in AWS Cloud systems administration. Excellent communication skills and ability to thrive in both team-based and independent environments. What You Need to Get the Job Done Candidates should have a minimum of 8+ years industry experience in cloud and Infrastructure. Expertise in using DevOps tools Terraform, GitHub, Artifactory etc. Cloud engineering certifications (AWS, Terraform) are desirable. Deep understanding of networking and application architecture needs for system migrations. Proficiency in scripting languages: Python, PowerShell, Bash. Evaluate new AWS services and solutions Experience working with customers to diagnose a problem, and work toward resolution. Excellent verbal and written communication skills About S&P Global Market Intelligence At S&P Global Market Intelligence, a division of S&P Global we understand the importance of accurate, deep and insightful information. Our team of experts delivers unrivaled insights and leading data and technology solutions, partnering with customers to expand their perspective, operate with confidence, and make decisions with conviction. For more information, visit www.spglobal.com/marketintelligence. What’s In It For You? Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. 
Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability, to analyzing the energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it, we are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people, so we provide everything you and your career need to thrive at S&P Global. Our benefits include:
Health & Wellness: Health care coverage designed for the mind and body.
Flexible Downtime: Generous time off helps keep you energized for your time on.
Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.

For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Equal Opportunity Employer: S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

IFTECH202.2 - Middle Professional Tier II (EEO Job Group)
Job ID: 315707
Posted On: 2025-05-27
Location: Noida, Uttar Pradesh, India
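As an aside on the CloudFormation provisioning work the posting above describes, a template is ultimately just structured JSON/YAML, so provisioning automation often assembles it programmatically. Below is a minimal illustrative sketch in Python; the function name, security-group logical ID, port, and CIDR range are hypothetical examples, not details from the posting:

```python
import json


def security_group_template(group_name: str, ingress_cidr: str, port: int) -> dict:
    """Build a minimal CloudFormation template containing one security group.

    All identifiers and values here are illustrative, not tied to a real stack.
    """
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "AppSecurityGroup": {
                "Type": "AWS::EC2::SecurityGroup",
                "Properties": {
                    "GroupDescription": f"Ingress rules for {group_name}",
                    "SecurityGroupIngress": [
                        {
                            "IpProtocol": "tcp",
                            "FromPort": port,
                            "ToPort": port,
                            "CidrIp": ingress_cidr,
                        }
                    ],
                },
            }
        },
    }


# Example: render a template allowing HTTPS from a private network range.
template_json = json.dumps(security_group_template("ky3p-app", "10.0.0.0/16", 443), indent=2)
print(template_json)
```

The rendered JSON could then be handed to the CloudFormation API or checked into version control as part of an Infrastructure-as-Code pipeline.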
Posted 2 months ago
5.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
We are currently searching for a skilled Lead Python Developer to enhance our team. The selected candidate will be tasked with upgrading and securing our Python environment, and with managing and advancing Python tools and libraries across multiple platforms.

Responsibilities:
Evaluate the present usage of Python within our organization.
Work together with Enterprise Architecture to define and record the extent of Python applications.
Develop plans to establish a contemporary, secure Python environment featuring packaging and deployment on platforms such as Kubernetes, Windows, and Mac.
Collaborate with the Security team on Python environment deployment, configuration, and system scans.
Pursue a secure method for handling Python libraries.
Participate in the creation, evaluation, and modification of standards and documentation.
Analyze existing tools and incorporate security scanning tools such as Twistlock, linting, Coverity, and BlackDuck into Python CI/CD processes.
Oversee Artifactory integration for Python versions and advance scanning approaches for malware in Python libraries.
Create processes and documentation to integrate new libraries into PyPI and new Python versions into the pyenv repository.
Formulate policies to block Python library downloads from PyPI.
Oversee the open-source approval procedure for Python modules and libraries, including intake and assessment.

Requirements:
Minimum of 5 years of experience in a similar role.
1+ years of relevant leadership experience.
Proficiency in CI/CD with a strong background in Grafana and Splunk.
Expertise in Kubernetes and familiarity with PyPy.
Knowledge of Twistlock scanning, linting, Coverity scanning, and SonarQube specific to Python code.
Experience with BlackDuck integration in CI/CD for Python.
Capability to manage Artifactory integrations and the process of updating Python libraries and versions.
Fluent English communication skills at a B2+ level.

Nice to have:
Background in Sonar.
Ability to utilize Twistlock.

We offer:
International projects with top brands
Work with global teams of highly skilled, diverse peers
Healthcare benefits
Employee financial programs
Paid time off and sick leave
Upskilling, reskilling and certification courses
Unlimited access to the LinkedIn Learning library and 22,000+ courses
Global career opportunities
Volunteer and community involvement opportunities
Opportunity to join and participate in the life of EPAM's Employee Resource Groups
Award-winning culture recognized by Glassdoor, Newsweek and LinkedIn
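Several of the responsibilities above (blocking direct PyPI downloads, the open-source approval procedure) boil down to vetting requirement pins against an approved list before install. A minimal sketch of that idea, assuming a simple `name==version` pinning convention; the function name and the packages in the example are hypothetical:

```python
def check_requirements(requirements: list[str], approved: dict[str, set[str]]) -> list[str]:
    """Return the requirement lines that violate an allowlist policy.

    A line passes only if it is pinned as ``name==version`` and the pinned
    version appears in the approved set for that package (case-insensitive
    on the package name). Comments and blank lines are skipped.
    """
    violations = []
    for line in requirements:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        if "==" not in line:
            # Unpinned requirements cannot be vetted against the allowlist.
            violations.append(line)
            continue
        name, version = line.split("==", 1)
        if version not in approved.get(name.lower(), set()):
            violations.append(line)
    return violations


# Example policy: only one vetted version of requests is approved.
policy = {"requests": {"2.31.0"}}
print(check_requirements(["requests==2.31.0", "flask", "# comment"], policy))
```

In practice a check like this would run in CI before any `pip install`, with installs pointed at an internal Artifactory mirror rather than public PyPI.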
Posted 2 months ago