10.0 - 15.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About Credit Saison India: Established in 2019, CS India is one of the country's fastest-growing Non-Bank Financial Company (NBFC) lenders, with verticals in wholesale, direct lending and tech-enabled partnerships with Non-Bank Financial Companies (NBFCs) and fintechs. Its tech-enabled model, coupled with its underwriting capability, facilitates lending at scale, helping close India's huge credit gap, especially among underserved and underpenetrated segments of the population. Credit Saison India is committed to growing as a lender and evolving its offerings in India over the long term for MSMEs, households, individuals and more. CS India is registered with the Reserve Bank of India (RBI) and has an AAA rating from CRISIL (a subsidiary of S&P Global) and CARE Ratings. Currently, CS India has a branch network of 45 physical offices, 1.2 million active loans, an AUM of over US$1.5B and an employee base of about 1,000 people. Credit Saison India (CS India) is part of Saison International, a global financial company with a mission to bring people, partners and technology together, creating resilient and innovative financial solutions for positive impact. Across its business arms of lending and corporate venture capital, Saison International is committed to being a transformative partner in creating opportunities and enabling the dreams of people. Based in Singapore, over 1,000 employees work across Saison's global operations spanning Singapore, India, Indonesia, Thailand, Vietnam, Mexico, and Brazil. Saison International is the international headquarters (IHQ) of Credit Saison Company Limited, founded in 1951, one of Japan's largest lending conglomerates with over 70 years of history, and listed on the Tokyo Stock Exchange. The Company has evolved from a credit-card issuer to a diversified financial services provider across payments, leasing, finance, real estate and entertainment. Roles & Responsibilities: Define and drive the long-term AI engineering strategy and roadmap aligned with the company's business goals and innovation vision, focusing on scalable AI and machine learning solutions including Generative AI. Lead, mentor, and grow a high-performing AI engineering team, fostering a culture of innovation, collaboration, and technical excellence. Collaborate closely with product, data science, infrastructure, and business teams to identify AI use cases, design end-to-end AI solutions, and integrate them seamlessly into products and platforms. Oversee the architecture, development, deployment, and continuous improvement of AI/ML models and systems, ensuring scalability, robustness, and real-time performance. Own the full AI/ML lifecycle including data strategy, model development, validation, deployment, monitoring, and retraining pipelines. Evaluate and incorporate state-of-the-art AI technologies, frameworks, and external AI services (e.g., APIs, pre-trained models) to accelerate delivery and enhance capabilities. Establish and enforce engineering standards, best practices, and observability tools (e.g., MLflow, LangSmith) for model governance, performance tracking, and compliance with data privacy and security requirements. Collaborate with infrastructure and DevOps teams to design and maintain cloud infrastructure optimized for AI workloads, including GPU acceleration and MLOps automation. Manage project timelines, resource allocation, and cross-team coordination to ensure timely delivery of AI initiatives. 
Stay abreast of emerging AI trends, research, and tools to continuously evolve the AI engineering function. Required Skills & Qualifications: 10 to 15 years of experience in AI, machine learning, or data engineering roles, with at least 8 years in leadership or managerial positions Bachelor’s, Master’s, or PhD degree from a top-tier college in Computer Science, Statistics, Mathematics, or related quantitative fields is strongly preferred. Proven experience leading AI engineering teams and delivering production-grade AI/ML systems at scale. Strong expertise in machine learning algorithms, deep learning, NLP, computer vision, and Generative AI technologies. Hands-on experience with AI/ML frameworks and libraries such as TensorFlow, PyTorch, Keras, Hugging Face Transformers, LangChain, MLflow, and related tools. Solid understanding of data engineering concepts, ETL pipelines, and working knowledge of distributed computing frameworks (Spark, Hadoop). Experience with cloud platforms (AWS, Azure, GCP) and container orchestration (Kubernetes, Docker). Familiarity with software engineering best practices including CI/CD, version control (Git), and microservices architecture. Strong problem-solving skills with a product-oriented mindset and ability to translate business needs into technical solutions. Excellent communication skills to collaborate effectively across technical and non-technical teams. Experience in AI governance, model monitoring, and compliance with data privacy/security standards. Preferred Qualifications: Experience building or managing ML platforms or MLOps pipelines. Knowledge of NoSQL databases (MongoDB, Cassandra) and real-time data processing. Prior exposure to AI in specific domains like banking, finance and credit experience is a strong plus. This role offers the opportunity to lead AI innovation at scale, shaping the future of AI-powered products and services in a fast-growing, technology-driven environment.
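The responsibilities above call out observability tooling such as MLflow for model governance and performance tracking. As a hedged illustration only (the experiment name, run name, and metric are placeholders, not details from the posting), a minimal experiment-tracking sketch might look like this:

```python
# Minimal MLflow tracking sketch; all names here are illustrative assumptions.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Logs to a local ./mlruns directory by default; point set_tracking_uri() at a
# real tracking server in practice.
mlflow.set_experiment("credit-risk-poc")  # hypothetical experiment name

X, y = make_classification(n_samples=5_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run(run_name="baseline-logreg"):
    params = {"C": 1.0, "max_iter": 200}
    model = LogisticRegression(**params).fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])

    mlflow.log_params(params)           # governance: record hyperparameters
    mlflow.log_metric("test_auc", auc)  # performance tracking over time
    mlflow.sklearn.log_model(model, "model")  # versioned model artifact
```

Runs logged this way can then be compared in the MLflow UI, which is one way the "model governance and performance tracking" requirement is commonly met.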
Posted 5 days ago
0.0 - 3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Position Overview: We are looking for a skilled Linux Engineer to manage the deployment, configuration, and support of our software products in customer environments. The ideal candidate will have expertise in Linux system administration, troubleshooting, automation, and customer support to ensure smooth deployments and high availability of our solutions. Key Responsibilities: Deploy and configure software solutions on Linux-based environments (on-premise & cloud). Perform system administration tasks, including installation, configuration, upgrades, and security patching. Troubleshoot deployment issues, system failures, and network-related problems. Work closely with development and DevOps teams to optimize deployments. Write and maintain automation scripts using Bash, Python, or Ansible. Provide technical support to clients and internal teams, ensuring timely issue resolution. Monitor system performance, logs, and security vulnerabilities. Maintain technical documentation, deployment guides, and support procedures. Participate in on-call rotation for production support, as needed. Required Skills & Qualifications: Strong expertise in Linux administration (Ubuntu, CentOS, RHEL, etc.). Experience with scripting and automation tools (Bash, Python, Ansible, or Terraform). Hands-on experience with server deployment, networking, and troubleshooting. Familiarity with containerization technologies (Docker, Kubernetes) is a plus. Knowledge of web servers (Apache, Nginx), databases (MySQL, PostgreSQL), and cloud platforms (AWS, Azure, GCP). Strong problem-solving skills and customer-oriented mindset. Excellent communication skills and ability to work independently or in a team. Preferred Qualifications: Certifications like RHCSA, RHCE, LFCS, AWS Certified SysOps Administrator are a plus. Experience in ITIL-based support environments. Location: Gurgaon Experience: 0-3 years Employment Type: Full-time Company Profile: TechBridge is a world-leading product and solutions company covering Data Center Applications, Collaboration and Real-Time Communication, DC Management and Monitoring, Disaster Management, Security, and Cloud. Its market-leading Network Modernization, Unified Communications, Mobility and Embedded Communications solutions enable customers to quickly capitalize on growing market segments and introduce differentiating products, applications and services. We are an expert and leader in Government Solutions, Smart City Solutions, Data Centers and Large Enterprises. We also build custom applications as per customer requirements. We hold ISO 27001, ISO 9001, and CMMI Level 3 certifications. For more information, visit www.tech-bridge.biz
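The role above combines deployment support with monitoring and automation scripting in Bash, Python, or Ansible. As a rough sketch only (the service name, endpoint, and thresholds are assumptions, not from the posting), a standard-library Python health check for a freshly deployed host could look like this:

```python
#!/usr/bin/env python3
"""Minimal deployment health-check sketch; paths, service names, and
thresholds below are placeholders, not values from this posting."""
import shutil
import subprocess
import urllib.request

def disk_ok(path="/", max_used_pct=90):
    usage = shutil.disk_usage(path)
    used_pct = usage.used / usage.total * 100
    return used_pct < max_used_pct, f"{path} used {used_pct:.1f}%"

def service_ok(name="nginx"):  # hypothetical systemd service name
    result = subprocess.run(["systemctl", "is-active", "--quiet", name])
    return result.returncode == 0, f"service {name}"

def http_ok(url="http://localhost:8080/health"):  # hypothetical endpoint
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200, f"GET {url} -> {resp.status}"
    except OSError as exc:
        return False, f"GET {url} failed: {exc}"

if __name__ == "__main__":
    for ok, detail in (disk_ok(), service_ok(), http_ok()):
        print(("OK  " if ok else "FAIL"), detail)
```

In practice the same checks are often wrapped in Ansible tasks or a cron job; this is just one way to express them.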
Posted 5 days ago
9.0 - 12.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About The Role Grade Level (for internal use): 11 The Team: As a Performance Test Engineer , you’ll be an integral part of the EDM Performance Testing Team. You will collaborate closely with product managers, developers, and fellow engineers to ensure the performance integrity of the system. We foster an open, inclusive environment where all perspectives are valued. Our team is focused on driving innovation, leveraging cutting-edge AI technologies, and maximizing engineering efficiency . We prioritize clean architecture, real-time performance, and data quality. What’s In It For You This is the place to utilize your existing Performance Testing/Engineering skills while being exposed to the latest cutting-edge technologies available in the market. You will have opportunities to provide Quality (Performance) gateways to build a next-generation product that consumers can rely on for their business decisions. Core Technical Qualifications Expertise in creating, enhancing (handling dynamic data and inputs), and executing scripts in JMeter or Gatling. Expertise in Performance Testing of REST APIs, Microservices and Containerized applications with test data creation methodologies Leverage IaC tools like Terraform, CloudFormation, or Ansible for test environment provisioning and configuration management. Familiarity with modern cloud platforms, particularly AWS or equivalent, with Docker and Kubernetes. Hands-on experience with scripting languages like Python and PowerShell and Version control tools like GIT/GitLab/Azure DevOps. Proficiency in developing and debugging queries in MS SQL/PostgreSQL. Expertise in at least one Application Performance Management (APM) tool like AppDynamics, New Relic, or Dynatrace and in Monitoring tools like Splunk/Grafana/Prometheus. Familiarity with at least one open-source application profiling tool. Demonstrated experience using AI-enhanced development tools (e.g., GitHub Copilot, Replit AI, ChatGPT, Amazon CodeWhisperer or any equivalent) to discover bugs, automate repetitive tasks, and speed up testing cycles. Comfortable applying AI/ML concepts (even at a basic level) to optimize workflows and test strategies, perform intelligent data analysis, or support decision-making within the product. Familiarity with prompt engineering, LLM-assisted testing, or using AI to automate documentation, code scans, or monitoring. Education & Experience Bachelor’s degree in computer science, Software Engineering, or a related field — or equivalent practical experience. 9-12 years of overall testing experience with deep expertise in performance testing frameworks, tools, and modern software testing practices Soft Skills Lead performance testing activities across multiple projects, ensuring timely and high-quality deliveries. Strong problem-solving skills with a growth mindset and openness to innovation. Excellent communication and cross-functional collaboration abilities. Capable of managing priorities and meeting deadlines in a fast-paced, continuously evolving environment. Additional Preferred Qualifications Strong problem-solving skills with a growth mindset and openness to AI-powered innovation. Excellent communication and cross-functional collaboration abilities. Capable of managing priorities and meeting deadlines in a fast-paced, continuously evolving environment. Collaborate with product managers, developers, and other QA team members to ensure test coverage and quality. Ability to handle performance testing for both front-end and back-end applications. Why Join Us? 
We're at the forefront of a technology transformation, adopting AI-first thinking across our engineering organization. You'll be empowered to push boundaries, embrace automation, and shape the future of performance testing in a hybrid human-AI environment. About S&P Global Market Intelligence At S&P Global Market Intelligence, a division of S&P Global we understand the importance of accurate, deep and insightful information. Our team of experts delivers unrivaled insights and leading data and technology solutions, partnering with customers to expand their perspective, operate with confidence, and make decisions with conviction. For more information, visit www.spglobal.com/marketintelligence. What’s In It For You? Our Purpose Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our Benefits Include Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. 
For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring And Opportunity At S&P Global At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here. Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf 20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.2 - Middle Professional Tier II (EEO Job Group) Job ID: 316891 Posted On: 2025-08-04 Location: Gurgaon, Haryana, India
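The core qualifications for this role centre on scripting performance tests against REST APIs and reporting latency behaviour. The team's actual scripts live in JMeter or Gatling; purely as an illustrative stand-in (the target URL and load figures are assumptions), the shape of a latency-percentile measurement can be sketched in Python like this:

```python
"""Rough sketch of measuring REST API latency percentiles.
Not the team's JMeter/Gatling scripts; URL and request counts are placeholders."""
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET = "http://localhost:8080/api/health"  # hypothetical endpoint
REQUESTS = 200
CONCURRENCY = 20

def timed_call(_):
    start = time.perf_counter()
    with urllib.request.urlopen(TARGET, timeout=10) as resp:
        resp.read()
    return (time.perf_counter() - start) * 1000  # latency in milliseconds

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    latencies = sorted(pool.map(timed_call, range(REQUESTS)))

# statistics.quantiles(n=100) yields 99 cut points; index 49 is the median.
cuts = statistics.quantiles(latencies, n=100)
print(f"p50={cuts[49]:.1f}ms p90={cuts[89]:.1f}ms "
      f"p95={cuts[94]:.1f}ms p99={cuts[98]:.1f}ms")
```

Dedicated tools add pacing, ramp-up, assertions, and reporting on top of this basic idea; the sketch only shows the measurement itself.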
Posted 5 days ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
We help the world run better At SAP, we enable you to bring out your best. Our company culture is focused on collaboration and a shared passion to help the world run better. How? We focus every day on building the foundation for tomorrow and creating a workplace that embraces differences, values flexibility, and is aligned to our purpose-driven and future-focused work. We offer a highly collaborative, caring team environment with a strong focus on learning and development, recognition for your individual contributions, and a variety of benefit options for you to choose from. We are currently seeking a talented and motivated developer with a strong desire to work on HANA Relational Engine and HANA Datalake. As a successful candidate for this role, you will have excellent problem-solving and troubleshooting skills, fluency in coding and systems design, solid communication skills, and a desire to solve complex problems of scale which are uniquely SAP. You will have the opportunity to contribute your expertise to HANA Relational Engine and HANA Datalake. What You'll Do: Design, implement, document, and maintain various modules within HANA Relational Engine and HANA Datalake. Strive for continuous improvement, manage the product lifecycle, and collaborate with cross-functional teams to ensure a positive user experience. Own the long-term health and stability of HANA Relational Engine and HANA Datalake. Identify areas of improvement to the current design and advocate for alternative methods to enhance the current working set. Innovate, file patents, and generate IP for SAP. Provide alternative diagnostic methods to resolve both in-house and customer-reported problems. Design, debug, analyze, and resolve complex database engine problems for customers and SAP internal stakeholders. Own and work with an engineering team in different geographic locations to diagnose and resolve design issues and customer-reported problems. Interact with critical customers around the globe through e-mail, calls, etc., and work towards resolving escalations. Articulate technical information clearly. Provide training and assist with knowledge transfer. Prioritize tasks, develop detailed designs and estimate the effort required to complete projects. Analyze the performance and scalability of HANA Relational Engine and HANA Datalake. What You Bring: B.Tech. or M.Tech. degree from a top-tier educational institute with 3-7 years' work experience. Good knowledge of database architecture and strong analytical skills. Experience in designing, architecting, and developing scalable services utilizing micro-service architecture. Experience in distributed computing development, such as distributed database design, cluster file systems, etc., is a strong plus. Able to multi-task and work independently and take initiative to prioritize and resolve problems. Excellent verbal and written communication skills. Tech you bring: You have a strong knowledge of C and C++ programming languages with some knowledge in database internals and/or operating system internals with strong debugging skills. Advanced LINUX and UNIX skills and experience (specifically with multi-threaded architecture, synchronization mechanisms, etc.) Strong understanding of cloud development environment, tools and languages, for example Kubernetes, Python, Go. 
Some knowledge of AWS, Azure, GCP, etc. Tech you'll learn: The role provides opportunities to work on various challenging modules within HANA Relational Engine and HANA Datalake. Meet your team: The HANA Relational Engine and HANA Datalake team encompasses global development and product management responsibilities across our portfolio covering SAP IQ, HANA Relational Engine, and HANA Datalake. Bring out your best SAP innovations help more than four hundred thousand customers worldwide work together more efficiently and use business insight more effectively. Originally known for leadership in enterprise resource planning (ERP) software, SAP has evolved to become a market leader in end-to-end business application software and related services for database, analytics, intelligent technologies, and experience management. As a cloud company with two hundred million users and more than one hundred thousand employees worldwide, we are purpose-driven and future-focused, with a highly collaborative team ethic and commitment to personal development. Whether connecting global industries, people, or platforms, we help ensure every challenge gets the solution it deserves. At SAP, you can bring out your best. We win with inclusion SAP's culture of inclusion, focus on health and well-being, and flexible working models help ensure that everyone – regardless of background – feels included and can run at their best. At SAP, we believe we are made stronger by the unique capabilities and qualities that each person brings to our company, and we invest in our employees to inspire confidence and help everyone realize their full potential. We ultimately believe in unleashing all talent and creating a better and more equitable world. SAP is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to the values of Equal Employment Opportunity and provide accessibility accommodations to applicants with physical and/or mental disabilities. If you are interested in applying for employment with SAP and are in need of accommodation or special assistance to navigate our website or to complete your application, please send an e-mail with your request to Recruiting Operations Team: Careers@sap.com. For SAP employees: Only permanent roles are eligible for the SAP Employee Referral Program, according to the eligibility rules set in the SAP Referral Policy. Specific conditions may apply for roles in Vocational Training. EOE AA M/F/Vet/Disability Qualified applicants will receive consideration for employment without regard to their age, race, religion, national origin, ethnicity, gender (including pregnancy, childbirth, or related conditions), sexual orientation, gender identity or expression, protected veteran status, or disability. Successful candidates might be required to undergo a background verification with an external vendor. Requisition ID: 418494 | Work Area: Software-Design and Development | Expected Travel: 0 - 10% | Career Status: Professional | Employment Type: Regular Full Time | Additional Locations: .
Posted 5 days ago
0 years
0 Lacs
India
On-site
JD - Key Responsibilities: Design, implement, and maintain scalable CI/CD pipelines using Azure DevOps. Automate infrastructure provisioning using Infrastructure as Code (IaC) tools such as ARM Templates, Bicep, or Terraform. Manage and monitor Azure cloud infrastructure including compute, storage, and networking resources. Ensure secure, reliable, and efficient deployment practices across environments. Integrate testing, security scanning, and code quality tools into the DevOps lifecycle. Collaborate with developers, QA, and IT teams to resolve build and deployment issues. Implement and manage configuration management tools like Ansible, Chef, or Puppet. Maintain documentation related to system configuration, processes, and deployment guides. Monitor system performance and provide production support as needed. Required Skills: Hands-on experience with Azure DevOps Services (Repos, Pipelines, Artifacts, Boards). Strong understanding of CI/CD principles and experience in setting up automated pipelines. Experience with Azure Cloud Services (VMs, App Services, AKS, Azure Functions, etc.). Proficiency in scripting using PowerShell, Bash, or similar. Experience with Infrastructure as Code (IaC) tools – ARM Templates, Terraform, Bicep. Knowledge of containerization tools like Docker and orchestration using Kubernetes (AKS preferred). Familiarity with source control systems like Git. Understanding of DevSecOps principles and integrating security in CI/CD pipelines. Excellent troubleshooting and problem-solving skills. Preferred Qualifications: Azure Certifications such as AZ-400 (DevOps Engineer Expert) or AZ-104. Experience with monitoring tools like Azure Monitor, Log Analytics, or App Insights. Familiarity with Agile and Scrum methodologies.
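Much of the day-to-day work described above revolves around Azure DevOps pipelines. As one hedged illustration (the organization, project, and token variable are placeholders; only the documented Builds REST endpoint is assumed), a small script that polls recent pipeline runs might look like this:

```python
"""Sketch: polling recent pipeline runs via the Azure DevOps Builds REST API.
ORG, PROJECT, and the AZDO_PAT environment variable are placeholders."""
import os
import requests

ORG = "my-org"                # placeholder organization name
PROJECT = "my-project"        # placeholder project name
PAT = os.environ["AZDO_PAT"]  # personal access token, assumed to be set

url = f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/build/builds"
resp = requests.get(
    url,
    params={"api-version": "7.0", "$top": 5, "queryOrder": "queueTimeDescending"},
    auth=("", PAT),  # basic auth: empty username, PAT as password
    timeout=30,
)
resp.raise_for_status()

for build in resp.json().get("value", []):
    print(build["definition"]["name"], build["status"], build.get("result", "n/a"))
```

The same endpoint can feed a release dashboard or a gate in another pipeline; this sketch only shows the API call itself.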
Posted 5 days ago
12.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Zeta Zeta is a Next-Gen Banking Tech company that empowers banks and fintechs to launch banking products for the future. It was founded by Bhavin Turakhia and Ramki Gaddipati in 2015. Our flagship processing platform - Zeta Tachyon - is the industry's first modern, cloud-native, and fully API-enabled stack that brings together issuance, processing, lending, core banking, fraud & risk, and many more capabilities as a single-vendor stack. 15M+ cards have been issued on our platform globally. Zeta is actively working with the largest Banks and Fintechs in multiple global markets, transforming customer experience for multi-million card portfolios. Zeta has over 1,700 employees - with over 70% of roles in R&D - across locations in the US, EMEA, and Asia. We raised $280 million at a $1.5 billion valuation from SoftBank, Mastercard, and other investors in 2021. Learn more @ www.zeta.tech, careers.zeta.tech, LinkedIn, Twitter The Role As part of the Risk & Compliance team within the Engineering division at Zeta, the Application Security Manager is tasked with safeguarding all mobile, web applications, and APIs. This involves identifying vulnerabilities through testing and ethical hacking, while also educating developers and DevOps teams on how to resolve them. Your primary goal will be to ensure the security of Zeta's applications and platforms. As a manager, you'll be responsible for securing all of Zeta's products. In this individual contributor role, you will report directly to the Chief Information Security Officer (CISO). The role involves ensuring the security of web and mobile applications, APIs, and infrastructure by conducting regular VAPT. It requires providing expert guidance to developers on how to address and fix security vulnerabilities, along with performing code reviews to identify potential security issues. The role also includes actively participating in application design discussions to ensure security is integrated from the beginning and leading Threat Modeling exercises to identify potential threats. Additionally, the profile focuses on developing and promoting secure coding practices, educating developers and QA engineers on security standards for secure coding, data handling, network security, and encryption. The role also entails evaluating and integrating security testing tools like SAST, DAST, and SCA into the CI/CD pipeline to enhance continuous security integration. Responsibilities Guide Security and Privacy Initiatives: Actively participate in design reviews and threat modeling sessions to help shape the security and privacy approach for technology projects, ensuring security is embedded at all stages of application development. Ensure Secure Application Development: Collaborate with developers and product managers to ensure that applications are securely developed, hardened, and aligned with industry best practices. Project Scope Management: Define the scope for security initiatives, ensuring continuous adherence throughout each project phase, from initiation to sustenance/maintenance. Drive Internal Adoption and Visibility: Ensure that security projects are well-understood and adopted by internal stakeholders, fostering a culture of security awareness within the organization. Security Engineering Expertise: Serve as a technical expert and security champion within Zeta, providing guidance and expertise on security best practices across the organization. 
Team Leadership and Development Make decisions on hiring and lead the hiring process to build a skilled security team. Define and drive improvements in the hiring process to attract top security talent. Mentor and guide developers and QA teams on secure coding practices and security awareness. Security Tool and Gap Assessment: Continuously assess and recommend tools to address gaps in application security, ensuring the team is equipped with the best resources to identify and address vulnerabilities. Stakeholder Liaison: Collaborate with both internal and external stakeholders to ensure alignment on security requirements and deliverables, acting as the main point of contact for all security-related matters within the team. Bug Bounty Program Management: Evaluate and triage security bugs reported through the Bug Bounty program, working with relevant teams to address and resolve issues effectively. Own Security Posture: Take ownership of the security posture of various applications across the business units, ensuring that security best practices are consistently applied and maintained. Skills Hands-on experience in Vulnerability Assessment (VA) and Penetration Testing (PT) across web, mobile, API, and network/infra environments. Deep understanding of the OWASP Top 10 and their respective attack and defense mechanisms. Strong exposure to Secure SDLC activities, Threat Modeling, and Secure Coding practices. Experience with both commercial and open-source security tools, including Burp Suite, AppScan, OWASP ZAP, BeEF, Metasploit, Qualys, Nipper, Nessus, and Snyk. Expertise in identifying and exploiting business logic vulnerabilities. Solid understanding of cryptography, PKI-based systems, and TLS protocols. Proficiency in various AuthN/AuthZ frameworks (OIDC, OAuth, SAML) and the ability to read, write, and understand Java code. Experience with Static Analysis and Code Reviews using tools like Snyk, Fortify, Veracode, Checkmarx, and SonarQube. Hands-on experience in reverse engineering mobile apps and using tools like Dex2jar, ADB, Drozer, Clang, iMAS, and Frida/Objection for dynamic instrumentation. Experience conducting penetration tests and security assessments on internal/external networks, Windows/Linux environments, and cloud infrastructure (primarily AWS). Ability to identify and exploit security vulnerabilities and misconfigurations in Windows and Linux servers. Proficiency in shell scripting and automating tasks with tools such as Python or Ruby. Familiarity with PA-DSS, PCI SSF (S3, SSLC), and other security standards like PCI DSS, DPSC, ASVS, and NIST. Understanding of Java frameworks like Spring Boot, CI/CD processes, and tools like Jenkins & Bitrise. In-depth knowledge of cloud infrastructure (AWS, Azure), including VPC/VNet, S3 buckets, IAM, Security Groups, blob stores, Load Balancers, Docker containers, and Kubernetes. Solid understanding of agile development practices. Active participation in bug bounty programs (HackerOne, Bugcrowd, etc.) and experience with hackathons and Capture the Flag (CTF) competitions. Knowledge of AWS/Azure services, including network configuration and security management. Experience with databases (PostgreSQL, Redshift, MySQL) and other data storage solutions like Elasticsearch and S3 buckets. 
Preferred Certifications: OSCP, OSWE, GWAPT, AWAE, AWS Certified Security Specialist, CompTIA Security+ Experience And Qualifications 12 to 18 years of overall experience in application security, with a strong background in identifying and mitigating vulnerabilities in software applications. A background in development and experience in the fintech sector is a plus. Bachelor of Technology (BE/B.Tech), M.Tech, or ME in Computer Science or an equivalent degree from an Engineering college/University. Life At Zeta At Zeta, we want you to grow to be the best version of yourself by unlocking the great potential that lies within you. This is why our core philosophy is 'People Must Grow.' We recognize your aspirations; act as enablers by bringing you the right opportunities, and let you grow as you chase disruptive goals. Life at Zeta is adventurous and exhilarating at the same time. You get to work with some of the best minds in the industry and experience a culture that values the diversity of thought. If you want to push boundaries, learn continuously and grow to be the best version of yourself, Zeta is the place to be! Explore life at Zeta. Zeta is an equal opportunity employer. At Zeta, we are committed to equal employment opportunities regardless of job history, disability, gender identity, religion, race, marital/parental status, or any other special status. We are proud to be an equitable workplace that welcomes individuals from all walks of life if they fit the roles and responsibilities.
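The skills list above includes TLS/PKI understanding and automating security tasks with Python. As a hedged, standard-library-only sketch (the hostname and alerting threshold are placeholders, and this is just one small example of the kind of check such automation might perform), a certificate-expiry probe could look like this:

```python
"""Illustrative sketch: checking a TLS certificate's remaining validity.
The hostname and 30-day threshold are placeholders."""
import socket
import ssl
from datetime import datetime, timezone

def cert_days_remaining(host: str, port: int = 443) -> int:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # notAfter is formatted like "Jun  1 12:00:00 2030 GMT"
    not_after = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    return (not_after.replace(tzinfo=timezone.utc) - datetime.now(timezone.utc)).days

if __name__ == "__main__":
    host = "example.com"  # placeholder host
    days = cert_days_remaining(host)
    print(f"{host}: certificate expires in {days} days")
    if days < 30:  # arbitrary alerting threshold
        print("WARNING: certificate is close to expiry")
```

A real programme would feed results like this into monitoring or ticketing rather than printing them, but the probe itself is the same idea.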
Posted 5 days ago
5.0 years
0 Lacs
India
Remote
Synapse is a digital transformation company. We provide end-to-end services, right from managed cloud operations to application development. We are a unique technology group - a firm of highly passionate, driven team members who are constantly pushing the boundaries of innovative technology solutions for enterprise-level clients. We also have one of the world's most automated multi-cloud deployment & monitoring systems, which can ingest massive amounts of enterprise data for real-time threat monitoring & data visualization. We are looking for an experienced Java Full Stack Developer to join our team. This role requires hands-on expertise in Java (Spring Boot) for backend development and Angular for building responsive front-end applications. You will be responsible for full-cycle feature development, from design to deployment. Experience with MySQL or similar SQL databases is essential. Familiarity with AI-based development tools like GitHub Copilot is a strong plus. Job Role: Java Full-Stack Engineer Experience: 5+ years Work Location: Remote Responsibilities: Deliver end-to-end features, from design, development, and testing to deployment and support. Build responsive and high-performance UIs using Angular (v10+), TypeScript, HTML5, and CSS3. Ensure cross-browser compatibility and seamless REST API integrations. Develop scalable back-end services and APIs using Java 8+ and Spring Boot. Design and implement a microservices architecture. Manage and optimize MySQL or other SQL-based databases. Write clean, maintainable code with thorough unit, integration, and E2E test coverage. Participate in code reviews and uphold coding standards. Work with CI/CD pipelines (Jenkins, GitHub Actions, etc.) for automated testing and deployment. Monitor app performance and provide production support. Collaborate with cross-functional teams, including product, design, and engineering. Stay updated on new technologies and recommend improvements to the stack. Key Skills: 5+ years of full-stack development experience. Experience using AI tooling like GitHub Copilot is required. Strong Angular (2+ years, ideally Angular 10+), TypeScript, HTML/CSS skills. Expert in Java (8+) and Spring Boot (Spring MVC, Spring Data, Security). Deep experience with MySQL and ORM frameworks like Hibernate/JPA. Solid understanding of microservices and REST API design. Familiarity with Git, Agile workflows, and CI/CD tools. Strong debugging, problem-solving, and communication skills. Preferred Skills: Exposure to AI coding tools (e.g., GitHub Copilot). Experience with AWS/GCP/Azure cloud services. Familiarity with Docker, Kubernetes, Kafka/RabbitMQ. Experience with test frameworks (JUnit, Mockito, Jest). Knowledge of monitoring/logging tools (ELK, Prometheus, Grafana). Interested candidates can share their resumes at g.punjabi@thesynapses.com or apply to this job post at the earliest.
Posted 5 days ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description - DevOps Engineer About BizAcuity Who are we? BizAcuity is on a mission to help enterprises get the most out of their data by providing Business Intelligence and Data Analytics services, product development and consulting services for clients across the globe in various domains/verticals. Established in 2011 by a strong leadership team and a team of 200+ engineers, we have made a mark as a world-class service provider and compete with large service providers to win business. BizAcuity has developed and delivered high-class enterprise solutions to many medium to large clients using modern and the best technologies in the data engineering and analytics world. Our services include Business Intelligence Consulting, Advanced Analytics, Managed Services, Data Management, Cloud Services, Technology Consulting, Application Development, and Product Engineering. For more information on BizAcuity, log on to https://bizacuity.com/ Job Title: DevOps Engineer Location: Hyderabad (Onsite - Work from Office) Experience Level: 4 years Type: Full-Time Job Summary: We are seeking a skilled and proactive DevOps Engineer to join our team and drive automation, scalability, and reliability across our infrastructure and deployment pipelines. You will work closely with development, operations, and security teams to build, maintain, and monitor a robust DevOps toolchain including Docker, Kubernetes, Infrastructure as Code, CI/CD, and observability stacks. Key Responsibilities: Design, implement, and manage scalable CI/CD pipelines using GitHub Actions (or equivalent) Containerize applications using Docker and orchestrate deployments with Kubernetes Define and manage cloud infrastructure using Terraform or CloudFormation Implement secure and automated secrets management using AWS Secrets Manager or SSM Parameter Store Monitor application and infrastructure health using Prometheus and Grafana Improve system reliability through automation, proactive monitoring, and performance tuning Collaborate with development teams for smooth feature delivery and incident response Enforce infrastructure best practices (environment separation, GitOps, security policies) Required Skills & Experience: Solid understanding of DevOps principles and automation workflows Hands-on experience with Docker and Kubernetes in production or staging environments Proficiency with CI/CD tools such as GitHub Actions, Jenkins, GitLab CI, or similar Experience with Infrastructure as Code tools like Terraform, CloudFormation, or Pulumi Strong knowledge of Linux-based systems and shell scripting Monitoring and alerting experience using Prometheus, Grafana, and logging tools (e.g., ELK/EFK stack) Familiarity with version control systems (Git, GitHub, GitLab) Preferred Qualifications: Experience with AWS services such as EC2, RDS, EKS, S3, IAM, and Secrets Manager Understanding of secure deployment practices and role-based access controls (RBAC, IRSA) Exposure to service mesh or ingress controllers (e.g., Istio, NGINX Ingress) Familiarity with container image scanning, policy enforcement, and vulnerability remediation
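One responsibility above is automated secrets management with AWS Secrets Manager or SSM Parameter Store. As a minimal hedged sketch (the secret name and region are placeholders, and the secret is assumed to be stored as JSON), reading a secret at deploy time with boto3 looks roughly like this:

```python
"""Sketch of reading application secrets at deploy time with boto3.
Secret name and region are placeholders; credentials come from the environment."""
import json
import boto3

def get_secret(name: str, region: str = "ap-south-1") -> dict:
    client = boto3.client("secretsmanager", region_name=region)
    resp = client.get_secret_value(SecretId=name)  # standard Secrets Manager call
    return json.loads(resp["SecretString"])        # assumes a JSON-formatted secret

if __name__ == "__main__":
    creds = get_secret("staging/app/db")  # placeholder secret name
    print("loaded keys:", sorted(creds))  # never print the secret values themselves
```

In a pipeline this would typically run inside a step that injects the values into the application environment rather than printing anything.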
Posted 5 days ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: AWS DevOps + Cloud Engineer Experience: 5–8 Years Location: Hyderabad/Chennai. Employment Type: Hybrid. Job Summary: We are seeking a skilled and experienced AWS DevOps + Cloud Engineer to join our team. The ideal candidate will have hands-on experience in managing cloud infrastructure, implementing CI/CD pipelines using Jenkins, and working with AWS services like EC2 and ECR. You will play a key role in automating deployments, optimizing cloud resources, and ensuring high availability and scalability of applications. Key Responsibilities: Design, implement, and manage scalable, secure, and highly available AWS infrastructure. Configure and maintain EC2 instances, ECR repositories, and other AWS services. Develop and maintain CI/CD pipelines using Jenkins for automated deployments. Monitor system performance and troubleshoot issues across cloud environments. Collaborate with development and QA teams to streamline release processes. Implement infrastructure as code (IaC) using tools like Terraform or CloudFormation. Ensure compliance with security best practices and policies. Optimize cost and performance of cloud resources. Required Skills: 5–8 years of experience in DevOps and Cloud Engineering. Strong expertise in AWS services, especially EC2, ECR, IAM, VPC, S3, CloudWatch. Proficiency in CI/CD tools, particularly Jenkins. Experience with containerization (Docker) and orchestration tools (Kubernetes is a plus). Familiarity with scripting languages (Python, Bash, etc.). Knowledge of infrastructure as code (Terraform, CloudFormation). Good understanding of networking, security, and monitoring in cloud environments. Preferred Qualifications: AWS Certified DevOps Engineer or equivalent certification. Experience with Git, GitHub/GitLab, and version control best practices. Exposure to Agile/Scrum methodologies. Strong problem-solving and communication skills.
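Since the role above centres on EC2 and ECR alongside Jenkins pipelines, a small inventory script is one common supporting task. The sketch below is illustrative only: the region and filters are assumptions, and credentials are taken from the environment as boto3 normally does.

```python
"""Illustrative boto3 sketch for an EC2/ECR inventory sweep.
Region and filters are assumptions; credentials come from the environment."""
import boto3

REGION = "ap-south-1"  # placeholder region

ec2 = boto3.client("ec2", region_name=REGION)
ecr = boto3.client("ecr", region_name=REGION)

# Running instances, paginated the standard way.
for page in ec2.get_paginator("describe_instances").paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
):
    for reservation in page["Reservations"]:
        for inst in reservation["Instances"]:
            print("EC2:", inst["InstanceId"], inst["InstanceType"])

# ECR repositories and a rough image count (first page of images only).
for page in ecr.get_paginator("describe_repositories").paginate():
    for repo in page["repositories"]:
        images = ecr.list_images(repositoryName=repo["repositoryName"])["imageIds"]
        print("ECR:", repo["repositoryName"], f"{len(images)} image(s)")
```

The same calls are often wrapped in a scheduled Jenkins job to flag untagged images or oversized instance fleets; that wiring is left out here.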
Posted 5 days ago
1.0 years
0 Lacs
India
On-site
About Us We are building robust infrastructure within the Ethereum ecosystem with a focus on Proof of Stake (PoS) and secure private network deployments. We’re looking for a highly motivated self-starter who can work independently and grow into a senior blockchain engineer with us. What We Need We are looking for a developer who has core skills in C, C++, Python, GCP, and AWS infrastructure, and who is eager to learn and grow in the blockchain space. We will train the candidate in the nuances of Ethereum node operations and Proof of Stake consensus mechanisms, as well as specific blockchain security fundamentals and staking solutions. Key Skills You Will Be Trained In Understanding how various components of an Ethereum client work, including networking, synchronization, RPC, and consensus layers. Working with Ethereum node infrastructure and Proof of Stake consensus. Blockchain security fundamentals, including key management, access control, and DoS protection. Deploying and maintaining blockchain environments in cloud infrastructure (AWS, GCP). Working with containerization technologies to scale and manage blockchain networks. What We Expect Core Skills: Proficiency in C, C++, Python, GCP, and AWS infrastructure. Highly Motivated Self-Starter: Ability to take initiative and work independently, with minimal supervision. Problem Solver: Willingness to troubleshoot and resolve issues related to Ethereum nodes and infrastructure. Growth Mindset: Eagerness to learn new technologies and protocols while contributing effectively to our infrastructure goals. Responsibilities Configure, manage, and maintain Ethereum nodes (e.g., Besu, Geth, Lighthouse, Prysm) across public and private networks. Design, deploy, and monitor secure PoS staking infrastructure. Troubleshoot networking, consensus, and node operation issues. Support internal development teams by providing a reliable blockchain backend. Ensure security best practices are followed in node operations and deployment. Assist in automating infrastructure using modern DevOps tooling and scripting. Required Qualifications 1-2+ years of experience in a blockchain infrastructure or Ethereum-focused role. Proven understanding of PoS consensus and Ethereum node operations. Experience with private Ethereum networks (e.g., configuring genesis files, bootnodes, enode management). Proficiency in Linux environments, scripting (Bash/Python), and using cloud platforms (AWS, GCP). Familiarity with networking concepts, TLS, firewalls, VPNs, etc. Strong understanding of blockchain security principles. Nice to Have Experience with Hyperledger Besu or other enterprise Ethereum clients. Familiarity with Kubernetes, Docker, or infrastructure-as-code tools (Terraform, Helm). Prior work on staking infrastructure or validator management. Important We are looking for candidates who are ready to contribute from day one and are motivated to develop their skills further. This is not an experimental role, and we are committed to training and growing the right person into a senior blockchain engineer. If you have a strong foundation in C, C++, Python, GCP, and AWS, and are passionate about blockchain technology, we would love to hear from you.
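Ethereum clients such as Geth, Besu, Lighthouse, and Prysm expose a standard JSON-RPC interface, and checking sync status over it is a routine node-operations task mentioned above. A minimal hedged sketch (the endpoint URL is a placeholder; only the documented eth_blockNumber and eth_syncing methods are assumed):

```python
"""Minimal sketch of checking an Ethereum execution node over JSON-RPC.
The endpoint URL is a placeholder (8545 is the common default port)."""
import requests

RPC_URL = "http://localhost:8545"  # placeholder node endpoint

def rpc(method, params=None):
    payload = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params or []}
    resp = requests.post(RPC_URL, json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()["result"]

if __name__ == "__main__":
    head = int(rpc("eth_blockNumber"), 16)  # result is a hex-encoded block number
    syncing = rpc("eth_syncing")            # False when synced, else a progress object
    print(f"head block: {head}")
    print("sync status:", "synced" if syncing is False else syncing)
```

Consensus-layer clients expose their own (different) REST APIs for validator and beacon status, so a full staking health check would combine both; this sketch covers only the execution side.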
Posted 5 days ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Requirements Description and Requirements Position Summary The resource is responsible for assisting with MetLife's Docker container support for application development teams. In this position, the resource will support MetLife applications in an operational role, onboarding applications, troubleshooting infrastructure and application container issues, and automating manual build processes using CI/CD pipelines. Job Responsibilities Development and maintenance in operational condition of OpenShift, Kubernetes Orchestration container platforms Experience in workload migration from Docker to OpenShift platform Manage the container platform ecosystem (installation, upgrade, patching, monitoring) Check and apply critical patches in OpenShift/Kubernetes Troubleshoot issues in OpenShift Clusters Knowledge of CI/CD methodology and tooling (Jenkins, Harness) Experience with system configuration tools including Ansible, Chef Cluster maintenance and administration experience on OpenShift and Kubernetes Strong Knowledge & Experience in RHEL Linux Manage OpenShift Management Components and Tenants Participates as part of a technical team responsible for the overall support and management of the OpenShift Container Platform. Learn new technologies based on demand. Willing to work in rotational shifts Good communication skills with the ability to communicate clearly and effectively Knowledge, Skills And Abilities Education Bachelor's Degree in Computer Science, Information Systems, or related field Experience 3+ years of total experience and at least 2+ years of experience in development and maintenance in operational condition of OpenShift, Kubernetes Orchestration container platforms Experience in installation, upgrade, patching, monitoring of container platform ecosystem Linux Administration Software Defined Networking (Fundamentals) Container Runtimes (Podman / Docker), Kubernetes (OpenShift) / Swarm Orchestration, GoLang framework and Microservices Architecture Knowledge and usage of Observability tools (e.g., Elastic, Grafana, Prometheus, OTEL collectors, Splunk) Apache Administration Reliability Mgmt. / Troubleshooting Collaboration & Communication Skills Continuous Integration / Continuous Delivery (CI/CD) Experience in creating change tickets and working on tasks in ServiceNow Good to Have: Automation Platforms: Specifically, Ansible (roles / collections) SAFe DevOps Scaled Agile Methodology Scripting: Python, Bash Serialization Language: YAML, JSON Knowledge and usage of CI/CD Tools (e.g., AzDO, ArgoCD) Java Mgmt. (JMX)/ NodeJS management About MetLife Recognized on Fortune magazine's list of the 2025 "World's Most Admired Companies" and Fortune World's 25 Best Workplaces™ for 2024, MetLife, through its subsidiaries and affiliates, is one of the world's leading financial services companies, providing insurance, annuities, employee benefits and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East. Our purpose is simple - to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we're inspired to transform the next century in financial services. At MetLife, it's #AllTogetherPossible. Join us!
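Troubleshooting cluster issues, as described above, usually starts with a sweep for unhealthy workloads. OpenShift exposes the standard Kubernetes core API, so a hedged sketch with the official Kubernetes Python client (kubeconfig access is assumed; this is not MetLife-specific tooling) could look like this:

```python
"""Sketch: list pods that are not Running/Succeeded across all namespaces.
Assumes a valid kubeconfig; OpenShift serves the same core Kubernetes API."""
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster
v1 = client.CoreV1Api()

unhealthy = [
    pod
    for pod in v1.list_pod_for_all_namespaces(watch=False).items
    if pod.status.phase not in ("Running", "Succeeded")
]

for pod in unhealthy:
    print(f"{pod.metadata.namespace}/{pod.metadata.name}: {pod.status.phase}")
print(f"{len(unhealthy)} pod(s) need attention")
```

The equivalent one-liner with `oc get pods -A --field-selector=status.phase!=Running` covers the interactive case; a script like this is what typically feeds a dashboard or a ServiceNow ticket.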
Posted 5 days ago
0 years
0 Lacs
Kochi, Kerala, India
On-site
Introduction In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology. Your Role And Responsibilities Create Solution Outline and Macro Design to describe end-to-end product implementation in Data Platforms including System integration, Data ingestion, Data processing, Serving layer, Design Patterns, Platform Architecture Principles for Data platform Contribute to pre-sales, sales support through RFP responses, Solution Architecture, Planning and Estimation Contribute to reusable components / asset / accelerator development to support capability development Participate in Customer presentations as Platform Architects / Subject Matter Experts on Big Data, Azure Cloud and related technologies Participate in customer PoCs to deliver the outcomes Participate in delivery reviews / product reviews, quality assurance and work as design authority. Preferred Education Master's Degree Required Technical And Professional Expertise Experience in designing data products providing descriptive, prescriptive, and predictive analytics to end users or other systems Experience in data engineering and architecting data platforms Experience in architecting and implementing Data Platforms on the Azure Cloud Platform. Experience on Azure cloud is mandatory (ADLS Gen 1 / Gen2, Data Factory, Databricks, Synapse Analytics, Azure SQL, Cosmos DB, Event Hub, Snowflake), Azure Purview, Microsoft Fabric, Kubernetes, Terraform, Airflow Experience in Big Data stack (Hadoop ecosystem: Hive, HBase, Kafka, Spark, Scala, PySpark, Python, etc.) with Cloudera or Hortonworks. Preferred Technical And Professional Experience Experience in architecting complex data platforms on Azure Cloud Platform and On-Prem Experience and exposure to implementation of Data Fabric and Data Mesh concepts and solutions like Microsoft Fabric or Starburst or Denodo or IBM Data Virtualisation or Talend or Tibco Data Fabric Exposure to Data Cataloging and Governance solutions like Collibra, Alation, Watson Knowledge Catalog, Databricks Unity Catalog, Apache Atlas, Snowflake Data Glossary, etc.
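The ingestion and serving layers mentioned above are typically expressed as Spark jobs on Databricks reading from ADLS. Purely as a hedged illustration (the storage account, container paths, and the event_ts/event_type columns are invented, and cluster-level authentication to ADLS is assumed to be configured), a PySpark aggregation step might look like this:

```python
"""Illustrative PySpark sketch for a simple ingest-and-aggregate step.
All paths and column names are placeholders; ADLS auth is assumed configured."""
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-ingest-sketch").getOrCreate()

source = "abfss://raw@examplelake.dfs.core.windows.net/events/2025/08/"    # placeholder
target = "abfss://curated@examplelake.dfs.core.windows.net/events_daily/"  # placeholder

events = spark.read.json(source)  # raw landing-zone data

daily = (
    events.withColumn("event_date", F.to_date("event_ts"))  # assumed timestamp column
    .groupBy("event_date", "event_type")                    # assumed dimension column
    .agg(F.count("*").alias("event_count"))
)

daily.write.mode("overwrite").partitionBy("event_date").parquet(target)
```

In a real platform this step would usually be orchestrated by Data Factory or Airflow and write Delta rather than plain Parquet; the sketch only shows the transformation shape.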
Posted 5 days ago
0 years
0 Lacs
Andhra Pradesh, India
On-site
Strong Java FSD development experience Experience in Core Java & EE, Spring Boot, Spring MVC, Spring Security CI/CD tools & processes (Git/Bitbucket, Maven, Gradle, Jenkins, Sonar), scripting languages Bash, PowerShell Knowledge of React or Angular Knowledge of Microservices & Layered (SOA/MVC) Architecture Build REST APIs using Java Spring Boot and Spring MVC. Experience writing enterprise-level REST/SOAP services with Java Spring MVC Relational databases: Oracle, PostgreSQL, MySQL, etc. AWS or similar private/public cloud platform Experience in creating and maintaining containers with Docker and Kubernetes Knowledge of Terraform is preferred Familiar with messaging frameworks like MQ and Kafka Strong knowledge of OOP concepts, design patterns, and continuous delivery principles Knowledge of JPMC infra and frameworks like Photon is preferred OIDC, OAuth 2.0 implementation knowledge is a plus
Posted 5 days ago
5.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Role Description Role Proficiency: Act creatively to develop applications and select appropriate technical options, optimizing application development, maintenance, and performance by employing design patterns and reusing proven solutions; account for others' developmental activities. Outcomes Interpret the application/feature/component design to develop the same in accordance with specifications. Code, debug, test, document, and communicate product/component/feature development stages. Validate results with user representatives; integrate and commission the overall solution. Select appropriate technical options for development, such as reusing, improving, or reconfiguring existing components, or creating own solutions. Optimize efficiency, cost, and quality. Influence and improve customer satisfaction. Set FAST goals for self/team; provide feedback on FAST goals of team members. Measures Of Outcomes Adherence to engineering process and standards (coding standards) Adherence to project schedule / timelines Number of technical issues uncovered during the execution of the project Number of defects in the code Number of defects post-delivery Number of non-compliance issues On-time completion of mandatory compliance trainings Code Outputs Expected: Code as per design Follow coding standards, templates, and checklists Review code for team and peers Documentation Create/review templates, checklists, guidelines, and standards for design/process/development Create/review deliverable documents: design documentation, requirements, test cases/results Configure Define and govern configuration management plan Ensure compliance from the team Test Review and create unit test cases, scenarios, and execution Review test plan created by testing team Provide clarifications to the testing team Domain Relevance Advise Software Developers on design and development of features and components with a deep understanding of the business problem being addressed for the client. 
Learn more about the customer domain, identifying opportunities to provide valuable additions to customers Complete relevant domain certifications Manage Project Manage delivery of modules and/or manage user stories Manage Defects Perform defect RCA and mitigation Identify defect trends and take proactive measures to improve quality Estimate Create and provide input for effort estimation for projects Manage Knowledge Consume and contribute to project-related documents, SharePoint libraries, and client universities Review the reusable documents created by the team Release Execute and monitor release process Design Contribute to creation of design (HLD, LLD, SAD)/architecture for Applications/Features/Business Components/Data Models Interface With Customer Clarify requirements and provide guidance to development team Present design options to customers Conduct product demos Manage Team Set FAST goals and provide feedback Understand aspirations of team members and provide guidance, opportunities, etc. Ensure team is engaged in project Certifications Take relevant domain/technology certification Skill Examples Explain and communicate the design / development to the customer Perform and evaluate test results against product specifications Break down complex problems into logical components Develop user interfaces and business software components Use data models Estimate time and effort required for developing / debugging features / components Perform and evaluate tests in the customer or target environment Make quick decisions on technical/project-related challenges Manage a team, mentor, and handle people-related issues in the team Maintain high motivation levels and positive dynamics in the team. Interface with other teams, designers, and other parallel practices Set goals for self and team. Provide feedback to team members Create and articulate impactful technical presentations Follow high level of business etiquette in emails and other business communication Drive conference calls with customers, addressing customer questions Proactively ask for and offer help Ability to work under pressure, determine dependencies and risks, facilitate planning, and handle multiple tasks. Build confidence with customers by meeting the deliverables on time with quality. Estimate time, effort, and resources required for developing / debugging features / components Make appropriate utilization of software/hardware. Strong analytical and problem-solving abilities Knowledge Examples Appropriate software programs / modules Functional and technical designing Programming languages – proficient in multiple skill clusters DBMS Operating Systems and software platforms Software Development Life Cycle Agile – Scrum or Kanban Methods Integrated development environment (IDE) Rapid application development (RAD) Modelling technology and languages Interface definition languages (IDL) Knowledge of customer domain and deep understanding of the sub-domain where the problem is solved Additional Comments Experience preferred: 5+ Years Language: Must have expert knowledge of either Go or Java and have some knowledge of two others. Go Java Python C programming & Golang (basic knowledge) Infra: Brokers: Must have some experience and preferably mastery in at least one product. We use RabbitMQ and MQTT (Mosquitto). 
Prefer experience with edge deployments of brokers, because the design perspective is different when it comes to persistence, hardware, and telemetry. Linux Shell/Scripting. Docker. Kubernetes (k8s) – prefer experience with edge deployments; must have some mastery in this area or in Docker. K3s (nice-to-have). Tooling: GitLab CI/CD automation; dashboard building – in any system, someone who can take raw data and make something presentable and usable for production support. Nice to have: Ansible, Terraform. Responsibilities: KTLO activities for existing RabbitMQ and MQTT instances, including annual PCI, patching and upgrades, monitoring library upgrades of applications, production support, etc. Project work for RabbitMQ and MQTT instances, including: library enhancements in multiple languages; security enhancements – right now, we are setting up the hardened cluster including all of the requested security changes – telemetry, monitoring, dashboarding, reporting. Skills: Java, DevOps, RabbitMQ
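For illustration only, here is a minimal sketch of the kind of broker library work this posting describes: a durable publish to RabbitMQ using the Python pika client. The host, queue name, and payload are assumptions for the example, not details from the posting.

```python
# Minimal RabbitMQ publish sketch using the pika client (illustrative only).
# The broker host, queue name, and payload below are assumptions for the example.
import json
import pika

def publish_telemetry(host: str, queue: str, payload: dict) -> None:
    """Publish a persistent JSON message to a durable queue."""
    connection = pika.BlockingConnection(pika.ConnectionParameters(host=host))
    channel = connection.channel()
    channel.queue_declare(queue=queue, durable=True)  # queue survives broker restarts
    channel.basic_publish(
        exchange="",
        routing_key=queue,
        body=json.dumps(payload).encode(),
        properties=pika.BasicProperties(delivery_mode=2),  # mark message persistent
    )
    connection.close()

if __name__ == "__main__":
    publish_telemetry("localhost", "edge-telemetry", {"device": "sensor-01", "temp_c": 21.5})
```

Edge-deployed brokers typically add constraints around persistence and constrained hardware, which is why the durable queue and persistent delivery mode matter in even a small example like this.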
Posted 5 days ago
3.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Job Title: IT Engineer Location: Indore Job Type: Full-Time Experience: 3+ years Department: IT & Infrastructure Job Summary: We are seeking a highly skilled and versatile IT Engineer to manage and maintain our infrastructure systems and enterprise applications. The ideal candidate will have hands-on experience with cloud platforms (AWS), microservices architecture, system management tools, and both Linux and Windows environments. This role requires deep technical knowledge, excellent problem-solving skills, and the ability to handle system-level responsibilities, backups, and security policies effectively. Key Responsibilities: Cloud & DevOps: Manage and maintain AWS infrastructure including EC2, S3, IAM, RDS, and networking. Deploy and manage microservices using Docker and Kubernetes. Handle container orchestration and automation scripts (CI/CD pipelines). Set up and manage Nginx and Apache2 servers. System & Server Management: Oversee installation, configuration, and support of Windows and Ubuntu systems. Maintain and monitor local servers and ensure high availability. Implement security updates, patches, and hardening procedures. Handle system inventory and hardware/software lifecycle management. Backup & Recovery: Manage automated data and database backup solutions. Perform regular backup tests and ensure disaster recovery plans are in place. Employee Support & Policy Management: Define and implement system usage policies for employees. Provide support for user access, system issues, and software installations. Manage user onboarding/offboarding in IT systems. Monitoring & Optimization: Monitor server performance and troubleshoot issues proactively. Implement tools for system and network health checks. Ensure optimal performance of infrastructure and services. Required Skills & Qualifications: Proven experience with AWS and cloud infrastructure management. Strong knowledge of Docker, Kubernetes, and microservices architecture. Expertise in server configuration: Apache2, Nginx. Hands-on experience with both Windows and Ubuntu OS environments. Solid understanding of networking, firewalls, VPNs, DNS, and system security. Familiarity with database backup and restoration (MySQL/PostgreSQL/MongoDB). Experience with system inventory and asset tracking tools. Good documentation and policy drafting abilities. Excellent problem-solving and time management skills. Preferred Qualifications: AWS Certified Solutions Architect or equivalent certification. Experience with tools like Ansible, Terraform, Jenkins, Git. Prior experience in managing internal employee systems or ERP.
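As a small illustration of the backup-verification duties this posting mentions, the sketch below checks with boto3 whether yesterday's database backup object landed in S3. The bucket name and key layout are hypothetical assumptions, not details from the posting.

```python
# Sketch: verify that yesterday's database backup object exists in S3 (boto3).
# Bucket name and key layout are hypothetical; adapt to your own backup scheme.
from datetime import date, timedelta

import boto3
from botocore.exceptions import ClientError

def backup_exists(bucket: str, prefix: str) -> bool:
    key = f"{prefix}/{(date.today() - timedelta(days=1)).isoformat()}.sql.gz"
    s3 = boto3.client("s3")
    try:
        s3.head_object(Bucket=bucket, Key=key)  # raises ClientError if the object is missing
        return True
    except ClientError:
        return False

if __name__ == "__main__":
    ok = backup_exists("acme-db-backups", "postgres/prod")
    print("backup present" if ok else "ALERT: backup missing")
```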
Posted 5 days ago
7.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Requirements Description and Requirements Position Summary The resource is responsible for assisting with MetLife Docker container support for application development teams. In this position the resource will support MetLife applications in an operational role, performing application onboarding and troubleshooting infrastructure and application container issues, and will automate manual build processes using CI/CD pipelines. Job Responsibilities Development and maintenance in operational condition of OpenShift and Kubernetes orchestration container platforms Experience in workload migration from Docker to the OpenShift platform Manage the container platform ecosystem (installation, upgrade, patching, monitoring) Check and apply critical patches in OpenShift/Kubernetes Troubleshoot issues in OpenShift clusters Experience in OpenShift implementation, administration and support Working experience in OpenShift and Docker/K8s Knowledge of CI/CD methodology and tooling (Jenkins, Harness) Experience with system configuration tools including Ansible, Chef Cluster maintenance and administration experience on OpenShift and Kubernetes Strong knowledge of and experience in RHEL Linux Manage OpenShift management components and tenants Participate as part of a technical team responsible for the overall support and management of the OpenShift Container Platform Learn new technologies based on demand Willing to work in rotational shifts Good communication skills with the ability to communicate clearly and effectively Knowledge, Skills And Abilities Education Bachelor's degree in Computer Science, Information Systems, or related field Experience 7+ years of total experience and at least 4+ years of experience in development and maintenance in operational condition of OpenShift and Kubernetes orchestration container platforms Experience in installation, upgrade, patching and monitoring of the container platform ecosystem Experience in workload migration from Docker to the OpenShift platform Good knowledge of CI/CD methodology and tooling (Jenkins, Harness) Linux administration Software-defined networking (fundamentals) Container runtimes (Podman/Docker), Kubernetes (OpenShift)/Swarm orchestration, GoLang framework and microservices architecture Knowledge and usage of observability tools (e.g. Elastic, Grafana, Prometheus, OTEL collectors, Splunk) Apache administration Automation platforms: specifically Ansible (roles/collections) SAFe DevOps Scaled Agile methodology Scripting: Python, Bash Serialization languages: YAML, JSON Knowledge and usage of CI/CD tools (e.g. AzDO, ArgoCD) Reliability management / troubleshooting Collaboration & communication skills Continuous Integration / Continuous Delivery (CI/CD) Experience in creating change tickets and working on tasks in ServiceNow Java management (JMX) / NodeJS management About MetLife Recognized on Fortune magazine's list of the 2025 "World's Most Admired Companies" and Fortune World’s 25 Best Workplaces™ for 2024, MetLife, through its subsidiaries and affiliates, is one of the world’s leading financial services companies, providing insurance, annuities, employee benefits and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East. Our purpose is simple - to help our colleagues, customers, communities, and the world at large create a more confident future.
United by purpose and guided by empathy, we’re inspired to transform the next century in financial services. At MetLife, it’s #AllTogetherPossible . Join us!
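For illustration only, a minimal sketch of the routine cluster health checking this posting's operational support involves: listing pods that are not in a healthy phase using the official Kubernetes Python client. It assumes a valid kubeconfig; from the client's point of view an OpenShift cluster is simply another context.

```python
# Sketch: flag pods that are not Running/Succeeded across all namespaces.
# Uses the official Kubernetes Python client and assumes a valid kubeconfig
# (an OpenShift cluster is just another context from the client's perspective).
from kubernetes import client, config

def unhealthy_pods() -> list[str]:
    config.load_kube_config()
    v1 = client.CoreV1Api()
    bad = []
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        if pod.status.phase not in ("Running", "Succeeded"):
            bad.append(f"{pod.metadata.namespace}/{pod.metadata.name}: {pod.status.phase}")
    return bad

if __name__ == "__main__":
    for line in unhealthy_pods():
        print(line)
```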
Posted 5 days ago
6.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Job Description for SAT: · Perform vulnerability assessment & policy compliance using leading vulnerability scanning solutions such as Qualys. · Perform vulnerability assessments & policy compliance on on-prem and cloud-hosted systems, containers (such as Docker & Kubernetes), databases, web services and other widely deployed infrastructure components. · Perform false positive validation and ensure delivery of quality reports. · Act as a technical SME to analyse vulnerability results & detection logic. · Provide technical advice and support on remediation to infrastructure/application support teams. · Review findings, identify root causes for common issues and provide recommendations for sustainable improvements. · Responsible for maintaining vulnerability quality assurance by building the VM team's technical knowledge base. · Research and report on security vulnerabilities and the latest advancements in the vulnerability management lifecycle. · Communicate security policies, procedures and guidelines to all levels of management and staff. · Communicate effectively orally and in writing and establish cooperative working relationships. · Provide suggestions to improve the vulnerability management service based on current trends in information technology (network, system security software and hardware). · Act as line manager in the absence of the team lead. Required skills: · Minimum 6 years of experience in information security, preferably in the banking and financial services sector. · In-depth working experience with cloud technologies, routers, switches, firewalls, load balancers and proxies will be an added advantage for the role. · Bachelor's degree in Engineering, Computer Science/Information Technology or its equivalent. · Industry certifications will be a plus, e.g. CISSP, CCNA Security, CCIE, CCNP Security, CISA, CRISC and CISM.
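To give a flavour of the triage and false-positive-filtering work described above, here is a hedged sketch that buckets exported scan findings by CVSS score. The column names and thresholds are illustrative assumptions, not the export format of Qualys or any particular scanner.

```python
# Sketch: triage exported vulnerability findings by CVSS score.
# Column names ("host", "qid", "cvss", "status") and the thresholds used here are
# illustrative assumptions, not the actual export format of any specific scanner.
import csv
from collections import defaultdict

def triage(report_path: str) -> dict[str, list[dict]]:
    buckets = defaultdict(list)
    with open(report_path, newline="") as fh:
        for row in csv.DictReader(fh):
            if row.get("status", "").lower() == "false positive":
                continue  # drop findings already validated as false positives
            score = float(row["cvss"])
            severity = "critical" if score >= 9 else "high" if score >= 7 else "other"
            buckets[severity].append(row)
    return buckets

if __name__ == "__main__":
    findings = triage("scan_export.csv")
    for severity, rows in findings.items():
        print(severity, len(rows))
```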
Posted 5 days ago
0 years
0 Lacs
Pune, Maharashtra, India
Remote
At NiCE, we don’t limit our challenges. We challenge our limits. Always. We’re ambitious. We’re game changers. And we play to win. We set the highest standards and execute beyond them. And if you’re like us, we can offer you the ultimate career opportunity that will light a fire within you. So, what’s the role all about? Serve as a top-performing engineer, excelling in the design, development, and testing of high-quality software. Ensure all projects meet specified functional and non-functional requirements within given time and resource constraints. How will you make an impact? Collaborate in Design Process: Work with senior software engineers, architects, and managers to design software products and services. Contribute to implementation planning and estimation. Experience - 5 to 7 yrs Communicate Software Designs: Convey software designs to other engineering staff through code, textual, and pictorial documentation. Interface with various groups within and outside R&D as needed. End-to-End Implementation and Support: Lead by example to ensure comprehensive quality coverage and high responsiveness to issues throughout the software lifecycle. Maintain Design and Quality Standards: Ensure design and quality standards are met through regular code reviews and testing. Mentor and coach peers and junior engineers, promoting best practices and software craftsmanship. Contribute Quality Code: Personally contribute significant volumes of high-quality code, ensuring regular releases and deployments alongside colleagues. Lead Scrum Team: Lead a scrum team of developers and QA engineers to meet roadmap commitments effectively. Have you got what it takes? Java Development: Experience in Java programming, including data structures, threading, OOP, design patterns, functional programming, and memory optimization. Spring Framework: Proficient in using Spring and Spring Boot for web applications or web services. Experience with Spring Security/Batch and security technologies like SAML, OAuth, and JWT is a plus. Messaging and API: Familiarity with JMS/Kafka and API Gateway/reverse proxy technologies. RESTful APIs and Microservices: Hands-on experience with RESTful API development and microservice architecture. JavaScript Development: Proficiency in JavaScript programming. Frontend Frameworks: Experience with ReactJS (or similar frameworks) for complex pages, authentication/authorization, and state management. Web Technologies: Proficiency in HTML5, CSS3, and Responsive Web Design, including grids, layouts, and offline storage. Database Management: Experienced with MySQL/Postgres and MongoDB, including schema definition, query performance tuning, and ORM. SaaS Solutions: Experience in developing scalable multi-tenant SaaS-based solutions. Cloud Technologies: Familiarity with public cloud infrastructure and technologies such as AWS, Google Cloud Engine, or Azure. CI/CD Practices: Experience with Continuous Integration and Delivery using Jenkins, Docker, Kubernetes, and Artifactory. Agile Methodology: Experience working in an Agile development environment and using work item management tools like JIRA. You will have an advantage if you also have : Analytical Skills: Strong analytical and problem-solving abilities. Communication and Collaboration: Excellent communication and collaboration skills. What’s in it for you? Join an ever-growing, market disrupting, global company where the teams – comprised of the best of the best – work in a fast-paced, collaborative, and creative environment! 
As the market leader, every day at NiCE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NiCEr! Enjoy NiCE-FLEX! At NiCE, we work according to the NiCE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere. Requisition ID: 8106 Reporting into: Tech Manager Role Type: Individual Contributor About NiCE NICE Ltd. (NASDAQ: NICE) software products are used by 25,000+ global businesses, including 85 of the Fortune 100 corporations, to deliver extraordinary customer experiences, fight financial crime and ensure public safety. Every day, NiCE software manages more than 120 million customer interactions and monitors 3+ billion financial transactions. Known as an innovation powerhouse that excels in AI, cloud and digital, NiCE is consistently recognized as the market leader in its domains, with over 8,500 employees across 30+ countries. NiCE is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, age, sex, marital status, ancestry, neurotype, physical or mental disability, veteran status, gender identity, sexual orientation or any other category protected by law.
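The posting above lists token-based security (SAML, OAuth, JWT) among its requirements. As a small, standard-library-only illustration of what a JWT carries, the sketch below decodes a token's payload without verifying its signature; the sample token and claim names are invented for the example.

```python
# Sketch: decode a JWT's payload (no signature verification) to inspect its claims.
# Standard library only; the sample token and claim names are invented for illustration.
import base64
import json

def jwt_claims(token: str) -> dict:
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

if __name__ == "__main__":
    sample = (
        "eyJhbGciOiJIUzI1NiJ9."
        + base64.urlsafe_b64encode(json.dumps({"sub": "user-42", "scope": "read"}).encode()).decode().rstrip("=")
        + ".sig"
    )
    print(jwt_claims(sample))  # {'sub': 'user-42', 'scope': 'read'}
```

In production, verification of the signature and expiry is mandatory; this sketch only shows the claim structure that authorization logic typically inspects.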
Posted 5 days ago
8.0 - 12.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Notice period 30 days max Do you want to join a team that blazes a trail for technology modernisation in banking? Do you aspire to work with colleagues who productionise microservice architecture concepts into enterprise-scale solutions? Can you drive modernisation and reshape traditional platforms and systems? Our team is undertaking an ambitious programme of wholesale platform renewal & product evolution underpinned by genuine digitalisation. We are looking for a talented engineer to act as a technical expert for our fledgling team in Pune, India. If you are motivated to build our next-generation Post-Trade & Custody ecosystem based on services and integration in modern programming paradigms, we look forward to receiving your application. Responsibilities: You will work in a highly collaborative environment with colleagues from globally diverse backgrounds and skillsets coming together to solve challenging problems as a team. The position is within our Trade and Transfer Assets stream, closely collaborating with Agile delivery units distributed across the globe. Our teams design, deliver and operate state-of-the-art financial systems that offer best-in-class services to the bank's clients. Our development pods work across a multitude of development languages, working together in a model of coexistence whilst we transform, modernize, and evolve our post-trade service platform. We are genuine believers that diversity brings more varied experience, expertise, and working methods to improve the way we engineer and deliver solutions. Our Stack: • CI/CD: GitLab • Cloud: Azure • Platform: Linux, Windows, Mainframe • Configuration management: Ansible • Scripts: Bash • Programming languages: Java, .NET, Kafka, Kotlin • Container-based architecture and deployments (Kubernetes) • DB: Postgres Mandatory Skills Description: Azure, Kubernetes, Docker, Networking, Ansible, GitLab, CI/CD, Shell Scripting We are looking for a DevOps Engineer with Azure expertise who is passionate and ready to develop state-of-the-art technology solutions for digital platforms. This job offers a variety of challenges on a daily basis: you will need to understand the business needs and, applying creative thinking, develop and design solutions in Azure with Kafka and DevOps capabilities, implementing them according to DevOps practices. This job is for someone who is excited and happy to work with cutting-edge technologies and ideally motivated to work with huge amounts of complex cloud-based data. To apply for this job as an Azure Cloud Developer with Kafka, you should bring: • 8-12 years of experience • Significant experience in designing and developing Azure (with Kubernetes) solutions • Strong knowledge and experience of working with Kafka • Comfort working with large amounts of data • Knowledge of technologies such as Docker and Kubernetes • DevOps skills, which are also essential • Good Postgres DB knowledge • Microsoft Azure experience and certification is a plus Nice-to-Have Skills Description: A positive attitude, willingness to learn and desire to improve the environment around you • Knowledge of virtualization and containerization • Track record as an engineer working in a globally distributed team • On-the-job examples of working in a fast-paced Agile environment Languages: English: C2 Proficient
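For illustration only, a minimal sketch of the Kafka-side work this stack implies: publishing a JSON event with the kafka-python client. The broker address, topic name, and event payload are assumptions for the example.

```python
# Sketch: publish a JSON event to Kafka with kafka-python (illustrative only).
# Broker address, topic name, and the event payload are assumptions for the example.
import json

from kafka import KafkaProducer

def send_event(bootstrap: str, topic: str, event: dict) -> None:
    producer = KafkaProducer(
        bootstrap_servers=bootstrap,
        value_serializer=lambda v: json.dumps(v).encode(),
        acks="all",  # wait for acknowledgement from all in-sync replicas
    )
    producer.send(topic, event)
    producer.flush()
    producer.close()

if __name__ == "__main__":
    send_event("localhost:9092", "trade-events", {"trade_id": "T-1001", "status": "settled"})
```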
Posted 5 days ago
8.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Line of Service Advisory Industry/Sector Not Applicable Specialism Microsoft Management Level Manager Job Description & Summary At PwC, our people in integration and platform architecture focus on designing and implementing seamless integration solutions and robust platform architectures for clients. They enable efficient data flow and optimise technology infrastructure for enhanced business performance. Those in solution architecture at PwC will design and implement innovative technology solutions to meet clients' business needs. You will leverage your experience in analysing requirements, developing technical designs to enable the successful delivery of solutions. Why PWC At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations. Responsibilities We are seeking a highly skilled Architect/Lead with deep expertise in Chaos Engineering and Automated Root Cause Analysis (RCA) to drive the resilience, observability, and reliability strategy of our customers. You will be responsible for designing, implementing, and evangelizing chaos practices and automated diagnostics across diverse platforms, ensuring that customers’ applications and systems can withstand and recover gracefully from unexpected failures. Key experience requirements (must have): a) Demonstrable experience in preparing chaos engineering strategy, architecture and roadmap for clients. b) Demonstrable experience in implementing chaos engineering solutions for the clients c) Demonstrable experience in microservices development; demonstrable experience in programming using Python, C#, Java d) Demonstrable experience in cloud platforms – monitoring, log analysis, DevOps – Azure preferred e) Demonstrable experience on one chaos engineering tool – Gremlin, Harness, etc. Key Responsibilities: Define and own the Chaos Engineering strategy, architecture, and roadmap. Architect automated RCA systems leveraging observability platforms, AI/ML techniques, and event correlation tools. Develop resilience patterns and chaos experimentation frameworks integrated into the SDLC. Design and orchestrate controlled fault injection experiments to validate system robustness (e.g., latency injection, dependency failures, resource exhaustion). Evaluate and deploy Chaos Engineering tools (such as Harness, Gremlin, Chaos Mesh, LitmusChaos, Simian Army, etc.) tailored to cloud-native, hybrid, and legacy environments. 
Establish guardrails, blast radius controls, and automated rollback procedures for experiments. Architect solutions to automatically detect, triage, and pinpoint root causes for production incidents. Integrate logs, metrics, traces, and events across monitoring tools (Datadog, New Relic, Splunk, ELK, Prometheus) for correlation and insights. Develop or integrate ML models and rules engines to accelerate RCA and reduce MTTR. Define policies, processes, and success criteria for chaos experiments and RCA automation. Create reusable playbooks, runbooks, and knowledge artifacts. Mentor engineering teams on resilience and reliability engineering. Partner with SRE, Platform Engineering, Application Development, and Security teams. Champion a culture of proactive failure testing and continuous improvement. Key Skills & Qualification: Deep understanding of distributed systems, microservices, and cloud-native architectures (Kubernetes, Microsoft Azure). Strong knowledge of observability pillars (logging, monitoring, tracing). Hands-on experience with Azure Monitor, Azure Log Analytics, Azure Application Insights, Azure Sentinel, Azure Automation, Logic Apps, Azure Machine Learning, Azure Data Explorer Hands-on experience with Chaos Engineering tools and fault injection practices (Azure Chaos Studio, harness, Gremlin) Familiarity with AIOps and intelligent RCA frameworks. Proficiency in scripting/programming languages (Python, Go, Java). Experience automating experiments and RCA workflows via pipelines (GitHub, GitHub Actions, Azure DevOps). Strong analytical mindset for dissecting failures and correlating signals across multiple systems. Excellent communication and influencing skills. Proven ability to lead cross-functional initiatives and drive cultural change. Mandatory Skill Sets Chaos Engineering and DevSecOps Preferred Skill Sets Site Reliability experience and Experience automating experiments and RCA workflows via pipelines (GitHub, GitHub Actions, Azure DevOps). Years Of Experience Required 8-10 Years Education Qualification B.E./B.Tech Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Bachelor of Engineering, Bachelor of Technology Degrees/Field Of Study Preferred Certifications (if blank, certifications not specified) Required Skills Chaos Engineering Optional Skills Accepting Feedback, Accepting Feedback, Active Listening, Amazon Web Services (AWS), Analytical Thinking, Architectural Engineering, Brainstorm Facilitation, Business Impact Analysis (BIA), Business Process Modeling, Business Requirements Analysis, Business Systems, Business Value Analysis, Cloud Strategy, Coaching and Feedback, Communication, Competitive Advantage, Competitive Analysis, Conducting Research, Creativity, Embracing Change, Emotional Regulation, Empathy, Enterprise Architecture, Enterprise Integration, Evidence-Based Practice (EBP) {+ 46 more} Desired Languages (If blank, desired languages not specified) Travel Requirements Not Specified Available for Work Visa Sponsorship? No Government Clearance Required? No Job Posting End Date
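As a generic illustration of the fault-injection and blast-radius ideas in this posting, the sketch below wraps a function in a tiny chaos decorator that randomly delays or fails calls. It is not the API of Gremlin, Harness, Chaos Mesh, or any other tool named above; it only shows the shape of a controlled experiment.

```python
# Sketch: a tiny fault-injection decorator with a blast-radius control.
# Generic illustration of controlled failure/latency injection; it does NOT
# reproduce the API of Gremlin, Harness, Chaos Mesh, or any other named tool.
import functools
import random
import time

def inject_fault(failure_rate: float = 0.1, added_latency_s: float = 0.5):
    """Randomly delay or fail a call to exercise the caller's resilience paths."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if random.random() < failure_rate:       # blast radius: fraction of calls affected
                time.sleep(added_latency_s)           # simulate a slow dependency
                raise TimeoutError(f"chaos: injected failure in {fn.__name__}")
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@inject_fault(failure_rate=0.2)
def fetch_quote(symbol: str) -> dict:
    return {"symbol": symbol, "price": 101.5}

if __name__ == "__main__":
    for _ in range(5):
        try:
            print(fetch_quote("ACME"))
        except TimeoutError as exc:
            print("handled:", exc)
```

Real experiments add guardrails the posting calls out, such as limiting injection to non-production tenants and wiring automated rollback when error budgets are breached.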
Posted 5 days ago
8.0 years
0 Lacs
Pune, Maharashtra, India
Remote
About The Job At Red Hat the Global Customer Service team offers an inclusive and collaborative environment where professionals work together to build solutions for data quality, governance, and knowledge management. Red Hat is the place for you to apply your skills in data quality, AI, and project management while nurturing your leadership capabilities. The Global Customer Service team is looking for a Project Manager – Technical: Customer Service Knowledge Domain Expert to join us in Pune, India. In this role, you will report to the Global lead of customer service and work on shaping the Red Hat Global Customer Service team’s data quality and governance strategies, collaborating with AI experts, data scientists, engineering, and business stakeholders. You’ll be responsible for defining standards, implementing quality programs, and driving continuous improvement across our customer data and knowledge systems. You will ensure our data and processes meet the highest standards of accuracy, security, searchability and usability, enabling effective self solve solutions and decision-making. As a Project Manager in this team, you will gain deep insights into AI applications, data governance practices, and enterprise-level customer service operations. In this role, you will have the opportunity to showcase your leadership skills, drive impactful solutions to complex challenges, and contribute meaningfully to Red Hat’s success while gaining broad visibility across the organization. What will you do? Contribute to defining, evolving and collaboratively executing the data quality and governance strategy for customer data, knowledge bases, and support records Lead the implementation of data quality standards, metrics (KPIs), validation routines, and feedback loops across knowledge assets and customer interaction records Collaborate closely with AI model developers, engineers, and business stakeholders to align data quality efforts with AI initiatives and product improvements Guide process design for continuous data quality monitoring and implement automated validation tools and best practices Champion a culture of data quality and governance, conducting training and communication programs to drive awareness and adoption Evaluate and recommend data quality tools and technologies, including KCS V6 practices and AI-powered solutions Develop domain-specific quality programs focused on Knowledge Management, support case quality, and voice of customer insights Act as a prompt engineer for AI-assisted support tools, ensuring accuracy, efficiency, and security in AI-driven customer interactions Monitor and report on data quality metrics, perform root cause analyses, and drive corrective and preventive actions across teams What will you bring? 
Bachelor’s degree in Data Science, Computer Science, Information Systems, Business Analytics, or a related field 8-10 years of experience in data quality, governance, or data management, with at least 2 years in a leadership or project management role Experience implementing enterprise-level data quality strategies, ideally supporting AI/ML initiatives Familiarity with knowledge management systems (e.g., CMS platforms) and CRM tools like Salesforce Service Cloud Strong understanding of data lifecycle management, profiling, cleansing, validation, and quality dimensions Excellent communication and stakeholder management skills, with the ability to influence and align cross-functional teams Solid project management skills and the ability to handle multiple initiatives in a fast-paced environment Conceptual understanding of AI/ML and their reliance on high-quality data Passion for continuous learning and driving data-driven improvements The Following Are Considered a Plus Certifications in Data Governance, Data Quality, or Project Management methodologies KCS V6 Certification Experience with natural language processing (NLP) applications and challenges in unstructured data quality Familiarity with responsible AI practices and data ethics principles Familiarity with industry best practices for data ethics and responsible AI #customerservice #projectmanagement #KCSV6 #Knowledgedomainexpert #Globalteam About Red Hat Red Hat is the world’s leading provider of enterprise open source software solutions, using a community-powered approach to deliver high-performing Linux, cloud, container, and Kubernetes technologies. Spread across 40+ countries, our associates work flexibly across work environments, from in-office, to office-flex, to fully remote, depending on the requirements of their role. Red Hatters are encouraged to bring their best ideas, no matter their title or tenure. We're a leader in open source because of our open and inclusive environment. We hire creative, passionate people ready to contribute their ideas, help solve complex problems, and make an impact. Inclusion at Red Hat Red Hat’s culture is built on the open source principles of transparency, collaboration, and inclusion, where the best ideas can come from anywhere and anyone. When this is realized, it empowers people from different backgrounds, perspectives, and experiences to come together to share ideas, challenge the status quo, and drive innovation. Our aspiration is that everyone experiences this culture with equal opportunity and access, and that all voices are not only heard but also celebrated. We hope you will join our celebration, and we welcome and encourage applicants from all the beautiful dimensions that compose our global village. Equal Opportunity Policy (EEO) Red Hat is proud to be an equal opportunity workplace and an affirmative action employer. We review applications for employment without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, citizenship, age, veteran status, genetic information, physical or mental disability, medical condition, marital status, or any other basis prohibited by law. Red Hat does not seek or accept unsolicited resumes or CVs from recruitment agencies. We are not responsible for, and will not pay, any fees, commissions, or any other payment related to unsolicited resumes or CVs except as required in a written contract between Red Hat and the recruitment agency or party requesting payment of a fee. 
Red Hat supports individuals with disabilities and provides reasonable accommodations to job applicants. If you need assistance completing our online job application, email application-assistance@redhat.com. General inquiries, such as those regarding the status of a job application, will not receive a reply.
Posted 5 days ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Overall Objectives of Job Administration of the One Identity tool and management of integrated identities and services. DevOps Engineer with expertise in Kubernetes, Docker, Azure, AWS, deployment, VMware. Management of cloud and on-prem infrastructures hosting IAM. Working knowledge of One Identity tools: 1IM Manager / Object Browser / Job Queue / Synchronization Editor. Understanding of the whole IAM environment, Active Directory multi-forest environment at an enterprise level, Windows OS, IIS, MS SQL Server. Monitor, report and analyse bugs during and after IAM release versions. Performance management of IAM tools, database and infrastructure. Administration of identities and services integrated with the One IDM tool. Support for organization integration with the IAM infra. Collaborate and work with onshore development and project teams to provide solutions and assist during project release, testing and operational support. Responsible for management of incident, problem and change within the IAM infrastructure. Responsible for documentation and update of IAM processes and operating procedures. Work with software development tools (e.g., JIRA) and handle various IAM-related tasks. Technical • Experience in One Identity tool (preferred) operations or similar IAM tools • DevOps Engineer with expertise in Kubernetes, Docker, Azure, AWS, deployment, VMware • Knowledge of DevOps tools: GitHub, Azure Kubernetes, pipeline deployment • Knowledge of the Jenkins automation tool, IaaS, infrastructure background • Hands-on experience • Knowledge of DNS, TCP/IP, network technologies • Knowledge of MS SQL (single and cluster configuration) database technologies • Knowledge of incident, problem and change process handling
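For illustration only, a hedged sketch of the kind of MS SQL health check an IAM operations role might script: querying a job-queue table for long-pending entries via pyodbc. The connection string, table, and column names are invented for the example and are not the actual One Identity Manager schema.

```python
# Sketch: check a hypothetical IAM job-queue table for long-pending entries via pyodbc.
# The connection string, table, and column names are invented for illustration and
# are NOT the actual One Identity Manager schema.
import pyodbc

def stuck_jobs(conn_str: str, max_age_minutes: int = 60) -> list:
    with pyodbc.connect(conn_str) as conn:
        cursor = conn.cursor()
        cursor.execute(
            """
            SELECT job_id, task_name, queued_at
            FROM iam_job_queue
            WHERE status = 'PENDING'
              AND queued_at < DATEADD(MINUTE, ?, GETDATE())
            """,
            -max_age_minutes,  # negative offset: entries older than the threshold
        )
        return cursor.fetchall()

if __name__ == "__main__":
    dsn = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=iam-db;DATABASE=IAM;Trusted_Connection=yes"
    for job in stuck_jobs(dsn):
        print(job)
```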
Posted 5 days ago
7.0 years
0 Lacs
Pune, Maharashtra, India
Remote
At NiCE, we don’t limit our challenges. We challenge our limits. Always. We’re ambitious. We’re game changers. And we play to win. We set the highest standards and execute beyond them. And if you’re like us, we can offer you the ultimate career opportunity that will light a fire within you. What’s the role all about? As a Senior Automation Engineer, you will be responsible for designing, developing, and maintaining robust automation solutions, as well as performing manual verification, across backend services, APIs, and database layers. This role is critical to ensuring high-quality delivery of our financial market compliance applications. You will play a key role in implementing scalable test automation frameworks and manual test verification, promoting best practices, and validating complex data processes across distributed systems. How will you make an impact? Design, build, and maintain scalable test automation frameworks for database, API, and UI validation using Java, Selenium, and modern test tools. Perform advanced SQL-based validation across PostgreSQL and MSSQL databases to ensure data accuracy and integrity. Drive test strategy and automation planning across backend modules in fast-paced Agile teams. Mentor junior QA engineers and set quality engineering standards for the team. Collaborate with cross-functional teams—including developers, DevOps, and product owners—to define, review, and verify technical solutions. Contribute to root cause analysis, support defect triage, and uphold regulatory compliance standards. Have you got what it takes? Bachelor's or Master’s degree in Computer Science, Engineering, or related field. 4–7 years of experience in test automation, with expertise in backend and database validation. Strong programming skills in Java and experience with test frameworks like TestNG, JUnit, or Selenium. Proficiency in writing complex SQL queries; hands-on with PostgreSQL and MSSQL. Experience in Agile environments and test methodologies. Excellent troubleshooting, debugging, and analytical skills. You will have an advantage if you also have: Experience with API test automation tools such as RestAssured, Postman, or similar. Exposure to CI/CD tools (e.g., Jenkins, Git, Maven, Docker). Familiarity with test reporting solutions like Jira, ExtentReports, or TestRail. Background in financial services, regulatory systems, or compliance platforms. Experience with cloud platforms (AWS) and containerization using Docker/Kubernetes. What’s in it for you? Join a fast-growing, industry-leading global organization where quality and innovation are at the heart of everything we do. Collaborate with top talent across disciplines and domains, work on meaningful challenges, and drive continuous improvement in critical financial technology platforms. This is your opportunity to grow with us and shape the future of quality assurance in the compliance space. Enjoy NICE-FLEX! We work in a hybrid model designed to give you the best of both worlds — 2 days in the office for collaboration and innovation, and 3 days remote for focused, flexible work. About NiCE NICE Ltd. (NASDAQ: NICE) software products are used by 25,000+ global businesses, including 85 of the Fortune 100 corporations, to deliver extraordinary customer experiences, fight financial crime and ensure public safety. Every day, NiCE software manages more than 120 million customer interactions and monitors 3+ billion financial transactions.
Known as an innovation powerhouse that excels in AI, cloud and digital, NiCE is consistently recognized as the market leader in its domains, with over 8,500 employees across 30+ countries. NiCE is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, age, sex, marital status, ancestry, neurotype, physical or mental disability, veteran status, gender identity, sexual orientation or any other category protected by law.
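For illustration of the SQL-based validation work described in this posting, here is a hedged pytest-style check against PostgreSQL via psycopg2. The connection details, table names, and the specific invariant are assumptions for the example.

```python
# Sketch: a pytest-style SQL validation check against PostgreSQL via psycopg2.
# Connection details, table names, and the invariant being checked are illustrative assumptions.
import psycopg2

DSN = "host=localhost dbname=compliance user=qa password=secret"

def count_rows(sql: str) -> int:
    with psycopg2.connect(DSN) as conn, conn.cursor() as cur:
        cur.execute(sql)
        return cur.fetchone()[0]

def test_no_orphaned_trades():
    # Every trade must reference an existing account; orphans indicate a load defect.
    orphans = count_rows(
        """
        SELECT COUNT(*) FROM trades t
        LEFT JOIN accounts a ON a.account_id = t.account_id
        WHERE a.account_id IS NULL
        """
    )
    assert orphans == 0, f"{orphans} trades reference missing accounts"
```

The same pattern extends to MSSQL by swapping the driver; the value is that the data invariant lives in version-controlled test code rather than in ad hoc queries.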
Posted 5 days ago
5.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Nice to meet you! We’re a leader in data and AI. Through our software and services, we inspire customers around the world to transform data into intelligence - and questions into answers. We’re also a debt-free multi-billion-dollar organization on our path to IPO-readiness. If you're looking for a dynamic, fulfilling career coupled with flexibility and world-class employee experience, you'll find it here. About The Job The IaaS Design Engineering team is looking for a Senior Azure Cloud Engineer to help design, build and maintain the next generation of hosted Azure SAS Cloud. Our team owns IaaS technology domains, sets standards, develops new solutions, and much more for hosted SAS environments, corporate/internal teams and R&D. As An Azure Cloud Engineer, You Will Advise teams on technical designs by providing supporting justification (and when necessary prototypes) for architectural and design choices. Assist in communicating these designs to promote understanding with stakeholders and people of various technical backgrounds Interface with the “customer” to gather functional level requirements for a solution. Works with customers to define the activities and challenges they face, to ensure the solution created solves their problems. Informs customers of current capabilities, what is possible, and what isn't. Think analytically, write/edit technical material, and present clearly on technical matters in a business context. Exercise your ability to adapt and react to complex issues in a timely manner as our business needs and objectives evolve. Work with project teams and architects to develop and maintain architectural designs for applications/platforms/services. Ensure the designs are cost-effective, meet user requirements and meet agreed upon service level agreements. Contribute to technical portfolio and technology lifecycle management initiatives by working with teams and architects to assess and improve technical health, cost, and business value of CIS applications or services. Optimize system performance, scalability, and reliability through architecture enhancements and best practices. Experience: 5-8 Years. Required Qualifications A bachelor’s degree in computer science or a related quantitative field Azure Public Cloud expertise with at least 5+ years of experience in supporting enterprise solutions/services in a public cloud. 2+ years of experience in systems support and consulting. 2+ years of experience in Linux. 1+ year of experience in deploying and managing Kubernetes. In-depth experience in all major aspects of Information Technology (servers/compute, network, storage). Experience in deploying and managing containerized services. Development experience in a major scripting or compiled programming languages. Experience using source control solutions like GitHub or GitLab. Azure Solutions Architect Expert or other Azure related certifications. Equivalent combination of related education, training and experience may be considered in place of the above qualifications. You’re curious, passionate, authentic and accountable. These are our values and influence everything we do. Preferred Qualifications Certified Kubernetes Administrator or other related certifications. Experience with large-scale, multi-platform enterprise systems. Familiarity with automation through interaction with remote APIs (REST). Experience with Red Hat Enterprise Linux and/or Windows Server Technologies. Experience with LDAP, ADFS, SAML, or other Identity and Access Management Solutions. 
Experience with on-prem hypervisor technologies. Experience with WAN technologies and how they interact with public cloud solutions. Diverse and Inclusive At SAS, it’s not about fitting into our culture – it’s about adding to it. We believe our people make the difference. Our diverse workforce brings together unique talents and inspires teams to create amazing software that reflects the diversity of our users and customers. Our commitment to diversity is a priority to our leadership, all the way up to the top; and it’s essential to who we are. To put it plainly: you are welcome here. Additional Information SAS only sends emails from verified “sas.com” email addresses and never asks for sensitive, personal information or money. If you have any doubts about the authenticity of any type of communication from, or on behalf of SAS, please contact Recruitingsupport@sas.com. #SAS
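For illustration of the Azure automation this posting centres on, a hedged sketch using the Azure SDK for Python to enumerate resource groups. It assumes the azure-identity and azure-mgmt-resource packages, credentials resolved from the environment (CLI login, managed identity, etc.), and a placeholder subscription ID.

```python
# Sketch: enumerate resource groups in a subscription with the Azure SDK for Python.
# Requires the azure-identity and azure-mgmt-resource packages; the subscription ID
# below is a placeholder and credentials come from the environment (CLI login, MSI, etc.).
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

def list_resource_groups(subscription_id: str) -> list[str]:
    client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)
    return [rg.name for rg in client.resource_groups.list()]

if __name__ == "__main__":
    for name in list_resource_groups("00000000-0000-0000-0000-000000000000"):
        print(name)
```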
Posted 5 days ago