6.0 - 9.0 years
20 - 27 Lacs
Gurugram
Hybrid
Apply now! vineeta@black-turtle.co Hiring for one of our MNC clients for a full-time role. Please share: Total years of exp, Company, Current location, Office location, CTC, ECTC, Notice period. Job Details: Experience: 6+ years. Location: Gurgaon. 4-7 years of experience in software design and development with a strong understanding of software development principles; proficient in Golang and writing testable code. Good knowledge of cloud technologies (e.g., AWS/GCP) and DevOps tools (e.g., Git, Bitbucket, Jenkins) with the ability to handle performance, scalability, and reliability issues. Well versed in Agile methodologies and data processing pipelines. Good knowledge of SRE concepts (e.g., SLOs/SLIs, error budgets, anti-fragility) is also advantageous. Knowledge of APM tools (e.g., Dynatrace, AppDynamics, Datadog) and log monitoring tools (e.g., Sumo Logic, Splunk) is also desirable. Collaborative, with excellent verbal and written communication skills, especially when building playbooks and documentation processes.
Posted 7 hours ago
6.0 - 8.0 years
15 - 25 Lacs
Chennai
Work from Office
Job Summary: Develop automated solutions. Cloud automation engineers design and develop automated solutions for cloud-based infrastructure, such as scripts and code to provision and configure resources. Responsibilities: Maintain and optimize cloud-based infrastructure: cloud automation engineers are responsible for ensuring that cloud-based infrastructure is maintained and optimized. They monitor performance, diagnose and troubleshoot issues, and make necessary changes to optimize system performance. Support for in-house tools: responsible for maintaining the webpages; strong knowledge of Python, Django, web languages (HTML/CSS/PHP), and MariaDB; ability to provision, monitor, optimize, and scale Azure/AWS infrastructure using APIs. Implement Continuous Integration/Continuous Deployment (CI/CD) processes: cloud automation engineers are responsible for implementing and maintaining CI/CD processes for cloud-based infrastructure. They ensure that changes are tested, approved, and deployed in an automated and secure manner. Ensure security and compliance: cloud automation engineers ensure that cloud-based infrastructure meets security and compliance requirements. They work with IT security and compliance teams to implement security and compliance controls and ensure that they are being followed. Collaborate with IT teams: cloud automation engineers work closely with other IT teams, including developers, operations, and security teams. They collaborate with these teams to ensure that cloud-based infrastructure meets the organization's requirements and can support its goals.
What we need: Strong understanding of automation frameworks. Strong understanding of operating systems and the command line. Strong hands-on knowledge of Python, Django, web languages (HTML/CSS/PHP), and MariaDB. Handling incidents/tasks via ServiceNow and meeting SLAs. Strong understanding of cloud computing (multi-cloud/hybrid). Solid understanding of computer programming and software development. Ability to understand and troubleshoot complex systems. Strong problem-solving skills. Excellent organizational skills and attention to detail. Creative thinking skills. Excellent verbal and written communication skills. Strong analytical skills. Excellent manual dexterity. Ability to work well within a multi-disciplinary team structure but also independently. Ability to communicate well with other members of the team. Collaborate with other business units to understand how automation can improve workflow. Must be a self-starter with strong attention to detail. Should be able to accommodate a 24x7 shift if required; for now it is UK shift timings. Ability to keep up with the latest technologies and a desire to continually upgrade technical knowledge.
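The automation and troubleshooting skills this posting asks for often come down to small, defensive scripts. The helper below is a hypothetical, stdlib-only retry-with-backoff wrapper of the kind cloud automation code wraps around flaky provisioning APIs; the function and failure scenario are illustrative, not from the posting.

```python
import time

def with_retries(operation, attempts=3, base_delay=0.01):
    """Run an idempotent provisioning step, retrying with exponential backoff."""
    for attempt in range(attempts):
        try:
            return operation()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the real error
            time.sleep(base_delay * (2 ** attempt))  # 0.01s, 0.02s, ...

# Hypothetical flaky step: fails twice with a transient error, then succeeds.
calls = {"n": 0}
def flaky_provision():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient API error")
    return "provisioned"

print(with_retries(flaky_provision))  # -> provisioned
```

Retries must only wrap idempotent operations; re-running a non-idempotent resource creation can duplicate infrastructure.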
Posted 7 hours ago
7.0 - 12.0 years
5 - 13 Lacs
Pune
Hybrid
So, what’s the role all about? NICE APA is a comprehensive platform that combines Robotic Process Automation, Desktop Automation, Desktop Analytics, and AI and Machine Learning solutions such as Neva Discover. NICE APA is more than just RPA; it's a full platform that brings together automation, analytics, and AI to enhance both front-office and back-office operations. It’s widely used in industries like banking, insurance, telecom, healthcare, and customer service. We are seeking a Senior/Specialist Technical Support Engineer with a strong understanding of RPA applications and exceptional troubleshooting skills. The ideal candidate will have hands-on experience in Application Support, the ability to inspect and analyze RPA solutions and Application Servers (e.g., Tomcat, authentication, certificate renewal), and a solid understanding of RPA deployments in both on-premises and cloud-based environments (such as AWS). You should be comfortable supporting hybrid RPA architectures, handling bot automation, licensing, and infrastructure configuration in various environments. Familiarity with cloud-native services used in automation (e.g., AMQ queues, storage, virtual machines, containers) is a plus. Additionally, you’ll need a working knowledge of underlying databases and query optimization to assist with performance and integration issues. You will be responsible for diagnosing and resolving technical issues, collaborating with development and infrastructure teams, contributing to documentation and knowledge bases, and ensuring a seamless and reliable customer experience across multiple systems and platforms. How will you make an impact? Interfacing with various R&D groups, Customer Support teams, Business Partners, and Customers globally to address and resolve product issues. Maintain quality and ongoing internal and external communication throughout your investigation. Provide a high level of support and minimize R&D escalations.
Prioritize daily missions/cases and manage critical issues and situations. Contribute to the Knowledge Base, document troubleshooting and problem-resolution steps, and participate in educating/mentoring other support engineers. Willing to perform on-call duties as required. Excellent problem-solving skills with the ability to analyze complex issues and implement effective solutions. Good communication skills with the ability to interact with technical and non-technical stakeholders. Have you got what it takes? Minimum of 8 to 12 years of experience in supporting global enterprise customers. Monitor, troubleshoot, and maintain RPA bots in production environments. Monitor and troubleshoot system performance, application health, and resource usage using tools like Prometheus, Grafana, or similar. Data analytics: analyze trends, patterns, and anomalies in data to identify product bugs. Familiarity with ETL processes and data pipelines - advantage. Provide L1/L2/L3 support for the RPA application, ensuring timely resolution of incidents and service requests. Familiarity with applications running on Linux-based Kubernetes clusters. Troubleshoot and resolve incidents related to pods, services, and deployments. Provide technical support for applications running on both Windows and Linux platforms, including troubleshooting issues, diagnosing problems, and implementing solutions to ensure optimal performance. Familiarity with authentication methods like WinSSO and SAML. Knowledge of Windows/Linux hardening, such as TLS enforcement, encryption enforcement, and certificate configuration. Working and troubleshooting knowledge of Apache software components like Tomcat, Apache, and ActiveMQ. Working and troubleshooting knowledge of SVN/version-control applications. Knowledge of DB schemas, structure, SQL queries (DML, DDL), and troubleshooting. Collect and analyze logs from servers, network devices, applications, and security tools to identify environment/application issues.
Knowledge of terminal servers (Citrix) - advantage. Basic understanding of AWS Cloud systems. Network troubleshooting skills (working with different tools). Certification in RPA platforms and working knowledge of RPA application development/support - advantage. NICE certification - knowledge of RTI/RTS/APA products - advantage. Integrate NICE's applications with customers' on-prem and cloud-based 3rd-party tools and applications to ingest/transform/store/validate data. Shift: 24x7 rotational shift (includes night shifts). Other required skills: Excellent verbal and written communication skills. Strong troubleshooting and problem-solving skills. Self-motivated and directed, with keen attention to detail. Team player - ability to work well in a team-oriented, collaborative environment. Enjoy NICE-FLEX! At NICE, we work according to the NICE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere. Requisition ID: 7326. Reporting into: Tech Manager. Role Type: Individual Contributor.
Posted 7 hours ago
6.0 - 9.0 years
4 - 9 Lacs
Pune
Hybrid
So, what’s the role all about? NICE APA is a comprehensive platform that combines Robotic Process Automation, Desktop Automation, Desktop Analytics, and AI and Machine Learning solutions such as Neva Discover. NICE APA is more than just RPA; it's a full platform that brings together automation, analytics, and AI to enhance both front-office and back-office operations. It’s widely used in industries like banking, insurance, telecom, healthcare, and customer service. We are seeking a Senior/Specialist Technical Support Engineer with a strong understanding of RPA applications and exceptional troubleshooting skills. The ideal candidate will have hands-on experience in Application Support, the ability to inspect and analyze RPA solutions and Application Servers (e.g., Tomcat, authentication, certificate renewal), and a solid understanding of RPA deployments in both on-premises and cloud-based environments (such as AWS). You should be comfortable supporting hybrid RPA architectures, handling bot automation, licensing, and infrastructure configuration in various environments. Familiarity with cloud-native services used in automation (e.g., AMQ queues, storage, virtual machines, containers) is a plus. Additionally, you’ll need a working knowledge of underlying databases and query optimization to assist with performance and integration issues. You will be responsible for diagnosing and resolving technical issues, collaborating with development and infrastructure teams, contributing to documentation and knowledge bases, and ensuring a seamless and reliable customer experience across multiple systems and platforms. How will you make an impact? Interfacing with various R&D groups, Customer Support teams, Business Partners, and Customers globally to address and resolve product issues. Maintain quality and ongoing internal and external communication throughout your investigation. Provide a high level of support and minimize R&D escalations.
Prioritize daily missions/cases and manage critical issues and situations. Contribute to the Knowledge Base, document troubleshooting and problem-resolution steps, and participate in educating/mentoring other support engineers. Willing to perform on-call duties as required. Excellent problem-solving skills with the ability to analyze complex issues and implement effective solutions. Good communication skills with the ability to interact with technical and non-technical stakeholders. Have you got what it takes? Minimum of 5 to 7 years of experience in supporting global enterprise customers. Monitor, troubleshoot, and maintain RPA bots in production environments. Monitor and troubleshoot system performance, application health, and resource usage using tools like Prometheus, Grafana, or similar. Data analytics: analyze trends, patterns, and anomalies in data to identify product bugs. Familiarity with ETL processes and data pipelines - advantage. Provide L1/L2/L3 support for the RPA application, ensuring timely resolution of incidents and service requests. Familiarity with applications running on Linux-based Kubernetes clusters. Troubleshoot and resolve incidents related to pods, services, and deployments. Provide technical support for applications running on both Windows and Linux platforms, including troubleshooting issues, diagnosing problems, and implementing solutions to ensure optimal performance. Familiarity with authentication methods like WinSSO and SAML. Knowledge of Windows/Linux hardening, such as TLS enforcement, encryption enforcement, and certificate configuration. Working and troubleshooting knowledge of Apache software components like Tomcat, Apache, and ActiveMQ. Working and troubleshooting knowledge of SVN/version-control applications. Knowledge of DB schemas, structure, SQL queries (DML, DDL), and troubleshooting. Collect and analyze logs from servers, network devices, applications, and security tools to identify environment/application issues.
Knowledge of terminal servers (Citrix) - advantage. Basic understanding of AWS Cloud systems. Network troubleshooting skills (working with different tools). Certification in RPA platforms and working knowledge of RPA application development/support - advantage. NICE certification - knowledge of RTI/RTS/APA products - advantage. Integrate NICE's applications with customers' on-prem and cloud-based 3rd-party tools and applications to ingest/transform/store/validate data. Shift: 24x7 rotational shift (includes night shifts). Other required skills: Excellent verbal and written communication skills. Strong troubleshooting and problem-solving skills. Self-motivated and directed, with keen attention to detail. Team player - ability to work well in a team-oriented, collaborative environment. Enjoy NICE-FLEX! At NICE, we work according to the NICE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere. Requisition ID: 7556. Reporting into: Tech Manager. Role Type: Individual Contributor.
Posted 8 hours ago
5.0 - 10.0 years
5 - 15 Lacs
Pune, Chennai, Bengaluru
Hybrid
Python Automation Testing. Interview process: L1 - virtual round; L2/L3 - face-to-face interview at any Aziro office (Chennai, Bengaluru, Pune, Noida); L4/final level - virtual round. If you are interested, please apply for the job. Job Description / Requirements: Strong knowledge of Python. Hands-on experience in Linux. Should be very strong in understanding test cases and automating the test steps adhering to the framework and development practices. Ability to write scripts and tools for development and debugging. Seeking a skilled Python Automation Engineer with hands-on experience in Selenium and API automation. Proficiency in object-oriented programming is a must. Additionally, familiarity with Linux. Should demonstrate self-drive, effective communication, and proactive follow-up, and be able to work in a fast-paced environment where requirements keep evolving. Additional skills: domain experience from Networking/Storage/Embedded/VMware/Telecom.
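The Pytest-style automation this role asks for boils down to plain functions with bare asserts that the test runner discovers by name. A minimal sketch (the function under test and its behavior are hypothetical, not from the posting):

```python
def normalize_hostname(name: str) -> str:
    """Hypothetical function under test: trim whitespace, lowercase a hostname."""
    return name.strip().lower()

def test_normalize_hostname():
    # pytest discovers test_* functions and rewrites plain asserts
    # to report expected-vs-actual values on failure.
    assert normalize_hostname("  Node-01.LAB  ") == "node-01.lab"
    assert normalize_hostname("WEB") == "web"

# Under pytest this runs via `pytest <file>.py`; calling it directly
# works too, since a pytest test is just a function.
test_normalize_hostname()
```

Keeping test steps as small named functions like this is what lets them slot into an existing automation framework without changes.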
Posted 10 hours ago
10.0 - 14.0 years
35 - 50 Lacs
Chennai
Work from Office
Job Summary: Implement workload identity solutions for containerized and serverless workloads (e.g., Kubernetes, Lambda) in alignment with the overall workload IAM strategy. Configure and manage workload identities within cloud-native platforms. Responsibilities: Ensure that containerized applications and orchestration systems (e.g., Kubernetes) are configured to securely utilize workload identities. Implement best practices for managing workload identities in containerized deployments. Automate the provisioning and deprovisioning of workload identities in response to cloud-native workload lifecycle events (e.g., container creation, deletion, scaling). Implement security best practices for workload identities in cloud-native environments. Integrate cloud-native workloads with enterprise identity providers using workload identity federation. Implement SPIFFE/SPIRE for workload identity management in cloud-native environments if required. Collaborate with security and operations teams to ensure that workload identity solutions meet the security and operational requirements of cloud-native applications. Certifications Required: Azure, GCP.
Posted 10 hours ago
5.0 - 10.0 years
13 - 22 Lacs
Pune
Work from Office
SUMMARY Job Role: Node.js with Azure Developer. Location: Pune. Experience: 5+ years. Must-Have: The ideal candidate should possess a minimum of 4 years of relevant experience in Node.js with Azure development. We are seeking a motivated and skilled Azure AAD Developer with expertise in crafting cloud-based solutions using Microsoft Azure and Node.js. This position is perfect for an individual who is enthusiastic about advancing in cloud-native development and contributing to the creation of scalable integration solutions. What You Will Do: Develop and manage integration workflows utilizing Azure Logic Apps and Azure Functions. Aid in the implementation of messaging solutions using Azure Service Bus, Event Grid, and Event Hub. Provide support for API development and management using Azure API Management. Work collaboratively with senior developers and architects to deliver scalable cloud solutions. Participate in code reviews, testing, and deployment processes. What You Will Need: Education & Experience: 4 to 6 years of experience in software development, with a minimum of 2+ years in Azure. BE/BTech degree in a technical field or an equivalent combination of education and experience. Knowledge, Skills & Abilities: Proficiency in Node.js and JavaScript development. Experience in API and RESTful service development. Exposure to Azure integration tools and messaging services. Cloud development experience: Azure (App Services, API Management, Azure Functions, Azure Logic Apps). AZ-204 certification is a plus. Hands-on experience with Azure Logic Apps, Azure Functions, Azure messaging services, API Management, Azure Service Bus, Event Grid, and Event Hub. Strong problem-solving and communication skills. Reporting Relationships: Will report to a Manager, Product Delivery, and has no direct reports.
Working Conditions: The work environment will primarily be an air-conditioned office setting requiring the employee to sit for prolonged periods while concentrating on a computer screen. Requirements: 4-6 years of software development experience, with at least 2+ years in Azure. BE/BTech degree in a technical field or an equivalent combination of education and experience. Proficiency in Node.js and JavaScript development. AZ-204 certification is a plus.
Posted 11 hours ago
6.0 - 11.0 years
13 - 20 Lacs
Hyderabad
Remote
Role & responsibilities: Minimum 6+ years of hands-on experience as an SAP BASIS Administrator. Strong expertise in SAP NetWeaver, SAP HANA, and S/4HANA administration. Experience in SAP system upgrades, migrations, and performance tuning. Should have good experience in ECC to S/4HANA conversion. Familiarity with upgrading older HANA versions to newer HANA versions. Migration of on-premise systems to the cloud. Knowledge of OS administration (Windows/Linux) and database management (HANA, Oracle, SQL). Familiarity with SAP security, transport management, and system monitoring. Experience working with cloud-based SAP deployments (AWS, Azure, GCP) is a plus. Strong troubleshooting skills and the ability to resolve system issues efficiently. Ability to work independently and in a team-oriented, fast-paced environment.
Posted 11 hours ago
5.0 years
8 - 12 Lacs
Hyderabad
Work from Office
When our values align, there's no limit to what we can achieve. At Parexel, we all share the same goal - to improve the world's health. From clinical trials to regulatory, consulting, and market access, every clinical development solution we provide is underpinned by something special - a deep conviction in what we do. Each of us, no matter what we do at Parexel, contributes to the development of a therapy that ultimately will benefit a patient. We take our work personally, we do it with empathy, and we're committed to making a difference. Key Accountabilities: Using Microsoft Azure data PaaS services, design, build, modify, and support data pipelines leveraging Databricks and Power BI in a medallion architecture setting. If necessary, create prototypes to validate proposed ideas and solicit input from stakeholders. Excellent grasp of and expertise with test-driven development and continuous integration processes. Analysis and design - converts high-level design to low-level design and implements it. Collaborate with team leads to define/clarify business requirements, estimate development costs, and finalize work plans. Run unit and integration tests on all created code - create and run unit and integration tests throughout the development lifecycle. Benchmark application code proactively to prevent performance and scalability concerns. Collaborate with the Quality Assurance team on issue reporting, resolution, and change management. Support and troubleshooting - assist the Operations team with any environmental issues that arise during application deployment in the Development, QA, Staging, and Production environments. Assist other teams in resolving issues that may develop as a result of applications or the integration of multiple components. Knowledge and Experience: Understanding of design concepts and architectural basics. Knowledge of performance engineering. Understanding of quality processes and estimation methods. Fundamental grasp of the project domain.
The ability to transform functional and non-functional needs into system requirements. The ability to develop and code complicated applications is required. The ability to create test cases and scenarios based on specifications. Solid knowledge of the SDLC and agile techniques. Knowledge of current technology and trends. Logical thinking and problem-solving abilities, as well as the capacity to collaborate. Primary skills: Cloud Platform, Azure, Databricks, ADF, ADO. Sought: SQL, Python, Power BI. General knowledge: PowerApps, Java. 3-5 years of experience in software development with a minimum of 2 years of cloud computing. Education: Bachelor of Science in Computer Science, Engineering, or a related technical field.
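The medallion (bronze/silver/gold) layering mentioned in this posting can be shown in miniature with plain Python standing in for Databricks tables: raw records land in bronze, are cleaned into silver, and are aggregated into gold for reporting. The records and rules below are invented for illustration, not Parexel's actual pipeline.

```python
# Bronze: raw ingested records, exactly as landed (duplicates, nulls included).
bronze = [
    {"patient_id": "p1", "visit": "2024-01-05", "site": " DEL "},
    {"patient_id": "p1", "visit": "2024-01-05", "site": " DEL "},  # duplicate
    {"patient_id": "p2", "visit": None, "site": "BLR"},            # bad row
]

# Silver: validated, deduplicated, and normalized.
seen, silver = set(), []
for row in bronze:
    key = (row["patient_id"], row["visit"])
    if row["visit"] is not None and key not in seen:
        seen.add(key)
        silver.append({**row, "site": row["site"].strip()})

# Gold: aggregated, report-ready (e.g., visits per site for a Power BI view).
gold = {}
for row in silver:
    gold[row["site"]] = gold.get(row["site"], 0) + 1

print(gold)  # -> {'DEL': 1}
```

The point of the layering is that each stage is reproducible from the one before it, so a bad cleaning rule can be fixed and replayed without re-ingesting source data.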
Posted 12 hours ago
3.0 - 7.0 years
12 - 16 Lacs
Bengaluru
Remote
Senior Cloud Engineer Job Description. Position Title: Senior Cloud Engineer - AWS. Location: Remote. Position Overview: The Senior Cloud Engineer will play a critical role in designing, deploying, and managing scalable, secure, and highly available cloud infrastructure across multiple platforms (AWS, Azure, Google Cloud). This role requires deep technical expertise, leadership in cloud strategy, and hands-on experience with automation, DevOps practices, and cloud-native technologies. The ideal candidate will work collaboratively with cross-functional teams to deliver robust cloud solutions, drive best practices, and support business objectives through innovative cloud engineering. Key Responsibilities: Design, implement, and maintain cloud infrastructure and services, ensuring high availability, performance, and security across multi-cloud environments (AWS, Azure, GCP). Develop and manage Infrastructure as Code (IaC) using tools such as Terraform, CloudFormation, and Ansible for automated provisioning and configuration. Lead the adoption and optimization of DevOps methodologies, including CI/CD pipelines, automated testing, and deployment processes. Collaborate with software engineers, architects, and stakeholders to architect cloud-native solutions that meet business and technical requirements. Monitor, troubleshoot, and optimize cloud systems for cost, performance, and reliability, using cloud monitoring and logging tools. Ensure cloud environments adhere to security best practices, compliance standards, and governance policies, including identity and access management, encryption, and vulnerability management. Mentor and guide junior engineers, sharing knowledge and fostering a culture of continuous improvement and innovation. Participate in on-call rotation and provide escalation support for critical cloud infrastructure issues. Document cloud architectures, processes, and procedures to ensure knowledge transfer and operational excellence. Stay current with emerging
cloud technologies, trends, and best practices, recommending improvements and driving innovation. Required Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, Information Systems, or a related field, or equivalent work experience. 6–10 years of experience in cloud engineering or related roles, with a proven track record in large-scale cloud environments. Deep expertise in at least one major cloud platform (AWS, Azure, Google Cloud) and experience in multi-cloud environments. Strong programming and scripting skills (Python, Bash, PowerShell, etc.) for automation and cloud service integration. Proficiency with DevOps tools and practices, including CI/CD (Jenkins, GitLab CI), containerization (Docker, Kubernetes), and configuration management (Ansible, Chef). Solid understanding of networking concepts (VPC, VPN, DNS, firewalls, load balancers), system administration (Linux/Windows), and cloud storage solutions. Experience with cloud security, governance, and compliance frameworks. Excellent analytical, troubleshooting, and root cause analysis skills. Strong communication and collaboration abilities, with experience working in agile, interdisciplinary teams. Ability to work independently, manage multiple priorities, and lead complex projects to completion. Preferred Qualifications: Relevant cloud certifications (e.g., AWS Certified Solutions Architect, AWS DevOps Engineer, Microsoft AZ-300/400/500, Google Professional Cloud Architect). Experience with cloud cost optimization and FinOps practices. Familiarity with monitoring/logging tools (CloudWatch, Kibana, Logstash, Datadog, etc.). Exposure to cloud database technologies (SQL, NoSQL, managed database services). Knowledge of cloud migration strategies and hybrid cloud architectures.
Posted 12 hours ago
3.0 - 5.0 years
3 - 8 Lacs
Noida
Work from Office
Roles & Responsibilities: Proficient in Python, including GitHub and Git commands. Develop code based on functional specifications through an understanding of project code. Test code to verify it meets the technical specifications and is working as intended, before submitting it to code review. Experience in writing tests in Python using Pytest. Follow prescribed standards and processes as applicable to the software development methodology, including planning, work estimation, solution demos, and reviews. Read and understand basic software requirements. Assist with the implementation of a delivery pipeline, including test automation, security, and performance. Assist in troubleshooting and responding to production issues to ensure the stability of the application. Must-Have and Mandatory: Very good experience in Python Flask, SQLAlchemy, and Pytest. Knowledge of cloud services like AWS: Lambda, S3, DynamoDB. Database: PostgreSQL, MySQL, or any relational database. Can provide suggestions for performance improvements, strategy, etc. Expertise in object-oriented design and multi-threaded programming. Total Experience Expected: 04-06 years.
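As a small sketch of the relational-database work this posting describes, the snippet below uses Python's built-in sqlite3 in place of PostgreSQL/MySQL; the same parameterized-query pattern carries over to those drivers. The schema and data are invented for illustration.

```python
import sqlite3

# In-memory database standing in for the app's relational store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
conn.executemany("INSERT INTO orders (status) VALUES (?)",
                 [("open",), ("shipped",), ("open",)])

def count_by_status(conn, status):
    # Parameterized query: avoids SQL injection and keeps SQL and data separate.
    cur = conn.execute("SELECT COUNT(*) FROM orders WHERE status = ?", (status,))
    return cur.fetchone()[0]

print(count_by_status(conn, "open"))  # -> 2
```

Small data-access functions like this are also what makes the Pytest requirement practical: each query helper can be exercised against a throwaway in-memory database.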
Posted 12 hours ago
2.0 - 4.0 years
6 - 10 Lacs
Pune
Hybrid
So, what’s the role all about? We are looking for a highly driven and technically skilled Software Engineer to lead the integration of various Content Management Systems with AWS Knowledge Hub, enabling advanced Retrieval-Augmented Generation (RAG) search across heterogeneous customer data—without requiring data duplication. This role will also be responsible for expanding the scope of Knowledge Hub to support non-traditional knowledge items and enhance customer self-service capabilities. You will work at the intersection of AI, search infrastructure, and developer experience to make enterprise knowledge instantly accessible, actionable, and AI-ready. How will you make an impact? Integrate CMS with AWS Knowledge Hub to allow seamless RAG-based search across diverse data types—eliminating the need to copy data into Knowledge Hub instances. Extend Knowledge Hub capabilities to ingest and index non-knowledge assets, including structured data, documents, tickets, logs, and other enterprise sources. Build secure, scalable connectors to read directly from customer-maintained indices and data repositories. Enable self-service capabilities for customers to manage content sources using AppFlow and Tray.ai, configure ingestion rules, and set up search parameters independently. Collaborate with the NLP/AI team to optimize relevance and performance for RAG search pipelines. Work closely with product and UX teams to design intuitive, powerful experiences around self-service data onboarding and search configuration. Implement data governance, access control, and observability features to ensure enterprise readiness. Have you got what it takes? Proven experience with search infrastructure, RAG pipelines, and LLM-based applications. 2+ years of hands-on experience with AWS Knowledge Hub, AppFlow, Tray.ai, or equivalent cloud-based indexing/search platforms. Strong backend development skills (Python, TypeScript/Node.js, .NET/Java) and familiarity with building and consuming REST APIs.
Infrastructure as Code (IaC) services like AWS CloudFormation; CDK knowledge. Deep understanding of data ingestion pipelines, index management, and search query optimization. Experience working with unstructured and semi-structured data in real-world enterprise settings. Ability to design for scale, security, and multi-tenant environments. What’s in it for you? Join an ever-growing, market-disrupting, global company where the teams – comprised of the best of the best – work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NICE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NICEr! Enjoy NICE-FLEX! At NICE, we work according to the NICE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere. Reporting into: Tech Manager, Engineering, CX. Role Type: Individual Contributor.
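The retrieval half of the RAG pipelines this role centers on can be caricatured in a few lines: score documents against the query, then hand the top hit to the generator as context. Production systems use embeddings and vector indices rather than term overlap, and the toy corpus below is an assumption for illustration only.

```python
# Toy knowledge base: doc id -> text (stands in for an external index).
docs = {
    "kb-1": "reset a user password from the admin console",
    "kb-2": "configure ingestion rules for a new content source",
    "kb-3": "rotate tls certificates on the application server",
}

def retrieve(query, docs, k=1):
    """Rank docs by shared terms with the query; return the top-k doc ids."""
    q = set(query.lower().split())
    scored = sorted(docs.items(),
                    key=lambda kv: len(q & set(kv[1].split())),
                    reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

hits = retrieve("how do I configure an ingestion source", docs)
print(hits)  # -> ['kb-2']
# The RAG step would then prepend docs[hits[0]] to the LLM prompt as context.
```

Searching customer-maintained indices in place, as the posting describes, keeps this retrieval step pointed at live data instead of a duplicated copy.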
Posted 13 hours ago
2.0 - 4.0 years
10 - 20 Lacs
Mumbai
Work from Office
DevOps Engineer: Congratulations, you have taken the first step towards bagging a career-defining role. Join the team of superheroes that safeguard data wherever it goes. What should you know about us? Seclore protects and controls digital assets to help enterprises prevent data theft and achieve compliance. Permissions and access to digital assets can be granularly assigned and revoked, or dynamically set at the enterprise level, including when shared with external parties. Asset discovery and automated policy enforcement allow enterprises to adapt to changing security threats and regulatory requirements in real time and at scale. Know more about us at www.seclore.com. You would love our tribe: If you are a risk-taker, innovator, and fearless problem solver who loves solving challenges of data security, then this is the place for you! Role: DevOps Engineer. Experience: 2-4 Years. Location: Mumbai (Regional Office). A sneak peek into the role: This position is for individuals who possess the ability to identify multiple solutions to the same problem and can help in decision making while working in a super-agile environment. Here's what you will get to explore: In this role you will be using AWS CDK and Python to design, develop, test, secure, and deploy services from planning to production.
Combine technology, tools, and global best practices of DevOps for innovation, efficiency, and compliance. Build infrastructure as code using DevOps tools and technologies to commission, configure, monitor, and maintain the Seclore cloud offering on AWS. Automate application deployment, configuration, and testing for scalable and fault-tolerant delivery. Continuously improve automation and monitoring tools for better effectiveness and efficiency. Understand Seclore product features, technology platform, and deployment and configuration options. Work closely with other teams to understand requirements, upcoming features, and impact on the cloud infrastructure to ensure DevOps requirements are communicated and understood. We can see the next Entrepreneur at Seclore if you have: A technical degree (Engineering, MCA) from a reputed institute. 1+ year of DevOps experience. 2+ years of proven hands-on experience in the Python programming language with design, coding, and debugging skills. Working knowledge of infrastructure as code. Good verbal and written communication skills to interact with technical and non-technical staff. An analytical frame of mind to identify and evaluate multiple solutions to the same problem and come up with a solution roadmap. GOOD TO HAVE: Experience as an automation developer for a cloud product. Experience in software development, automation, or DevOps. Experience with cloud environments such as GCP, AWS, etc. (AWS is an advantage). Experience writing applications or automation tools using one or more of Jenkins, Ansible, Batch script, Shell script, etc. Experience working with containerization and orchestration tools like Docker, Kubernetes, ECS, etc. Are tech agnostic, think innovatively, and take calculated risks. Why do we call Seclorites Entrepreneurs, not Employees: We have an attitude of a problem solver and an aptitude that is tech agnostic. You get to work with the smartest minds in the business. We value and support those who take the initiative and calculate risks.
We are thriving, not just living. At Seclore, it is not just about work but about creating outstanding employee experiences. Our supportive and open culture enables our team to thrive. Excited to be the next Entrepreneur? Apply today! Don't have some of the above points in your resume at the moment? Don't worry. We will help you build it. Let's build the future of data security at Seclore together.
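The infrastructure-as-code duties described above (commissioning and configuring cloud resources with AWS CDK or similar tools) all rest on one idea: declare a desired state, diff it against actual state, and apply only the difference. A toy sketch of that reconciliation idea, with purely hypothetical resource names (real tools such as CDK or Terraform diff against live cloud state through provider APIs):

```python
# Toy sketch of the desired-state reconciliation behind IaC tools.
# Resource names are hypothetical; this is not a real CDK/Terraform API.

def plan(desired: dict, actual: dict) -> dict:
    """Compute create/update/delete actions from desired vs. actual state."""
    actions = {"create": [], "update": [], "delete": []}
    for name, spec in desired.items():
        if name not in actual:
            actions["create"].append(name)      # missing entirely
        elif actual[name] != spec:
            actions["update"].append(name)      # drifted from spec
    for name in actual:
        if name not in desired:
            actions["delete"].append(name)      # no longer declared
    return actions

desired = {"web-sg": {"port": 443}, "app-bucket": {"versioning": True}}
actual = {"web-sg": {"port": 80}, "old-queue": {"retention": 4}}
actions = plan(desired, actual)
```

Because the plan is computed rather than hand-written, re-running it against an already-converged environment yields no actions, which is the idempotency property deployment automation depends on.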
Posted 13 hours ago
5.0 - 8.0 years
8 - 13 Lacs
Mumbai, Hyderabad, Pune
Work from Office
Develop and productionize cloud-based services and full-stack applications utilizing NLP solutions, including GenAI models. Implement and manage CI/CD pipelines to ensure efficient and reliable software delivery. Automate cloud infrastructure using Terraform. Write unit tests, integration tests, and performance tests. Work in a team environment using agile practices. Monitor and optimize application performance and infrastructure costs. Collaborate with data scientists and other developers to integrate and deploy data science models into production environments. Work closely with cross-functional teams to ensure seamless integration and operation of services. Proficiency in JavaScript for full-stack development. Strong experience with AWS cloud services, including EKS, Lambda, and S3. Knowledge of Docker containers and orchestration tools including Kubernetes.
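The posting asks for unit, integration, and performance tests as part of the CI/CD workflow. A minimal sketch of the unit-test shape that would run in such a pipeline, using a hypothetical text-normalization helper of the kind found in NLP preprocessing (the function and its rules are illustrative, not from the posting):

```python
import unittest

def normalize(text: str) -> str:
    """Hypothetical NLP preprocessing helper: lowercase and collapse whitespace."""
    return " ".join(text.lower().split())

class NormalizeTest(unittest.TestCase):
    def test_collapses_whitespace(self):
        self.assertEqual(normalize("  Hello   World "), "hello world")

    def test_empty_input(self):
        self.assertEqual(normalize(""), "")

if __name__ == "__main__":
    # exit=False so the interpreter continues after the test run
    unittest.main(argv=["prog"], exit=False)
```

In a CI/CD pipeline this file would run on every commit; a non-zero exit from the test runner blocks the deploy stage.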
Posted 1 day ago
12.0 - 14.0 years
35 - 50 Lacs
Chennai
Work from Office
Job Summary: Security Project Manager. Develop detailed project plans, timelines, and resource allocation for both tactical and strategic phases. Responsibilities: Ensure the project stays within defined scope and meets the stated objectives and acceptance criteria. Facilitate communication and collaboration among various stakeholders, including IAM teams, cloud teams, security teams, and application owners. Conduct discovery sessions and workshops. Develop and track project deliverables and report status. Manage project risks and issues. Ensure adherence to project management methodologies. Facilitate User Acceptance Testing (UAT). Certifications Required: Operations Management
Posted 1 day ago
14.0 - 16.0 years
35 - 50 Lacs
Chennai
Work from Office
Job Summary: Configure and deploy workload identity features and services within each cloud provider's ecosystem. Configure and manage Workload Identity Federation for secure access scenarios such as on-premises to GCP. Responsibilities: Integrate workload identities with containerized and serverless workloads. Automate identity provisioning and deprovisioning. Implement security best practices. Implement and manage features like Entra ID Protection for Workloads to detect, investigate, and remediate identity-based risks. Implement Conditional Access policies for workload IDs. Implement and manage managed identities and service principals. Potentially deploy and manage SPIFFE/SPIRE infrastructure for secure workload identities across heterogeneous environments. Work closely with development and operations teams to onboard applications and workloads to the new IAM system. Investigate and resolve any security incidents or issues related to workload identities. Certifications Required: Azure Cloud
Posted 1 day ago
10.0 - 15.0 years
22 - 37 Lacs
Bengaluru
Work from Office
Who We Are At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities. The Role Are you ready to dive headfirst into the captivating world of data engineering at Kyndryl? As a Data Engineer, you'll be the visionary behind our data platforms, crafting them into powerful tools for decision-makers. Your role? Ensuring a treasure trove of pristine, harmonized data is at everyone's fingertips. As an AWS Data Engineer at Kyndryl, you will be responsible for designing, building, and maintaining scalable, secure, and high-performing data pipelines using AWS cloud-native services. This role requires extensive hands-on experience with both real-time and batch data processing, expertise in cloud-based ETL/ELT architectures, and a commitment to delivering clean, reliable, and well-modeled datasets. Key Responsibilities: Design and develop scalable, secure, and fault-tolerant data pipelines utilizing AWS services such as Glue, Lambda, Kinesis, S3, EMR, Step Functions, and Athena. Create and maintain ETL/ELT workflows to support both structured and unstructured data ingestion from various sources, including RDBMS, APIs, SFTP, and Streaming. Optimize data pipelines for performance, scalability, and cost-efficiency. Develop and manage data models, data lakes, and data warehouses on AWS platforms (e.g., Redshift, Lake Formation). Collaborate with DevOps teams to implement CI/CD and infrastructure as code (IaC) for data pipelines using CloudFormation or Terraform. Ensure data quality, validation, lineage, and governance through tools such as AWS Glue Data Catalog and AWS Lake Formation. Work in concert with data scientists, analysts, and application teams to deliver data-driven solutions. 
Monitor, troubleshoot, and resolve issues in production pipelines. Stay abreast of AWS advancements and recommend improvements where applicable. Your Future at Kyndryl Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won’t find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here. Who You Are You’re good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset; keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you’re open and borderless – naturally inclusive in how you work with others. Required Skills and Experience Bachelor’s or master’s degree in Computer Science, Engineering, or a related field Over 8 years of experience in data engineering More than 3 years of experience with the AWS data ecosystem Strong experience with PySpark, SQL, and Python Proficiency in AWS services: Glue, S3, Redshift, EMR, Lambda, Kinesis, CloudWatch, Athena, Step Functions Familiarity with data modelling concepts, dimensional models, and data lake architectures Experience with CI/CD, GitHub Actions, CloudFormation/Terraform Understanding of data governance, privacy, and security best practices Strong problem-solving and communication skills Preferred Skills and Experience Experience working as a Data Engineer and/or in cloud modernization. Experience with AWS Lake Formation and Data Catalog for metadata management. Knowledge of Databricks, Snowflake, or BigQuery for data analytics. AWS Certified Data Engineer or AWS Certified Solutions Architect is a plus. Strong problem-solving and analytical thinking. Excellent communication and collaboration abilities.
Ability to work independently and in agile teams. A proactive approach to identifying and addressing challenges in data workflows. Being You Diversity is a whole lot more than what we look like or where we come from, it’s how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we’re not doing it single-handedly: Our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That’s the Kyndryl Way. What You Can Expect With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you, we want you to succeed so that together, we will all succeed. Get Referred! If you know someone who works at Kyndryl, when asked ‘How Did You Hear About Us’ during the application process, select ‘Employee Referral’ and enter your contact's Kyndryl email address.
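The data-engineering responsibilities above center on ETL/ELT pipelines that validate and normalize records while keeping bad rows out of downstream tables. A minimal, pure-Python sketch of that validate-and-transform step (field names and rules are illustrative; in a real AWS pipeline the same logic would run inside a Glue job or a Lambda consuming from Kinesis/S3):

```python
# Minimal sketch of a validate-and-transform step in a batch ETL pipeline.
# Field names and rules are illustrative, not from any specific dataset.
from datetime import datetime

def transform(records):
    """Keep well-formed records, normalize types, and route bad rows aside."""
    clean, rejected = [], []
    for rec in records:
        try:
            clean.append({
                "user_id": int(rec["user_id"]),
                "amount": round(float(rec["amount"]), 2),
                "ts": datetime.fromisoformat(rec["ts"]).isoformat(),
            })
        except (KeyError, ValueError, TypeError):
            rejected.append(rec)  # dead-letter these for later inspection
    return clean, rejected

raw = [
    {"user_id": "42", "amount": "19.999", "ts": "2024-05-01T10:00:00"},
    {"user_id": "n/a", "amount": "5.0", "ts": "2024-05-01T11:00:00"},
]
clean, rejected = transform(raw)
```

Separating rejected rows into a dead-letter path, instead of failing the whole batch, is what keeps a production pipeline both fault-tolerant and auditable.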
Posted 1 day ago
6.0 - 11.0 years
20 - 35 Lacs
Bengaluru
Work from Office
About Us Oracle Enterprise Performance Management (EPM) Cloud is an industry-leading SaaS suite of products that helps organizations across the world to model and plan across finance, HR, supply chain, and sales, streamline the financial close process, and drive better decisions. EPM Cloud is deployed in the Oracle Cloud Infrastructure (OCI) platform. Our team is responsible for the common software platform on which all EPM Cloud suite of products run. This includes automated provisioning, configuration, deployment, diagnostics, monitoring, capacity planning, security, high availability, performance management, disaster recovery, etc. What you'll do You will work in the product development team on the EPM Cloud common software platform. Your tasks include: Design and develop complex software for the EPM Cloud platform. Perform root cause analysis of issues in the production environment, provide solutions in a timely manner, and design enhancements so that similar issues are not reported in future. Collaborate with other EPM Cloud teams and the Operations team on various projects and customer issues. Document the design and project proposals. Come up with innovative ideas for continuous improvements to the product. Required Qualifications Bachelor's degree in Computer Science and Engineering 6+ years of software design and development experience 2+ years of experience in developing software running in a cloud environment (e.g., OCI, Amazon EC2, Microsoft Azure) 3+ years of developing Java-based applications Experience with REST APIs Basic knowledge of the Linux platform Understanding of database technologies Strong troubleshooting and problem-solving skills Preferred Qualifications 4+ years of experience in developing software running in a cloud environment 4+ years of developing Java-based applications Experience on the Linux platform Experience in Oracle database Strong scripting skills (e.g., Bash, Python, Groovy, Perl)
Posted 2 days ago
4.0 - 6.0 years
20 - 30 Lacs
Bengaluru
Hybrid
Senior Member of Technical Staff Exp: 3+ years Location: Bangalore Skills: User Interface, HTML, CSS, jQuery, React Short description: We are seeking a hands-on Senior Member of Technical Staff who shares our passion and excitement for operating distributed systems at hyper scale using cloud-native best practices. You will take part in the disruption of the health care industry and will help deliver better patient care. As an SMTS on our team, you'll be responsible for and lead efforts in designing and building scalable, distributed, and resilient software components and services to support the health care platform, applications, and our end users. We believe in ownership and expect you to think long term, mentor, and empower other engineers. As a tech lead you will own the complete SDLC from architecture, development, testing, and first-class monitoring, to production. Job description: Building off our Cloud momentum, Oracle has formed a new organization - Oracle Health Applications & Infrastructure. This team focuses on product development and product strategy for Oracle Health, while building out a complete platform supporting modernized, automated healthcare. This is a net new line of business, constructed with an entrepreneurial spirit that promotes an energetic and creative environment. We are unencumbered and will need your contribution to make it a world-class engineering centre with a focus on excellence. At Oracle Health, our mission is to improve healthcare and quality of life globally through better experience and easier access to health and research data for patients and healthcare providers. We are looking for hands-on engineers with expertise and passion in solving difficult problems in all areas of software engineering: distributed systems, identity, security, observability, and user experience. This is a greenfield opportunity to design and build new cloud-centric applications from the ground up.
We are growing fast, still at an early stage, and working on ambitious new initiatives. An engineer at any level can have significant technical and business impact here. You will be part of a team of smart, motivated, diverse people, and given the autonomy as well as support to do your best work. It is a dynamic and flexible workplace where you'll belong and be encouraged. Who are we looking for? Qualifications and Experience: BS degree in Computer Science or related field (MS preferred) 3+ years distributed service engineering experience in a software development environment Expert in UI, HTML, CSS Deep understanding of object-oriented design and SDK development, specifically within a cloud environment Strong skills in data structures, algorithms, operating systems, and distributed systems fundamentals. Experience working closely with architects, principals, product and program managers to deliver product features on time and with high quality. Experience with containers and container orchestration technologies (Kubernetes, Docker) Good understanding of databases, NoSQL systems, storage and distributed persistence technologies.
Good understanding of Linux Knowledge of OCI or AWS, Azure, GCP, Public Cloud SaaS, PaaS Services and/or related technology Good to have experience with Cloud Engineering Infrastructure Development (like Terraform) Good to have demonstrable technical leadership and mentorship skills Career Level - IC3 Responsibilities : Design and implement intuitive and seamless customer experiences. Proficiency in Agile methodologies, especially Scrum. Experience using ticket tracking systems such as JIRA. Ability to quickly translate wireframes into prototypes and production-ready interfaces. Quick learner with the ability to pick up new languages and technologies. Self-driven and able to work independently on projects, even as designs evolve. Passionate about learning and staying updated with new technologies and services. Strong communication skills, including cross-team collaboration. Ability to deliver basic functionality and iterate rapidly. Experience working with geographically distributed teams. Significant plus: Knowledge of healthcare and experience delivering healthcare applications.
Posted 2 days ago
8.0 - 11.0 years
15 - 25 Lacs
Pune
Work from Office
Greetings of the day! We are looking forward to hiring SQL professionals in the following areas: Job Description: Experience required: 5+ years. Mandatory requirements: Rich experience in SQL/procedures. In-depth knowledge of data analytics. Very good knowledge of SQL Server internals. Self-initiator and excellent team player. Very good experience in performance tuning and optimisation. Good experience in data modelling (ERM and dimensional). Nice-to-have experience: Experience in Snowflake. Experience in Cloud. Experience in Git. Experience in migration projects. You are empowered to create a career that will take you to where you want to go while working in an inclusive team environment. We leverage career-oriented skilling models and optimize our collective intelligence aided with technology for continuous learning, unlearning, and relearning at a rapid pace and scale. Our Hyperlearning workplace is grounded upon four principles: flexible work arrangements, free spirit, and emotional positivity; agile self-determination, trust, transparency, and open collaboration; all support needed for the realization of business goals; stable employment with a great atmosphere and ethical corporate culture.
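The performance-tuning skill this posting asks for usually starts with reading execution plans and adding the right index. A small runnable sketch using SQLite's `EXPLAIN QUERY PLAN` (standing in for SQL Server's execution plans; table and index names are made up for illustration):

```python
# Sketch: how an index changes a query plan, shown with SQLite's
# EXPLAIN QUERY PLAN as a stand-in for SQL Server execution plans.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(i, i % 100, i * 1.5) for i in range(1000)])

def plan(sql: str) -> str:
    """Return the query plan detail text for a statement."""
    return " ".join(row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT * FROM orders WHERE customer_id = 7"
before = plan(query)   # without an index: a full table scan
conn.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")
after = plan(query)    # with the index: an index search
```

The same workflow applies on SQL Server: capture the plan, spot the scan on the filtered column, add (or adjust) an index, and confirm the plan switches to a seek.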
Posted 2 days ago
10.0 - 12.0 years
35 - 50 Lacs
Chennai
Work from Office
Role: Solution Architect Experience: 12+ Years Skills: Expertise in one of the technologies - Java or .NET Expertise in one of the cloud platforms – AWS/ Azure/ GCP/ OpenShift/ PCF Additional preference: Experience in AI / ML including Generative AI Job Description: A Solution Architect with proven track record in designing and implementing solutions for large-scale projects in a multi-site, multi-supplier environment. Key Responsibilities: Collaborate closely with the customer's IT teams and business stakeholders to understand their needs and align solutions with business goals. Develop and maintain solution architectures, including detailed designs, roadmaps, and implementation strategies. Orchestrate solutions across various technology domains such as infrastructure, data, integration, ERP, CRM, AI, and security to create cohesive solutions. Quickly understand and align with customer business domains, their value streams, and capabilities to propose appropriate technical solutions. Guide customers through the solution implementation process, from initial design to deployment and optimization. Perform typical Solution Architect tasks – creating architecture diagrams, conducting research, evaluating products, providing strategic direction, and delivering presentations. Apply knowledge of architecture patterns, styles, and design patterns to create robust and scalable solutions. Demonstrate complex problem-solving skills and ability to communicate technical concepts to both technical and non-technical audiences. Possess expertise in at least one major cloud platform – AWS, Azure, GCP, OpenShift, or PCF (based on customer preference). Design end-to-end solutions that integrate multiple systems within an enterprise. Exhibit a strong understanding of web technologies, concepts, tool-based approaches, web security, and relevant open-source products. Demonstrate experience in data architecture, including proficiency in at least one RDBMS and one NoSQL database. 
Understand and apply DevSecOps principles and automation in solution designs. Have practical experience with Agile and Iterative delivery methodologies. TOGAF or similar solution architecture certification is preferred but not mandatory.
Posted 2 days ago
16.0 - 18.0 years
35 - 65 Lacs
Chennai
Work from Office
Expertise in one of the technologies - Java or .NET Expertise in one of the cloud platforms – AWS/ Azure/ GCP/ OpenShift/ PCF Additional preference: Experience in AI / ML including Generative AI
Posted 2 days ago
12.0 - 15.0 years
35 - 60 Lacs
Chennai
Work from Office
AWS Solution Architect: Experience in driving the Enterprise Architecture for large commercial customers. Experience in healthcare enterprise transformation. Prior experience in architecting cloud-first applications. Experience leading a customer through a migration journey and proposing competing views to drive a mutual solution. Knowledge of cloud architecture concepts. Knowledge of application deployment and data migration. Ability to design high-availability applications on AWS across Availability Zones and Regions. Ability to design applications on AWS taking advantage of disaster recovery design guidelines. Design, implement, and maintain streaming solutions using AWS Managed Streaming for Apache Kafka (MSK). Monitor and manage Kafka clusters to ensure optimal performance, scalability, and uptime. Configure and fine-tune MSK clusters, including partitioning strategies, replication, and retention policies. Analyze and optimize the performance of Kafka clusters and streaming pipelines to meet high-throughput and low-latency requirements. Design and implement data integration solutions to stream data between various sources and targets using MSK. Lead data transformation and enrichment processes to ensure data quality and consistency in streaming applications. Mandatory Technical Skillset: AWS architectural concepts – designs, implements, and manages cloud infrastructure. AWS services (EC2, S3, VPC, Lambda, ELB, Route 53, Glue, RDS, DynamoDB, Postgres, Aurora, API Gateway, CloudFormation, etc.). Kafka. Amazon MSK. Domain Experience: Healthcare domain experience is required; Blues experience is preferred. Location – Pan India
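The partitioning strategies mentioned above hinge on one mechanism: a message's key is hashed to pick its partition, so all events for one key stay ordered on one partition. A simplified sketch of that idea (real Kafka clients hash keys with murmur2; MD5 here is for illustration only, and the key names are hypothetical):

```python
# Simplified sketch of keyed partitioning in Kafka-style streaming.
# Real Kafka producers hash keys with murmur2; hashlib.md5 is used
# here purely for illustration.
import hashlib

def partition_for(key: bytes, num_partitions: int) -> int:
    """Map a message key deterministically onto one of N partitions."""
    digest = hashlib.md5(key).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# Every event for the same (hypothetical) account lands on the same
# partition, preserving per-key ordering -- the property replication
# and retention settings build on.
p1 = partition_for(b"account-123", 6)
p2 = partition_for(b"account-123", 6)
```

This is also why repartitioning a live topic is disruptive: changing `num_partitions` changes the modulus, so existing keys remap and per-key ordering across the boundary is lost.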
Posted 2 days ago
10.0 - 12.0 years
35 - 50 Lacs
Chennai
Work from Office
Practice Consultant/Lead – Legacy Modernization Architect Location: India Experience: 12–15+ years Overview: Seeking experienced Legacy Modernization Architect consultants with deep expertise in mainframe-based systems, legacy platform transformations, and enterprise-grade modernization initiatives. Ideal candidates will have hands-on experience in solutioning and technical leadership for projects transitioning from legacy platforms to modern technology stacks, including cloud-native and hybrid models. Responsibilities: Support the definition and evolution of the practice vision, offerings, and strategic roadmap. Lead as-is assessments of legacy environments (application, infra, data). Create target-state architecture, roadmap, and modernization approach. Evaluate rehosting, refactoring, rearchitecting, and replatforming options. Provide architectural governance across the design and delivery lifecycle. Build and mentor high-performing consulting and architecture teams. Represent the practice in analyst briefings, webinars, and leadership forums. Drive IP and accelerator development for modernization services. Qualifications: Deep understanding of legacy systems (mainframe, monoliths, ERP) and proven engagements showcasing mainframe skills and digital transformation expertise. Proven track record in application modernization, cloud transformation, and EA. Lead end-to-end solutioning for legacy transformation engagements. Analyze mainframe-based ecosystems and define modernization strategies (rehost, refactor, re-architect, replace). Guide technical implementation including re-platforming, containerization, microservices, and integration. Collaborate with cross-functional teams including business SMEs, architects, developers, and DevOps. Define migration patterns and manage technical risks during transitions. Engage with client stakeholders across industries (Insurance, Finance, Healthcare, Retail, etc.)
Strong understanding of TOGAF, cloud-native patterns, and DevOps Experience in application portfolio rationalization and cloud migration Familiar with frameworks like TOGAF, ArchiMate, and AWS/Azure/GCP architectures Excellent leadership, storytelling, and stakeholder engagement skills Required Skill Sets: 8–10 years of hands-on experience in Mainframe-based applications (COBOL, JCL, VSAM, DB2, CICS, IMS) Experience as Technical Lead/Architect in large modernization projects Deep knowledge of modernization tools and platforms (e.g., Micro Focus, Raincode, IBM z/OS Connect, AWS Mainframe Modernization) Expertise in API enablement, middleware, data migration, and system decomposition Familiarity with containerization (Docker, Kubernetes), CI/CD pipelines, and service mesh Strong understanding of at least one cloud provider (AWS, Azure, or GCP) Experience in multi-domain projects (Insurance, Banking/Finance, Healthcare, Retail, etc.) Excellent client interaction, communication, and documentation skills Preferred Certifications: TOGAF or equivalent Enterprise Architecture certification Cloud certifications (AWS Architect, Azure Solutions Architect, etc.) Nice to Have: Exposure to business rule extraction tools Experience in Agile/Scrum delivery and DevOps integration
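One common migration pattern behind the API enablement and system decomposition skills listed above is the "strangler fig": an API facade routes each capability to either the legacy implementation or its modernized replacement, letting the cutover happen one capability at a time. A toy sketch with hypothetical handler names (not any specific tool's API):

```python
# Toy sketch of strangler-fig routing during legacy decomposition.
# Handler names and the MIGRATED set are hypothetical.

def legacy_quote(policy_id: str) -> dict:
    return {"source": "mainframe", "policy": policy_id}

def modern_quote(policy_id: str) -> dict:
    return {"source": "microservice", "policy": policy_id}

MIGRATED = {"quote"}  # capabilities already re-platformed

def handle(capability: str, policy_id: str) -> dict:
    """Facade: route to the modern service if migrated, else the legacy path."""
    if capability in MIGRATED:
        return modern_quote(policy_id)
    return legacy_quote(policy_id)

result = handle("quote", "P-001")      # served by the modern path
fallback = handle("claims", "P-002")   # still served by the legacy path
```

Because callers only ever see the facade, each capability can be migrated, verified, and if needed rolled back (by editing the routing set) without a big-bang cutover.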
Posted 2 days ago
6.0 - 11.0 years
4 - 9 Lacs
Chennai, Bengaluru, Mumbai (All Areas)
Work from Office
About Oracle FSGIU Oracle Banking Payments Oracle Banking Payments, a flagship solution within Oracle FSGIU, serves as a modern, standalone payments hub and processor. It offers comprehensive, out-of-the-box support for SWIFT and other global/domestic clearing networks, including SEPA, SEPA Instant, US Real-time Payments, Fedwire, NEFT, IMPS, and CNAPS. As a core product within the Oracle Banking suite, it enables banks to replace fragmented legacy systems with a unified and efficient payments infrastructure. Built on the ISO 20022 messaging standard, the product is well-positioned to support regions transitioning to next-generation payment formats. Developed using Java and based on a microservices architecture, Oracle Banking Payments is scalable, highly integrative, and supports both retail and corporate segments. Its robust integration capabilities ensure seamless connectivity with external systems such as Core Banking, DDA, Sanctions Screening, and Treasury platforms. About the Role Launched in 2017, Oracle Banking Payments continues to evolve with an ambitious roadmap covering both functional enhancements and modern technology stacks. This is a unique opportunity to join a high-impact development team working on a globally recognized, mission-critical banking product. Role: Senior Software Developer – Oracle Banking Payments Responsibilities: As a Senior Software Developer, you will: Translate business requirements into scalable, maintainable technical designs and code. Develop and maintain components using Java, Spring, and microservices frameworks. Diagnose and resolve technical issues across environments. Lead initiatives to identify and fix application security vulnerabilities. Deliver high-quality code with minimal production issues. Guide and mentor junior developers, fostering a culture of technical excellence. Navigate ambiguity and drive clarity in fast-paced Agile environments. Communicate clearly and proactively with cross-functional teams. 
Mandatory Skills: Expertise in Java, Java Microservices, Spring Framework, EclipseLink, JMS, JSON/XML, RESTful APIs. Experience developing cloud-native applications. Familiarity with Docker, Kubernetes, or similar containerization tools. Practical knowledge of at least one major cloud platform (AWS, Azure, Google Cloud). Understanding of monitoring tools (e.g., Prometheus, Grafana). Experience with Kafka or other message brokers in event-driven architectures. Proficient in CI/CD pipelines using Jenkins, GitLab CI, etc. Strong SQL skills with Oracle databases. Hands-on debugging and performance tuning experience. Nice to Have: Experience with Oracle Cloud Infrastructure (OCI). Domain knowledge of the payments industry and processing flows. What We’re Looking For: The ideal candidate is: A passionate coder with a deep understanding of Java and modern application design. Curious, resourceful, and persistent in solving problems using various approaches—from research and experimentation to creative thinking. A proactive mentor and team contributor with a strong sense of accountability. Adaptable to evolving technology landscapes and fast-paced environments.
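The JSON/XML and REST skills listed above typically show up as message validation at the edge of a payments service: parse the instruction, check required fields, and reject malformed input before it reaches core processing. A hedged sketch using a simplified, ISO 20022-inspired shape; the field names are illustrative only and not the actual Oracle Banking Payments schema:

```python
# Hedged sketch: validating a simplified, ISO 20022-inspired payment
# instruction. Field names are illustrative, not a real product schema.
import json

REQUIRED = ("msg_id", "debtor_iban", "creditor_iban", "currency", "amount")

def validate(payload: str) -> dict:
    """Parse a JSON payment instruction and check required fields."""
    msg = json.loads(payload)
    missing = [f for f in REQUIRED if f not in msg]
    if missing:
        raise ValueError(f"missing fields: {missing}")
    if msg["amount"] <= 0:
        raise ValueError("amount must be positive")
    return msg

payment = json.dumps({"msg_id": "M1", "debtor_iban": "DE89XXXX",
                      "creditor_iban": "FR76XXXX", "currency": "EUR",
                      "amount": 250.0})
msg = validate(payment)
```

In a microservices deployment, rejecting bad instructions here with a structured error (rather than deep in the clearing flow) keeps downstream services such as sanctions screening from ever seeing malformed traffic.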
Posted 2 days ago