36252 Kubernetes Jobs - Page 49

JobPe aggregates these listings for easy access; applications are submitted directly on the original job portal.

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Project Role: Cloud Platform Engineer. Project Role Description: Designs, builds, tests, and deploys cloud application solutions that integrate cloud and non-cloud infrastructure. Deploys infrastructure and platform environments and creates proofs of architecture to test architecture viability, security, and performance. Must-have skills: Data Modeling Techniques and Methodologies. Good-to-have skills: NA. Minimum 5 year(s) of experience is required. Educational Qualification: 15 years of full-time education. Summary: As a Cloud Platform Engineer, you will be responsible for designing, building, testing, and deploying cloud application solutions that seamlessly integrate both cloud and non-cloud infrastructure. Your typical day will involve collaborating with various teams to ensure the architecture's viability, security, and performance while creating proofs of concept to validate your designs. You will engage in hands-on development and troubleshooting, ensuring that the solutions meet the required standards and specifications. Additionally, you will be involved in continuous improvement efforts, optimizing existing systems and processes to enhance efficiency and effectiveness in cloud operations. Roles & Responsibilities: - Expected to be an SME. - Collaborate with and manage the team to perform. - Responsible for team decisions. - Engage with multiple teams and contribute to key decisions. - Provide solutions to problems for the immediate team and across multiple teams. - Facilitate knowledge-sharing sessions to enhance team capabilities. - Monitor and evaluate the performance of cloud applications to ensure optimal functionality. Professional & Technical Skills: - Must-Have Skills: Proficiency in Data Modeling Techniques and Methodologies. - Good-to-Have Skills: Experience with cloud service providers such as AWS, Azure, or Google Cloud Platform. - Strong understanding of cloud architecture and deployment strategies. - Experience with infrastructure-as-code tools like Terraform or CloudFormation. - Familiarity with containerization technologies such as Docker and Kubernetes. Additional Information: - The candidate should have a minimum of 7.5 years of experience in Data Modeling Techniques and Methodologies. - This position is based in Hyderabad. - 15 years of full-time education is required.

Posted 1 week ago

Apply

10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Role: Product Owner - CaaS (Kubernetes/OpenShift). Work Location: Chennai. Job Summary: Clarium is looking for a dynamic and technically proficient Product Owner to lead initiatives within our Container-as-a-Service (CaaS) platform. This role requires a strong understanding of Kubernetes/OpenShift administration, DevOps practices, and production support in Linux-based environments. The ideal candidate will possess deep technical insight, hands-on experience in container ecosystems, and a passion for innovation and continuous improvement. Key Responsibilities: Define and maintain the product roadmap for the internal CaaS platform built on Kubernetes/OpenShift. Work closely with cross-functional teams including engineering, operations, and business stakeholders to gather requirements, prioritize features, and ensure timely delivery. Lead backlog refinement and sprint planning, and ensure delivery aligns with business objectives and technical strategy. Serve as the technical liaison between development teams and stakeholders, especially in the areas of K8s/OpenShift, CI/CD, and container security. Continuously evaluate emerging technologies in the container space (e.g., ACM, OSV) and identify opportunities for adoption. Support incident response and production support efforts, ensuring platform reliability and performance. Drive continuous improvement through feedback, data analysis, and team retrospectives. Required Qualifications: 10+ years of experience in a Product Owner or Technical Lead role with deep exposure to container platforms. Hands-on knowledge of Kubernetes and Red Hat OpenShift administration. CKA (Certified Kubernetes Administrator) certification is preferred. Strong experience in Linux system administration and shell scripting. Solid background in DevOps, including CI/CD pipelines and infrastructure as code (e.g., Ansible, Helm, GitOps). Proven experience in production support, troubleshooting, and issue resolution. Ability to grasp new technologies quickly, especially emerging tools in the container space such as ACM (Advanced Cluster Management) and OSV (OpenShift Virtualization). Excellent communication skills and the ability to translate complex technical topics into business-aligned priorities. Preferred Qualifications: Familiarity with Red Hat ecosystem tools (e.g., Ansible, Podman, RHEL). Experience with public or hybrid cloud (AWS, Azure, or GCP). Agile certification (CSPO, SAFe PO/PM) or relevant experience in Scrum/Agile methodologies. Experience working with compliance and security standards in enterprise environments. Work Environment: Fast-paced, enterprise-grade infrastructure with high uptime requirements. Expectation to work during critical incidents, maintenance windows, and release weekends. Team collaboration across IT operations, development, QA, and DevOps. Compensation & Benefits: Competitive salary based on experience. Paid time off and holidays. Professional development opportunities.

Posted 1 week ago

Apply

3.0 - 31.0 years

4 - 8 Lacs

Arakere, Bengaluru/Bangalore

On-site

Neridio is looking for senior Linux system software developers with experience in any programming language, preferably C or shell scripting on Linux, and with network protocols such as TCP/IP, NFS, and storage networking protocols. Experience with container technologies such as Docker and Kubernetes is a plus. Knowledge of file systems and security is desirable.

Posted 1 week ago

Apply

3.0 - 5.0 years

0 Lacs

Navi Mumbai, Maharashtra, India

On-site

Location: Navi Mumbai. SettleMint India was formed in 2019, with headquarters in Delhi, India. The India team focuses on client deliverables and the development of high-performance, low-code blockchain platforms. We operate from Delhi, along with certain project locations. We are looking for a DevOps Engineer to join our client site at Navi Mumbai. Responsibilities: Build efficient and reusable applications and abstractions. Drive design, implementation, and support of large-scale infrastructure. Participate in the design and implementation phases for new and existing products. Dive deep to resolve problems at their root and troubleshoot services related to the big data stack in our Linux infrastructure. Develop policies and procedures that improve overall platform stability and participate in shared on-call schedules. Ensure that post-production operational processes/deliverables are well designed and implemented before the project moves into the solution support phase. Enhance and maintain our monitoring infrastructure. Develop automation tools for managing our on-premises infrastructure. Define and create development procedures, processes, and scripts to drive a standard software development lifecycle. Assist in the evaluation, selection, and implementation of new technologies with product teams to ensure adherence to architecture guidelines for new technology introduction. Provide technical leadership in establishing standards and guidelines. Facilitate collaboration between development and operations teams throughout the application lifecycle. Requirements and Skills: Must have 3-5 years of hands-on DevOps experience and a working knowledge of Kubernetes. At least 3 years of experience with Jenkins/Azure DevOps and other similar CI/CD platforms such as GitHub. Extensive experience in assessing the DevOps maturity state of an application, with the ability to define an improvement roadmap. Extensive experience in assessing and designing code branching, merging, and tagging strategies on GitHub, Bitbucket, SVN, and Git. Extensive experience in defining and implementing a DevSecOps (security) strategy for customers: authentication, SonarQube, Sonatype Nexus IQ, Fortify (SAST), Sonatype Firewall, and other similar tools in Jenkins/Azure DevOps CI/CD pipelines. Experience deploying APIs and microservices as Docker images/containers packaged as Helm charts (with Terraform) on cloud clusters, i.e., Kubernetes clusters, using CI/CD pipelines. Experience with Terraform/Ansible is a must for infrastructure build (provisioning), configuration, and deployment. Experience implementing protocols such as HTTPS, TCP, UDP, DNS, TELNET, ICMP, SSH, GOSSIP, etc. WinSCP/PuTTY tools experience is good to have. Experience with Azure/AWS cloud and its PaaS and IaaS services. Experience with monitoring tools such as Datadog, New Relic, Dynatrace, Prometheus, Grafana, etc. Qualifications and Certification: B.E/B.Tech or MCA; CKA/CKAD certified.
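For illustration only: a minimal sketch of the kind of working Kubernetes knowledge this role asks for, using the official Python client. The kubeconfig, the `settlemint-dev` namespace, and the restart threshold are assumptions, not details from the posting.

```python
from kubernetes import client, config

# Assumes a kubeconfig is available locally (e.g. exported from the target cluster).
config.load_kube_config()
v1 = client.CoreV1Api()

# Flag pods in a hypothetical namespace that are not running or restart too often.
for pod in v1.list_namespaced_pod(namespace="settlemint-dev").items:
    restarts = sum(s.restart_count for s in (pod.status.container_statuses or []))
    if pod.status.phase != "Running" or restarts > 5:
        print(f"{pod.metadata.name}: phase={pod.status.phase}, restarts={restarts}")
```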

Posted 1 week ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Line of Service Advisory Industry/Sector Not Applicable Specialism Operations Management Level Senior Associate Job Description & Summary At PwC, our people in business application consulting specialise in consulting services for a variety of business applications, helping clients optimise operational efficiency. These individuals analyse client needs, implement software solutions, and provide training and support for seamless integration and utilisation of business applications, enabling clients to achieve their strategic objectives. As a business application consulting generalist at PwC, you will provide consulting services for a wide range of business applications. You will leverage a broad understanding of various software solutions to assist clients in optimising operational efficiency through analysis, implementation, training, and support. Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations. Job Description & Summary: A career within Enterprise Architecture services will provide you with the opportunity to bring our clients a competitive advantage through defining their technology objectives, assessing solution options, and devising architectural solutions that help them achieve both strategic goals and operational requirements. We help build software and design data platforms, manage large volumes of client data, develop compliance procedures for data management, and continually research new technologies to drive innovation and sustainable change. Responsibilities: Design solutions for cloud (e.g. AWS, Azure and GCP) which are optimal, secure, efficient, scalable, resilient, and reliable, and at the same time compliant with industry cloud standards and policies. Design strategies and tools to deploy, monitor, and administer cloud applications and the underlying services for cloud (e.g. Azure, AWS, GCP and private cloud). Should have experience in and perform cloud deployment, containerization, movement of applications from on-premise to cloud, cloud migration approach, SaaS/PaaS/IaaS.
Should have experience in infra set-up, availability zones, cloud services deployment, and connectivity set-up in line with AWS, Azure, GCP and OCI. Should have a skill set around GCP, AWS, Oracle Cloud, Azure, and multi-cloud strategy. Excellent hands-on experience in the implementation and design of cloud infrastructure environments using modern CI/CD deployment patterns with Terraform, Jenkins, and Git. Strong understanding of application build and deployments with CI/CD pipelines. Strong experience in application containerization and orchestration with Docker and Kubernetes on cloud platforms. Mandatory Skill Sets: Architect & design solutions for cloud (AWS, Azure, GCP and private cloud); should have experience and perform cloud deployment, containerization, movement of applications from on-premise to cloud, cloud migration approach, SaaS/PaaS/IaaS... Design of cloud infrastructure environments...application containerization and orchestration with Docker and Kubernetes in cloud. Preferred Skill Sets: Certification would be preferred in AWS, Azure, GCP and private cloud, Kubernetes. Years of Experience Required: 5+ years. Education Qualification: B.E./B.Tech/MCA/M.E/M.Tech/MBA/PGDM/B.Sc-IT. All qualifications should be in regular full-time mode with no extension of course duration due to backlogs. Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Master of Business Administration, Bachelor of Technology, Bachelor of Engineering, Master Degree. Degrees/Field of Study preferred: Certifications (if blank, certifications not specified) Required Skills: AWS DevOps, Microsoft Azure DevOps. Optional Skills: Accepting Feedback, Active Listening, Analytical Reasoning, Analytical Thinking, Application Software, Business Data Analytics, Business Management, Business Technology, Business Transformation, Communication, Creativity, Documentation Development, Embracing Change, Emotional Regulation, Empathy, Implementation Research, Implementation Support, Implementing Technology, Inclusion, Intellectual Curiosity, Learning Agility, Optimism, Performance Assessment, Performance Management Software {+ 16 more} Desired Languages (if blank, desired languages not specified) Travel Requirements: Not Specified. Available for Work Visa Sponsorship? No. Government Clearance Required? No. Job Posting End Date

Posted 1 week ago

Apply

0 years

0 Lacs

India

Remote

🚨 24-bytes is Hiring: Full-Time DevOps Engineer 🧠💻 Remote | Full-Time | Flexible Hours Join us in building the next generation of privacy-first VPN infrastructure. 🌐🔐 We’re looking for a DevOps Engineer who’s ready to scale secure, censorship-resistant networks for a global user base. 🔧 Responsibilities Manage VPN infrastructure (WireGuard, OpenVPN, IPSec) Implement WireGuard over TCP for stealth and bypass Automate deployments (CI/CD pipelines) Configure routing, firewall rules, NAT, TLS Monitor performance and uptime Collaborate with the backend engineering team ✅ Requirements🛡️ VPN Protocols & Network Security WireGuard (including over TCP via tunneling) OpenVPN, IPSec/StrongSwan NAT traversal, firewall evasion techniques DNS security: DoH, Unbound, encrypted DNS TLS certs: Let’s Encrypt, custom CA, etc. ⚙️ DevOps & Infra Skills Linux (Ubuntu preferred), SSH/Bash automation Docker, Kubernetes, microservices scaling CI/CD (GitHub Actions, GitLab CI, Jenkins) Monitoring: Prometheus, Grafana, ELK, Loki Infrastructure as Code: Ansible, Terraform 🌐 Hosting & Infrastructure Experience with Hetzner, OVH, Vultr, AWS Bonus: BGP, advanced networking 🌍 Work Setup 💼 Full-Time 🌐 100% Remote 🕒 Preferred 2–3 hr overlap with Gulf Standard Time 💡 Why Join 24-bytes? At 24-bytes , we believe in open access and privacy. You’ll work with a tight-knit team to deploy world-class VPN infrastructure that protects users in the most restricted regions. Expect full autonomy, ownership, and fast execution. 📩 Apply Now Send your resume to tech@24-bytes.com or DM us here on LinkedIn. Let’s build censorship-resistant, high-performance VPN tech — together. 🔐🚀 #Hiring #DevOpsEngineer #WireGuardOverTCP #VPN #PrivacyTech #RemoteJob #Linux #Docker #Kubernetes #Networking #24bytes
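As a rough illustration of the monitoring duties listed above, here is a minimal sketch that queries a Prometheus server's HTTP API for targets reporting as down; the server URL and the use of the built-in `up` metric are assumptions, not details from the posting.

```python
import requests

# Hypothetical Prometheus endpoint scraping the VPN fleet.
PROM_URL = "http://prometheus.internal.example:9090"

resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": "up"}, timeout=10)
resp.raise_for_status()

for series in resp.json()["data"]["result"]:
    instance = series["metric"].get("instance", "unknown")
    value = series["value"][1]  # the sample value, "1" when the target is up
    if value != "1":
        print(f"target down: {instance}")
```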

Posted 1 week ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

File Transfer: The File Transfer specialist is responsible for the administration and support of secure electronic file transfers for business applications. Responsibilities: Oversee installation, configuration, and maintenance of IBM Sterling File Gateway. Provide technical support to customers, vendors, programmers, and various stakeholders using IBM Sterling File Gateway. Ongoing monitoring of file transfers; statistics logs from Sterling File Gateway are reviewed for successful completion. Troubleshoot problems encountered when file transfers are unsuccessful, ideally proactively. Participate in projects to support system software, provide functionality, or implement new transfer requests. Manage event rules for file transmissions. Maintain SSH keys, SSL certificates, and encryption keys on required systems. Maintain vendor, customer, and provider relationships. Work will be based on European business hours, with participation in a 24x7 on-call rotation. Technical Skills: Expert knowledge of and work experience with IBM Sterling File Gateway. Experience working with Windows Server and Unix/Linux platforms, writing shell scripts, and working with XML and web services. SQL or any DB knowledge (executing queries, database navigation). Installing/patching IBM Sterling File Gateway. Experience with various B2B communication protocols such as FTP, FTPS, SFTP, HTTP, HTTP/S, AS2, etc. Experience using cloud products, such as configuring load balancers, provisioning servers, network configuration, etc. Good to have: ITIL certification and experience working with Agile methodologies. IBM Sterling Connect:Direct knowledge. Familiarity with PowerShell scripts. Knowledge of automation frameworks/languages/engines like Ansible. Knowledge of IBM Control Center, Secure+, Secure Proxy, and perimeter servers. Knowledge of the REST API services offered by the IBM MFT suite for product maintenance. Experience working with Kubernetes, storage (Blob/AWS S3), and other cloud deployments. Exposure to AI tools, even at a basic level, is a value-add. Allianz Group is one of the most trusted insurance and asset management companies in the world. Caring for our employees, their ambitions, dreams and challenges, is what makes us a unique employer. Together we can build an environment where everyone feels empowered and has the confidence to explore, to grow and to shape a better future for our customers and the world around us. We at Allianz believe in a diverse and inclusive workforce and are proud to be an equal opportunity employer. We encourage you to bring your whole self to work, no matter where you are from, what you look like, who you love or what you believe in. We therefore welcome applications regardless of ethnicity or cultural background, age, gender, nationality, religion, disability or sexual orientation. Great to have you on board. Let's care for tomorrow.
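To make the secure file transfer work concrete, here is a minimal SFTP upload sketch using Python's paramiko library; the host name, service account, key path, and file names are hypothetical, and a real Sterling File Gateway setup would be driven by the product itself rather than a standalone script.

```python
import paramiko

# Hypothetical partner endpoint and service credentials.
HOST = "sftp.partner.example.com"
USER = "transfer_svc"
KEY_FILE = "/home/transfer_svc/.ssh/id_rsa"

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # pin known_hosts in production
ssh.connect(HOST, username=USER, key_filename=KEY_FILE)

sftp = ssh.open_sftp()
sftp.put("daily_invoices.csv", "/inbound/daily_invoices.csv")  # local path -> remote path
sftp.close()
ssh.close()
```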

Posted 1 week ago

Apply

0 years

0 Lacs

Delhi, India

On-site

About The Company: Tata Communications redefines connectivity with innovation and intelligence. Driving the next level of intelligence powered by Cloud, Mobility, Internet of Things, Collaboration, Security, Media services and Network services, we at Tata Communications are envisaging a New World of Communications. Job Description: Responsible for architecting and deploying solutions that combine machine learning models with full stack applications using Java and Python. This role focuses on integrating data pipelines, model inference, and API-driven front-end interfaces to automate workflows and optimize performance across systems. Drives implementation strategies aligned to product requirements and engineering standards. Responsibilities: Understand IoT-specific requirements including data ingestion from edge devices, analytics needs, and user-facing application features. Lead technical discussions with cross-functional teams (e.g., hardware, cloud, analytics) to evaluate feasibility, define specifications, and assess performance and scalability for IoT solutions. Define and design software architecture for integrating IoT data pipelines, ML models, and full stack applications using Java and Python. Deliver robust features including sensor data processing, real-time analytics dashboards, and APIs for device management and control. Drive deployment of end-to-end IoT platforms, from data collection and ML model deployment to web/mobile access, with a focus on automation and resilience. Review and finalize infrastructure design including edge-cloud integration, containerized services, and streaming data solutions (e.g., Kafka, MQTT). Create and manage user stories for device-side logic, cloud-based processing, and visualizations, ensuring seamless interaction across systems like OSS-BSS and enterprise applications. Establish standards for edge computing, MLOps in IoT, and cloud-native application development (SaaS/IoT PaaS), ensuring security, scalability, and maintainability. Facilitate prioritization of features related to device data processing, predictive maintenance, anomaly detection, and real-time user interfaces. Desired Skill Sets: Strong experience architecting and delivering software applications combining real-time data, machine learning, and cloud-native full stack platforms. Hands-on expertise with Java (Spring Boot) and Python for both backend services and ML model implementation. Experience with IoT protocols (MQTT, CoAP), data streaming (Kafka, AWS Kinesis), and edge-cloud data integration. Deep understanding of software/application lifecycle management for connected device platforms. Experience working in Agile setups and DevOps pipelines with tools like Docker, Kubernetes, Jenkins, Git.
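As a sketch of the edge-to-cloud streaming pattern this role describes, here is a minimal Kafka consumer in Python (kafka-python) flagging anomalous sensor readings; the topic name, broker address, message schema, and threshold are all assumptions rather than details from the posting.

```python
import json
from kafka import KafkaConsumer

# Hypothetical telemetry topic and broker.
consumer = KafkaConsumer(
    "device-telemetry",
    bootstrap_servers=["localhost:9092"],
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    reading = message.value
    # Simple threshold check standing in for a real anomaly-detection model.
    if reading.get("temperature", 0) > 80:
        print(f"anomaly from device {reading.get('device_id')}: {reading}")
```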

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Machine Learning Engineer. Experience: 5+ years. Location: Pune/Bangalore/Hyderabad/Chennai. JD: Expert-level proficiency in Google Cloud Platform (GCP), demonstrating deep practical experience with Vertex AI, BigQuery, Apache Beam, Cloud Storage, Pub/Sub, Cloud Composer (Apache Airflow), Cloud Run, Kubernetes Engine (GKE) concepts (for custom model serving), and Docker. Strong experience leveraging GPUs/TPUs for accelerated ML training. Mastery of Python, TensorFlow and/or PyTorch, and NLP libraries (e.g. spaCy, NLTK). Large-scale model training techniques, including distributed training, transfer learning, fine-tuning pre-trained models, and efficient data loading strategies. Develop, fine-tune, and deploy LLMs using Vertex AI and GCP-native tools. Build and maintain NLP pipelines for tasks such as text classification, NER, question answering, summarization, and translation. Implement prompt engineering and retrieval-augmented generation (RAG) for enterprise applications.
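For illustration, a minimal sketch of one of the NLP tasks listed above (text classification) using the Hugging Face pipeline API; the example sentences are made up, and the posting's Vertex AI fine-tuning and serving workflow is not shown here.

```python
from transformers import pipeline

# Downloads a default pretrained model on first run; fine-tuning and
# GCP-native serving would replace this step in a production pipeline.
classifier = pipeline("sentiment-analysis")

samples = [
    "The new release fixed our latency issues.",
    "Deployment failed again and the logs are useless.",
]
for text, result in zip(samples, classifier(samples)):
    print(f"{result['label']} ({result['score']:.2f}): {text}")
```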

Posted 1 week ago

Apply

7.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Requirements (Java Full Stack), 7 - 11 Years. Java Full Stack (MUST SKILL). Must-have skills: Hands-on with Java 8+ versions. Good working knowledge and web framework experience with Spring MVC, JPA, Spring Boot, Hibernate, and microservices. Strong hands-on experience with one of the JS frameworks: Angular 8+ or React JS. Hands-on experience with relational (PL/SQL, Oracle, SQL Server) and NoSQL databases (MongoDB/Cassandra). Hands-on with REST-based web services; working experience in developing web services using HTTP REST/JSON. Strong coding skills and unit testing experience with JUnit/Spock/Groovy. Knowledge of Agile (Scrum, Kanban). Strong exposure to design patterns (IoC, MVC, Singleton, Factory). Expertise in Kafka, RabbitMQ, or MQ Series. Experience with CI/CD pipelines using Jenkins, Kubernetes, Docker, and any cloud (AWS/GCP/Azure). Comprehensive knowledge of web design patterns. Excellent written and verbal communication skills. Good to have: Continuous testing - TDD, LeanFT, Cucumber, Gherkin, JBoss. Experience with code quality tools like Sonar, Checkstyle, and FindBugs is a plus. Evaluation Topics and Weights (questions and evaluation are based on the below): Java 8 (Spring, Hibernate, MVC) 15%; Spring MVC, Spring Boot, microservices 25%; Angular 8+ or React JS 20%; relational databases (PL/SQL, Oracle, SQL Server) 5%; NoSQL databases (MongoDB/Cassandra) 5%; REST-based web services 10%; Kafka, MQ 5%; design patterns (IoC, MVC, Singleton, Factory) 10%; CI/CD pipeline using Jenkins, Kubernetes, Docker, cloud 5%; communication (should be 3.5 and above out of 5).

Posted 1 week ago

Apply

15.0 years

0 Lacs

Jaipur, Rajasthan, India

Remote

Job Title: Project Manager I Senior Technical SaaS & Enterprise Architecture (Virtual CTO) Location: Jaipur (Flexible / Remote / Global Travel as Required) Experience: 9 – 15+ Years Industry Domains: GovTech, EdTech, FinTech, InsurTech, Manufacturing, HealthTech, B2B Commerce, AI/ML, Cloud-Native Platforms About the Role: We are looking for a technically hands-on, vision-driven Senior Technical SaaS & Enterprise Architect (Virtual CTO) to drive architectural excellence, digital transformation, and platform innovation for cloud-native SaaS ecosystems. You will be instrumental in designing scalable multi-tenant architectures, building engineering roadmaps, implementing AI/ML modules, and ensuring compliance across critical industries. This role is ideal for a SaaS technology leader with deep domain knowledge, cloud-native expertise, and a demonstrated ability to scale enterprise-grade products globally. Core Responsibilities: 🔧 Platform Architecture & Engineering Leadership Design and implement multi-tenant, event-driven SaaS architectures optimized for scalability, high availability (HA), and global failover. Architect and lead cloud-native deployments using AWS (EKS, Lambda, S3, RDS), Azure, and GCP, integrating CI/CD, service meshes, and service discovery. Adopt containerized microservices with robust orchestration via Kubernetes and advanced ingress/load balancing (NGINX, Istio). ☁️ Cloud & DevOps Strategy Lead DevOps culture and tooling strategy including Docker, Kubernetes, Terraform, Helm, ArgoCD, and GitHub Actions. Implement observability-first platforms leveraging ELK stack, Prometheus/Grafana, OpenTelemetry, and Datadog. Execute cloud cost optimization initiatives to reduce TCO and increase ROI using tagging, FinOps practices, and usage analytics. 🔐 Enterprise Security & Compliance Establish and enforce end-to-end security protocols including IAM, encryption at rest/in-transit, WAFs, API rate limiting, and secure CI/CD pipelines. Ensure compliance with regulatory frameworks (SOC2, HIPAA, ISO 27001, GDPR), performing regular audits and managing access control with OAuth2, SAML, SCIM. 📊 Data, AI/ML & Intelligence Systems Architect streaming data pipelines using Kafka, AWS Kinesis, and Redis Streams for real-time event processing. Develop and embed ML models (using BERT, SpaCy, TensorFlow, SageMaker) for predictive analytics, fraud detection, or personalization. Integrate AI features such as NLP-driven chatbots, recommendation engines, computer vision modules, and OCR workflows into the product stack. 🔗 System Integration & Interoperability Lead third-party integrations including eKYC, payment gateways (Stripe, Razorpay), ERP/CRM systems (SAP, Salesforce), and messaging protocols. Leverage REST, GraphQL, and gRPC APIs with robust versioning and schema validation via OpenAPI/Swagger. 🌍 Team Scaling & Product Lifecycle Build and manage cross-geo engineering pods with Agile best practices (Scrum, SAFe) using tools like Jira, Confluence, Notion, and ClickUp. Enable internationalization and localization: multi-language (i18n), multi-region deployments, and edge caching with Cloudflare/CDNs. Collaborate with Product, Compliance, and Marketing to align tech strategy with go-to-market, growth, and monetization initiatives. 
Required Skills & Expertise: Architecture: Cloud-native, multi-tenant SaaS, microservices, event-driven systems Languages: Java, Python, Go, Kotlin, TypeScript Frameworks: Spring Boot, Node.js, .NET Core, React, Angular, Vue.js, PHP Cloud & Infra: AWS (EKS, Lambda, RDS), Azure, GCP, Terraform, Docker, Kubernetes Security & Compliance: SOC2, ISO 27001, GDPR, HIPAA, OAuth2, SAML, WAF DevOps: CI/CD, ArgoCD, GitHub Actions, Helm, observability tools (Grafana, ELK, Datadog) AI/ML: NLP (BERT, SpaCy), Vision (AWS Rekognition), Predictive Models Messaging & DB: Kafka, RabbitMQ, PostgreSQL, DynamoDB, MongoDB, Redis Education & Certifications: Bachelor of Engineering (Computer Science) Executive Program in Digital Transformation Strategy – London Business School Postgraduate Program in AI/ML – University of Texas, Austin AWS Certified Solutions Architect – Professional Certified Kubernetes Administrator (CKA) ISO 27001 Lead Implementer Certified Scrum Professional (CSP) Key Achievements: Designed and launched 25+ global SaaS platforms, generating $200M+ in ARR. Built and led distributed tech teams across 7+ countries, delivering full SDLC ownership. Drove $80M+ in VC/PE funding through technical validation and due diligence support. Winner of Top 50 CTOs in APAC – 2024, SaaS Product Leader of the Year – 2022. Delivered award-winning solutions across GovTech (CityGridGov), EdTech (EduFlick), InsurTech (VeriSurance), and Manufacturing (ForgePulse). Ideal Candidate Traits: Technical evangelist with a deep product mindset and business acumen. Equally effective in the boardroom and war room—comfortable leading strategy and code reviews. Passionate about developer experience, automation, and scalable, future-proof architectures. Track record of zero-to-one and one-to-N product evolution in high-growth environments. Engagement Models: Full-Time / Part-Time Chief Technology Officer (CTO) Fractional CTO for funded startups or transformation projects SaaS Modernization Consultant for legacy to cloud-native migration Technical Due Diligence Advisor for VCs, M&A, and institutional investors Compliance Strategy Leader: SOC2, ISO, HIPAA implementations We invite you to follow our LinkedIn page for updates on this and other exciting opportunities: Kuchoriya TechSoft LinkedIn. If you're interested in applying, please share your updated CV along with your previous company experience letter at alexmartinitexpert@gmail.com . We look forward to hearing from you!

Posted 1 week ago

Apply

0 years

15 - 36 Lacs

Hyderabad, Telangana, India

On-site

Company: Space Inventive. Business Type: Small/Medium Business. Company Type: Product & Service. Business Model: B2B. Funding Stage: Series C. Industry: Software Development. Salary Range: ₹15-36 Lacs PA. Job Description: Space Inventive is a global digital transformation company helping enterprises build human-first, AI-powered products. With a footprint spanning India and the US, we specialize in AI/ML, UI/UX, cloud-native platforms, and enterprise-scale software development. Our mission is to deliver agile innovation at scale - with clients across healthcare, banking, manufacturing, and beyond. What You’ll Do: As a Python Developer, you’ll be instrumental in building scalable, AI-integrated web applications. You’ll collaborate with cross-functional teams, including ML engineers and UI developers, to deliver secure, performant, and maintainable solutions. Key Responsibilities: Design, develop, and maintain web applications using Python and React.js. Integrate the front end with backend services using Python and FastAPI. Gather functional requirements, develop technical specifications, and plan projects and testing. Design and develop software prototypes or proofs of concept. Act in a technical leadership capacity, applying technical expertise to challenging programming and design problems. Coordinate closely with ML engineers to integrate machine learning models and ensure seamless functionality. Perform a DevOps role in managing the build-to-operate lifecycle of the solutions we develop. Contribute to the design and architecture of the project. Qualifications: Experience designing, developing, and maintaining the server side of web applications. Experience in Python development, with a strong understanding of Python web frameworks such as Django or Flask. Solid understanding of machine learning algorithms, techniques, and libraries, such as TensorFlow, PyTorch, scikit-learn, or Keras. Working knowledge of AWS services like S3 and Lambda. Experience with cloud platforms and services, such as AWS, Azure, or Google Cloud Platform, including cloud-native development, deployment, and monitoring. Experience with relational and NoSQL databases, such as MySQL, PostgreSQL, MongoDB, or Cassandra. Familiarity with DevOps practices and tools, such as Docker, Kubernetes, Jenkins, Git, and CI/CD pipelines. Why Join Us: At Space Inventive, you’ll be part of a global team building AI-first products that create real-world impact across industries like healthcare, banking, and manufacturing. We offer a collaborative, people-centric culture where innovation, speed, and continuous learning are at the core of everything we do. With opportunities to work on cutting-edge technologies, flexible work environments, and a focus on career growth, Space Inventive is where passionate engineers thrive and make a difference.
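Since the role centres on Python/FastAPI backends that expose ML models, here is a minimal sketch of such an endpoint; the route, schema, and placeholder scoring logic are assumptions, not Space Inventive's actual API.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Review(BaseModel):
    text: str

@app.post("/predict")
def predict(review: Review) -> dict:
    # Placeholder rule standing in for a real model loaded from TensorFlow/PyTorch.
    score = 1.0 if "good" in review.text.lower() else 0.0
    return {"sentiment_score": score}
```

Run locally with `uvicorn main:app --reload` and POST JSON such as `{"text": "good service"}` to `/predict`.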

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Role: AI Architect. Location: Bangalore & Hyderabad. Mandatory Skills: agentic frameworks (LangGraph/AutoGen/CrewAI), Prometheus/Grafana/ELK stack, machine learning/deep learning frameworks (TensorFlow/PyTorch/Keras), Hugging Face Transformers, cloud computing platforms (AWS/Azure/Google Cloud Platform), DevOps/MLOps/LLMOps, Docker, Kubernetes, DevOps tools like Jenkins and GitLab CI/CD, fine-tuning of LLMs or SLMs (PaLM 2, GPT-4, LLaMA, etc.), Terraform or CloudFormation.

1. Work on the implementation and solution delivery of AI applications, leading the team across onshore/offshore, and cross-collaborate across all the AI streams. 2. Design end-to-end AI applications, ensuring integration across multiple commercial and open-source tools. 3. Work closely with business analysts and domain experts to translate business objectives into technical requirements and AI-driven solutions and applications. Partner with product management to design agile project roadmaps, aligning technical strategy. Work with data engineering teams to ensure smooth data flows, quality, and governance across data sources. 4. Lead the design and implementation of reference architectures, roadmaps, and best practices for AI applications. 5. Adapt quickly to emerging technologies and methodologies, recommending proven innovations. 6. Identify and define system components such as data ingestion pipelines, model training environments, continuous integration/continuous deployment (CI/CD) frameworks, and monitoring systems. 7. Utilize containerization (Docker, Kubernetes) and cloud services to streamline the deployment and scaling of AI systems. Implement robust versioning, rollback, and monitoring mechanisms that ensure system stability, reliability, and performance. 8. Ensure the implementation supports scalability, reliability, maintainability, and security best practices. 9. Project management: oversee the planning, execution, and delivery of AI and ML applications, ensuring that they are completed within budget and timeline constraints; this includes defining project goals, allocating resources, and managing risks. 10. Oversee the lifecycle of AI application development, from design to development, testing, deployment, and optimization. 11. Enforce security best practices during each phase of development, with a focus on data privacy, user security, and risk mitigation. 12. Provide mentorship to engineering teams and foster a culture of continuous learning. 13. Lead technical knowledge-sharing sessions and workshops to keep teams up to date on the latest advances in generative AI and architectural best practices.

Education: Bachelor's/Master's degree in Computer Science. Certifications in cloud technologies (AWS, Azure, GCP) and TOGAF certification (good to have).

Required Skills: The ideal candidate should have a strong background in working on or developing agents using LangGraph, AutoGen, and CrewAI. Proficiency in Python, with robust knowledge of machine learning libraries and frameworks such as TensorFlow, PyTorch, and Keras. Understanding of deep learning and NLP algorithms: RNN, CNN, LSTM, transformer architectures, etc. Proven experience with cloud computing platforms (AWS, Azure, Google Cloud Platform) for building and deploying scalable AI solutions. Hands-on skills with containerization (Docker) and orchestration frameworks (Kubernetes), including related DevOps tools like Jenkins and GitLab CI/CD. Experience using Infrastructure as Code (IaC) tools such as Terraform or CloudFormation to automate cloud deployments. Proficient in SQL and NoSQL databases (e.g., PostgreSQL, MongoDB, Cassandra) to manage structured and unstructured data. Expertise in designing distributed systems, RESTful APIs, GraphQL integrations, and microservices architecture. Knowledge of event-driven architectures and message brokers (e.g., RabbitMQ, Apache Kafka) to support robust inter-system communications.

Posted 1 week ago

Apply

13.0 years

0 Lacs

India

On-site

Company Description 👋🏼 We're Nagarro. We are a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale — across all devices and digital mediums, and our people exist everywhere in the world (17500+ experts across 39 countries). Our work culture is dynamic and non-hierarchical. We're looking for great new colleagues. That's where you come in! Job Description REQUIREMENTS: Total experience: 13+ years. Hands-on working experience in data science with a focus on predictive modeling and optimization. Strong experience with Python (and/or R) and libraries such as Pandas, NumPy, Scikit-learn, TensorFlow, PyTorch. Strong knowledge of cloud architecture and deployment of data solutions in cloud ecosystems (e.g., AWS, Azure, GCP). Proven expertise in machine learning, mathematical optimization (e.g., LP, IP, genetic algorithms, RL), and NLP techniques. Familiarity with Generative AI fundamentals and hands-on experience with RAG, LangChain, LlamaIndex, and prompt engineering. Strong understanding of MLOps principles and tools for CI/CD, monitoring, and model lifecycle management. Experience in Reinforcement Learning, Ant Colony Optimization, or other advanced AI methodologies. Familiarity with containerization tools (e.g., Docker, Kubernetes) for model deployment. Hands-on experience with version control, MLflow, or similar experiment tracking tools. Strong interpersonal and communication skills to interact with business and technical teams effectively. RESPONSIBILITIES: Writing and reviewing great quality code. Understanding the client's business use cases and technical requirements and being able to convert them into a technical design that elegantly meets the requirements. Mapping decisions to requirements and being able to translate the same to developers. Identifying different solutions and being able to narrow down the best option that meets the client's requirements. Defining guidelines and benchmarks for NFR considerations during project implementation. Writing and reviewing design documents explaining the overall architecture, framework, and high-level design of the application for the developers. Reviewing architecture and design on various aspects like extensibility, scalability, security, design patterns, user experience, NFRs, etc., and ensuring that all relevant best practices are followed. Developing and designing the overall solution for defined functional and non-functional requirements, and defining technologies, patterns, and frameworks to materialize it. Understanding and relating technology integration scenarios and applying these learnings in projects. Resolving issues that are raised during code review through exhaustive, systematic analysis of the root cause, and being able to justify the decision taken. Carrying out POCs to make sure that suggested design/technologies meet the requirements. Qualifications: Bachelor's or master's degree in Computer Science, Information Technology, or a related field.
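As a small sketch of the predictive-modeling baseline this role expects, here is a scikit-learn example on synthetic data; a real engagement would of course start from client features rather than random numbers.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: 500 rows, 8 features, label driven by two of them.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 8))
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print("holdout accuracy:", accuracy_score(y_test, model.predict(X_test)))
```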

Posted 1 week ago

Apply

6.0 - 3.0 years

9 - 20 Lacs

Ajmer Road, Jaipur, Rajasthan

On-site

Job Description As a Senior Software Developer, you will play a crucial role in developing, maintaining, and optimizing complex web applications and microservices. You will be responsible for both the front-end and back-end aspects of our platform, leveraging your expertise in modern technologies like Nest.js , Next.js , and microservices architecture . This is a hands-on technical role where you will work closely with other developers, product managers, and stakeholders to deliver world-class software solutions. You will have the opportunity to lead teams, mentor junior developers, and contribute to architectural decisions. Key Responsibilities Design, develop, and maintain scalable microservices-based applications using Nest.js (Backend) and Next.js (Frontend). Architect and implement high-quality, well-tested, and optimized code in alignment with project requirements. Collaborate with cross-functional teams to understand business requirements and implement solutions in an agile environment. Ensure smooth integration between different services and components of the platform. Troubleshoot and resolve complex issues related to performance, scalability, and code quality. Mentor and guide junior developers, promoting best practices in software design and coding. Actively participate in code reviews, sprint planning, and product release cycles. Develop, maintain, and improve CI/CD pipelines for efficient deployment. Stay up-to-date with the latest industry trends and technologies. Required Skills & Qualifications Minimum 6 years of professional experience in software development, with a focus on backend technologies and microservices. Strong expertise in Nest.js for building scalable and maintainable server-side applications. Proficiency in Next.js for building server-rendered React applications. Deep understanding of Microservices Architecture and experience with designing and implementing distributed systems. Strong experience with databases (SQL and NoSQL), caching strategies, and API design. Excellent proficiency in TypeScript and JavaScript. Experience with modern front-end technologies (HTML5, CSS3, and JavaScript frameworks). Experience with cloud platforms such as AWS, GCP, or Azure. Strong understanding of software architecture, patterns, and best practices. Hands-on experience with version control systems (e.g., Git). Experience with CI/CD pipelines, automated testing, and deployment processes. Excellent problem-solving skills and the ability to troubleshoot and debug complex issues. Nice to Have Experience with Docker, Kubernetes, and container orchestration. Familiarity with GraphQL, Redis, or other caching tools. Exposure to modern frontend libraries and tools (e.g., Redux, Tailwind CSS). Knowledge of security best practices in web applications and APIs. Familiarity with Agile and Scrum methodologies. Job Type: Full-time Pay: ₹900,000.00 - ₹2,000,000.00 per year Benefits: Provident Fund Application Question(s): Please mention your current CTC for example your current CTC is 6 LPA mention 6 : Please mention your expected CTC for example your expected CTC is 7 LPA mention 7 : Experience: Node.js: 6 years (Required) Nest.js: 5 years (Required) Next.Js: 4 years (Required) TypeScript: 5 years (Required) AWS - Lambda: 3 years (Required) Location: Ajmer Road, Jaipur, Rajasthan (Required) Work Location: In person

Posted 1 week ago

Apply

14.0 years

0 Lacs

Jaipur, Rajasthan, India

Remote

Job Title: Senior Technical SaaS & Enterprise Architecture. Location: Jaipur (Flexible / Remote / Global Travel as Required). Experience Required: 10 - 14+ Years. Industry Domains: GovTech, EdTech, FinTech, InsurTech, Manufacturing, HealthTech, B2B Commerce, AI, etc. Salary: TBD. About the Role: We are seeking a visionary and execution-focused Senior Technical SaaS & Enterprise Architect [Virtual Chief Technology Officer (CTO)] to lead the end-to-end technology strategy, architecture, and innovation for scalable, cloud-native SaaS platforms. This role is ideal for a dynamic leader with a proven track record of delivering high-impact digital transformation, guiding startups through funding rounds, and driving sustainable growth via secure, compliant, and AI-driven technology ecosystems. Key Responsibilities: Architect and deliver multi-tenant SaaS platforms using cloud-native solutions (AWS, Azure, GCP). Lead cross-functional engineering teams across geographies and mentor senior tech leaders. Drive adoption of microservices, Kubernetes, DevOps, and AI/ML into modern SaaS stacks. Define and execute enterprise security strategies (SOC2, GDPR, HIPAA, ISO 27001). Serve as the technical face to investors, participating in due diligence, fundraising, and M&A processes. Build data engineering pipelines, streaming architectures, and scalable analytics systems. Collaborate with product, compliance, and business teams to align tech roadmaps with business goals. Oversee cost optimization, infrastructure modernization, and technical debt reduction initiatives. Enable international product rollouts including multi-language, multi-region deployments. Lead innovation in AI/ML, IoT, and predictive systems across various industry-specific SaaS offerings. Required Skills & Expertise: Architecture: Cloud-native, multi-tenant SaaS, microservices, event-driven systems. Languages: Java, Python, Go, Kotlin, TypeScript. Frameworks: Spring Boot, Node.js, .NET Core, React, Angular, Vue.js, PHP. Cloud & Infra: AWS (EKS, Lambda, RDS), Azure, GCP, Terraform, Docker, Kubernetes. Security & Compliance: SOC2, ISO 27001, GDPR, HIPAA, OAuth2, SAML, WAF. DevOps: CI/CD, ArgoCD, GitHub Actions, Helm, observability tools (Grafana, ELK, Datadog). AI/ML: NLP (BERT, SpaCy), Vision (AWS Rekognition), Predictive Models. Messaging & DB: Kafka, RabbitMQ, PostgreSQL, DynamoDB, MongoDB, Redis. Education & Certifications: B.E. in Computer Science. Executive Program in Digital Transformation. Postgraduate Program in AI/ML. AWS Certified Solutions Architect – Professional. Scrum Alliance. Key Achievements: Launched SaaS products. Led teams across 7+ countries; helped raise funding. Delivered platforms in GovTech, EdTech, FinTech, InsurTech, Manufacturing. Ideal Candidate Profile: Deep product mindset with the ability to translate vision into architecture. Hands-on and strategic; excels in startup and scale-up environments. Excellent communicator with board-level presence. Strong understanding of business impact through technology. Immediate joiner needed (mention joining date, passport number, all education diplomas & degrees, and certifications). Engagement Types: Full-Time CTO / Fractional CTO / Interim CTO. SaaS Modernization Consultant. Tech Due Diligence Partner for Startups and Investors. Follow our page for more updates on these openings: https://www.linkedin.com/company/kuchoriyatechsoft/?viewAsMember=true and submit your updated CV and previous company experience letter to alexmartinitexpert@gmail.com

Posted 1 week ago

Apply

7.0 - 9.0 years

6 - 10 Lacs

Hyderābād

On-site

General information Country India State Telangana City Hyderabad Job ID 45594 Department Development Experience Level MID_SENIOR_LEVEL Employment Status FULL_TIME Workplace Type On-site Description & Requirements As a Senior DevOps Engineer, you will be responsible for leading the design, development, and operationalization of cloud infrastructure and CI/CD processes. You will serve as a subject matter expert (SME) for Kubernetes, AWS infrastructure, Terraform automation, and DevSecOps practices. This role also includes mentoring DevOps engineers, contributing to architecture decisions, and partnering with cross-functional engineering teams to implement best-in-class cloud and deployment solutions. Essential Duties: Design, architect, and automate cloud infrastructure using Infrastructure as Code (IaC) tools such as Terraform and CloudFormation. Lead and optimize Kubernetes-based deployments, including Helm chart management, autoscaling, and custom controller integrations. Implement and manage CI/CD pipelines for microservices and serverless applications using Jenkins, GitLab, or similar tools. Champion DevSecOps principles, integrating security scanning (SAST/DAST) and policy enforcement into the pipeline. Collaborate with architects and application teams to build resilient and scalable infrastructure solutions across AWS services (EC2, VPC, Lambda, EKS, S3, IAM, etc.). Establish and maintain monitoring, alerting, and logging practices using tools like Prometheus, Grafana, CloudWatch, ELK, or Datadog. Drive cost optimization, environment standardization, and governance across cloud environments. Mentor junior DevOps engineers and participate in technical reviews, playbook creation, and incident postmortems. Develop self-service infrastructure provisioning tools and contribute to internal DevOps tooling. Actively participate in architecture design reviews, cloud governance, and capacity planning efforts. Basic Qualifications: 7–9 years of hands-on experience in DevOps, Cloud Infrastructure, or SRE roles. Strong expertise in AWS cloud architecture and automation using Terraform or similar IaC tools. Solid knowledge of Kubernetes, including experience managing EKS clusters, Helm, and custom resources. Deep experience in Linux administration, networking, and security hardening. Advanced experience building and maintaining CI/CD pipelines (Jenkins, GitLab CI, etc.). Proficient in scripting with Bash, Groovy, or Python. Strong understanding of containerization using Docker and orchestration strategies. Experience with monitoring and logging stacks like ELK, Prometheus, and CloudWatch. Familiarity with security, identity management, and cloud compliance frameworks. Excellent troubleshooting skills and a proactive approach to system reliability and resilience. Strong interpersonal skills and ability to work cross-functionally. Bachelor’s degree in Computer Science, Information Systems, or equivalent. Preferred Qualifications: Experience with GitOps using ArgoCD or FluxCD. Knowledge of multi-account AWS architecture, VPC peering, and Service Mesh. Exposure to DataOps, platform engineering, or large-scale data pipelines. Familiarity with Serverless Framework, API Gateway, and event-driven designs. Certifications such as AWS DevOps Engineer – Professional, CKA/CKAD, or equivalent. Experience in regulated environments (e.g., SOC2, ISO27001, GDPR, HIPAA). About Infor Infor is a global leader in business cloud software products for companies in industry specific markets. 
Infor builds complete industry suites in the cloud and efficiently deploys technology that puts the user experience first, leverages data science, and integrates easily into existing systems. Over 60,000 organizations worldwide rely on Infor to help overcome market disruptions and achieve business-wide digital transformation. For more information visit www.infor.com Our Values At Infor, we strive for an environment that is founded on a business philosophy called Principle Based Management™ (PBM™) and eight Guiding Principles: integrity, stewardship & compliance, transformation, principled entrepreneurship, knowledge, humility, respect, self-actualization. Increasing diversity is important to reflect our markets, customers, partners, and communities we serve in now and in the future. We have a relentless commitment to a culture based on PBM. Informed by the principles that allow a free and open society to flourish, PBM™ prepares individuals to innovate, improve, and transform while fostering a healthy, growing organization that creates long-term value for its clients and supporters and fulfillment for its employees. Infor is an Equal Opportunity Employer. We are committed to creating a diverse and inclusive work environment. Infor does not discriminate against candidates or employees because of their sex, race, gender identity, disability, age, sexual orientation, religion, national origin, veteran status, or any other protected status under the law. If you require accommodation or assistance at any time during the application or selection processes, please submit a request by following the directions located in the FAQ section at the bottom of the infor.com/about/careers webpage.
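As one concrete flavour of the cost-optimization and governance work described in this role, here is a minimal boto3 sketch that reports running EC2 instances missing a cost-allocation tag; the region name and the `CostCenter` tag key are assumptions, not Infor specifics.

```python
import boto3

# Hypothetical region and required cost-allocation tag.
ec2 = boto3.client("ec2", region_name="ap-south-1")
paginator = ec2.get_paginator("describe_instances")

filters = [{"Name": "instance-state-name", "Values": ["running"]}]
for page in paginator.paginate(Filters=filters):
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            if "CostCenter" not in tags:
                print(f"{instance['InstanceId']} is running without a CostCenter tag")
```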

Posted 1 week ago

Apply

10.0 years

8 - 10 Lacs

Hyderābād

On-site

Experience level: 10+ years of industry experience working as a BSS/BRM Migration Consultant. Hands-on - Mandatory. Responsibilities: Working knowledge of all the BRM data migration components. Well versed with the BRM 12 schema. Strong understanding of the data model and legacy data mapping. Strong in data conversion techniques and experienced in handling encrypted data. Hands-on with data loading and integration with northbound/southbound systems. Able to develop the migration strategy and implementation plan. Must have worked as a BRM developer in the past and must be hands-on in BRM to verify the sanity of the data migration - development experience is required, not support/operations work. Strong in post-data-migration analysis, such as events, invoices, open items, bills, and dunning. Able to develop scripts to reconcile the migrated data. Strong in running parallel bill runs/dry runs. Able to handle the performance tests related to migration to optimize the downtime. Mandatory Skills: Ability to execute the data migration and validations. Ability to develop migration strategy documents and techniques. Execute data integrity testing post-migration. Strong programming skills and knowledge of Java technologies. Experience in C/C++, Oracle 12c/19c, PL/SQL, PCM Java, BRM web services, and scripting languages (Perl/Python). Hands-on experience with migration tools like CMT, etc. Ability to develop and drive the cutover runbook. Ability to produce migration reports periodically with detailed analysis of migrated data. Create reports using bursting queries and regular SQL queries. Strong knowledge of Kubernetes. Willingness to travel.
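To illustrate the kind of post-migration reconciliation script mentioned above, here is a minimal Python sketch comparing account counts by status between hypothetical legacy and BRM extracts; the file names and column layout are assumptions, not part of any real migration toolkit.

```python
import csv
from collections import Counter

def status_counts(path: str) -> Counter:
    # Hypothetical extract: one row per account with a 'status' column.
    with open(path, newline="") as f:
        return Counter(row["status"] for row in csv.DictReader(f))

legacy = status_counts("legacy_accounts.csv")  # export from the source system
brm = status_counts("brm_accounts.csv")        # export from BRM 12 after migration

for status in sorted(set(legacy) | set(brm)):
    if legacy[status] != brm[status]:
        print(f"mismatch for status {status}: legacy={legacy[status]} brm={brm[status]}")
```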

Posted 1 week ago

Apply

12.0 years

9 - 10 Lacs

Gurgaon

On-site

About the Role: OSTTRA India The Role: Associate Director Software Engineer The Team: The OSTTRA Technology team is composed of Capital Markets Technology professionals, who build, support and protect the applications that operate our network. The technology landscape includes high-performance, high-volume applications as well as compute intensive applications, leveraging contemporary microservices, cloud-based architectures. The Impact: Together, we build, support, protect and manage high-performance, resilient platforms that process more than 100 million messages a day. Our services are vital to automated trade processing around the globe, managing peak volumes and working with our customers and regulators to ensure the efficient settlement of trades and effective operation of global capital markets. What’s in it for you: We are looking for highly motivated technology professionals who will strengthen our specialisms, and champion our uniqueness to create a company that is collaborative, respectful, and inclusive to all. You will have 12+ years’ experience of Java development to meet the needs of our expanding portfolio of Financial Services clients. This is an excellent opportunity to be part of a team based out of Gurgaon and to work with colleagues across multiple regions globally . Responsibilities: Designing, developing, and maintaining high-performance Java applications for post-trade operations, with a focus on scalability and reliability. Utilizing cloud-native technologies and distributed systems to create scalable and resilient solutions. Collaborating with cross-functional teams to analyse requirements and architect innovative solutions for post-trade processes. Implementing efficient and concurrent processing mechanisms to handle high volumes of trade data. Optimizing code and database performance to ensure smooth and responsive post-trade operations. Deploying applications using containerization technologies like Docker and orchestration tools like Kubernetes. Leveraging distributed technologies to build robust and event-driven post-trade systems. Implementing fault-tolerant strategies and resilience patterns to ensure uninterrupted executions. Building resilient, scalable microservices leveraging Spring Boot with Kafka for event-driven architectures. Participating in code reviews, providing constructive feedback, and mentoring junior developers to foster a collaborative and growth-oriented environment. Staying up-to-date with emerging technologies, industry trends, and best practices in cloud-native development, distributed systems, and concurrency. What We’re Looking For: Bachelor's degree in Computer Science, Engineering, or a related field. Strong 12+ years of experience in Java development, with a minimum of 3 years in post-trade operations. Proven expertise in designing and developing scalable Java applications, leveraging cloud-native technologies. In-depth knowledge of distributed systems, event-driven architectures, and messaging frameworks. Experience with containerization technologies like Docker and orchestration tools like Kubernetes. Solid understanding of concurrent programming concepts, multithreading, and parallel processing. Familiarity with relational and NoSQL databases, and optimizing database performance for scalability. Strong problem-solving skills and ability to analyse and resolve complex issues in a timely manner. Excellent communication and collaboration skills, with a track record of working effectively in cross-functional teams. 
Experience with Agile methodologies and continuous integration/continuous deployment (CI/CD) practices is a plus. The Location: Gurgaon, India About Company Statement: OSTTRA is a market leader in derivatives post-trade processing, bringing innovation, expertise, processes and networks together to solve the post-trade challenges of global financial markets. OSTTRA operates cross-asset post-trade processing networks, providing a proven suite of Credit Risk, Trade Workflow and Optimisation services. Together these solutions streamline post-trade workflows, enabling firms to connect to counterparties and utilities, manage credit risk, reduce operational risk and optimise processing to drive post-trade efficiencies. OSTTRA was formed in 2021 through the combination of four businesses that have been at the heart of post trade evolution and innovation for the last 20+ years: MarkitServ, Traiana, TriOptima and Reset. These businesses have an exemplary track record of developing and supporting critical market infrastructure and bring together an established community of market participants comprising all trading relationships and paradigms, connected using powerful integration and transformation capabilities. About OSTTRA Candidates should note that OSTTRA is an independent firm, jointly owned by S&P Global and CME Group. As part of the joint venture, S&P Global provides recruitment services to OSTTRA - however, successful candidates will be interviewed and directly employed by OSTTRA, joining our global team of more than 1,200 post trade experts. OSTTRA was formed in 2021 through the combination of four businesses that have been at the heart of post trade evolution and innovation for the last 20+ years: MarkitServ, Traiana, TriOptima and Reset. OSTTRA is a joint venture, owned 50/50 by S&P Global and CME Group. With an outstanding track record of developing and supporting critical market infrastructure, our combined network connects thousands of market participants to streamline end to end workflows - from trade capture at the point of execution, through portfolio optimization, to clearing and settlement. Joining the OSTTRA team is a unique opportunity to help build a bold new business with an outstanding heritage in financial technology, playing a central role in supporting global financial markets. Learn more at www.osttra.com . What’s In It For You? Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include: Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. 
For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com . S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here . ----------------------------------------------------------- Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf ----------------------------------------------------------- 20 - Professional (EEO-2 Job Categories-United States of America), BSMGMT203 - Entry Professional (EEO Job Group) Job ID: 317564 Posted On: 2025-08-02 Location: Gurgaon, Haryana, India
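The role above centres on Java/Spring Boot microservices with Kafka for event-driven post-trade processing. Purely as a language-neutral illustration of the at-least-once, manual-commit consumption pattern such systems rely on - and not the team's actual implementation, which is Java - here is a small Python sketch using the kafka-python package; the topic name, consumer group, and process() handler are hypothetical.

```python
from kafka import KafkaConsumer  # illustrative only; the role itself targets Spring Boot with Kafka in Java

def process(payload: bytes) -> None:
    # Placeholder for idempotent business logic, e.g. persisting a trade event.
    print("processed:", payload.decode("utf-8"))

consumer = KafkaConsumer(
    "trade-events",                      # hypothetical topic name
    bootstrap_servers=["localhost:9092"],
    group_id="post-trade-processor",     # hypothetical consumer group
    enable_auto_commit=False,            # commit only after successful processing
    auto_offset_reset="earliest",
)

for message in consumer:
    process(message.value)
    consumer.commit()  # at-least-once: offsets advance only once the handler succeeds
```

Disabling auto-commit and committing only after the handler succeeds makes redelivery, rather than silent loss, the failure mode if processing crashes mid-message.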

Posted 1 week ago

Apply

7.0 years

9 - 10 Lacs

Gurgaon

On-site

About the Role: OSTTRA India The Role: Senior Software Engineer The Team: The OSTTRA Technology team is composed of Capital Markets Technology professionals, who build, support and protect the applications that operate our network. The technology landscape includes high-performance, high-volume applications as well as compute intensive applications, leveraging contemporary microservices, cloud-based architectures. The Impact: Together, we build, support, protect and manage high-performance, resilient platforms that process more than 100 million messages a day. Our services are vital to automated trade processing around the globe, managing peak volumes and working with our customers and regulators to ensure the efficient settlement of trades and effective operation of global capital markets. What’s in it for you: We are seeking a Java Developer with 7 to 12 years of experience and deep expertise in Core Java, strong system design and development skills, solid hands-on experience with databases, and a proactive mindset toward code quality and mentoring. This role demands a self-motivated individual who can not only write efficient and scalable code but also guide junior developers through peer reviews, architecture discussions, and best practices. Responsibilities: Design, develop, and enhance complex Java applications and services requiring high throughput. Lead technical solutions end-to-end from design through implementation and deployment. Perform detailed code reviews, ensure code quality and adherence to best practices. Collaborate with architects and senior stakeholders to shape system design and architecture. Analyze and troubleshoot performance bottlenecks across code and database. Mentor and support junior team members in coding, debugging, and technical issues. Work closely with QA, DevOps, and product teams to deliver high-quality software. What We’re Looking For: Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent practical experience). Core Technical Skills: Expert-level Core Java (OOP, Collections, Concurrency, Java 8+ features). Strong experience with Spring Boot, Spring MVC, and RESTful APIs. In-depth knowledge of SQL and RDBMS (PostgreSQL, MySQL, Oracle) with strong database design and query tuning skills. Good knowledge of database design, query optimization, indexing, and stored procedures. Experience with ORM frameworks like Hibernate or JPA. Solid experience with Git, build tools (Maven/Gradle), and logging frameworks (Log4j/SLF4J). Familiar with unit testing frameworks (JUnit/TestNG) and mocking tools (Mockito). Nice-to-Have: Exposure to microservices architecture and distributed systems. Familiarity with NoSQL databases (e.g., MongoDB, Redis) is a plus. Experience with cloud platforms like AWS, Azure, or GCP. Understanding of CI/CD pipelines, Docker, and Kubernetes. Exposure to UI development with React / Angular. Soft Skills: Strong analytical and problem-solving abilities. Excellent communication and leadership qualities. Ability to handle peer collaboration, feedback, and conflict resolution constructively. Attention to detail and a commitment to software craftsmanship. The Location: Gurgaon, India About Company Statement: OSTTRA is a market leader in derivatives post-trade processing, bringing innovation, expertise, processes and networks together to solve the post-trade challenges of global financial markets.
OSTTRA operates cross-asset post-trade processing networks, providing a proven suite of Credit Risk, Trade Workflow and Optimisation services. Together these solutions streamline post-trade workflows, enabling firms to connect to counterparties and utilities, manage credit risk, reduce operational risk and optimise processing to drive post-trade efficiencies. OSTTRA was formed in 2021 through the combination of four businesses that have been at the heart of post trade evolution and innovation for the last 20+ years: MarkitServ, Traiana, TriOptima and Reset. These businesses have an exemplary track record of developing and supporting critical market infrastructure and bring together an established community of market participants comprising all trading relationships and paradigms, connected using powerful integration and transformation capabilities. About OSTTRA Candidates should note that OSTTRA is an independent firm, jointly owned by S&P Global and CME Group. As part of the joint venture, S&P Global provides recruitment services to OSTTRA - however, successful candidates will be interviewed and directly employed by OSTTRA, joining our global team of more than 1,200 post trade experts. OSTTRA was formed in 2021 through the combination of four businesses that have been at the heart of post trade evolution and innovation for the last 20+ years: MarkitServ, Traiana, TriOptima and Reset. OSTTRA is a joint venture, owned 50/50 by S&P Global and CME Group. With an outstanding track record of developing and supporting critical market infrastructure, our combined network connects thousands of market participants to streamline end to end workflows - from trade capture at the point of execution, through portfolio optimization, to clearing and settlement. Joining the OSTTRA team is a unique opportunity to help build a bold new business with an outstanding heritage in financial technology, playing a central role in supporting global financial markets. Learn more at www.osttra.com . What’s In It For You? Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include: Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com . S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here . 
----------------------------------------------------------- Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf ----------------------------------------------------------- 20 - Professional (EEO-2 Job Categories-United States of America), BSMGMT203 - Entry Professional (EEO Job Group) Job ID: 306844 Posted On: 2025-08-02 Location: Gurgaon, Haryana, India

Posted 1 week ago

Apply

5.0 years

7 - 8 Lacs

Mumbai

On-site

Job Title: PHP Developer Location: Mumbai Experience: 5+ years Immediate Joiners Role & Responsibilities: Develop and maintain robust, scalable web applications using the Laravel framework. Collaborate with cross-functional teams to define, design, and deliver new features. Ensure the performance, quality, and responsiveness of applications. Identify and correct bottlenecks and fix bugs. Help maintain code quality, organization, and automation. Write clean, secure, and efficient code while following best practices. Build and maintain APIs and third-party integrations. Troubleshoot and debug application issues in development and production environments. Manage and optimize MySQL databases and queries. Participate in daily stand-ups, code reviews, and agile ceremonies. Required Skills: Strong experience in developing web applications using Laravel (v8 or higher). Solid understanding of MVC architecture and OOP principles. Experience working with MySQL/PostgreSQL, including database design and query optimization. Hands-on knowledge of Eloquent ORM, migrations, seeders, and factories. Proficiency in PHP, HTML, CSS, JavaScript, and frontend libraries (like Bootstrap). Familiarity with RESTful APIs and API security best practices. Experience with version control systems such as Git. Understanding of application hosting, deployment, and CI/CD pipelines. Good analytical, problem-solving, and communication skills. Ability to work independently and take ownership of projects. Preferred Skills: Experience with Vue.js, React.js, or other JavaScript frameworks. Knowledge of Docker, Kubernetes, or similar tools for containerization. Familiarity with AWS, Azure, or other cloud service providers. Basic understanding of Linux server administration. Exposure to Agile/Scrum development methodologies. Interested candidates can share their resumes at hr@wamatechnology.com Job Types: Full-time, Permanent Pay: ₹60,000.00 - ₹70,000.00 per month Application Question(s): Where do you reside? If selected, how soon can you join? What is your salary expectation per month? Experience: Total: 5 years (Required) PHP: 4 years (Required) Laravel: 3 years (Required) Work Location: In person

Posted 1 week ago

Apply

2.0 years

4 - 8 Lacs

India

On-site

Open Position: DevOps & Cloud Engineer Location: Malad (West), Mumbai Experience: 2 to 6 Years Qualification: Any Graduate Industry: IT/Software Please call for more details: +91 9372974661 / +918928772622. Only candidates based in the Mumbai suburban area (Bhayandar to Churchgate) should apply for this position. We work 6 days a week, i.e., Monday to Saturday. Job Overview: We are looking for a skilled and motivated Cloud & DevOps Engineer to join our dynamic IT team. As a Cloud & DevOps Engineer, you will work with cloud-based infrastructure, automation tools, and CI/CD pipelines to deliver scalable, reliable, and secure software solutions. You will collaborate closely with development teams to streamline deployment processes, enhance cloud infrastructure, and improve overall system performance. This is a fantastic opportunity for someone with a passion for technology, continuous integration, and cloud solutions. Key Responsibilities: Cloud Infrastructure Management: Design, implement, and maintain cloud-based infrastructure on platforms such as AWS, Azure, or Google Cloud, ensuring scalability, security, and cost-efficiency. DevOps Automation: Develop, manage, and improve automation scripts and tools for continuous integration (CI) and continuous delivery (CD), ensuring smooth and efficient deployment pipelines. System Monitoring & Optimization: Monitor system performance, troubleshoot issues, and optimize resources. Work on capacity planning and resource scaling to accommodate growing user demands. Collaboration with Development Teams: Work closely with software developers to optimize code deployment processes and streamline release cycles using DevOps best practices and tools. Infrastructure as Code (IaC): Utilize IaC tools such as Terraform, CloudFormation, or Ansible to automate infrastructure provisioning and management. Security & Compliance: Implement and maintain security best practices for cloud environments and DevOps workflows. Monitor and ensure compliance with relevant regulations and industry standards. Backup & Disaster Recovery Planning: Design, implement, and test backup and disaster recovery strategies for cloud environments, ensuring business continuity. Troubleshooting & Support: Troubleshoot complex issues across cloud infrastructure, automation systems, and deployment pipelines. Provide proactive solutions to prevent downtime. Documentation & Knowledge Sharing: Create and maintain technical documentation related to cloud infrastructure, automation processes, and DevOps workflows. Share knowledge with team members and mentor junior engineers. Qualifications: Education: Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent practical experience). Experience: 3+ years of experience in cloud infrastructure management and DevOps engineering. Hands-on experience with cloud platforms (AWS, Azure, or GCP). Strong proficiency in scripting languages (Python, Bash, etc.) and automation tools (Terraform, Ansible, etc.). Experience with CI/CD tools (Jenkins, GitLab CI, CircleCI, etc.) and version control systems (Git). Experience with containerization (Docker, Kubernetes) and orchestration. Technical Skills: Strong knowledge of cloud services and networking (e.g., EC2, S3, VPC, IAM in AWS). Familiarity with monitoring tools (Prometheus, Grafana, CloudWatch, etc.). Experience with databases (SQL/NoSQL) and storage solutions. Understanding of security best practices in cloud environments.
Knowledge of Agile methodologies and working in a collaborative environment. Desirable Skills: Certifications in cloud platforms (AWS Certified Solutions Architect, Azure, GCP). Familiarity with serverless architectures and microservices. Experience with infrastructure monitoring, logging, and alerting tools (ELK Stack, Datadog, Splunk, etc.). Knowledge of CI/CD pipelines and versioning strategies. Job Types: Full-time, Permanent Pay: ₹420,000.00 - ₹800,000.00 per year Schedule: Day shift Work Location: In person
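Since the posting above calls for Python scripting against cloud platforms (AWS, Azure, or GCP), here is a minimal, hypothetical example of the kind of automation involved, assuming AWS and the boto3 SDK: it flags running EC2 instances that are missing a Name tag. The region, the tag policy, and the credential setup are assumptions rather than anything specified by the employer.

```python
import boto3  # assumes AWS credentials are available via the usual environment/config chain

def find_untagged_instances(region: str = "ap-south-1") -> list:
    """Return IDs of running EC2 instances without a 'Name' tag (illustrative compliance check)."""
    ec2 = boto3.client("ec2", region_name=region)
    untagged = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
                if "Name" not in tags:
                    untagged.append(instance["InstanceId"])
    return untagged

if __name__ == "__main__":
    for instance_id in find_untagged_instances():
        print(f"Missing Name tag: {instance_id}")
```

A script like this would typically run on a schedule (or in a CI/CD job) and feed its findings into the monitoring or ticketing tooling mentioned in the posting.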

Posted 1 week ago

Apply

5.0 years

3 - 4 Lacs

Bengaluru

On-site

Req ID: 330939 NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Senior Full Stack .NET Developer to join our team in Bangalore, Karnātaka (IN-KA), India (IN). How You'll Help Us: As a Senior Full Stack .NET Developer on the Product Platform Engineering initiative, you will establish foundational platform capabilities that enable rapid, reliable software delivery across a distributed cloud ecosystem. Your work will directly impact platform scalability, developer productivity, and the reliability of solutions deployed onto the platform. How We Will Help You: Join our Software Engineering practice focused on cutting-edge platform engineering and cloud-native IoT solutions. You'll work with modern Azure technologies, microservices architecture, and AI-augmented development practices. Following emerging platform engineering patterns, 12-factor application principles, and domain-driven design to build world-class distributed systems. Why the Role Is Important: You'll be instrumental in establishing the Product Platform foundation that enables hundreds of developers to build and deploy mission-critical distributed IoT solutions. Your contributions to reference implementations, engineering best practices, 12-factor patterns, and platform standardization will directly impact how applications are developed, deployed, and scaled across multi-region cloud infrastructure. Once You Are Here, You Will: Establish Product Platform Foundation: Design and implement a highly available, distributed cloud platform architecture using Azure Kubernetes Service (AKS) and multi-region deployment patterns Create Reference Implementations: Build exemplary backend .NET Core microservices and React frontend repositories that serve as templates for domain-driven design and 12-factor application principles Modernize Existing Services: Update and augment 80+ existing repositories and services to adopt standardized 12-factor patterns, improving maintainability and scalability Leverage Domain-Driven Design: Apply DDD principles to create bounded contexts and microservices that align with business domains Build AI-Augmented Development Tools: Integrate AI-powered development assistants and automated code generation capabilities into the platform ecosystem Develop Cloud-Native Solutions: Create containerized microservices using .NET Core, Docker, and Kubernetes with automated CI/CD pipelines Establish Platform Standards: Define and implement reference templates, API contracts, and development patterns that ensure consistency across teams Enable Multi-Region Architecture: Build resilient, fault-tolerant services that operate across multiple Azure regions for high availability and disaster recovery Required Qualifications: 5+ years of hands-on experience with .NET Core/C#, ASP.NET Core, and Entity Framework in microservices architectures 3+ years of experience with Azure cloud services (AKS, Azure SQL Database, Service Bus, Redis Cache, Azure Monitor) 3+ years of React.js development experience building responsive, scalable web applications Expert level proficiency with .NET core, C#, TypeScript, and React Strong understanding of 12-factor application principles and cloud-native development patterns Experience with Domain-Driven Design (DDD) and microservices architecture Proficiency with containerization (Docker) and Kubernetes orchestration Knowledge of 
API-first development using OpenAPI specifications and contract-driven design Experience with Infrastructure as Code using Terraform and GitOps workflows Bachelor's degree or equivalent combination of education and work experience Able to travel as needed for project assignments (25-50%) Preferred Qualifications: Experience with IoT device integration, telemetry processing, and event-driven architectures Knowledge of high-availability system requirements and resiliency patterns Experience with AI-augmented development tools (GitHub Copilot, etc.) and automated code generation Proficiency with platform engineering concepts and Internal Developer Platforms (IDP) Experience with Azure DevOps, Jenkins, SonarQube, and Grafana Knowledge of chaos engineering, load testing, and resilience patterns Experience with distributed tracing, observability, and performance monitoring Understanding of CQRS, Event Sourcing, and message-driven architectures Experience with automated testing frameworks (NUnit, MSTest, SpecFlow) and TDD practices Familiarity with service mesh technologies and API gateway patterns Ideal Mindset: Platform Engineer: You think in terms of building reusable, scalable foundations that enable other developers to be more productive and deliver higher quality solutions Quality Champion: You prioritize reliability, testability, and maintainability, especially for mission-critical applications where system failures can have serious consequences Continuous Innovator: You stay current with cloud-native patterns, platform engineering trends, and emerging technologies like AI-augmented development Architectural Leader: You can design and implement reference patterns that become the foundation for enterprise-wide development practices Domain Expert: You understand the importance of aligning technical solutions with business domains and user needs What You'll Build: Reference Backend Services: Exemplary .NET Core microservices implementing 12-factor principles, domain-driven design, and cloud-native patterns Reference Frontend Applications: React-based user interfaces that demonstrate modern development practices and platform integration Platform Templates: Golden path templates that enable rapid service creation while ensuring consistency and quality Modernization Frameworks: Tools and patterns for migrating legacy services to cloud-native, 12-factor architectures AI-Enhanced Development Tools: Integrated development experiences that leverage AI for code generation, testing, and documentation About NTT DATA NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. 
If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here. If you'd like more information on your EEO rights under the law, please click here. For Pay Transparency information, please click here.
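The posting above leans heavily on 12-factor application principles for its reference implementations. The platform itself is .NET/React, so purely as a language-neutral sketch of one of those factors - configuration supplied by the environment and validated at startup - here is a small Python example; all variable names are hypothetical.

```python
import os
from dataclasses import dataclass

@dataclass(frozen=True)
class Settings:
    """Immutable service configuration sourced entirely from the environment (12-factor, factor III)."""
    database_url: str
    service_bus_connection: str
    region: str

def _require(name: str) -> str:
    value = os.environ.get(name)
    if not value:
        # Fail fast at startup rather than at first use deep inside a request.
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value

def load_settings() -> Settings:
    return Settings(
        database_url=_require("DATABASE_URL"),                      # hypothetical names; a real
        service_bus_connection=_require("SERVICE_BUS_CONNECTION"),  # AKS deployment would inject these
        region=os.environ.get("AZURE_REGION", "centralindia"),      # optional, with a default
    )

if __name__ == "__main__":
    settings = load_settings()
    print(f"Starting service in region {settings.region}")
```

Failing fast on missing configuration keeps a misconfigured container from starting half-working in a Kubernetes cluster; the readiness probe simply never passes.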

Posted 1 week ago

Apply

10.0 years

8 - 10 Lacs

Bengaluru

On-site

We help the world run better. At SAP, we enable you to bring out your best. Our company culture is focused on collaboration and a shared passion to help the world run better. How? We focus every day on building the foundation for tomorrow and creating a workplace that embraces differences, values flexibility, and is aligned to our purpose-driven and future-focused work. We offer a highly collaborative, caring team environment with a strong focus on learning and development, recognition for your individual contributions, and a variety of benefit options for you to choose from. We are looking for a Database Specialist with a strong focus on databases and data management topics, experienced in SAP deployments on IBM DB2/DB6, SAP HANA best practices, cloud service engineering processes, transformation, and innovation services, with a strong background in technical/functional end-to-end data environment management. This role involves working closely with cross-functional teams to ensure high availability, performance, and security of the DB2 and SAP HANA landscapes within RISE. Key activities will include providing IBM DB2/DB6 and SAP HANA expertise and supporting POCs, architectures, migrations/upgrades (wherever applicable), automation, performance & tuning, on-demand expertise, and optimizing existing processes through automation. What you'll do: Design and architect IBM DB2/DB6 and SAP HANA solutions with a focus on scalability, performance, and security Lead the installation, configuration, and management of DB2/DB6 and HANA environments and define standard storage templates for various machine types Collaborate with infrastructure, network, and security teams to ensure seamless integration of IBM DB2/DB6 and SAP HANA with existing systems and networks Develop and implement best practices for IBM DB2/DB6 & SAP HANA administration, including backup and recovery, system refreshes, patching, and performance tuning Provide technical guidance and expertise in RISE migration projects, including on-premises to cloud transitions and upgrades Monitor and optimize DB2/DB6 & HANA system performance, including database tuning, query optimization, and resource management Conduct capacity planning and sizing for DB2/DB6 & SAP HANA deployments to meet business requirements Implement high availability and disaster recovery strategies for SAP HANA systems Troubleshoot and resolve complex issues related to DB2/DB6/HANA What you'll bring: Bachelor’s degree in Computer Science, Information Technology, or similar with 10+ years of relevant experience Proven experience as an IBM DB2/DB6 specialist with strong SAP HANA expertise, preferably in the capacity of an architect Strong expertise in IBM DB2 & SAP HANA installation, configuration, administration, and performance tuning Hands-on experience with IBM DB2/DB6 and SAP HANA database design, development, and optimization, especially in the context of SAP applications Solid understanding of cloud architecture, including networking, security, and infrastructure services Experience with high availability, disaster recovery, and backup strategies for DB2/DB6 & HANA, along with Pacemaker experience Strong experience in designing, implementing, and onboarding cloud services and solutions into the solution portfolio Strong analytical and problem-solving skills with the ability to troubleshoot complex technical issues Excellent communication and collaboration skills to work effectively with cross-functional teams.
SAP HANA and DB2 certifications are a plus. Good to Have: Architect-level certification with one or more hyperscalers (AWS, Azure, or GCP). Exposure to operating HANA/DB2 on the IBM Cloud Power VS & VPC environment and RHEL OS. Familiarity with automation tools and scripting languages such as Ansible, Python, or Shell scripting for DB2 and HANA administration. Experience with SAP BW, SAP S/4HANA, or other SAP applications on HANA. Knowledge of containerization & orchestration technologies like Kubernetes & Docker in SAP environments. Strong scripting / programming skills are preferred. Familiarity with SAP Analytics Cloud (SAC) and/or SAP Business Technology Platform (BTP) and other SAP SaaS solutions. Familiarity with other SAP database products – SAP ASE, SAP IQ, SAP Replication Server & MS-SQL etc. Familiarity with RISE with SAP offerings. Meet the Team: The ECS CAE Data Management team is a key pillar within the Enterprise Cloud Services (ECS) CAE organization and is the common theme across all CAE areas for database and data management technologies. Our mission is to establish reliable and efficient data foundations with best-in-class database and data management capabilities for ECS. This team works across all ECS CAE areas for database and data management technologies, providing expertise for operational excellence, analytics and insights, and automation-driven intelligent data operations. #SAPECSCareers Bring out your best. SAP innovations help more than four hundred thousand customers worldwide work together more efficiently and use business insight more effectively. Originally known for leadership in enterprise resource planning (ERP) software, SAP has evolved to become a market leader in end-to-end business application software and related services for database, analytics, intelligent technologies, and experience management. As a cloud company with two hundred million users and more than one hundred thousand employees worldwide, we are purpose-driven and future-focused, with a highly collaborative team ethic and commitment to personal development. Whether connecting global industries, people, or platforms, we help ensure every challenge gets the solution it deserves. At SAP, you can bring out your best. We win with inclusion. SAP’s culture of inclusion, focus on health and well-being, and flexible working models help ensure that everyone – regardless of background – feels included and can run at their best. At SAP, we believe we are made stronger by the unique capabilities and qualities that each person brings to our company, and we invest in our employees to inspire confidence and help everyone realize their full potential. We ultimately believe in unleashing all talent and creating a better and more equitable world. SAP is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to the values of Equal Employment Opportunity and provide accessibility accommodations to applicants with physical and/or mental disabilities. If you are interested in applying for employment with SAP and are in need of accommodation or special assistance to navigate our website or to complete your application, please send an e-mail with your request to Recruiting Operations Team: Careers@sap.com For SAP employees: Only permanent roles are eligible for the SAP Employee Referral Program, according to the eligibility rules set in the SAP Referral Policy. Specific conditions may apply for roles in Vocational Training.
EOE AA M/F/Vet/Disability: Qualified applicants will receive consideration for employment without regard to their age, race, religion, national origin, ethnicity, age, gender (including pregnancy, childbirth, et al), sexual orientation, gender identity or expression, protected veteran status, or disability. Successful candidates might be required to undergo a background verification with an external vendor. Requisition ID: 429837 | Work Area: Information Technology | Expected Travel: 0 - 10% | Career Status: Professional | Employment Type: Regular Full Time | Additional Locations: #LI-Hybrid.
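As a small, hypothetical illustration of the Python/shell scripting the posting above mentions for DB2 and HANA administration, the sketch below runs a basic SAP HANA health check via the hdbcli driver. The host, credentials, monitoring views, and the 90% threshold are assumptions, and an equivalent DB2 check would use a DB2 driver such as ibm_db instead.

```python
from hdbcli import dbapi  # SAP HANA Python client; connection details below are placeholders

def hana_health_check(host: str, port: int, user: str, password: str) -> None:
    conn = dbapi.connect(address=host, port=port, user=user, password=password)
    try:
        cur = conn.cursor()

        # Basic connectivity / version check against the M_DATABASE monitoring view.
        cur.execute("SELECT DATABASE_NAME, VERSION FROM M_DATABASE")
        name, version = cur.fetchone()
        print(f"Connected to {name}, HANA version {version}")

        # Illustrative capacity check; the view, columns, and 90% threshold are assumptions
        # and would be replaced by the landscape's agreed monitoring queries.
        cur.execute("SELECT HOST, USED_SIZE, TOTAL_SIZE FROM M_DISKS WHERE USAGE_TYPE = 'DATA'")
        for disk_host, used, total in cur.fetchall():
            pct = 100.0 * used / total if total else 0.0
            flag = "WARN" if pct > 90 else "ok"
            print(f"{disk_host} data volume usage: {pct:.1f}% [{flag}]")
    finally:
        conn.close()

if __name__ == "__main__":
    hana_health_check("hana-host.example", 30015, "MONITOR_RO", "***")
```

In a real landscape this kind of check would be wrapped in Ansible or a scheduler and wired into alerting rather than printing to the console.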

Posted 1 week ago

Apply


Featured Companies