4.0 - 12.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Title: Google Cloud DevOps Engineer
Location: PAN India

The Opportunity: Publicis Sapient is looking for a Cloud & DevOps Engineer to join our team of bright thinkers and enablers. You will use your problem-solving skills, craft, and creativity to design and develop infrastructure interfaces for complex business applications, and contribute ideas for improvements in Cloud and DevOps practices, delivering innovation through automation. We are on a mission to transform the world, and you will be instrumental in shaping how we do it with your ideas, thoughts, and solutions.

Your Impact and Responsibilities:
- Combine your technical expertise and problem-solving passion to work closely with clients, turning complex ideas into end-to-end solutions that transform our clients' business.
- Lead and support the implementation of the engineering side of digital business transformations, with cloud, multi-cloud, security, observability, and DevOps as technology enablers.
- Build immutable infrastructure and maintain highly scalable, secure, and reliable cloud infrastructure that is optimized for performance and cost, and compliant with security standards to prevent security breaches.
- Enable our customers to accelerate their software development lifecycle and reduce the time-to-market for their products or services.

Your Skills & Experience:
- 4 to 12 years of experience in Cloud & DevOps with a full-time Bachelor's/Master's degree (Science or Engineering preferred).
- Expertise in the DevOps and cloud tools below:
- GCP (Compute, IAM, VPC, Storage, Serverless, Database, Kubernetes, Pub/Sub, Operations Suite)
- Configuration and monitoring of DNS, app servers, load balancers, and firewalls for high-volume traffic
- Extensive experience designing, implementing, and maintaining infrastructure as code, preferably with Terraform, or with CloudFormation/ARM Templates/Deployment Manager/Pulumi

Container Infrastructure (on-prem and managed, e.g., AWS ECS, EKS, or GKE):
- Design, implement, and upgrade container infrastructure, e.g., K8s clusters and node pools
- Create and maintain deployment manifest files for microservices using Helm
- Use the Istio service mesh to create gateways, virtual services, traffic routing, and fault injection
- Troubleshoot and resolve container infrastructure and deployment issues

Continuous Integration & Continuous Deployment:
- Develop and maintain CI/CD pipelines for software delivery using Git and tools such as Jenkins, GitLab, CircleCI, Bamboo, and Travis CI
- Automate build, test, and deployment processes to ensure efficient release cycles and enforce software development best practices, e.g., quality gates and vulnerability scans
- Automate build and deployment processes using Groovy, Go, Python, Shell, or PowerShell (see the sketch after this listing)
- Implement DevSecOps practices and tools to integrate security into the software development and deployment lifecycle
- Manage artifact repositories such as Nexus and JFrog Artifactory for version control and release management

Observability:
- Design, implement, and maintain observability, monitoring, logging, and alerting using the tools below:
- Observability: Jaeger, Kiali, CloudTrail, OpenTelemetry, Dynatrace
- Logging: Elastic Stack (Elasticsearch, Logstash, Kibana), Fluentd, Splunk
- Monitoring: Prometheus, Grafana, Datadog, New Relic

Good to Have:
- Associate-level public cloud certifications
- Terraform Associate certification

Benefits of Working Here:
- Gender-neutral policy
- 18 paid holidays throughout the year for NCR/BLR (22 for Mumbai)
- Generous parental leave and new parent transition program
- Flexible work arrangements
- Employee Assistance Programs to help you with wellness and well-being

Learn more about us at www.publicissapient.com or explore other career opportunities here.
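A minimal sketch of the kind of build-and-deploy automation referenced in the listing above: triggering a Jenkins job over its REST API from Python and polling for the result. The Jenkins URL, job name, and credentials are illustrative placeholders, not details from the posting.

```python
"""Sketch only: trigger a Jenkins job and wait for its result."""
import time
import requests

JENKINS = "https://jenkins.example.com"  # placeholder Jenkins base URL
AUTH = ("ci-bot", "api-token")           # placeholder user + API token

def trigger_and_wait(job: str, params: dict, timeout: int = 600) -> str:
    # POSTing to buildWithParameters queues the job; Jenkins returns the
    # queue item URL in the Location header
    resp = requests.post(f"{JENKINS}/job/{job}/buildWithParameters",
                         params=params, auth=AUTH)
    resp.raise_for_status()
    queue_url = resp.headers["Location"]

    deadline = time.time() + timeout
    build_url = None
    while build_url is None and time.time() < deadline:
        # The queue item gains an "executable" field once the build starts
        item = requests.get(f"{queue_url}api/json", auth=AUTH).json()
        build_url = (item.get("executable") or {}).get("url")
        time.sleep(5)

    while build_url and time.time() < deadline:
        build = requests.get(f"{build_url}api/json", auth=AUTH).json()
        if build.get("result"):  # SUCCESS, FAILURE, ABORTED, ...
            return build["result"]
        time.sleep(10)
    raise TimeoutError(f"{job} did not finish within {timeout}s")

if __name__ == "__main__":
    print(trigger_and_wait("service-deploy", {"ENV": "staging"}))
```

The same pattern (trigger, resolve the queue item, poll for a terminal result) applies regardless of which scripting language the team standardizes on.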
Posted 3 days ago
3.0 years
0 Lacs
India
On-site
We need an experienced DevOps Engineer to single-handedly build our Automated Provisioning Service on Google Cloud Platform. You'll implement infrastructure automation that provisions complete cloud environments for B2B customers in under 10 minutes.

Core Responsibilities:
Infrastructure as Code Implementation
- Develop Terraform modules for automated GCP resource provisioning
- Create reusable templates for: GKE cluster deployment with predefined node pools; Cloud Storage bucket configuration; Cloud DNS and SSL certificate automation; IAM roles and service account setup
- Implement state management and version control for IaC

Automation & Orchestration
- Build Cloud Functions or Cloud Build triggers for provisioning workflows
- Create automation scripts (Bash/Python) for deployment orchestration
- Deploy containerized Node.js applications to GKE using Helm charts
- Configure automated SSL certificate provisioning via Certificate Manager

Security & Access Control
- Implement IAM policies and RBAC for customer isolation
- Configure secure service accounts with minimal required permissions
- Set up audit logging and monitoring for all provisioned resources

Integration & Deployment
- Create webhook endpoints to receive provisioning requests from the frontend (see the sketch after this listing)
- Implement provisioning status tracking and error handling
- Document deployment procedures and troubleshooting guides
- Ensure the 5-10 minute provisioning-time SLA

Required Skills & Certifications:
MANDATORY Certification (must have one of the following):
- Google Cloud Associate Cloud Engineer (minimum requirement)
- Google Cloud Professional Cloud DevOps Engineer (preferred)
- Google Cloud Professional Cloud Architect (preferred)

Technical Skills (Must Have):
- 3+ years hands-on experience with Google Cloud Platform
- Strong Terraform expertise with a proven track record
- GKE/Kubernetes deployment and management experience
- Proficiency in Bash and Python scripting
- Experience with CI/CD pipelines (Cloud Build preferred)
- GCP IAM and security best-practices knowledge
- Ability to work independently with minimal supervision

Nice to Have:
- Experience developing RESTful APIs for service integration
- Experience with multi-tenant architectures
- Node.js/Docker containerization experience
- Helm chart creation and management

Deliverables (2-Month Timeline)
Month 1: Complete Terraform modules for all GCP resources; working prototype of the automated provisioning flow; basic IAM and security implementation; integration with webhook triggers.
Month 2: Production-ready deployment with error handling; performance optimization (achieve <10 min provisioning); complete documentation and runbooks; handover and knowledge transfer.

Technical Environment
Primary Tools: Terraform, GCP (GKE, Cloud Storage, Cloud DNS, IAM)
Languages: Bash, Python (automation scripts)
Orchestration: Cloud Build, Cloud Functions
Containerization: Docker, Kubernetes, Helm

Ideal Candidate
- Self-starter who can own the entire DevOps scope independently
- Strong problem-solver comfortable with ambiguity
- Excellent time-management skills to meet tight deadlines
- Clear communicator who documents their work thoroughly

Important Note: Google Cloud certification is mandatory for this position due to partnership requirements. Please include your certification details and ID number in your application.

Application Requirements:
- Proof of valid Google Cloud certification
- Examples of similar GCP automation projects
- GitHub/GitLab links to relevant Terraform modules (if available)
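As a rough illustration of the webhook-driven flow referenced above, here is a minimal Python sketch: a Flask endpoint that isolates each customer in its own Terraform workspace and applies a module. The module path, variable name, and synchronous design are assumptions; a production version would add authentication, a job queue, and status tracking.

```python
"""Sketch only: webhook endpoint that provisions a customer environment
by applying a Terraform module in a per-customer workspace."""
import subprocess
from flask import Flask, jsonify, request

app = Flask(__name__)
TF_DIR = "./terraform/customer-env"  # assumption: module with GKE/DNS/IAM resources

def terraform(*args: str) -> None:
    # Run a terraform subcommand inside the module directory; raise on failure
    subprocess.run(["terraform", *args], cwd=TF_DIR, check=True)

@app.post("/provision")
def provision():
    customer = request.get_json()["customer_id"]
    terraform("init", "-input=false")
    # One workspace per customer keeps state (and blast radius) isolated;
    # "workspace new" may already exist, so we don't fail on it
    subprocess.run(["terraform", "workspace", "new", customer], cwd=TF_DIR)
    terraform("workspace", "select", customer)
    terraform("apply", "-auto-approve", f"-var=customer_id={customer}")
    return jsonify({"status": "provisioned", "customer": customer})

if __name__ == "__main__":
    app.run(port=8080)
```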
Posted 3 days ago
6.0 years
18 - 30 Lacs
India
On-site
Role: Senior Database Administrator (DevOps)
Experience: 7+ years
Type: Contract

Job Summary
We are seeking a highly skilled and experienced Database Administrator with a minimum of 6 years of hands-on experience managing complex, high-performance, and secure database environments. This role is pivotal in maintaining and optimizing our multi-platform database infrastructure, which includes PostgreSQL, MariaDB/MySQL, MongoDB, MS SQL Server, and AWS RDS/Aurora instances. You will be working primarily within Linux-based production systems (e.g., RHEL 9.x) and will play a vital role in collaborating with DevOps, Infrastructure, and Data Engineering teams to ensure seamless database performance across environments. The ideal candidate has strong experience with infrastructure automation tools like Terraform and Ansible, is proficient with Docker, and is well-versed in cloud environments, particularly AWS. This is a critical role where your efforts will directly impact system stability, scalability, and security across all environments.

Key Responsibilities
- Design, deploy, monitor, and manage databases across production and staging environments.
- Ensure high availability, performance, and data integrity for mission-critical systems.
- Automate database provisioning, configuration, and maintenance using Terraform and Ansible.
- Administer Linux-based systems for database operations with an emphasis on system reliability and uptime.
- Establish and maintain monitoring systems, set up proactive alerts, and rapidly respond to performance issues or incidents (see the replication-lag sketch after this listing).
- Work closely with DevOps and Data Engineering teams to integrate infrastructure with MLOps and CI/CD pipelines.
- Implement and enforce database security best practices, including data encryption, user access control, and auditing.
- Conduct root cause analysis and tuning to continuously improve database performance and reduce downtime.

Required Technical Skills
Database Expertise:
- PostgreSQL: Advanced skills in replication, tuning, backup/recovery, partitioning, and logical/physical architecture.
- MariaDB/MySQL: Proven experience in high availability configurations, schema optimization, and performance tuning.
- MongoDB: Strong understanding of NoSQL structures, including indexing strategies, replica sets, and sharding.
- MS SQL Server: Capable of managing and maintaining enterprise-grade MS SQL Server environments.
- AWS RDS & Aurora: Deep familiarity with provisioning, monitoring, auto-scaling, snapshot management, and failover handling.

Infrastructure & DevOps
- 6+ years of experience as a Database Administrator or DevOps Engineer in Linux-based environments.
- Hands-on expertise with Terraform, Ansible, and Infrastructure as Code (IaC) best practices.
- Knowledge of networking principles, firewalls, VPCs, and security hardening.
- Experience with monitoring tools such as Datadog, Splunk, SignalFx, and PagerDuty for observability and alerting.
- Strong working experience with AWS Cloud Services (EC2, VPC, IAM, CloudWatch, S3, etc.).
- Exposure to other cloud providers like GCP, Azure, or IBM Cloud is a plus.
- Familiarity with Docker, container orchestration, and integrating databases into containerized environments.

Preferred Qualifications
- Excellent analytical and troubleshooting skills.
- Strong verbal and written communication skills.
- Ability to collaborate in cross-functional teams and drive initiatives independently.
- A passion for automation, observability, and scalability in production-grade environments.

Must Have: AWS, Ansible, DevOps, Terraform
Skills: postgresql, mariadb, datadog, containerization, networking, linux, mongodb, devops, terraform, aws aurora, cloud services, amazon web services (aws), ms sql server, ansible, aws, mysql, aws rds, docker, infrastructure, database
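A minimal sketch of the sort of proactive check referenced in the responsibilities above: measuring physical-replication lag on a PostgreSQL standby with psycopg2. The connection details and the 30-second alert threshold are assumptions; pg_last_xact_replay_timestamp() is standard PostgreSQL.

```python
"""Sketch only: alert when a PostgreSQL standby's replay lag grows too large."""
import psycopg2

MAX_LAG_SECONDS = 30  # assumption: your alerting threshold

conn = psycopg2.connect(host="standby.db.internal", dbname="appdb",
                        user="monitor", password="...")  # placeholders
with conn, conn.cursor() as cur:
    # Returns NULL on a primary; on a standby, the time since the last
    # replayed transaction, i.e. how far behind the primary we are
    cur.execute(
        "SELECT EXTRACT(EPOCH FROM now() - pg_last_xact_replay_timestamp())"
    )
    lag = cur.fetchone()[0]
    if lag is not None and lag > MAX_LAG_SECONDS:
        print(f"ALERT: standby lag {lag:.0f}s exceeds {MAX_LAG_SECONDS}s")
    else:
        print(f"replication lag OK ({lag})")
```

In practice this check would be wrapped as a Datadog/PagerDuty-integrated monitor rather than a print statement.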
Posted 3 days ago
7.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Manager

Job Description & Summary: At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Job Title: Cloud Engineer (Java 17+, Spring Boot, Microservices, AWS)
Job Type: Full-Time

Job Overview: As a Cloud Engineer, you will be responsible for developing, deploying, and managing cloud-based applications and services on AWS. You will use your expertise in Java 17+, Spring Boot, and Microservices to build robust and scalable cloud solutions. This role will involve working closely with development teams to ensure seamless cloud integration, optimizing cloud resources, and leveraging AWS tools to ensure high availability, security, and performance.

Key Responsibilities:
- Cloud Infrastructure: Design, build, and deploy cloud-native applications on AWS, utilizing services such as EC2, S3, Lambda, RDS, EKS, API Gateway, and CloudFormation.
- Backend Development: Develop and maintain backend services and microservices using Java 17+ and Spring Boot, ensuring they are optimized for the cloud environment.
- Microservices Architecture: Architect and implement microservices-based solutions that are scalable, secure, and resilient, ensuring they align with AWS best practices.
- CI/CD Pipelines: Set up and manage automated CI/CD pipelines using tools like Jenkins, GitLab CI, or AWS CodePipeline for continuous integration and deployment.
- AWS Services Integration: Integrate AWS services such as DynamoDB, SQS, SNS, CloudWatch, and Elastic Load Balancing into microservices to improve performance and scalability (see the queue-consumer sketch after this listing).
- Performance Optimization: Monitor and optimize the performance of cloud infrastructure and services, ensuring efficient resource utilization and cost management in AWS.
- Security: Implement security best practices in cloud applications and services, including IAM roles, VPC configuration, encryption, and authentication mechanisms.
- Troubleshooting & Support: Provide ongoing support and troubleshooting for cloud-based applications, ensuring uptime, availability, and optimal performance.
- Collaboration: Work closely with cross-functional teams, including frontend developers, system administrators, and DevOps engineers, to ensure end-to-end solution delivery.
- Documentation: Document the architecture, implementation, and operations of cloud infrastructure and applications to ensure knowledge sharing and compliance.

Required Skills & Qualifications:
- Strong experience with Java 17+ (latest version) and Spring Boot for backend development.
- Hands-on experience with AWS Cloud services such as EC2, S3, Lambda, RDS, EKS, API Gateway, DynamoDB, SQS, SNS, and CloudWatch.
- Proven experience in designing and implementing microservices architectures.
- Solid understanding of cloud security practices, including IAM, VPC, encryption, and secure cloud-native application development.
- Experience with CI/CD tools and practices (e.g., Jenkins, GitLab CI, AWS CodePipeline).
- Familiarity with containerization technologies like Docker, and orchestration tools like Kubernetes.
- Ability to optimize cloud applications for performance, scalability, and cost-efficiency.
- Experience with monitoring and logging tools like CloudWatch, ELK Stack, or other AWS-native tools.
- Knowledge of RESTful APIs and API Gateway for exposing microservices.
- Solid understanding of version control systems like Git and familiarity with Agile methodologies.
- Strong problem-solving and troubleshooting skills, with the ability to work in a fast-paced environment.

Preferred Skills:
- AWS certifications, such as AWS Certified Solutions Architect or AWS Certified Developer.
- Experience with Terraform or AWS CloudFormation for infrastructure as code.
- Familiarity with Kubernetes and EKS for container orchestration in the cloud.
- Experience with serverless architectures using AWS Lambda.
- Knowledge of message queues (e.g., SQS, Kafka) and event-driven architectures.

Education & Experience:
- Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent practical experience.
- 7-11 years of experience in software development with a focus on AWS cloud and microservices.
Mandatory Skill Sets: Cloud Engineer (Java + Spring Boot + AWS)
Preferred Skill Sets: Cloud Engineer (Java + Spring Boot + AWS)
Years of Experience Required: 7-11 years
Education Qualification: BE/BTECH, ME/MTECH, MBA, MCA
Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Bachelor of Technology, Bachelor of Engineering, Master of Engineering, Master of Business Administration
Degrees/Field of Study Preferred:
Certifications (if blank, certifications not specified)
Required Skills: Cloud Engineering
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Coaching and Feedback, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling {+ 33 more}
Desired Languages (if blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date
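The AWS Services Integration bullet above describes wiring SQS into microservices. The role itself is Java/Spring Boot; purely to keep this document's examples in one language, here is the same long-polling consumer pattern sketched in Python with boto3. The queue URL and region are placeholders.

```python
"""Sketch only: at-least-once SQS consumer using long polling."""
import json
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/orders"  # placeholder

while True:
    # Long polling (WaitTimeSeconds) reduces empty receives and API cost
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10,
                               WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        body = json.loads(msg["Body"])
        print("processing", body)
        # Delete only after successful processing; SQS is at-least-once,
        # so unacked messages reappear after the visibility timeout
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```

In Spring Boot the equivalent wiring is usually handled by a listener annotation, but the visibility-timeout and delete-after-success semantics are identical.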
Posted 3 days ago
8.0 - 10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Summary: We are seeking a highly experienced Senior Project Manager to lead and deliver critical initiatives focused on Google Cloud Platform (GCP) implementation and migration. The ideal candidate will have a solid background in managing complex IT and cloud infrastructure projects, with hands-on experience overseeing end-to-end GCP deployment. GCP certification is good to have.

Key Responsibilities:
- Lead full lifecycle project management for GCP implementation, including planning, execution, monitoring, and closure.
- Collaborate with cross-functional teams (engineering, infrastructure, security, DevOps, etc.) to ensure successful cloud migration and adoption.
- Manage project scope, schedule, cost, quality, resources, and communication across all project phases.
- Identify and manage project risks, dependencies, and mitigation strategies proactively.
- Develop and maintain detailed project plans, dashboards, and status reports for stakeholders and leadership.
- Drive alignment with business and IT leadership to ensure strategic project outcomes.
- Work closely with GCP architects and engineers to ensure platform configurations and deployments meet business requirements.
- Ensure adherence to project governance frameworks and compliance requirements.
- Facilitate change management and communication activities with impacted teams.

Qualifications
- Bachelor's degree in Computer Science, Information Systems, Engineering, or a related field.
- 8-10 years of experience in project management with a focus on IT and cloud infrastructure projects.
- Proven experience managing GCP implementation or migration projects end-to-end.
- Strong understanding of cloud architecture and GCP services (Compute Engine, BigQuery, Cloud Storage, IAM, VPC, etc.).
- Familiarity with Agile/Scrum, DevOps practices, and CI/CD pipelines.
- Proficiency with project management tools like JIRA, MS Project, Smartsheet, Confluence, or similar.
- PMP, PMI-ACP, CSM, or equivalent project management certification (preferred).
- Google Cloud Digital Leader (or above) certification (preferred).
Posted 3 days ago
4.0 - 6.0 years
0 Lacs
Gurgaon, Haryana, India
Remote
Experience Required: 4-6 years
Location: Gurgaon
Department: Product and Engineering
Working Days: Alternate Saturdays working (1st and 3rd)

🔧 Key Responsibilities
- Design, implement, and maintain highly available and scalable infrastructure using AWS Cloud Services.
- Build and manage Kubernetes clusters (EKS, self-managed) to ensure reliable deployment and scaling of microservices.
- Develop Infrastructure-as-Code using Terraform, ensuring modular, reusable, and secure provisioning.
- Containerize applications and optimize Docker images for performance and security.
- Ensure CI/CD pipelines (Jenkins, GitHub Actions, etc.) are optimized for fast and secure deployments.
- Drive SRE principles including monitoring, alerting, SLIs/SLOs, and incident response (see the SLI sketch after this listing).
- Set up and manage observability tools (Prometheus, Grafana, ELK, Datadog, etc.).
- Automate routine tasks with scripting languages (Python, Bash, etc.).
- Lead capacity planning, auto-scaling, and cost optimization efforts across cloud infrastructure.
- Collaborate closely with development teams to enable DevSecOps best practices.
- Participate in on-call rotations, handle outages calmly, and conduct postmortems.

🧰 Must-Have Technical Skills
- Kubernetes (EKS, Helm, Operators)
- Docker & Docker Compose
- Terraform (modular, state management, remote backends)
- AWS (EC2, VPC, S3, RDS, IAM, CloudWatch, ECS/EKS)
- Linux system administration
- CI/CD pipelines (Jenkins, GitLab CI, GitHub Actions)
- Logging & monitoring tools: ELK, Prometheus, Grafana, CloudWatch
- Site Reliability Engineering practices
- Load balancing, autoscaling, and HA architectures

💡 Good-To-Have
- GCP or Azure exposure
- Service mesh (Istio, Linkerd)
- Secrets management (Vault, AWS Secrets Manager)
- Security hardening of containers and infrastructure
- Chaos engineering exposure
- Knowledge of networking (DNS, firewalls, VPNs)

👤 Soft Skills
- Strong problem-solving attitude; calm under pressure
- Good documentation and communication skills
- Ownership mindset with a drive to automate everything
- Collaborative and proactive with cross-functional teams
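A minimal sketch of an SLI check like the one referenced above: querying Prometheus's HTTP API for a 30-minute availability ratio. The endpoint and the PromQL metric names are assumptions about a typical request-metrics setup, not details from the posting.

```python
"""Sketch only: compute an availability SLI from Prometheus."""
import requests

PROM = "http://prometheus.monitoring:9090"  # assumption: in-cluster service
# assumption: conventional http_requests_total counter with a status label
QUERY = (
    'sum(rate(http_requests_total{status!~"5.."}[30m]))'
    ' / sum(rate(http_requests_total[30m]))'
)

resp = requests.get(f"{PROM}/api/v1/query", params={"query": QUERY})
resp.raise_for_status()
result = resp.json()["data"]["result"]

# The instant-query result is a list of (timestamp, value) samples
sli = float(result[0]["value"][1]) if result else 0.0
print(f"30m availability SLI: {sli:.4%}")  # compare against your SLO target
```

Feeding this ratio into an error-budget calculation (1 - SLO) is the usual next step when wiring SLOs into alerting.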
Posted 3 days ago
0.0 - 3.0 years
12 - 20 Lacs
Mumbai, Maharashtra
On-site
Looking for a highly skilled AWS DevOps Engineer to design, implement, and manage cloud infrastructure solutions for AI products. The ideal candidate will have hands-on experience in deploying scalable, secure, and high-performing cloud environments, ensuring alignment with business objectives.

Key Responsibilities
- Design, implement, and manage AWS cloud infrastructure using services like EC2, S3, RDS, Lambda, Route 53, EKS, VPC, and CloudFormation.
- Automation & CI/CD: Develop Infrastructure as Code (IaC) with Terraform/CloudFormation and automate deployments using CI/CD tools like Jenkins, CodePipeline, or GitHub Actions.
- Implement best practices for cloud security, compliance (e.g., RBI, SEBI regulations), and data protection (IAM, KMS, GuardDuty).
- Set up monitoring (CloudWatch, CloudTrail) and optimize performance, cost, and resource utilization (see the alarm sketch after this listing).
- Configure and manage networks, VPCs, VPNs, subnets, and route tables to ensure secure and efficient network operations.
- Work closely with security and development teams to support product development and deployments.
- Maintain clear and comprehensive documentation for infrastructure, configurations, and processes.

Key Skills:
- 4+ years of hands-on experience with AWS services and solutions.
- Hands-on experience designing, configuring, implementing, and setting up environments with these technologies.
- Expertise in Infrastructure as Code (IaC) tools: Terraform, CloudFormation.
- Prior experience building cloud infrastructure for AI products.
- Strong scripting skills: Python, Bash, or Shell.
- Experience with containerization and orchestration: Docker, Kubernetes (EKS).
- Proficiency in CI/CD tools: Jenkins, AWS CodePipeline, GitHub or Bitbucket Actions.
- Solid understanding of security and compliance in cloud environments.
- AWS certifications (preferred).

Location: Mumbai (work from office only)
Job Type: Full-time
Pay: ₹1,200,000.00 - ₹2,000,000.00 per year
Schedule: Day shift
Ability to commute/relocate: Mumbai, Maharashtra: Reliably commute or planning to relocate before starting work (Required)
Application Question(s): Post selection, can you join immediately or within 30 days?
Experience: AWS DevOps: 3 years (Required)
Work Location: In person
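A minimal sketch of codifying the CloudWatch monitoring referenced above: a boto3 call that creates a sustained-high-CPU alarm wired to an SNS topic. The instance ID, topic ARN, and thresholds are placeholders.

```python
"""Sketch only: create/update a CloudWatch alarm for sustained high CPU."""
import boto3

cw = boto3.client("cloudwatch", region_name="ap-south-1")
cw.put_metric_alarm(
    AlarmName="api-ec2-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0abc1234def567890"}],  # placeholder
    Statistic="Average",
    Period=300,               # 5-minute datapoints
    EvaluationPeriods=3,      # must breach for 15 minutes straight
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:ops-alerts"],  # placeholder
    TreatMissingData="breaching",  # treat gaps as failure: safer for infra alarms
)
print("alarm created/updated")
```

In a real setup the same alarm would normally live in Terraform or CloudFormation so it is versioned with the rest of the infrastructure.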
Posted 3 days ago
1.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Role: Junior Java Developer
Experience: 1 year
Location: Chennai

About CloudNow Technologies
At CloudNow Technologies, we specialize in delivering advanced IT services and solutions by leveraging cutting-edge technologies across DevOps, Agile, Data Analytics, and Cybersecurity. Our agile work culture emphasizes continuous learning and professional growth. We believe in promoting from within and actively focus on upskilling our talent to take on future leadership roles.

We are the creators of Akku, our flagship Identity and Access Management (IAM) platform: a powerful, enterprise-grade solution designed to meet modern security and compliance needs. Akku enables organizations to implement zero-trust architecture, enhance cloud security, and manage access controls effectively across users, devices, and applications. Our strong focus on cybersecurity helps enterprises safeguard their digital environments through advanced access management, multi-factor authentication (MFA), device and IP restrictions, password policy enforcement, and user lifecycle automation.

To know more about us, visit: www.cloudnowtech.com
Explore the Akku IAM platform: www.akku.work

Job Description: The position is for a Junior Java Developer. The role involves development work using core Java skills, including OOPS concepts, Collections, multithreading, SQL, and Spring Core. Knowledge of working in an Agile team with DevOps principles would be an additional advantage. The role also involves intensive interaction with the business and other technology groups; hence, strong communication skills and the ability to work under tight deadlines are necessary. The candidate is expected to display professional ethics in his/her approach to work and exhibit a high level of ownership within a demanding working environment.

Responsibilities:
- Developing and managing custom integration solutions
- Working with an Agile methodology and environment
- Creating UML class diagrams
- Performing source code management and versioning
- Bringing together existing systems, with a focus on the integration of applications

Technology Stack:
Primary Skills
- Core Java 1.8 and above
- OOPS, multithreading
- Spring Boot
- Service-Oriented Architecture / Web Services (REST APIs)
- Hibernate and JPA
- MySQL

Secondary Skills
- Spring Framework, SQL, Agile development approach
- Markup languages like XML and JSON
- JUnit, SMTP
- Eclipse / IntelliJ, GitLab / version control tools
Posted 3 days ago
5.0 years
0 Lacs
Greater Kolkata Area
On-site
CodelogicX is a forward-thinking tech company dedicated to pushing the boundaries of innovation and delivering cutting-edge solutions. We are seeking a Senior DevOps Engineer with at least 5 years of hands-on experience in building, managing, and optimizing scalable infrastructure and CI/CD pipelines. The ideal candidate will play a crucial role in automating deployment workflows, securing cloud environments, and managing container orchestration platforms. You will leverage your expertise in AWS, Kubernetes, ArgoCD, and CI/CD to streamline our development processes, ensure the reliability and scalability of our systems, and drive the adoption of best practices across the team.

Key Responsibilities
- Design, implement, and maintain CI/CD pipelines using GitHub Actions and Bitbucket Pipelines.
- Develop and manage Infrastructure as Code (IaC) using Terraform for AWS-based infrastructure.
- Set up and administer SFTP servers on cloud-based VMs using chroot configurations, and automate file transfers to S3-backed Glacier (see the archival sketch after this listing).
- Manage SNS for alerting and notification integration.
- Ensure cost optimization of AWS services through billing reviews and usage audits.
- Implement and maintain secure secrets management using AWS KMS, Parameter Store, and Secrets Manager.
- Configure, deploy, and maintain a wide range of AWS services, including but not limited to:
  - Compute Services: Provision and manage compute resources using EC2, EKS, AWS Lambda, and EventBridge for compute-driven, serverless, and event-driven architectures.
  - Storage & Content Delivery: Manage data storage and archival solutions using S3 and Glacier, and content delivery through CloudFront.
  - Networking & Connectivity: Design and manage secure network architectures with VPCs, load balancers, security groups, VPNs, and Route 53 for DNS routing and failover. Ensure proper functioning of network services like TCP/IP and reverse proxies (e.g., NGINX).
  - Monitoring & Observability: Implement monitoring, logging, and tracing solutions using CloudWatch, Prometheus, Grafana, ArgoCD, and OpenTelemetry to ensure system health and performance visibility.
  - Database Services: Deploy and manage relational databases via RDS for MySQL, PostgreSQL, and Aurora, plus healthcare-specific FHIR database configurations.
  - Security & Compliance: Enforce security best practices using IAM (roles, policies), AWS WAF, Amazon Inspector, GuardDuty, Security Hub, and Trusted Advisor to monitor, detect, and mitigate risks.
  - GitOps: Apply excellent knowledge of GitOps practices, ensuring all infrastructure and application configuration changes are tracked and versioned through Git commits.
- Architect and manage Kubernetes environments (EKS), implementing Helm charts, ingress controllers, autoscaling (HPA/VPA), and service meshes (Istio); troubleshoot advanced issues related to pods, services, DNS, and kubelets.
- Apply best practices in Git workflows (trunk-based, feature branching) in both monorepo and multi-repo environments.
- Maintain, troubleshoot, and optimize Linux-based systems (Ubuntu, CentOS, Amazon Linux).
- Support the engineering and compliance teams by addressing requirements for HIPAA, GDPR, ISO 27001, and SOC 2, and ensuring infrastructure readiness.
- Perform rollback and hotfix procedures with minimal downtime.
- Collaborate with developers to define release and deployment processes.
- Manage and standardize build environments and release and deployment processes across dev, staging, and production.
- Work cross-functionally with development and QA teams.
- Lead incident postmortems and drive continuous improvement.
- Perform root cause analysis and implement corrective/preventive actions for system incidents.
- Set up automated backups/snapshots, disaster recovery plans, and incident response strategies.
- Ensure on-time patching.
- Mentor junior DevOps engineers.

Requirements
Required Qualifications:
- Bachelor's degree in Computer Science, Engineering, or equivalent practical experience.
- 5+ years of proven DevOps engineering experience in cloud-based environments.
- Advanced knowledge of AWS, Terraform, CI/CD tools, and Kubernetes (EKS).
- Strong scripting and automation mindset.
- Solid experience with Linux system administration and networking.
- Excellent communication and documentation skills.
- Ability to collaborate across teams and lead DevOps initiatives independently.

Preferred Qualifications
- Experience with infrastructure as code tools such as Terraform or CloudFormation.
- Experience with GitHub Actions is a plus.
- Certifications in AWS (e.g., AWS DevOps Engineer, AWS SysOps Administrator) or Kubernetes (CKA/CKAD).
- Experience working in regulated environments (e.g., healthcare or fintech).
- Exposure to container security tools and cloud compliance scanners.

Experience: 5-10 years
Working Mode: Hybrid
Job Type: Full-Time
Location: Kolkata

Benefits
- Health insurance
- Hybrid working mode
- Provident Fund
- Parental leave
- Yearly bonus
- Gratuity
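A minimal sketch of the SFTP-drop-to-Glacier archival step referenced above: sweep files from a chrooted landing directory into S3 with the GLACIER storage class. The bucket name, local path, and delete-after-upload policy are assumptions.

```python
"""Sketch only: archive SFTP-dropped files to S3 Glacier storage class."""
from pathlib import Path
import boto3

s3 = boto3.client("s3")
BUCKET = "corp-sftp-archive"             # placeholder bucket
DROP_DIR = Path("/sftp/chroot/uploads")  # assumption: chrooted SFTP landing dir

for f in DROP_DIR.glob("*"):
    if not f.is_file():
        continue
    # GLACIER archives the object on write; DEEP_ARCHIVE is cheaper still
    # if multi-hour retrieval latency is acceptable
    s3.upload_file(str(f), BUCKET, f"incoming/{f.name}",
                   ExtraArgs={"StorageClass": "GLACIER"})
    f.unlink()  # assumption: local copy is removed once archived
    print(f"archived {f.name}")
```

Run from cron or a systemd timer; an S3 lifecycle rule is the usual alternative when files can land in S3 Standard first.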
Posted 3 days ago
10.0 years
0 Lacs
India
On-site
We are seeking an experienced and hands-on Solution Architect to lead technical design and architecture for a next-generation, API-first modular banking platform. You will collaborate with client teams, SI partners, and product stakeholders to define scalable, secure, and composable system architectures built on cloud-native principles.

Key Responsibilities
- Lead end-to-end architecture during the discovery and design phases of digital banking transformation projects.
- Define solution blueprints and integration maps across platforms including Mambu, Salesforce, and nCino.
- Align business requirements with AWS-native architecture (ECS, Lambda, S3, Glue, Redshift).
- Design secure, scalable microservices-based solutions using REST, GraphQL, and event-driven frameworks (Kafka); see the event sketch after this listing.
- Produce high-level and low-level architecture artefacts, including data flows, API contracts, and deployment diagrams.
- Recommend and integrate third-party components (KYC/AML, fraud scoring, payment gateways).
- Collaborate closely with Integration Specialists, DevOps, and UX/Product teams.

Required Skills & Experience
- 10+ years in solution architecture, with at least 4-5 in banking, fintech, or digital platform environments.
- Proven experience with core banking systems like Mambu, nCino, Salesforce, or equivalents.
- Hands-on expertise in AWS services (ECS, Lambda, IAM, Glue, Redshift, CloudWatch).
- Strong understanding of REST/GraphQL APIs, event-driven architecture (Kafka), and microservices.
- Familiarity with banking compliance, data privacy, and regulatory frameworks (SOC 2, GDPR, PSD2).
- Excellent communication and stakeholder management skills.
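A minimal sketch of the event-driven style referenced above: publishing a domain event to Kafka with the kafka-python client. Broker addresses, the topic name, and the payload shape are illustrative and not taken from any platform named in the listing.

```python
"""Sketch only: publish a domain event to Kafka."""
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers=["kafka-1:9092"],  # placeholder broker list
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    acks="all",  # wait for the full in-sync replica set: durability first
)

event = {"type": "account.opened", "accountId": "A-1029", "channel": "mobile"}
# Keying by account ID keeps every event for one account on one partition,
# preserving per-account ordering for downstream consumers
producer.send("banking.events", key=b"A-1029", value=event)
producer.flush()
```

The key-to-partition design choice is what lets consumers replay an account's history in order, which matters for ledger-style banking workloads.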
Posted 3 days ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We are hiring Cloud Engineers (AWS Specialists) at Coforge Ltd.

We are seeking a skilled Cloud Engineer with 4-5 years of hands-on experience in AWS, certified as a Cloud Practitioner. The ideal candidate will have strong expertise in infrastructure automation using CloudFormation and Terraform, along with application development experience using AWS Lambda in Java and Python.

Experience Required: 4 to 6 years
Job Location: Hyderabad
Please share your CV to Gaurav.2.Kumar@coforge.com or WhatsApp 9667427662 for any queries.

Job Description
Key Responsibilities:
- Design and deploy AWS components using CloudFormation (IAM roles, Lambdas, EventBridge, etc.)
- Build and manage cloud infrastructure using Terraform (S3 setup, permissions, AWS Batch configurations)
- Develop serverless applications using AWS Lambda in Java and Python (see the handler sketch after this listing)
- Work with AWS Batch for job orchestration and processing
- Collaborate with cross-functional teams to integrate cloud solutions with business applications

Good to Have:
- Experience with AppFlow and EventBridge (writing event rules)
- Integration experience with external platforms like Salesforce

Qualifications:
- AWS Cloud Practitioner Certification
- Strong problem-solving and communication skills
- Ability to work independently and in a team-oriented environment
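A minimal sketch of the serverless pattern referenced above: a Python Lambda handler invoked by an EventBridge rule that persists the event detail to S3. The bucket name and event fields are placeholders; the CloudFormation/Terraform wiring the posting describes would provision the function, rule, and IAM role around it.

```python
"""Sketch only: EventBridge-triggered Lambda that stores event detail in S3."""
import json
import boto3

# Created at module scope so warm invocations reuse the client
s3 = boto3.client("s3")
BUCKET = "etl-landing-zone"  # placeholder bucket

def handler(event, context):
    # EventBridge delivers the matched event as the handler's `event` argument;
    # the rule-specific payload lives under "detail"
    detail = event.get("detail", {})
    key = f"events/{event.get('id', 'unknown')}.json"
    s3.put_object(Bucket=BUCKET, Key=key,
                  Body=json.dumps(detail).encode("utf-8"))
    return {"status": "stored", "key": key}
```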
Posted 3 days ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description:
- Day-to-day delivery of the AWS IAM configuration required to support business requirements, application integrations, workloads, regulatory compliance, and all other platform efforts or deliverables.
- Engineer and deploy AWS IAM resources, including users, groups, roles, and policies, using AWS CloudFormation templates and following Cloud IAM team processes and procedures.
- Evaluates existing AWS IAM permission policies and adjusts them as needed to enforce the principle of least privilege (see the policy sketch after this listing).
- Ensures all AWS IAM resources and configuration adhere to and comply with all corporate policies/standards, industry best practices/benchmarks, and regulatory requirements.
- Ensures effective security protection controls and hardening requirements are in place for all AWS IAM resources.
- Actively monitors and responds to AWS IAM configuration changes, events, and alerts following applicable CIE team processes and procedures.
- Performs AWS IAM compliance event follow-up and remediation with account owners to resolve event conditions.
- Participates in Agile team ceremonies and updates assigned Jira stories daily as required by the Agile team to provide status and next steps.
- Ensures service requests contain proper approvals and documentation prior to starting the work, and deconflicts discrepancies with the CIE Service Management Lead.
- Actively (immediately, on the same day) coordinates with the CIE Service Management Lead to resolve conflicting requirements or unclear information in Jira stories or service requests.
- Creates new or updates existing CFN templates per requirements outlined in the service requests and Agile stories.
- Ensures security controls are implemented in the CFN templates as required to maintain a secured Cloud IAM posture.
- Ensures the principle of least privilege is implemented in every template policy prior to creating pull requests.
- Troubleshoots CFN template syntax errors and escalates to the CIE Service Management Lead as needed to ensure same-day resolution.
- Troubleshoots errors logged in the CFN StackSet's Stack Instance or Operations tabs as needed to advance the fulfillment of service requests.
- Follows process documentation to ensure proper governance and request-to-implementation traceability is in place.
- Tooling: Git/Jenkins/Bitbucket/JIRA; DevOps/IaC/PaC familiarity.
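The posting's day-to-day work is CloudFormation-based; as a language-consistent illustration of the least-privilege principle it describes, here is a boto3 sketch that creates a policy scoped to exactly the actions and resources a workload needs. The policy name and ARN are placeholders.

```python
"""Sketch only: create a least-privilege IAM policy with boto3."""
import json
import boto3

iam = boto3.client("iam")

# Grant only the actions the workload needs, on only the resources it owns;
# no wildcards on Action or Resource
policy_doc = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": "arn:aws:s3:::app-reports-bucket/team-a/*",  # placeholder
    }],
}

iam.create_policy(
    PolicyName="team-a-reports-rw",  # placeholder
    PolicyDocument=json.dumps(policy_doc),
    Description="Least-privilege read/write to team-a's report prefix",
)
```

In the workflow described above, the same document would be embedded in an AWS::IAM::ManagedPolicy resource inside a CFN template and reviewed via pull request before deployment.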
Posted 3 days ago
3.0 years
0 Lacs
New Delhi, Delhi, India
On-site
NVIDIA is looking for a passionate member to join our DGX Cloud Engineering Team as a Cloud Software Engineer. In this role, you will play a significant part in helping to craft and guide the future of AI & GPUs in the Cloud. NVIDIA DGX Cloud is a cloud platform tailored for AI tasks, enabling organizations to transition AI projects from development to deployment in the age of intelligent AI. Are you passionate about cloud software development and do you strive for quality? Do you pride yourself on building cloud-scale software systems? If so, join our team at NVIDIA, where we are dedicated to delivering GPU-powered services around the world!

What You'll Be Doing
You will play a crucial role in ensuring the success of the DGX Cloud platform by helping to build our development and release processes, creating world-class performance and quality measurement and regression management tools, and maintaining a high standard of excellence in our CI/CD and release engineering tools and processes.
- Design, build, and implement scalable cloud-based systems for PaaS/IaaS.
- Work closely with other teams on new products or features/improvements of existing products.
- Develop, maintain, and improve CI/CD tools for on-prem and cloud deployment of our software.
- Collaborate with developers, QA, and Product teams to establish, refine, and streamline our software release process.
- Support, maintain, and document software functionality.

What We Need To See
- Demonstrated understanding of cloud design in the areas of virtualization and global infrastructure, distributed systems, and security.
- Expertise in Kubernetes (K8s) & KubeVirt.
- Background with building RESTful web services.
- Experience with Docker and containers.
- Experience with Infrastructure as Code.
- Background with CSPs, for example AWS (Fargate, EC2, IAM, ECR, EKS, Route53, etc.).
- Experience with Continuous Integration and Continuous Delivery.
- Excellent interpersonal and written communication skills.
- BS or MS in Computer Science or an equivalent program from an accredited university/college.
- 3+ years of hands-on software engineering or equivalent experience.

Ways To Stand Out From The Crowd
- Expertise in virtualization technologies such as Firecracker, KVM, OpenStack, Nutanix AHV, and Red Hat OpenShift.
- A track record of solving complex problems with elegant solutions.
- Go & Python, load-testing frameworks, secrets management.
- Demonstrated delivery of complex projects in previous roles.

Today, we’re tapping into the unlimited potential of AI to define the next era of computing. An era in which our GPU acts as the brains of computers, robots, and self-driving cars that can understand the world. Doing what’s never been done before takes vision, innovation, and the world’s best talent. As an NVIDIAN, you’ll be immersed in a diverse, supportive environment where everyone is inspired to do their best work. Come join the team and see how you can make a lasting impact on the world.

JR2000311
Posted 4 days ago
9.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
About us
At R Systems, we are shaping the future of technology by designing cutting-edge software products, platforms, and digital experiences that drive business growth for our clients. Our product mindset and advanced engineering capabilities in Cloud, Data, AI, and Customer Experience empower us to deliver innovative solutions to key players across the high-tech industry. This includes ISVs, SaaS, and Internet companies, as well as leading organizations in telecom, media, healthcare, finance, and manufacturing.

We are Great Place to Work® Certified™ in 10 countries where we have a full-time workforce: India, the USA, Canada, Poland, Romania, Moldova, Indonesia, Singapore, Malaysia, and Thailand. This means we are a dynamic, global team that values continuous learning, collaboration, and innovation. Join us and experience a workplace where your contributions are celebrated, and your growth, development, and well-being are at the heart of everything we do!

Experience Range: 9 to 12 years
Notice Period: Early joiners are preferred

Job Description
We are looking for an Information Security Analyst with a strong background in security operations, incident response/management, DLP, forensics/reverse engineering, cloud security, and IAM. You'll be part of our Security Operations team, which is a major component of our Global Information Security function. As the SecOps SME, you'll oversee our multiple security solutions (XDR, IAM, firewall, email gateway, SIEM, CASB, etc.), and you'll work as an InfoSec Analyst in our ASOC, performing incident response and threat hunting tasks in coordination with our MSSP.

Required Qualifications:
- 9-12 years working in SOC, incident response, IAM, DLP, SIEM, email gateway, and firewall roles.
- Minimum of 7-9 years of practical information security experience.
- Experience working with Security Information and Event Management (SIEM), continuous monitoring, Intrusion Detection/Prevention Systems (IDS/IPS), network traffic analysis, incident response, endpoint security systems, digital forensics, WLAN monitoring, and/or threat modeling.
- Expert knowledge of information security technologies, networking, systems, authentication (including MFA), and directory services.
- Ability to manage complex troubleshooting issues.
- Proven ability to manage competing priorities and work under pressure.
- Ability to contribute to organizational strategic thinking beyond your area of responsibility.
- CEH, CISM, CHFI, Security+, Network+, or similar certifications preferred.
Posted 4 days ago
15.0 years
0 Lacs
Mumbai, Maharashtra, India
Remote
Location: Embassy IT Park, Vikhroli, Mumbai
Workplace Type: Hybrid (flexible WFH 1-2 days/week)
Reporting to: CTO
Experience Level: 15+ years

About Freespace
We’re a workplace technology company helping organizations to achieve three key outcomes:
- Right size, right design: enabling informed decisions using real-time data to achieve portfolio optimization and the right workplace design.
- Smart building automations: streamlining processes by simplifying complex seating requirements and through occupancy-driven control and automation.
- Exceptional employee experiences: maximizing the benefits of the office by providing employees with the tools to find and reserve spaces, connect with each other, and enjoy optimal working conditions.

To achieve these outcomes, we provide an integrated platform that delivers actionable workplace intelligence through a real-time analytics platform, workplace sensors, an employee experience app, signage, and space management solutions. We have recently been recognized with a nomination for the IFMA New York Awards of Excellence in the Sustainability category, underscoring our achievements in fostering adaptive, efficient, and sustainable work environments.

About Role
We are looking for a strong technical leader to own and lead the core Freespace platform and DevOps team. The platform is the highly resilient, fault-tolerant, distributed, and secure backbone of Freespace, used across Freespace products, with a goal of designing for scalability, uniformity, extensibility, and ease of developing solutions on it. The role holder will also be responsible for overseeing the end-to-end DevOps lifecycle and ensuring seamless integration of development and operations processes. The ideal candidate will have a strong background in AWS cloud services, a deep understanding of DevOps best practices, and a proven track record of successfully implementing and managing DevOps initiatives within a product-focused environment.

You will be required to work closely with the engineering and product teams to deliver scalable, high-quality solutions, and to collaborate with the product team to plan and deliver the best product with the most efficient use of resources and technologies. You will work with all stakeholders to assemble project teams, assign responsibilities, identify appropriate resources needed, and develop schedules to ensure the timely completion of projects by meeting project milestones. You will also assess risks, anticipate bottlenecks, provide escalation management, make trade-offs, balance business needs against technical constraints, and encourage risk-taking behaviour to maximize business benefit. Successful candidates will have a technical background, be detail-driven, and have excellent problem-solving abilities. You should not only be passionate about delivering extensible, on-time solutions, but should also be obsessed with contributing to the development of high-performance teams through rigorous goal-setting, disciplined attention to performance metrics, continuous process improvement, and mentorship. The team works on a diverse technology stack spanning SOA, UI frameworks, event-driven and serverless architectures, and big data on AWS.

Key Responsibilities
- Own the overall development life cycle of the solution and manage complex projects with significant impact.
- Work with product managers to develop a strategy and road map that provides compelling capabilities for the other product lines and helps them succeed in their business goals.
- Work closely with the engineering team to develop the best technical design and approach for new capability development.
- Instil best practices for software development and documentation, assure designs meet requirements, and deliver high-quality work on tight schedules.
- Project management: prioritization, planning of projects and features, stakeholder management, and tracking of external commitments.

Team Leadership
- Team Management: Lead and manage the DevOps team, providing guidance, support, and mentorship.
- Skill Development: Encourage continuous learning and skill development among team members to keep up with evolving technologies.
- Cross-Functional Collaboration: Foster collaboration with other departments, including development, operations, and security teams.

Technical Oversight
- Infrastructure as Code (IaC): Oversee the design and implementation of scalable and automated infrastructure using IaC tools (Terraform, CloudFormation, etc.).
- CI/CD Pipeline Management: Monitor and optimize CI/CD pipelines to ensure efficient and reliable software delivery.
- Incident Management: Respond to and resolve incidents, collaborating with the team to address issues in a timely manner.

Cloud Management
- AWS Services: Stay current with AWS cloud services and identify opportunities to leverage new features for improved performance and cost-effectiveness.
- Cost Management: Monitor and manage cloud costs, identifying ways to optimize resource usage.
- Stakeholder Communication: Regularly communicate with cross-functional teams and stakeholders, providing updates on key metrics and project statuses and addressing concerns.
- Documentation: Maintain clear and comprehensive documentation for configurations, processes, and procedures.

Innovation and Technology Evaluation
- Technology Assessment: Stay informed about emerging technologies and assess their relevance to the organization's DevOps and architectural practices.
- Innovation: Encourage and support innovation within the DevOps and Platforms team, exploring new tools and methodologies.

Training and Onboarding
- Training Programs: Develop and implement training programs to ensure the team is equipped with the necessary skills.
- Onboarding: Facilitate the onboarding process for new team members, ensuring a smooth integration into the product and DevOps workflow.

Key Skills & Experience
- 10+ years of software development experience in AWS, TypeScript, React, and Java, building web applications, services, and highly scalable applications.
- Solid software development background, including design patterns, data structures, and test-driven development.
- Experience managing groups of people to success, including managing projects from planning through end delivery.
- 10+ years of experience in a DevOps leadership role, preferably within a product-focused environment.
- Proven track record of successfully implementing and managing DevOps initiatives within a large organization.
- In-depth knowledge of AWS cloud services, including EC2, S3, VPC, Lambda, Route53, CloudFront, Athena, Step Functions, Elastic Beanstalk, IAM, etc.
- Strong understanding of DevOps principles and best practices, including CI/CD, infrastructure as code, and continuous monitoring.
- Experience with Git, Docker, Kubernetes, and other relevant DevOps tools.

Behaviours and Mindset
- Demonstrated curiosity to tinker, troubleshoot, research, understand, and solve.
- A keen solution mindset that helps users achieve the best out of a product feature.
- Clarity of thinking and an ability to explain complex logic and reasoning in simple language.
- Convincing and confident with technical knowledge, yet humble and inclusive in getting buy-in from clients and partners.
- Highly organized, able to manage multiple projects simultaneously.
- Extremely client-focused as well as flexible and agile; able to adapt quickly and responsively to client needs.

Freespace Global Perks
- Paid annual leave and public holidays to support your work-life balance.
- Paid sick leave if you should fall ill.
- Private health insurance & accidental benefit plan.
- Company bonus scheme (for some roles) so you can benefit from the Company’s success.
- Access to funded training (internal/external) to support your career development.
- Chance to refer friends and earn money through our generous Employee Referral Programme.
- Global employee award program giving us the opportunity to recognize and celebrate success.
- Chance to get involved in shaping our culture by joining one of our Employee Resource Groups.
- A creative and engaging company culture right across the business.

If you are ready to help deliver the next generation of IoT, DevOps and platform solutions, apply now via the careers page or send your CV to shivani.jindal@afreespace.com and purva@afreespace.com.
Posted 4 days ago
5.0 - 9.0 years
0 Lacs
pune, maharashtra
On-site
You should have 5-8 years of experience and be an Oracle Cloud expert with practical knowledge of OCI migration. Your responsibilities will include designing and implementing migration strategies for transferring on-premises multitenant Oracle databases to Oracle Exadata Cloud at Customer (ExaCC), and planning and executing the migration of on-premises Oracle databases to Exadata on OCI.

Your expertise will be crucial in designing and implementing migration strategies aimed at minimizing downtime and ensuring data integrity, using approaches such as Data Guard, Data Pump, and RMAN restore. You must assess existing on-premises infrastructure and applications to determine the most suitable migration approach, and utilize Oracle Cloud VMware Solution for migrating virtual machines (VMs) to OCI.

In this role, you will perform on-premises data migration to ExaCC while ensuring minimal downtime and data integrity. It will be essential to evaluate and leverage the various database migration services available in OCI, and to collaborate with cross-functional teams to facilitate seamless migration and integration with minimal disruption to business operations.

Moreover, you will be responsible for developing and maintaining migration documentation, including architecture diagrams, migration plans, and runbooks with timelines, milestones, and resource allocation. Providing technical guidance and support throughout the migration process, conducting risk assessments, and developing mitigation strategies will also be part of your duties, along with ensuring compliance with organizational security and data governance policies and implementing CMAN (Oracle Connection Manager) to manage and monitor database migrations.
Posted 4 days ago
3.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Our Company
Changing the world through digital experiences is what Adobe’s all about. We give everyone—from emerging artists to global brands—everything they need to design and deliver exceptional digital experiences! We’re passionate about empowering people to create beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen. We’re on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours!

Join Adobe’s Security Engineering team as a Software Development Engineer 2 (Security) and help shape the future of security tooling and automation across the organization. In this role, you’ll design and build scalable, secure systems from the ground up, driving real-world impact by enhancing Adobe’s infrastructure and application security through smart automation and streamlined processes. We’re looking for an experienced developer with a strong security mindset: someone who thrives in a fast-paced environment and is passionate about delivering reliable, scalable tools that solve complex security challenges.

What You’ll Do
- Build, test, and maintain scalable and resilient security tools that help mitigate vulnerabilities and automate manual tasks.
- Integrate and extend third-party security tools to meet Adobe’s unique needs.
- Collaborate with cross-functional security teams (e.g., Architects, Security Partners) to define, design, and deliver custom solutions.
- Continuously evaluate emerging security tools and make recommendations based on Adobe’s needs.
- Provide on-call support for urgent security incidents, including off-hours and weekends.
- Create clear, detailed documentation to support long-term maintenance and scaling.
- Contribute to maturing our internal tools, best practices, and engineering processes.

What You Need To Succeed
- Bachelor’s or Master’s degree in Computer Science, Cybersecurity, or a related technical field.
- 3+ years of experience developing security-focused tools or software.
- Solid understanding of modern security principles, practices, and tools.
- Hands-on experience with AWS security services (e.g., IAM, GuardDuty, CloudTrail); experience with Azure or GCP is a plus (see the GuardDuty sketch after this listing).
- Proficiency in at least one modern programming language, Python preferred.
- Familiarity with web development frameworks such as AngularJS or ReactJS.
- Strong grasp of DevSecOps practices and experience building/supporting containerized systems.
- Comfortable working in Linux environments, including scripting, troubleshooting, and system administration.
- Experience with modern infrastructure tools like Docker, Kubernetes, Terraform, and CloudFormation.
- Strong problem-solving skills and ability to debug complex systems.
- Excellent verbal and written communication skills.
- Ability to work independently and collaboratively in a dynamic team environment.
- Growth mindset and a focus on delivering long-term, sustainable solutions.

The Ideal Candidate Will:
- Be a strong collaborator who builds trust and inspires confidence.
- Influence and communicate effectively with peers, stakeholders, and leadership.
- Focus on outcomes, not personal preferences, prioritizing what’s best for the team and the business.
- Think creatively and solve complex problems with minimal direction.

Adobe is proud to be an Equal Employment Opportunity employer.
Adobe is proud to be an Equal Employment Opportunity employer. We do not discriminate based on gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. Adobe aims to make Adobe.com accessible to any and all users. If you have a disability or special need that requires accommodation to navigate our website or complete the application process, email accommodations@adobe.com or call (408) 536-3015.
Posted 4 days ago
5.0 years
0 Lacs
India
On-site
Experience and Qualifications
Bachelor’s or advanced degree in Computer Science or a closely related field
5+ years of professional experience in DevOps, with at least 1-2 years in Linux/Unix
Very strong in core CS concepts around operating systems, networks, and systems architecture, including web services
Strong scripting experience in Python and Bash
Deep experience administering, running, and deploying AWS-based services
Strong knowledge of PostgreSQL, Redis, ElastiCache (OpenSearch), and Neptune internals, performance tuning, query optimization, and indexing strategies
Experience with Aurora, Redis, ElastiCache, and Neptune architecture, including replication, failover, and backup/restore mechanisms
Familiarity with AWS services commonly used alongside the above (e.g., RDS, CloudWatch, IAM, VPC, Lambda)
Experience with high-availability and disaster-recovery solutions built on Aurora, OpenSearch, Neptune, and Redis
Experience with database migration tools such as AWS DMS, pg_dump, or logical replication
Solid experience with Terraform, Packer, and Docker or their equivalents
Knowledge of security protocols and certificate infrastructure
Strong debugging, troubleshooting, and problem-solving skills
Broad experience with cloud-hosted applications, including virtualization platforms, relational and non-relational data stores, reverse proxies, and orchestration platforms
Curiosity, continuous learning, and drive to continually raise the bar
Strong partnering and communication skills
Great to Have
Past experience as a senior developer or application architect is strongly preferred.
Experience working with, and preferably designing, a system compliant with a security framework (PCI DSS, ISO 27000, HIPAA, SOC 2, ...)
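Since the posting emphasizes query optimization and indexing strategies, here is a minimal sketch of that tuning loop in Python with psycopg2; the orders table, filter column, and connection string are hypothetical.

```python
# Sketch: measure a query plan, add an index, and re-measure.
# Assumptions: a hypothetical "orders" table and a local "app" database.
import psycopg2

conn = psycopg2.connect("dbname=app user=app")  # assumed connection string
conn.autocommit = True  # required for CREATE INDEX CONCURRENTLY

with conn.cursor() as cur:
    # Baseline plan: likely a sequential scan without a suitable index.
    cur.execute("EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = %s", (42,))
    for (line,) in cur.fetchall():
        print(line)

    # Illustrative indexing strategy: index the filter column.
    # CONCURRENTLY avoids blocking writes on a live table.
    cur.execute(
        "CREATE INDEX CONCURRENTLY IF NOT EXISTS orders_customer_id_idx "
        "ON orders (customer_id)"
    )

    cur.execute("EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = %s", (42,))
    for (line,) in cur.fetchall():
        print(line)  # expect an index scan now, if the planner favors it
```

The same measure-before-and-after loop applies to the migration work mentioned above: comparing plans across a cutover (e.g., via AWS DMS or logical replication) catches performance regressions early.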
Posted 4 days ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Saviynt is an identity authority platform built to power and protect the world at work. In a world of digital transformation, where organizations face increasing cyber risk but cannot afford defensive measures that slow down progress, Saviynt’s Enterprise Identity Cloud gives customers unparalleled visibility, control, and intelligence to better defend against threats while empowering users with right-time, right-level access to the digital technologies and tools they need to do their best work.
Summary: The Customer Success Manager I will manage customer loyalty and adoption of Saviynt’s innovative products and services, using our customers’ business objectives and priorities as the foundation of the work they perform. The CSM will be responsible for driving value-based outcomes through customer categorization, oversight, adoption recommendations, opportunities for increased service, and metrics analysis. In addition, the CSM will coordinate routine health checks and any required remediation to ensure our customers stay on track toward their goals. Optimal performance of this role results in greater customer happiness, retention, and expansion of Saviynt’s business -- all tied to a customer who is eager to recommend Saviynt to others.
WHAT YOU WILL BE DOING
Serve as the primary point of contact for customers after implementation
Manage the subscription renewal pipeline and maintain cognizance of customer health in order to proactively eliminate barriers to adoption and value
Partner with the Sales team to provide a strong customer-focused sales, orientation, and launch engagement process
Develop deep, trusting relationships with customer key personnel and larger teams to seek and develop up-sell/cross-sell opportunities
Coordinate and conduct meetings between customers and Saviynt cross-functional teams to solve problems and advance customer adoption; ensure post-meeting follow-ups and action-item completion
Monitor and identify product utilization trends, providing feedback to Saviynt cross-functional teams to support continuous improvement -- finding ways to better support customer use cases and corporate identity strategies
Communicate with implementation partners supporting Saviynt customers and seek opportunities to improve outcomes and relationships in the context of customer adoption
Plan education for customers on new features and releases
Act as the voice of the customer and collect feedback to drive continuous improvement across all areas, including product
WHAT YOU BRING
Bachelor’s degree in computer science, engineering, or a related field
Knowledge and experience in Identity and Access Management (IAM) is valuable; a cybersecurity and/or compliance background is also very valuable
Strong knowledge of cloud, hybrid, and on-premise IT architectures and deployment models
History of understanding technical and complex software environments and bridging the gap by communicating those concepts in language meaningful to the business; similarly, able to translate business needs into potential technical solutions
10+ years of experience in customer-facing roles, including customer success management/account management and Professional Services for complex software implementations across a variety of industries
Tenacious desire to see customers succeed and thrive
Previous experience within a fast-paced, growing SaaS organization
Demonstrated ability to manage customer relationships and work through potentially difficult challenges to achieve positive outcomes
Cheerful willingness to be a hands-on contributor and stay detail-focused while maintaining an outcome-based perspective
Experience in process improvement, decision-making, planning, analysis, and service excellence
Available to customers via Zoom during EMEA hours
Posted 4 days ago
5.0 - 8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Company: Our client corporation provides digital engineering and technology services to Forbes Global 2000 companies worldwide. Our Engineering First approach ensures we can execute all ideas and creatively solve pressing business challenges. With industry expertise and empowered agile teams, we prioritize execution early in the process for impactful results. We combine logic, creativity, and curiosity to build, solve, and create. Every day, we help clients engage with new technology paradigms, creatively building solutions that solve their most pressing business challenges and move them to the forefront of their industry.
Job Title: Python Developer
Key Skills: Python, RESTful APIs, Kafka, AWS
Job Locations: Pune / Hyderabad
Experience: 5-8 years
Education Qualification: Any Degree Graduation
Work Mode: Hybrid
Employment Type: Contract
Notice Period: Immediate - 15 Days
Job Description
Key Skills
1. Design, develop, and maintain scalable RESTful and asynchronous APIs using FastAPI
2. Build and design data pipelines integrated with Kafka for real-time streaming and message-driven architectures
3. Implement robust backend services with a focus on modularity, testability, and cloud readiness
4. Write unit and integration test cases
5. Proficiency with AWS services such as ECS, EC2, Lambda, CloudWatch, and IAM
6. Solid working knowledge of MongoDB, including data modelling, aggregation pipelines, and performance tuning
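To make items 1 and 2 concrete, here is a minimal sketch, assuming a local Kafka broker, an illustrative "orders" topic, and the aiokafka client, of a FastAPI endpoint that publishes events asynchronously for downstream consumers.

```python
# Sketch: FastAPI endpoint that accepts an order and publishes it to Kafka.
# Assumptions: broker at localhost:9092, hypothetical "orders" topic.
from contextlib import asynccontextmanager
import json

from aiokafka import AIOKafkaProducer
from fastapi import FastAPI
from pydantic import BaseModel

KAFKA_BOOTSTRAP = "localhost:9092"  # assumed local broker
producer = None  # set during application startup


@asynccontextmanager
async def lifespan(app: FastAPI):
    # Start one Kafka producer per process; stop it cleanly on shutdown.
    global producer
    producer = AIOKafkaProducer(bootstrap_servers=KAFKA_BOOTSTRAP)
    await producer.start()
    try:
        yield
    finally:
        await producer.stop()


app = FastAPI(lifespan=lifespan)


class Order(BaseModel):
    order_id: str
    amount: float


@app.post("/orders", status_code=202)
async def create_order(order: Order):
    # Publish the event and return immediately; downstream consumers
    # (e.g., a MongoDB writer) pick it up from the topic.
    await producer.send_and_wait("orders", json.dumps(order.dict()).encode())
    return {"status": "queued", "order_id": order.order_id}
```

Assuming the file is saved as app.py, `uvicorn app:app` starts the service; a separate consumer subscribing to the same topic would handle the MongoDB writes from item 6.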
Posted 4 days ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
About Company: They balance innovation with an open, friendly culture and the backing of a long-established parent company known for its ethical reputation. We guide customers from what’s now to what’s next by unlocking the value of their data and applications to solve their digital challenges, achieving outcomes that benefit both business and society.
About Client: Our client is a global digital solutions and technology consulting company headquartered in Mumbai, India. The company generates annual revenue of over $4.29 billion (₹35,517 crore), reflecting a 4.4% year-over-year growth in USD terms. It has a workforce of around 86,000 professionals operating in more than 40 countries and serves a global client base of over 700 organizations. Our client operates across several major industry sectors, including Banking, Financial Services & Insurance (BFSI), Technology, Media & Telecommunications (TMT), Healthcare & Life Sciences, and Manufacturing & Consumer. In the past year, the company achieved a net profit of $553.4 million (₹4,584.6 crore), marking a 1.4% increase from the previous year. It also recorded a strong order inflow of $5.6 billion, up 15.7% year-over-year, highlighting growing demand across its service lines. Key focus areas include Digital Transformation, Enterprise AI, Data & Analytics, and Product Engineering—reflecting its strategic commitment to driving innovation and value for clients across industries.
Job Title: Java Developer
Location: Pune, Hyderabad
Experience: 6+ years
Job Type: Contract to hire (minimum 1+ year)
Notice Period: Immediate joiners
Job Description:
1. Design, develop, and maintain backend services and microservices using Java Spring Boot.
2. Build scalable RESTful APIs and backend modules that interact with MongoDB using Kafka.
3. Implement event-driven architecture and real-time data processing using Kafka.
4. Implement robust logging, monitoring, and error handling.
5. Proficiency with AWS services such as ECS, EC2, Lambda, CloudWatch, and IAM.
Posted 4 days ago
7.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Title: Cloud Architect with DevOps
Location: Noida (Hybrid)
Job Type: Full-Time | Permanent
Experience: 7+ years
Job Summary: We are seeking an experienced Cloud Architect with strong DevOps expertise to lead and support our cloud transformation journey. The ideal candidate will be responsible for designing scalable and secure cloud architectures, driving cloud migration from traditional ETL tools (e.g., IBM DataStage) to modern cloud-native solutions, and enabling DevOps automation and best practices. The candidate must also have strong hands-on experience with Spark and Snowflake, along with a strong background in optimizing cloud performance and addressing cloud security vulnerabilities.
Key Responsibilities:
Design and implement scalable, secure, and high-performance cloud architectures in AWS, Azure, or GCP.
Lead the cloud migration of ETL workloads from IBM DataStage to cloud-native or Spark-based pipelines.
Architect and maintain Snowflake data warehouse solutions, ensuring high performance and cost optimization.
Implement DevOps best practices, including CI/CD pipelines, infrastructure as code (IaC), monitoring, and logging.
Drive automation and operational efficiency across build, deployment, and environment provisioning processes.
Proactively identify and remediate cloud security vulnerabilities, ensuring compliance with industry best practices.
Collaborate with cross-functional teams, including data engineers, application developers, and cybersecurity teams.
Provide architectural guidance on Spark-based big data processing pipelines in cloud environments.
Support troubleshooting, performance tuning, and optimization across platforms and tools.
Required Qualifications:
7+ years of experience in cloud architecture, DevOps engineering, and data platform modernization.
Strong expertise in AWS, Azure, or GCP cloud platforms.
Proficient in Apache Spark for large-scale data processing.
Hands-on experience with Snowflake architecture, performance tuning, and data governance.
Deep knowledge of DevOps tools: Terraform, Jenkins, Git, Docker, Kubernetes, Ansible, etc.
Experience with cloud migration, especially from legacy ETL tools like IBM DataStage.
Strong scripting and automation skills in Python, Bash, or PowerShell.
Good understanding of networking, cloud security, IAM, VPCs, and compliance standards.
Experience implementing CI/CD pipelines, observability, and incident response in cloud environments.
Preferred Qualifications:
Certification in one or more cloud platforms (e.g., AWS Solutions Architect, Azure Architect).
Experience with data lake and lakehouse architectures.
Familiarity with modern data orchestration tools like Airflow, dbt, or Glue.
Working knowledge of Agile methodologies and DevOps culture.
Familiarity with cost management and optimization in cloud deployments.
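As an illustration of the DataStage-to-Spark migration work described above, here is a minimal PySpark sketch, with hypothetical S3 paths and columns, of a cloud-native replacement for a simple ETL job.

```python
# Sketch: read raw CSV, clean and type the data, write partitioned Parquet.
# Assumptions: hypothetical bucket/paths and a transactions schema with
# "amount" and "txn_ts" columns.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("datastage-migration-sketch").getOrCreate()

raw = (spark.read
       .option("header", "true")
       .csv("s3://example-bucket/raw/transactions/"))  # hypothetical path

cleaned = (raw
           .withColumn("amount", F.col("amount").cast("double"))
           .filter(F.col("amount") > 0)                  # drop invalid rows
           .withColumn("txn_date", F.to_date("txn_ts")))  # derive partition key

(cleaned.write
 .mode("overwrite")
 .partitionBy("txn_date")
 .parquet("s3://example-bucket/curated/transactions/"))  # hypothetical path
```

Writing curated Parquet keeps the pipeline engine-agnostic; the Snowflake load could then use the Snowflake Spark connector or a COPY INTO statement, depending on the architecture chosen.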
Posted 4 days ago
8.0 years
0 Lacs
India
Remote
Voice AI Scheduling (Scale-Ready, Multi-Tenant) — Remote
Company: Apex Dental Systems
Location: Remote (must overlap 7+ hours with 8am-5pm Pacific / America/Vancouver)
Type: Full-Time Engineer with ramp-up into CTO
Compensation: Engineer: $2,000 USD/month + 1% equity. Upon promotion to CTO: $4,000 USD/month + 2% equity
About Us
Apex Dental Systems builds voice AI reception for dental/orthodontic clinics. We connect real phone calls (Retell AI + telephony) to booked appointments via NexHealth and, over time, direct PMS connectors. We’re moving from pilot to scale across 50-100+ clinics with high reliability and tight cost control.
The Mission
Own the scale-ready backend platform: multi-tenant onboarding automation, secure configuration management, rate limits and retries, SLO-backed reliability, cost observability, and compliance (HIPAA/PIPEDA). Your work allows us to onboard dozens of clinics per week with minutes, not days, of setup.
Outcomes You’ll Deliver in the First 4-6 Weeks
Multi-tenant architecture with tenant isolation, role-based access control (RBAC), and per-clinic secrets (env-less runtime or AWS Secrets Manager).
Onboarding automation that reduces per-clinic setup to ≤60 minutes: provider/location/appointment-type sync, ID mapping, test calls, and health checks.
Hardened tool endpoints used by the voice agent (Retell function calling): availability_search, appointment_book, appointment_reschedule, appointment_cancel, patient_find_or_create, note_create, warm_transfer.
Reliability controls: idempotency keys, timeouts, retries with backoff, circuit breakers; graceful fallbacks + warm transfer.
Observability & SLOs: structured logs, metrics, tracing; dashboards for p50/p95 latency, error rates, booking success %, transfers, and cost per minute/call; alerts to Slack.
Security & compliance: PHI minimization, at-rest and in-transit encryption, access logging, a data-retention policy, and BAA-aware configuration.
Cost guardrails: per-tenant budget meters for voice-minute/LLM/TTS usage and anomaly alerts.
KPIs you’ll move:
Median tool-call latency < 800 ms (p95 < 1500 ms)
≥ 80% booking/reschedule success without human handoff (eligible calls)
99.9%+ middleware availability
< 1% tool-level error rate (after retries)
≤ 60 min time-to-onboard a new clinic (target 30 min by week 6)
Responsibilities
Design, implement, and document multi-tenant REST/JSON services consumed by the voice agent.
Integrate NexHealth now; design extension points for direct PMS (OpenDental/Dentrix/Eaglesoft/Dolphin) later.
Build sync jobs to keep providers/locations/appointment types up to date (with caching via Redis, invalidation, and backfills).
Implement idempotent booking flows with conflict detection and safe retries; log every state transition.
Stand up observability (metrics/logs/traces) and alerting; define SLOs/SLAs and on-call basics.
Ship CI/CD with linting, tests (unit, contract, integration), and minimal load tests.
Enforce secrets management, least-privilege IAM, and a clean audit trail.
Partner with our conversation designer to refine tool schemas and edge-case flows (insurance screening, multi-location routing).
Mentor a mid-level engineer and coordinate with ops for smooth rollouts.
Minimum Qualifications
5-8+ years building production backend systems (you’ve owned a system in prod).
Expert in Node.js (TypeScript, Nest/Express) or Python (FastAPI).
Deep experience with external API integrations (auth, pagination, rate limits, webhooks).
Postgres (schema design, migrations) and Redis (caching, locks).
Production reliability patterns: retries/backoff, timeouts, idempotency, circuit breakers.
Observability: metrics, tracing, log correlation; incident triage.
Security/compliance mindset; comfortable handling sensitive data flows.
Strong written English; crisp architectural docs and PRs.
Nice-to-Have
Retell AI (or a similar voice/LLM platform with function calling and barge-in), Twilio/SIP.
NexHealth or other healthcare scheduling APIs; PMS/EHR familiarity.
HIPAA/PIPEDA exposure, SOC 2-style controls.
OpenTelemetry, Prometheus/Grafana, Sentry; AWS/GCP; Terraform; Docker/Kubernetes.
High-volume, low-latency systems experience.
Our Stack (target)
Runtime: Node.js (TypeScript) or Python (FastAPI)
Data: Postgres, Redis
Infra: AWS (ECS/EKS or Fargate), Terraform, GitHub Actions
Integrations: Retell AI (voice), NexHealth (scheduling), Twilio/SIP (telephony)
Observability: OpenTelemetry + Prometheus/Grafana or cloud-provider equivalents
How We Work
Remote-first; async-friendly; 4+ hours overlap with Pacific time.
Code in company repos; NDAs/PIAs/BAAs, DCO/CLA, and strict access hygiene.
We optimize for reliability and patient privacy over quick hacks.
Interview Process (fast, 7-10 days)
Intro (20-30 min): your background and past scale/reliability wins.
Take-home (90 min, paid for finalists): implement availability_search + appointment_book against a stubbed NexHealth-like API. Include idempotency keys, retries with backoff, timeouts, and basic tests. Provide a short runbook and a dashboard sketch for p95 latency and error-rate alerts.
Deep-dive (60 min): review your code; discuss multi-tenant design, secrets, SLOs, and cost control.
Final (30-45 min): collaboration & comms.
How to Apply
Email info@apexdentalsystems.com with the subject “Senior Backend — Scale-Ready Voice AI” and include:
CV + GitHub/portfolio
5-10 lines on a system you made multi-tenant (what changed?)
A time you prevented double bookings or handled idempotency at scale
Your preferred stack (Node+TS or Python), availability, and comp expectations
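For candidates gauging the take-home, here is a minimal Python sketch of the idempotent-booking-with-retries pattern this posting centers on; the endpoint path, Idempotency-Key header, and payload shape are assumptions about a stubbed NexHealth-like API, not the real service.

```python
# Sketch: book an appointment with an idempotency key, timeouts, and
# exponential backoff, escalating to warm transfer after retries fail.
import time
import uuid

import requests

RETRYABLE = {429, 500, 502, 503, 504}


def book_appointment(base_url: str, payload: dict, max_attempts: int = 4) -> dict:
    # One idempotency key per logical booking: retries reuse it, so the
    # server can deduplicate double submissions (no double bookings).
    idempotency_key = str(uuid.uuid4())
    backoff = 0.5
    for attempt in range(1, max_attempts + 1):
        try:
            resp = requests.post(
                f"{base_url}/appointments",  # hypothetical endpoint
                json=payload,
                headers={"Idempotency-Key": idempotency_key},
                timeout=5,  # fail fast so the voice call is not left hanging
            )
        except requests.RequestException:
            resp = None  # network error: treat as retryable
        if resp is not None and resp.status_code not in RETRYABLE:
            resp.raise_for_status()  # surface non-retryable 4xx errors
            return resp.json()
        if attempt == max_attempts:
            raise RuntimeError("booking failed after retries; warm-transfer the call")
        time.sleep(backoff)
        backoff *= 2  # exponential backoff between attempts
```

A production version would add the circuit breaking and conflict detection the responsibilities list calls out, for example re-checking availability when the API reports a slot conflict.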
Posted 4 days ago
0.0 - 10.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
You deserve to do what you love, and love what you do – a career that works as hard for you as you do. At Fiserv, we are more than 40,000 #FiservProud innovators delivering superior value for our clients through leading technology, targeted innovation, and excellence in everything we do. You have choices – if you strive to be a part of a team driven to create with purpose, now is your chance to find your forward with Fiserv.
Requisition ID: R-10367193 | Date posted: 07/30/2025 | End date: 08/11/2025 | City: Chennai | State/Region: Tamil Nadu | Country: India | Additional locations: Bengaluru, Karnataka | Location type: Onsite
Calling all innovators – find your future at Fiserv. We’re Fiserv, a global leader in fintech and payments, and we move money and information in a way that moves the world. We connect financial institutions, corporations, merchants, and consumers to one another millions of times a day – quickly, reliably, and securely. Any time you swipe your credit card, pay through a mobile app, or withdraw money from the bank, we’re involved. If you want to make an impact on a global scale, come make a difference at Fiserv.
Job Title: Tech Lead, Solutions Architecture
What does a successful DevOps Engineer do? A successful DevOps Engineer in our BFSI (Banking, Financial Services, and Insurance) fintech IT organization is responsible for overseeing multiple technical projects and ensuring the delivery of high-quality, scalable, and secure solutions. They play a critical role in shaping the technical strategy, fostering a collaborative team environment, and driving innovation.
Key responsibilities include:
Infrastructure as Code (IaC): Use AWS CloudFormation or Terraform to provision and manage infrastructure consistently.
CI/CD Pipelines: Build automated pipelines using AWS CodePipeline, CodeBuild, and CodeDeploy for seamless integration and delivery.
Monitoring & Logging: Implement observability with AWS CloudWatch, X-Ray, and CloudTrail to track performance and security events.
Security & Compliance: Manage IAM roles, enforce encryption, and integrate AWS Config and Security Hub for governance.
Containerization: Deploy and manage containers using Amazon ECS, EKS, or Fargate.
Serverless Architecture: Leverage AWS Lambda, API Gateway, and DynamoDB for lightweight, scalable solutions.
Cost Optimization: Use AWS Trusted Advisor and Cost Explorer to monitor usage and reduce unnecessary spend.
What you will need to have:
Education: Bachelor's degree in a related field.
Mandatory AWS Solutions Architect Professional certification.
Experience: 8 to 10 years in a DevOps Engineer role, with significant experience in the BFSI or fintech sector and a proven track record in managing teams and leading technical projects.
Proficiency in scripting languages (Python, Bash)
Experience with containerization (Docker, Kubernetes)
Familiarity with Infrastructure as Code (IaC)
Strong understanding of AWS services and architecture
Knowledge of DevOps tools (Git, Jenkins, CodeDeploy)
What would be great to have (tools and services by category):
Automation: CloudFormation, Terraform, Ansible
CI/CD: CodePipeline, Jenkins, GitHub Actions
Monitoring: CloudWatch, X-Ray, ELK Stack, Prometheus
Security: IAM, KMS, AWS WAF, GuardDuty, Security Hub
Containers: Docker, ECS, EKS, Fargate
Serverless: Lambda, API Gateway, Step Functions
Storage & Compute: EC2, S3, RDS, Auto Scaling
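As one concrete slice of the Monitoring & Logging responsibility above, a minimal boto3 sketch follows; the alarm name, load balancer, SNS topic ARN, region, and threshold are all hypothetical.

```python
# Sketch: a CloudWatch alarm on elevated ALB 5XX counts that notifies
# an SNS topic. All names, ARNs, and thresholds below are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")  # assumed region

cloudwatch.put_metric_alarm(
    AlarmName="alb-5xx-spike",  # hypothetical alarm name
    Namespace="AWS/ApplicationELB",
    MetricName="HTTPCode_ELB_5XX_Count",
    Dimensions=[
        {"Name": "LoadBalancer", "Value": "app/example-alb/0123456789abcdef"}
    ],
    Statistic="Sum",
    Period=300,                 # evaluate in 5-minute windows
    EvaluationPeriods=2,        # require two breaching windows before alarming
    Threshold=50,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:ops-alerts"],  # hypothetical
)
```

In practice such alarms would usually be codified in CloudFormation or Terraform alongside the rest of the stack; the imperative boto3 form just keeps the sketch self-contained.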
Thank you for considering employment with Fiserv.
Please:
Apply using your legal name
Complete the step-by-step profile and attach your resume (either is acceptable, both are preferable)
Our commitment to Diversity and Inclusion: Fiserv is proud to be an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, gender, gender identity, sexual orientation, age, disability, protected veteran status, or any other category protected by law.
Note to agencies: Fiserv does not accept resume submissions from agencies outside of existing agreements. Please do not send resumes to Fiserv associates. Fiserv is not responsible for any fees associated with unsolicited resume submissions.
Warning about fake job posts: Please be aware of fraudulent job postings that are not affiliated with Fiserv. Fraudulent job postings may be used by cyber criminals to target your personally identifiable information and/or to steal money or financial information. Any communications from a Fiserv representative will come from a legitimate Fiserv email address.
Posted 4 days ago
5.0 - 9.0 years
0 Lacs
Pune, Maharashtra
On-site
The responsibilities for this role include capturing requirements, analyzing data, and defining processes. You will collaborate with platform and application teams globally to gather Secrets Management requirements. Engaging key stakeholders from areas such as Identity and Access Management, Architecture, Cyber Security, and Global Business will be crucial in defining a target operating model for Secrets. Additionally, you will interact with all regions, including highly regulated countries, to capture specific Secrets Management requirements. Your role will also involve preparing detailed requirements documentation for approval by senior stakeholders.
To excel in this role, you should have experience in requirements gathering within a technical environment, including DevOps, Jenkins, and CI/CD pipelines on both cloud and on-premise infrastructure platforms. A solid technical understanding while working with infrastructure and application teams is essential. Strong skills in data analysis and process mapping are required, along with previous hands-on experience in an IAM or PAM migration project. You must have a proven track record of collaborating with technical, cybersecurity, and operations teams, and possess a good technical understanding of IAM and controls capabilities and requirements. Experience in defining or supporting IAM Control Frameworks is preferred. Strong stakeholder engagement and organizational skills, coupled with excellent communication abilities, are key to success in this position. Being a positive, proactive team player within a large program is vital.
Desirable skills for this role include previous experience in Secrets Management, working with global teams, and familiarity with Agile methodologies, as well as knowledge of JIRA and Confluence.
Posted 4 days ago