
17543 Terraform Jobs - Page 6

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

8.0 years

0 Lacs

Pune/Pimpri-Chinchwad Area

On-site

Job Title: Senior Infrastructure Specialist
Experience: 8+ Years
Skills: Kubernetes, Infrastructure, Containers, Linux, Cloud, Infrastructure as Code (IaC)
Department: IT
Reports To: Tech Lead
Location: Pune (Hybrid)

Role Summary
We are seeking a highly experienced Senior Infrastructure Specialist to lead and manage scalable infrastructure and container-based environments. This role focuses on Kubernetes orchestration, automation, and maintaining secure, reliable, and efficient platform services. You'll play a critical role in evolving infrastructure systems using modern DevOps practices and driving the adoption of containerization and cloud-native technologies across the organization.

Key Responsibilities
Design, automate, and maintain CI/CD pipelines for OS image creation. Independently create, manage, and deploy hypervisor templates. Manage and scale the Vanderlande Container Platform across test, acceptance, and production environments. Administer and optimize 150+ downstream Kubernetes clusters, ensuring performance, reliability, and uptime. Improve solutions related to container platforms, edge computing, and virtualization. Lead the transition from VMware OVAs to a Kubernetes-based virtualization architecture, incorporating insights from Proof of Concept (PoC) initiatives. Focus on platform automation, using Infrastructure as Code to minimize manual tasks. Ensure security hardening and compliance for all infrastructure components. Collaborate closely with development, DevOps, and security teams to drive container adoption and lifecycle management.

Required Qualifications & Skills
8+ years of infrastructure engineering experience. Deep expertise in Kubernetes architecture, deployment, and management. Strong background in Linux systems administration and troubleshooting. Proficiency with cloud platforms (AWS, Azure, GCP). Hands-on experience with Infrastructure as Code tools like Terraform and Ansible. CI/CD development experience (GitLab, Jenkins, ArgoCD, etc.). Familiarity with virtualization technologies (VMware, KVM).

Key Skills (Core)
Kubernetes (cluster management, Helm, operators, upgrades); containerization (Docker, container runtimes, image security); cloud-native infrastructure; Linux system engineering; Infrastructure as Code (IaC); DevOps and automation tools; security and compliance in container platforms.

Soft Skills
Proactive and solution-oriented mindset. Strong communication and cross-functional collaboration. Analytical thinking with the ability to troubleshoot complex issues. Time management and the ability to deliver under pressure.
Preferred Qualifications
CKA / CKAD certification (Kubernetes). Cloud certifications (AWS/Azure/GCP). Experience implementing container security and compliance tools (e.g., Aqua, Prisma, Trivy). Exposure to GitOps tools like ArgoCD or Flux. Monitoring and alerting experience (Prometheus, Grafana, ELK stack).

Key Relationships
Internal: DevOps, platform, and development teams; cloud infrastructure teams; cybersecurity and governance groups. External: technology vendors and third-party platform providers; external consultants and cloud service partners.

Role Dimensions
Ownership of high-scale Kubernetes infrastructure. Strategic modernization of infrastructure environments. Coordination of multi-cluster container platforms.

Success Measures (KPIs)
Uptime and reliability of container platforms. Reduction in manual deployment and provisioning tasks. Successful Kubernetes migration from legacy systems. Cluster performance and security compliance. Team enablement and automation adoption.

Competency Framework Alignment
Kubernetes Mastery: deep expertise in managing and optimizing Kubernetes clusters. Infrastructure Automation: creating reliable, repeatable infrastructure workflows. Containerization Leadership: driving adoption and best practices for container platforms. Strategic Execution: aligning infrastructure strategy with enterprise goals. Collaboration: building bridges between development, operations, and security.

👉 If this sounds like you, or someone you know, DM me or send an email to nivetha.s@eminds.ai
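To make the multi-cluster administration scope above concrete, here is a minimal, hypothetical sketch (not part of the posting) that sweeps every context in a kubeconfig and counts Ready nodes using the official Kubernetes Python client; the context names and the readiness criterion are assumptions.

```python
# Illustrative sketch only: a small health sweep across many Kubernetes clusters
# using the official Python client. Context names and the readiness criterion
# are assumptions, not part of the job description.
from kubernetes import client, config


def ready_nodes(context: str) -> int:
    """Count nodes reporting Ready=True in the cluster behind `context`."""
    api = client.CoreV1Api(api_client=config.new_client_from_config(context=context))
    count = 0
    for node in api.list_node().items:
        for cond in node.status.conditions or []:
            if cond.type == "Ready" and cond.status == "True":
                count += 1
    return count


if __name__ == "__main__":
    contexts, _ = config.list_kube_config_contexts()
    for ctx in contexts:
        name = ctx["name"]
        print(f"{name}: {ready_nodes(name)} ready node(s)")
```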

Posted 1 day ago

Apply

7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

We are seeking a talented and experienced Java Lead to join our dynamic engineering team. As a Java Lead, you will play a critical role in architecting, developing, and maintaining our enterprise-level Java applications while providing technical leadership to the team.

Position Overview
Experience Range: 7-10 years
Location: Chennai

Key Responsibilities
Architect and develop robust, scalable Java applications using modern Java frameworks and technologies. Provide technical leadership and mentorship to development team members. Design and implement high-performance, scalable solutions using Java, the Spring Framework, and cloud technologies. Optimize application architecture for performance, reliability, and scalability. Lead code reviews and ensure adherence to coding standards and best practices. Collaborate with cross-functional teams to define, design, and implement new features. Participate in architecture discussions and technical planning. Troubleshoot and resolve complex technical issues.

Required Qualifications
7-10 years of experience in Java software development. Solid understanding of Java multithreading. Good exposure to ELK usage and ELK APIs. Minimum 5 years of experience with Java 8, with recent experience in Java 11 or above. Strong coding skills with deep Java language fundamentals, including lambda expressions and the Streams API. Excellent knowledge of the Spring Framework ecosystem (not just Spring Boot). Strong foundations in data structures, algorithms, and design patterns. CI/CD pipeline development and optimization. Experience with containerization using Docker. Familiarity with AWS deployment services, including Amazon Elastic Kubernetes Service (EKS) and Terraform.

Preferred Qualifications
Experience with Apache Kafka. Microservices architecture design and implementation. Experience with performance tuning and optimization.

Technical Skills
Programming Languages: Java (v11+). Frameworks: Spring Framework, Spring Boot. Databases: MongoDB, SQL databases. Cloud Services: AWS, EKS. Tools: Docker, Terraform, Git. Plus: Kafka experience.

If you're passionate about Java development, have an architect's mindset, and are ready to lead a team of talented engineers, we want to hear from you!

Posted 1 day ago

Apply

7.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Our technology services client is seeking multiple Cloud Network Security Engineers to join their team on a contract basis. These positions offer strong potential for conversion to full-time employment upon completion of the initial contract period. Further details about the role are below.

Role: Cloud Network Security Engineer
Experience: 7+ Years
Location: Bengaluru
Notice Period: Immediate to 15 Days
Mandatory Skills: Cloud Security, Networking, AWS, GCP, Data Center, Microservices, Terraform, Containers

Job Description
The Information Security Engineer will be responsible for automating the delivery of network security for public cloud initiatives globally. This is an integral role for network security engineering and delivery in the public cloud, including automation and scalability.

Job Responsibilities
Engage with multiple cloud and networking stakeholders to understand the requirements of a complex enterprise cloud environment. Provide cloud and network security expertise and guidance to cloud programs, including the Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Cloud Application Architecture subprograms. Collaborate with enterprise architects and SMEs to deliver complete security architecture solutions. Lead cloud network security initiatives with design patterns, and develop and deliver scalable, secure Terraform modules. Look for opportunities to automate network security configurations and implementations. Monitor and optimize the patterns and modules.

Minimum Qualifications
7 years of overall experience in data center, cloud, and networking. 5 years of hands-on experience with AWS and GCP. 3 years of experience with containers, Kubernetes, and microservices. 3 years of experience with Terraform. 3 years of experience with advanced networking in the public cloud. Understanding of classical or cloud-native design patterns is required. Knowledge of security configuration management, container security, endpoint security, and secrets management as they apply to cloud applications. Knowledge of network architecture, proxy infrastructure, and programs to support network access and enablement. Experience with multiple information security domains such as infrastructure vulnerability, data loss prevention, end-user security, network security, internet security, identity and access management, etc.

Preferred Qualifications
Bachelor's degree in computer science, computer engineering, or a related field, or equivalent experience. Terraform certification preferred. Cloud engineering or security certification preferred: AWS Certified Solutions Architect Professional, AWS Certified Advanced Networking Specialty, AWS Certified Security, GCP Cloud Architect, GCP Network Engineer, GCP Cloud Security Engineer, or similar.

If you are interested, share your updated resume with madhuri.p@s3staff.com
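As a rough illustration of the kind of automated network-security check this role describes, the hedged boto3 sketch below flags AWS security groups that allow unrestricted ingress; the region and the 0.0.0.0/0 criterion are assumptions, not requirements from the posting.

```python
# Illustrative sketch only (not part of the posting): flag AWS security groups
# that allow unrestricted ingress, the kind of check that could feed an
# automated network-security pipeline. Region and rule criterion are assumptions.
import boto3


def open_ingress_groups(region: str = "ap-south-1"):
    ec2 = boto3.client("ec2", region_name=region)
    findings = []
    paginator = ec2.get_paginator("describe_security_groups")
    for page in paginator.paginate():
        for sg in page["SecurityGroups"]:
            for perm in sg.get("IpPermissions", []):
                for ip_range in perm.get("IpRanges", []):
                    if ip_range.get("CidrIp") == "0.0.0.0/0":
                        findings.append((sg["GroupId"], perm.get("FromPort")))
    return findings


if __name__ == "__main__":
    for group_id, port in open_ingress_groups():
        print(f"{group_id} allows 0.0.0.0/0 on port {port}")
```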

Posted 1 day ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Our client is a global technology company headquartered in Santa Clara, California. It focuses on helping organisations harness the power of data to drive digital transformation, enhance operational efficiency, and achieve sustainability. The company combines over 100 years of experience in operational technology (OT) and more than 60 years in IT to unlock the power of data from your business, your people and your machines. We help enterprises store, enrich, activate and monetise their data to improve their customers' experiences, develop new revenue streams and lower their business costs. Over 80% of the Fortune 100 trust our client for data solutions. The company's consolidated revenues for fiscal 2024 (ended March 31, 2024) were approximately $57.5 billion USD, and it has approximately 296,000 employees worldwide. It delivers digital solutions utilising Lumada in five sectors (Mobility, Smart Life, Industry, Energy and IT) to increase customers' social, environmental and economic value.

Job Title: Kafka Developer
Location: All Locations
Experience: 10+ years
Job Type: Contract to hire
Notice Period: Immediate joiners
Mandatory Skills: Kafka Developer (Event Streaming), Apache Kafka, Kafka Connect, Kafka Streams

Job Description
Experience: 10+ years. Deep understanding of Apache Kafka and the surrounding ecosystem (schema registries, Kafka Connect, Kafka Streams, Kafka client libraries, Spark Structured Streaming). Independently resolves issues when deploying and setting up any sort of infrastructure (such as cloud services) or applications (e.g., a Spring Boot application on App Service, or a Python Azure Function). Deep understanding of Kubernetes and Docker. Deep knowledge of public clouds, especially Azure, with a focus on services such as Azure Functions, Azure Logic Apps, Azure App Service, Azure Kubernetes Service, OpenShift (Kubernetes in general), Azure Databricks, Azure Stream Analytics, Azure Event Hubs, Azure Service Bus, Azure Event Grid, and Azure Data Lake Gen2/Azure Blob Storage. Deep understanding of private networking and network topologies used in enterprises (e.g., private endpoints, VNets/VPCs, Private Link, DNS, firewalls, hub-spoke topologies, etc.). Deep understanding of software design patterns and knowledge of popular programming languages (Java, C#, JavaScript, Python), with in-depth knowledge of at least one. Deep understanding of Infrastructure-as-Code concepts, with in-depth knowledge of at least one popular scripting tool and language (Terraform, Helm, Bash) and CI/CD framework (GitHub Actions, Azure Pipelines, etc.). Experience debugging software applications and setting up logging, observability, and monitoring. Good understanding of developer portals such as Backstage, API catalogs with OpenAPI and AsyncAPI specifications, templating (Jinja), API gateways (Azure API Management, IBM API Connect), and WebSockets. Experience building event-driven microservices and REST APIs.
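For context on the Kafka client-library skills listed above, a minimal consumer loop with confluent-kafka might look like the following sketch; the broker address, topic name, and group id are placeholders.

```python
# Illustrative sketch only: a minimal Kafka consumer loop with confluent-kafka.
# Broker address, topic name, and group id are placeholders/assumptions.
from confluent_kafka import Consumer, KafkaError

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "demo-consumer",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["events"])

try:
    while True:
        msg = consumer.poll(timeout=1.0)   # block up to 1s for the next record
        if msg is None:
            continue
        if msg.error():
            if msg.error().code() != KafkaError._PARTITION_EOF:
                print(f"consumer error: {msg.error()}")
            continue
        print(f"{msg.topic()}[{msg.partition()}]@{msg.offset()}: {msg.value()!r}")
finally:
    consumer.close()
```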

Posted 1 day ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Title: Site Reliability Engineer
Location: Hyderabad
Experience: 5+ years
Employment Type: Full-Time

About the Role
We are building scalable, reliable, and high-performance cloud-native applications on Microsoft Azure. We are looking for a Site Reliability Engineer (SRE) with expertise in OpenTelemetry (OTEL) and Azure to drive observability, reliability, and operational excellence across our infrastructure. As an SRE, you will design, implement, and maintain our observability stack using OpenTelemetry standards; ensure the availability, performance, and scalability of production systems; and collaborate with development teams to embed reliability practices, automate operational tasks, and respond to incidents quickly and effectively.

Key Responsibilities
1. Observability & OpenTelemetry (OTEL): Build and manage an observability platform with OpenTelemetry for distributed tracing, metrics, and logs. Instrument applications (Java, Python, Node.js) for end-to-end telemetry. Configure OTEL Collectors to export telemetry to Prometheus, Grafana, Jaeger, Loki, Tempo, Azure Monitor, and Application Insights. Develop custom instrumentation and semantic conventions. Establish robust alerting and anomaly detection using Azure Monitor, Prometheus Alertmanager, etc. Create dashboards (Grafana, Azure Dashboards) for real-time insights. Continuously enhance observability by adopting best practices and new OTEL features.
2. Azure SRE Responsibilities: Reliability & Performance: monitor systems, identify bottlenecks, and implement scaling and optimization strategies. Incident Response: participate in on-call rotations, lead incident resolution, conduct RCA, and maintain runbooks/playbooks. Automation & IaC: automate infrastructure and operational tasks using Azure DevOps, Terraform, Azure Bicep, PowerShell, or Bash. CI/CD Integration: embed reliability and observability checks into CI/CD pipelines. Capacity Planning: analyze usage patterns, plan for scalability, and optimize Azure resource costs. Security & Compliance: apply security best practices and ensure compliance. Collaboration: mentor development teams on SRE and observability practices.

Required Skills & Experience
5+ years in SRE, DevOps, or a related infrastructure role. Strong expertise with OpenTelemetry (instrumentation, collection, processing). Hands-on with Azure cloud services (Monitor, Log Analytics, Application Insights). Proficient with Infrastructure as Code (Terraform, Azure Bicep, ARM). Skilled in scripting/automation (Python, PowerShell, Bash). Experience with Docker and Kubernetes/AKS. Familiarity with observability backends: Grafana, Loki, Tempo, Prometheus, Jaeger. Deep understanding of distributed systems and microservices. Excellent problem-solving, analytical, and communication skills.

Preferred Qualifications
Azure certifications (AZ-104, AZ-400). Experience with chaos engineering. Knowledge of SLOs, SLIs, and error budgets. Familiarity with database monitoring (PostgreSQL, Azure SQL). Experience in high-availability or regulated environments.

Education
Bachelor's degree in Computer Science, IT, or a related technical field (or equivalent practical experience).
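To illustrate the OTEL instrumentation work described above, here is a hedged Python sketch that emits spans over OTLP to a local collector; the endpoint, service name, and span attributes are assumptions. The collector, not the application, then decides whether those spans land in Jaeger, Tempo, Azure Monitor, or another backend.

```python
# Illustrative sketch only: manual OpenTelemetry tracing in Python, exporting
# spans over OTLP to a local collector. Endpoint and span/attribute names are
# assumptions.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

provider = TracerProvider(resource=Resource.create({"service.name": "checkout-api"}))
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4317", insecure=True))
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)


def place_order(order_id: str) -> None:
    # Each call produces one span; the OTEL Collector routes it onward.
    with tracer.start_as_current_span("place_order") as span:
        span.set_attribute("order.id", order_id)
        # ... business logic ...


if __name__ == "__main__":
    place_order("ORD-1001")
```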

Posted 1 day ago

Apply

6.0 years

0 Lacs

India

Remote

Job Description
As a Senior Databricks Data Engineer, your responsibilities include: technical requirements gathering and development of functional specifications; designing, developing, and maintaining scalable data pipelines and ETL processes using Azure Databricks, Data Factory, and other Azure services; implementing and optimizing Spark jobs, data transformations, and data processing workflows in Databricks; developing and integrating custom machine learning models using Azure Machine Learning, MLflow, and other relevant libraries; leveraging Azure DevOps and CI/CD best practices to automate the deployment and management of data pipelines and infrastructure; troubleshooting data models; and working with Agile multicultural teams in Asia, the EU, Canada, and the USA.

Profile Requirements
For this position of Azure Databricks Data Engineer, we are looking for someone with: (Required) at least 6 years of experience developing and maintaining data pipelines using Azure Databricks, Azure Data Factory, and Spark; (Required) hands-on experience with Unity Catalog; (Required) fluent English communication and soft skills; (Required) knowledge of and experience with CI/CD tooling such as Terraform, ARM, and Bicep scripts; (Required) solid technical skills in Python and SQL; (Required) familiarity with machine learning concepts, tools, and libraries (e.g., TensorFlow, PyTorch, Scikit-learn, MLflow); (Required) strong problem-solving, communication, and analytical skills; and willingness to learn and expand technical skills in other areas.

Adastra Culture Manifesto
Servant Leadership: Managers are servants to employees. Managers are elected to make sure that employees have all the processes, resources, and information they need to provide services to clients in an efficient manner. Any manager, up to the CEO, is visible and reachable for a chat regardless of their title. Decisions are taken by consent in an agile manner and executed efficiently with no overdue time. We accept that wrong decisions happen, and we appreciate the learning before we adjust the process for continuous improvement. Employees serve clients. Employees listen attentively to client needs and collaborate internally as a team to cater to them. Managers and employees work together to get things done and are accountable to each other. Corporate KPIs are transparently reviewed at monthly company events with all employees.

Performance-Driven Compensation: We recognize and accept that some of us are more ambitious, more gifted, or more hard-working. We also recognize that some of us look for a stable income and less hassle at a different stage of their careers. There is a place for everyone; we embrace and need this diversity. Grades in our company are not based on the number of years of experience; they are value-driven, based on everyone's ability to deliver their work to clients independently and/or lead others. There is no anniversary/annual bonus; we distribute bonuses on a monthly recurring basis as instant gratification for performance, and this bonus is practically unlimited. There is no annual indexation of salaries; you may be upgraded several times within the year, or not at all, based on your own pace of progress, ambitions, relevant skill set and recognition by clients.

Work-Life Integration: We challenge the notion of work-life balance; we embrace the notion of work-life integration instead. This philosophy looks at our lives as a single whole in which we serve ourselves, our families and our clients in an integrated manner.
We encourage 100% flexible working hours where you arrange your own day. This means you are free when you have little work, but it also means extra effort if you are behind schedule. Working on a Western project also means nobody bothers you during the day, but you may have to jump on a scrum call in the evening to talk to your team overseas. We value time and minimize time spent on Adastra meetings. We are also a remote-first company. While we have our collaboration offices and social events, we encourage people to work 100% remote from home whenever possible. This means saving time and money on commuting, staying home with the elderly and little ones, and not missing the special moments in life. It also means you can work from any of our other offices in Europe, North America or Australia, or move to a place with a lower cost of living without impacting your income. We trust you by default until you fail our trust.

Global Diversity: Adastra Thailand is an international organization. We hire globally, and our biggest partners and clients are in Europe, North America and Australia. We work on teams with individuals from different cultures, ethnicities, sexual preferences, political views and religions. We have zero tolerance for anyone who doesn't pay respect to others or is abusive in any way. We speak different languages to one another, but we speak English when we are together or with clients. Our company is a safe space where communication is encouraged but boundaries regarding sensitive topics are respected. We accept and converge together to serve our teams and clients and ultimately have a good time at work.

Lifelong Learning: On average, we invest 25% of our working hours in personal development and upskilling outside project work, regardless of seniority or role. We feature more than 400 courses in our Training Repo, and we continue to actively purchase or tailor hands-on content. We certify people at our expense. We like to say we are technology agnostic; we learn the principles of data management and apply them to different use cases and technology stacks. We believe that the juniors today are the seniors tomorrow; we treat everyone with respect and mentor them into the roles they deserve. We encourage seniors to give back to the IT community through leadership and mentorship. On your last day with us we may give you an open-dated job offer so that you feel welcome to return home as others did before you.

More About Adastra: Visit Adastra (adastracorp.com) or contact us at HRIN@adastragrp.com
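As a rough sketch of the pipeline work in this posting, the PySpark snippet below moves data from a raw zone into a curated Delta table; the paths, schema, and table names are assumptions, and `spark` is the SparkSession that Databricks provides in a notebook or job.

```python
# Illustrative sketch only: a small raw-to-curated step of the kind of Databricks
# pipeline described above. Paths, table names, and schema are assumptions;
# `spark` is the SparkSession Databricks provides in a notebook/job.
from pyspark.sql import functions as F

raw = (
    spark.read.format("json")
    .load("abfss://raw@storageaccount.dfs.core.windows.net/orders/")
)

curated = (
    raw.dropDuplicates(["order_id"])
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("ingest_date", F.current_date())
)

(
    curated.write.format("delta")
    .mode("append")
    .partitionBy("ingest_date")
    .saveAsTable("curated.orders")   # a Unity Catalog / metastore table
)
```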

Posted 1 day ago

Apply

6.0 years

0 Lacs

India

On-site

🚀 We're Hiring: Data Engineer – Microsoft Fabric | Azure | Terraform
📍 Location: Dubai
🧠 Experience: 6+ Years
📦 Benefits: Visa Sponsorship | Medical Insurance | Airfare | 15 Days Accommodation

Are you a Data Engineer ready to take your expertise global? Join Xebia and be part of a high-impact team building robust, scalable data solutions on Microsoft Fabric and Azure platforms.

🔧 Key Responsibilities:
Collaborate with stakeholders to convert business needs into scalable technical designs. Build end-to-end data pipelines using Microsoft Fabric. Design and optimize data lake zones (raw, curated, consumption). Create business logic using PySpark, T-SQL, and stored procedures. Implement messaging via Azure Service Bus. Seamlessly integrate cloud with on-prem systems. Automate infrastructure using Terraform (IaC). Optimize performance and cost on Azure architecture. Implement monitoring and logging with Azure Monitor, Log Analytics, and App Insights. Create and maintain detailed architecture documentation.

✅ Requirements:
Minimum 6 years of data engineering experience. Strong experience with Microsoft Fabric, Azure, and Terraform. Knowledge of Azure networking, compute, storage, and security. Experience working in enterprise-level environments. A valid passport is mandatory. Should be able to join within 2 weeks.

📧 How to Apply:
Send your updated CV to vijay.s@xebia.com with the following details: full name, total experience, current CTC, expected CTC, current location, preferred Xebia location (Chennai / Jaipur / Dubai), notice period / last working day (if serving), primary skills, LinkedIn profile, and passport number (mandatory).

🚨 Note: Only candidates who can join within 2 weeks will be considered. Applications without passport details will be rejected.

Let's build the future of data, together.
#Xebia #Hiring #DataEngineer #MicrosoftFabric #AzureJobs #Terraform #DubaiJobs #CloudEngineering #FastJoinersOnly #ImmediateJoiners #InfraAsCode
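A hedged example of the Azure Service Bus messaging mentioned above: sending one JSON message to a queue with the azure-servicebus SDK; the connection-string environment variable and queue name are assumptions.

```python
# Illustrative sketch only: sending a message to Azure Service Bus with the
# azure-servicebus SDK. The connection-string env var and queue name are
# assumptions.
import json
import os

from azure.servicebus import ServiceBusClient, ServiceBusMessage

conn_str = os.environ["SERVICEBUS_CONNECTION_STRING"]

with ServiceBusClient.from_connection_string(conn_str) as client:
    with client.get_queue_sender(queue_name="ingest-events") as sender:
        payload = {"source": "fabric-pipeline", "status": "completed"}
        sender.send_messages(ServiceBusMessage(json.dumps(payload)))
```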

Posted 1 day ago

Apply

7.0 years

0 Lacs

Kochi, Kerala, India

On-site

🚀 We're Hiring: Lead DevOps Engineer
📍 Location: Trivandrum / Kochi
🕒 Notice Period: Immediate Joiners Only
💰 CTC: Up to ₹19 LPA
📅 Experience: 7+ years (min. 5 years in relevant DevOps roles)

Are you a hands-on DevOps expert ready to take the lead on cloud infrastructure and automation? We're looking for a Lead DevOps Engineer to drive DevOps strategy, infrastructure management, and CI/CD automation in a high-performance engineering environment.

🔧 Key Responsibilities
Design and manage scalable, secure, and high-availability AWS infrastructure. Build and maintain robust CI/CD pipelines (GitHub Actions, Jenkins, GitLab CI/CD). Implement Infrastructure as Code (IaC) using Terraform and GitHub workflows. Manage and orchestrate containerized environments using Docker and Kubernetes. Automate configuration management using tools like Ansible, Puppet, or Chef. Proactively monitor cloud performance and resolve issues using tools like CloudWatch, Grafana, Prometheus, the ELK Stack, etc. Ensure DevSecOps best practices across cloud and application environments. Collaborate with cross-functional teams to troubleshoot, improve reliability, and optimize processes.

📌 What We're Looking For
7+ years of total experience with a minimum of 5 years in core DevOps roles. Deep hands-on experience with AWS (must-have); Azure or GCP is a plus. Expertise in CI/CD pipeline design, containerization (Docker, K8s), and IaC. Strong understanding of DevSecOps tools like Artifactory, SonarQube, Snyk, Black Duck, etc. Experience with monitoring tools (Nagios, CloudTrail, Kibana, etc.). Knowledge of API security, container security, and AWS cloud security. Strong grasp of Agile/SDLC processes and version control (Git/GitHub workflows).

🎓 Qualifications
Bachelor's degree in Computer Science / IT or equivalent. Relevant certifications in AWS / DevOps are an added advantage.

✅ Why Join Us?
Opportunity to lead critical DevOps initiatives in a fast-paced environment. Work with cutting-edge cloud-native and DevSecOps tools. Collaborative culture with room for innovation and continuous learning.
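To ground the monitoring responsibilities above, here is an illustrative boto3 sketch that creates a CloudWatch CPU alarm; the alarm name, threshold, Auto Scaling group, and SNS topic ARN are assumptions.

```python
# Illustrative sketch only: creating a CloudWatch CPU alarm with boto3, the
# sort of monitoring automation referenced above. Names, thresholds, and the
# SNS topic ARN are assumptions.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

cloudwatch.put_metric_alarm(
    AlarmName="web-asg-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-asg"}],
    Statistic="Average",
    Period=300,                 # 5-minute datapoints
    EvaluationPeriods=2,        # alarm after 2 consecutive breaches
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:ops-alerts"],
)
```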

Posted 1 day ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

We are looking to hire a Mobile Developer with the following skills: 5+ years of hands-on experience with React Native; familiarity with AWS, Azure, and GCP infrastructure services; experience with GitLab, Jira, and related DevOps tools; a strong understanding of JavaScript; hands-on experience working within a team on large infrastructure; experience with Node.js and Java is preferable; contributions to open-source Terraform modules; and familiarity with monitoring tools and observability platforms.

Posted 1 day ago

Apply

6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Title: DevOps Engineer
Location: Gurugram (On-Site)
Employment Type: Full-Time
Experience: 6+ years
Qualification: B.Tech CSE

About the Role
We are seeking a highly skilled DevOps/MLOps Expert to join our rapidly growing AI-based startup building and deploying cutting-edge enterprise AI/ML solutions. This is a critical role that will shape our infrastructure and deployment pipelines and scale our ML operations to serve large-scale enterprise clients. As our DevOps/MLOps Expert, you will be responsible for bridging the gap between our AI/ML development teams and production systems, ensuring seamless deployment, monitoring, and scaling of our ML-powered enterprise applications. You'll work at the intersection of DevOps, Machine Learning, and Data Engineering in a fast-paced startup environment with enterprise-grade requirements.

Key Responsibilities

MLOps & Model Deployment
• Design, implement, and maintain end-to-end ML pipelines from model development to production deployment
• Build automated CI/CD pipelines specifically for ML models using tools like MLflow, Kubeflow, and custom solutions
• Implement model versioning, experiment tracking, and model registry systems
• Monitor model performance, detect drift, and implement automated retraining pipelines
• Manage feature stores and data pipelines for real-time and batch inference
• Build scalable ML infrastructure for high-volume data processing and analytics

Enterprise Cloud Infrastructure & DevOps
• Architect and manage cloud-native infrastructure with a focus on scalability, security, and compliance
• Implement Infrastructure as Code (IaC) using Terraform, CloudFormation, or Pulumi
• Design and maintain Kubernetes clusters for containerized ML workloads
• Build and optimize Docker containers for ML applications and microservices
• Implement comprehensive monitoring, logging, and alerting systems
• Manage secrets, security, and enterprise compliance requirements

Data Engineering & Real-time Processing
• Build and maintain large-scale data pipelines using Apache Airflow, Prefect, or similar tools
• Implement real-time data processing and streaming architectures
• Design data storage solutions for structured and unstructured data at scale
• Implement data validation, quality checks, and lineage tracking
• Manage data security, privacy, and enterprise compliance requirements
• Optimize data processing for performance and cost efficiency

Enterprise Platform Operations
• Ensure high availability (99.9%+) and performance of enterprise-grade platforms
• Implement auto-scaling solutions for variable ML workloads
• Manage multi-tenant architecture and data isolation
• Optimize resource utilization and cost management across environments
• Implement disaster recovery and backup strategies
• Build 24x7 monitoring and alerting systems for mission-critical applications

Required Qualifications

Experience & Education
• 4-8 years of experience in DevOps/MLOps with at least 2+ years focused on enterprise ML systems
• Bachelor's/Master's degree in Computer Science, Engineering, or a related technical field
• Proven experience with enterprise-grade platforms or large-scale SaaS applications
• Experience with high-compliance environments and enterprise security requirements
• Strong background in data-intensive applications and real-time processing systems

Technical Skills

Core MLOps Technologies
• ML Frameworks: TensorFlow, PyTorch, Scikit-learn, Keras, XGBoost
• MLOps Tools: MLflow, Kubeflow, Metaflow, DVC, Weights & Biases
• Model Serving: TensorFlow Serving, PyTorch TorchServe, Seldon Core, KFServing
• Experiment Tracking: MLflow, Neptune.ai, Weights & Biases, Comet

DevOps & Cloud Technologies
• Cloud Platforms: AWS, Azure, or GCP with relevant certifications
• Containerization: Docker, Kubernetes (CKA/CKAD preferred)
• CI/CD: Jenkins, GitLab CI, GitHub Actions, CircleCI
• IaC: Terraform, CloudFormation, Pulumi, Ansible
• Monitoring: Prometheus, Grafana, ELK Stack, Datadog, New Relic

Programming & Scripting
• Python (advanced) - primary language for ML operations and automation
• Bash/Shell scripting for automation and system administration
• YAML/JSON for configuration management and APIs
• SQL for data operations and analytics
• Basic understanding of Go or Java (advantage)

Data Technologies
• Data Pipeline Tools: Apache Airflow, Prefect, Dagster, Apache NiFi
• Streaming & Real-time: Apache Kafka, Apache Spark, Apache Flink, Redis
• Databases: PostgreSQL, MongoDB, Elasticsearch, ClickHouse
• Data Warehousing: Snowflake, BigQuery, Redshift, Databricks
• Data Versioning: DVC, LakeFS, Pachyderm

Preferred Qualifications

Advanced Technical Skills
• Enterprise Security: Experience with enterprise security frameworks and compliance (SOC 2, ISO 27001)
• High-scale Processing: Experience with petabyte-scale data processing and real-time analytics
• Performance Optimization: Advanced system optimization, distributed computing, caching strategies
• API Development: REST/GraphQL APIs, microservices architecture, API gateways

Enterprise & Domain Experience
• Previous experience with enterprise clients or B2B SaaS platforms
• Experience with compliance-heavy industries (finance, healthcare, government)
• Understanding of data privacy regulations (GDPR, SOX, HIPAA)
• Experience with multi-tenant enterprise architectures

Leadership & Collaboration
• Experience mentoring junior engineers and technical team leadership
• Strong collaboration with data science teams, product managers, and enterprise clients
• Experience with agile methodologies and enterprise project management
• Understanding of business metrics, SLAs, and enterprise ROI

Growth Opportunities
• Career Path: Clear progression to Lead DevOps Engineer or Head of Infrastructure
• Technical Growth: Work with cutting-edge enterprise AI/ML technologies
• Leadership: Opportunity to build and lead the DevOps/Infrastructure team
• Industry Exposure: Work with government and MNC enterprise clients and cutting-edge technology stacks

Success Metrics & KPIs

Technical KPIs
• System Uptime: Maintain 99.9%+ availability for enterprise clients
• Deployment Frequency: Enable daily deployments with zero downtime
• Performance: Ensure optimal response times and system performance
• Cost Optimization: Achieve 20-30% annual infrastructure cost reduction
• Security: Zero security incidents and full compliance adherence

Business Impact
• Time to Market: Reduce deployment cycles and improve development velocity
• Client Satisfaction: Maintain 95%+ enterprise client satisfaction scores
• Team Productivity: Improve engineering team efficiency by 40%+
• Scalability: Support rapid client-base growth without infrastructure constraints

Why Join Us
Be part of a forward-thinking, innovation-driven company with a strong engineering culture. Influence high-impact architectural decisions that shape mission-critical systems. Work with cutting-edge technologies and a passionate team of professionals. Competitive compensation, flexible working environment, and continuous learning opportunities.
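As a small, hypothetical illustration of the experiment-tracking portion of this stack, the MLflow sketch below trains a toy scikit-learn model and logs its parameters, metric, and model artifact; the experiment name and dataset are assumptions.

```python
# Illustrative sketch only: experiment tracking with MLflow, one of the MLOps
# tools the role lists. Experiment name and the toy dataset are assumptions.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("churn-baseline")

with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=200, random_state=42)
    model.fit(X_train, y_train)

    acc = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("accuracy", acc)

    # Logging the model as an artifact; with a registry-backed tracking server,
    # adding registered_model_name=... would also register it for serving.
    mlflow.sklearn.log_model(model, artifact_path="model")
```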
How to Apply Please submit your resume and a cover letter outlining your relevant experience and how you can contribute to Aaizel Tech Labs’ success. Send your application to hr@aaizeltech.com , bhavik@aaizeltech.com or anju@aaizeltech.com.

Posted 1 day ago

Apply

2.0 - 5.0 years

0 Lacs

Bengaluru, Karnataka, India

Remote

Cyderes (Cyber Defense and Response) is a pure-play, full life-cycle cybersecurity services provider with award-winning managed security services, identity and access management, and professional services designed to manage the cybersecurity risks of enterprise clients. We specialize in multi-technology, complex environments with the speed and agility needed to tackle the most advanced cyber threats. We leverage our global scale and decades of experience to accelerate our clients' cyber outcomes through a full lifecycle of cybersecurity services. We are a global company with operating centers in the United States, Canada, the United Kingdom, and India.

About the Job:
A managed service CyberArk engineer plays a key role in the daily operations of the system, ensuring it is running efficiently and that requests received via ticketing systems are completed within SLAs. The candidate requires hands-on experience with the CyberArk Privileged Access Solution and managing its "business as usual" tasks. The role frequently involves investigating and resolving technical problems, so demonstrated troubleshooting skills are required, along with effective organization skills, the ability to multi-task, and efficient time-management skills.

Responsibilities:
Responsible for implementing part or all of the technical solution for the client, in accordance with an agreed technical design. Occasionally responsible for providing a detailed technical design for enterprise solutions. Understands a broad spectrum of Privileged Access Management technology in order to provide part or all of a detailed technical design that meets customer requirements. Develop maintainable, scalable, and secure source code that meets business requirements and team standards. Able to communicate and present complex issues with assurance and confidence. Demonstrates the use of consulting skills including questioning, listening, ideas development, permission and rapport, and influencing. Able to discuss (within own area of expertise) requirements with a customer, and to challenge and clarify when appropriate. From the requirements, able to develop a high-level design or plan, and then estimate the amount of effort required to deliver. Able to advise the engagement owner about the risks associated with this work package.

Requirements:
Minimum 2 to 5 years of administration experience working on large, complex CyberArk environments. Experience working with the PCloud environment and standalone/high-availability cluster environments for the CyberArk Core PAS modules. Experience onboarding different platform accounts such as Windows, Unix, databases (Oracle, Sybase, MSSQL, MySQL), web applications (AWS/Azure), network/security devices, etc. Knowledge of integrating the CyberArk solution with HSM, LDAP, SIEM, SNMP, ticketing systems, multi-factor authentication, etc. Knowledge of custom PSM connectors/CPM plugins (with AutoIT/shell scripting) and good knowledge of auto-detection configuration and usage of discovery scanning tools. Experience with AAM (CP and CCP). Knowledge of upgrading CyberArk versions and managing patch/upgrade/security fix strategy. Knowledge of DR drill activities, backup, reporting, etc. Knowledge of Vault OS/infra patching and connector management. Perform health-check monitoring on all CyberArk servers to ensure consistent availability of the system to end users. Experience/knowledge troubleshooting CyberArk Core PAS (Vault, PVWA, CPM, PSM, PSMP), AAM, HTML5 gateway and Remote Access (Alero). In-depth knowledge of ITIL processes such as Incident Management, Problem Management, Configuration Management and Change Management. Advanced troubleshooting skills, the ability to identify the severity of an issue, resolve issues quickly to account/customer satisfaction, and conduct RCA. Documentation of technical configuration. Provide operational support on a 24x7/8x5 rotation basis. Provides production support and participates in an on-call rotation. CyberArk Defender/Sentry certification is mandatory; CDE-PAM/CDE-CPC is an add-on.

Add-on (Key Values):
Knowledge/experience of CyberArk EPM and WPM. Knowledge of Remote Access (Alero), HTML5GW, Identity, Conjur, etc. Knowledge of integrating Conjur with various DevOps tools like Jenkins, Ansible, Kubernetes, OpenShift, GitLab, and Terraform.

Cyderes is an Equal Opportunity Employer (EOE). Qualified applicants are considered for employment without regard to race, religion, color, sex, age, disability, sexual orientation, genetic information, national origin, or veteran status. Note: This job posting is intended for direct applicants only. We request that outside recruiters do not contact us regarding this position.
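For orientation only, here is a hedged sketch of pulling a credential from CyberArk's Central Credential Provider (CCP) over REST with the `requests` library; the host, AppID, safe, object, and certificate paths are assumptions, and the exact endpoint and parameters should be verified against your CyberArk version's documentation.

```python
# Illustrative sketch only: retrieving a credential from CyberArk's Central
# Credential Provider (CCP) REST endpoint. Host, AppID, safe, object name, and
# certificate paths are assumptions; verify the API against your CyberArk docs.
import requests

CCP_HOST = "https://ccp.example.com"


def fetch_password(app_id: str, safe: str, object_name: str) -> str:
    response = requests.get(
        f"{CCP_HOST}/AIMWebService/api/Accounts",
        params={"AppID": app_id, "Safe": safe, "Object": object_name},
        cert=("/etc/pki/client.crt", "/etc/pki/client.key"),  # client-cert auth
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["Content"]


if __name__ == "__main__":
    secret = fetch_password("ops-automation", "LinuxRoot", "root-prod-db01")
    print("retrieved a credential of length", len(secret))
```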

Posted 1 day ago

Apply

0.0 - 12.0 years

0 Lacs

Gurugram, Haryana

On-site

About the Role: Grade Level (for internal use): 11

The Team: As a Performance Test Engineer, you'll be an integral part of the EDM Performance Testing Team. You will collaborate closely with product managers, developers, and fellow engineers to ensure the performance integrity of the system. We foster an open, inclusive environment where all perspectives are valued. Our team is focused on driving innovation, leveraging cutting-edge AI technologies, and maximizing engineering efficiency. We prioritize clean architecture, real-time performance, and data quality.

What's in it for you: This is the place to utilize your existing Performance Testing/Engineering skills while being exposed to the latest cutting-edge technologies available in the market. You will have opportunities to provide quality (performance) gateways to build a next-generation product that consumers can rely on for their business decisions.

Core Technical Qualifications: Expertise in creating, enhancing (handling dynamic data and inputs), and executing scripts in JMeter or Gatling. Expertise in performance testing of REST APIs, microservices and containerized applications, with test data creation methodologies. Leverage IaC tools like Terraform, CloudFormation, or Ansible for test environment provisioning and configuration management. Familiarity with modern cloud platforms, particularly AWS or equivalent, with Docker and Kubernetes. Hands-on experience with scripting languages like Python and PowerShell and version control tools like Git/GitLab/Azure DevOps. Proficiency in developing and debugging queries in MS SQL/PostgreSQL. Expertise in at least one Application Performance Management (APM) tool like AppDynamics, New Relic, or Dynatrace and in monitoring tools like Splunk/Grafana/Prometheus. Familiarity with at least one open-source application profiling tool. Demonstrated experience using AI-enhanced development tools (e.g., GitHub Copilot, Replit AI, ChatGPT, Amazon CodeWhisperer or any equivalent) to discover bugs, automate repetitive tasks, and speed up testing cycles. Comfortable applying AI/ML concepts (even at a basic level) to optimize workflows and test strategies, perform intelligent data analysis, or support decision-making within the product. Familiarity with prompt engineering, LLM-assisted testing, or using AI to automate documentation, code scans, or monitoring.

Education & Experience: Bachelor's degree in computer science, Software Engineering, or a related field, or equivalent practical experience. 9-12 years of overall testing experience with deep expertise in performance testing frameworks, tools, and modern software testing practices.

Soft Skills: Lead performance testing activities across multiple projects, ensuring timely and high-quality deliveries. Strong problem-solving skills with a growth mindset and openness to innovation. Excellent communication and cross-functional collaboration abilities. Capable of managing priorities and meeting deadlines in a fast-paced, continuously evolving environment.

Additional Preferred Qualifications: Strong problem-solving skills with a growth mindset and openness to AI-powered innovation. Excellent communication and cross-functional collaboration abilities. Capable of managing priorities and meeting deadlines in a fast-paced, continuously evolving environment. Collaborate with product managers, developers, and other QA team members to ensure test coverage and quality.
Ability to handle performance testing for both front-end and back-end applications. Why Join Us? We're at the forefront of a technology transformation, adopting AI-first thinking across our engineering organization. You'll be empowered to push boundaries, embrace automation, and shape the future of performance testing in a hybrid human-AI environment. About S&P Global Market Intelligence At S&P Global Market Intelligence, a division of S&P Global we understand the importance of accurate, deep and insightful information. Our team of experts delivers unrivaled insights and leading data and technology solutions, partnering with customers to expand their perspective, operate with confidence, and make decisions with conviction. For more information, visit www.spglobal.com/marketintelligence . What’s In It For You? Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values: Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include: Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. 
Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com . S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here . ----------------------------------------------------------- Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf ----------------------------------------------------------- 20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.2 - Middle Professional Tier II (EEO Job Group) Job ID: 316891 Posted On: 2025-08-04 Location: Gurgaon, Haryana, India

Posted 1 day ago

Apply

0.0 - 15.0 years

0 Lacs

Gurugram, Haryana

On-site

About the Role: Grade Level (for internal use): 12

The Team: As a Quality Engineering Lead, you'll be an integral part of our collaborative, agile testing team. You'll work closely with product managers, UI/UX designers, developers and fellow engineers to bring innovative ideas to life and deliver high-quality, bug-free software solutions. We foster an open, inclusive environment where all perspectives are valued. Our team is focused on driving innovation, leveraging cutting-edge AI technologies, and maximizing engineering efficiency. We prioritize clean architecture, real-time performance, and data quality.

What We're Looking For: An experienced automation leader who can design and implement comprehensive QA automation strategies that align with business goals and enhance overall product quality. This includes defining and implementing the test automation strategy, including roadmaps, tools, frameworks, approach, quality metrics, and testing methodologies, and driving continuous improvement initiatives. Expertise in identifying potential risks in the software development lifecycle and implementing proactive measures to mitigate them, ensuring high-quality outputs and reducing time-to-market. Strong experience collaborating with cross-functional teams to gather requirements and feedback, ensuring that QA strategies are effectively communicated and aligned with stakeholder expectations. Ability to establish and monitor key performance indicators (KPIs) for QA processes, using data-driven insights to refine testing strategies and improve team performance.

Core Technical Qualifications: Design, develop, and maintain robust test automation frameworks for API, UI, and system integration layers. Strong hands-on experience in Java/JavaScript programming. Strong experience with UI automation tools/frameworks (e.g., Selenium, Cypress, Playwright). Hands-on experience with API testing tools/frameworks (e.g., Postman, Rest Assured, SoapUI). Hands-on experience with MS SQL Server, as well as NoSQL technologies like MongoDB or Cosmos DB. Integrate automated tests into CI/CD pipelines and collaborate closely with DevOps teams. Leverage IaC tools like Terraform, CloudFormation, or Ansible for test environment provisioning and configuration management. Familiarity with modern cloud platforms, particularly AWS or equivalent. Demonstrated experience using AI-enhanced development tools (e.g., GitHub Copilot, Replit AI, ChatGPT, Amazon CodeWhisperer or any equivalent) to discover bugs, automate repetitive tasks, and speed up testing cycles. Comfortable applying AI/ML concepts (even at a basic level) to optimize workflows and test strategies, perform intelligent data analysis, or support decision-making within the product. Familiarity with prompt engineering, LLM-assisted testing, or using AI to automate documentation, code scans, or monitoring.

Education & Experience: Bachelor's degree in computer science, Software Engineering, or a related field, or equivalent practical experience. 12-15 years of overall automation testing experience with deep expertise in test automation frameworks, tools, and modern software testing practices.

Soft Skills: Lead QA activities across multiple projects, ensuring timely and high-quality deliveries. Strong problem-solving skills with a growth mindset and openness to AI-powered innovation. Excellent communication and cross-functional collaboration abilities.
Capable of managing priorities and meeting deadlines in a fast-paced, continuously evolving environment. Collaborate with product managers, developers, and other QA team members to ensure test coverage and quality. Additional Preferred Qualifications: Proven experience in testing large-scale distributed systems in a cloud environment. Background in testing Windows-based production systems , network configurations, and server performance in the cloud. Strong scripting and automation skills (PowerShell, Bash, Python) — bonus if paired with AI-based infrastructure tools. AWS certification or similar credentials are a plus. Experience explaining technical concepts clearly to both technical and non-technical stakeholders. Experience using AI to accelerate DevOps, CI/CD pipelines, or observability tooling is a major advantage. Why Join Us? We're at the forefront of a technology transformation — not only adopting AI-first thinking across our engineering organization but actively building with it. You'll be empowered to push boundaries, embrace automation, and shape the future of full stack development in a hybrid human-AI environment. About S&P Global Market Intelligence At S&P Global Market Intelligence, a division of S&P Global we understand the importance of accurate, deep and insightful information. Our team of experts delivers unrivaled insights and leading data and technology solutions, partnering with customers to expand their perspective, operate with confidence, and make decisions with conviction. For more information, visit www.spglobal.com/marketintelligence . What’s In It For You? Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values: Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. 
That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include: Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com . S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here . ----------------------------------------------------------- Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf ----------------------------------------------------------- 10 - Officials or Managers (EEO-2 Job Categories-United States of America), IFTECH103.2 - Middle Management Tier II (EEO Job Group) Job ID: 316121 Posted On: 2025-08-04 Location: Gurgaon, Haryana, India

Posted 1 day ago

Apply

0.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Job Information: Date Opened: 08/04/2025 | Job Type: Full time | Industry: IT Services | City: Bangalore North, Hyderabad | State/Province: Karnataka | Country: India | Zip/Postal Code: 560002

Job Description:
Proficient with Azure DevOps (Repos, Pipelines, Artifacts, Boards, Test Plans)
Infrastructure as Code (IaC): Terraform, AWS CloudFormation, AWS CDK
Scripting & Automation: Python, Bash, Shell, YAML
Containerization & Orchestration: Docker, Kubernetes (EKS), Amazon ECS, Helm
CI/CD & Build Tools: AWS CodePipeline, AWS CodeBuild, Jenkins, GitHub Actions, GitLab CI/CD
Configuration Management: Ansible, Chef, Puppet
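To make the IaC and CI/CD requirements above concrete, here is a minimal, illustrative Python sketch of a pipeline step that drives the Terraform CLI; the working directory is a hypothetical placeholder, not a detail from this listing, and real pipelines would add state/backend configuration and approval gates.

```python
"""Illustrative CI step: run Terraform non-interactively from Python (e.g., in a CodeBuild or Jenkins job)."""
import subprocess
import sys

def run(cmd: list[str], cwd: str) -> None:
    # Echo the command and fail the build on a non-zero exit code.
    print(f"+ {' '.join(cmd)}")
    subprocess.run(cmd, cwd=cwd, check=True)

def terraform_deploy(workdir: str = "infra/envs/dev") -> None:  # hypothetical path
    run(["terraform", "init", "-input=false"], workdir)
    run(["terraform", "plan", "-input=false", "-out=tfplan"], workdir)
    run(["terraform", "apply", "-input=false", "tfplan"], workdir)  # applying a saved plan never prompts

if __name__ == "__main__":
    try:
        terraform_deploy()
    except subprocess.CalledProcessError as exc:
        sys.exit(exc.returncode)
```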

Posted 1 day ago

Apply

0 years

0 Lacs

India

On-site

Overview: Hiring a seasoned DevOps/Platform Engineer to drive automation, platform reliability, and robust software delivery pipelines using both conventional practices and advanced AI/ML techniques in global, regulated enterprises. Key Responsibilities: Design, deploy, and manage CI/CD pipelines and infrastructure automation, leveraging AI for efficiency. Implement infrastructure as code and environment provisioning, including AI-powered script and configuration generation. Monitor and optimize system reliability, availability, and security—using AI/ML for predictive alerts and intelligent scaling. Establish DevSecOps best practices, compliance controls, and MLOps (CI/CD for ML workflows). Troubleshoot complex infrastructure, deployment, and ML model serving issues. Core Technical & AI/ML Skills: Deep experience with IaC tools (Terraform, Ansible, CloudFormation), scripting (Python, Bash), and AI-enhanced automation. Build and maintain CI/CD (Jenkins, GitLab CI, GitHub Actions, ArgoCD). Cloud infrastructure (AWS, Azure, GCP), container orchestration (Kubernetes, Docker). Logging, monitoring, and observability (Prometheus, Grafana, ELK/EFK), including AI-driven log analysis and incident prediction. Experience supporting MLOps: deploying ML workflows, ensuring model traceability and compliance. Use of AI assistants and workflow tools to script, manage incidents, and enforce security policies (OPA, Sentinel). Soft Skills: Influential in driving DevOps culture change. Strong communicator across development, security, and business teams. Mentorship and process rigor, open-minded to AI-driven productivity improvements. High accountability, initiative, and a proactive approach to emerging technologies.
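As one example of the monitoring and predictive-alerting work this posting mentions, below is a small, hedged Python sketch that queries the Prometheus HTTP API and flags high-CPU hosts with a plain threshold; the Prometheus URL and the threshold are assumptions for illustration only, and an AI/ML-based approach would replace the threshold with a learned model.

```python
"""Illustrative sketch: poll Prometheus and flag instances whose CPU usage crosses a threshold."""
import requests

PROM_URL = "http://prometheus.internal:9090"  # assumed endpoint, not from the listing
QUERY = '100 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100'

def high_cpu_instances(threshold: float = 85.0) -> list[tuple[str, float]]:
    # /api/v1/query returns an instant vector as JSON.
    resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": QUERY}, timeout=10)
    resp.raise_for_status()
    flagged = []
    for sample in resp.json()["data"]["result"]:
        instance = sample["metric"].get("instance", "unknown")
        value = float(sample["value"][1])
        if value > threshold:
            flagged.append((instance, value))
    return flagged

if __name__ == "__main__":
    for instance, cpu in high_cpu_instances():
        print(f"ALERT candidate: {instance} at {cpu:.1f}% CPU")
```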

Posted 1 day ago

Apply

0 years

0 Lacs

India

On-site

Overview: Seeking a highly experienced Cloud Architect to design and oversee robust, scalable, and secure cloud solutions for enterprise environments in oil & gas and other regulated industries (energy, finance, government). Key Responsibilities: Architect end-to-end cloud solutions (public, private, hybrid) with a focus on reliability, security, compliance, and scalability. Lead migration and modernization projects of mission-critical applications. Define cloud governance, access management, and security best practices. Ensure high availability, disaster recovery, and business continuity strategies. Enable and operationalize AI/ML workloads—deploying, scaling, and maintaining data pipelines and model inferencing on the cloud. Collaborate with stakeholders to align cloud strategy with organizational goals. Core Technical & AI/ML Skills: Deep knowledge of at least one leading cloud platform (AWS, Azure, or Google Cloud Platform). Experience with cloud-native AI/ML services (e.g., AWS SageMaker, Azure ML, Google AI Platform), enabling secure integration and operationalization of models. Infrastructure as Code (Terraform, CloudFormation, ARM) and AI-powered IaC generation tools. Cloud security (IAM, encryption, compliance frameworks), including governance for AI/ML pipelines. Application modernization (containers, Kubernetes, serverless architectures). CI/CD, cloud automation, and AI-powered scripting/plugins (e.g., GitHub Copilot). Enterprise networking concepts (VPC, VPN, hybrid connectivity). Monitoring, observability, and cost optimization. Soft Skills: Excellent communication with technical and business stakeholders. Proven leadership and mentoring of cross-functional teams. Strong problem-solving in high-stakes, regulated environments. Proactive in evaluating and introducing AI tools for productivity and process improvement.
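To illustrate the kind of cloud-governance automation this architect role describes, here is a minimal Python sketch using boto3 to report S3 buckets that lack an explicit default-encryption configuration; it assumes AWS credentials are already configured and is a compliance-reporting sketch, not the posting's prescribed tooling.

```python
"""Illustrative governance check: report S3 buckets without an explicit default-encryption config."""
import boto3
from botocore.exceptions import ClientError

def unencrypted_buckets() -> list[str]:
    s3 = boto3.client("s3")
    missing = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            s3.get_bucket_encryption(Bucket=name)
        except ClientError as err:
            # No default-encryption configuration is attached to this bucket.
            if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
                missing.append(name)
            else:
                raise
    return missing

if __name__ == "__main__":
    for name in unencrypted_buckets():
        print(f"Bucket without default encryption config: {name}")
```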

Posted 1 day ago

Apply

6.0 - 10.0 years

15 - 21 Lacs

Bengaluru, Karnataka, India

On-site

Shift Timing: 1:00 PM - 10:00 PM IST (Flexible to work off-hours/weekends as required) Experience: 6 to 10 Years (Relevant) Notice Period: Immediate to 15 Days Job Summary: We are seeking an experienced Firewall Engineer with deep expertise in implementing, upgrading, and maintaining enterprise firewall solutions. The ideal candidate will have hands-on experience with Checkpoint, Fortinet, Palo Alto, and cloud firewall technologies, along with a strong understanding of enterprise security, networking, and scripting/automation. Key Responsibilities Implement and build firewall solutions from scratch. Plan and execute firewall upgrades, updates, and migrations. Install, configure, maintain, and decommission Checkpoint and Fortinet firewalls. Work with hardware appliances and virtual firewalls in hybrid cloud environments. Configure and manage firewall policies and controls via management servers. Monitor network activity and enforce security policies and standards. Troubleshoot network and firewall-related issues, including connectivity drops. Analyze network traffic and ensure rule configuration aligns with security best practices. Administer virtual private networks (VPNs). Conduct regular software and security updates across firewall platforms. Respond to security incidents and investigate potential breaches. Participate in security audits, assessments, and compliance reviews. Create and maintain detailed documentation for firewall configurations and policies. Design and implement security solutions to secure enterprise networks. Provide mentorship to junior engineers and support their technical development. Required Skills Strong hands-on experience with: Checkpoint. Fortinet. Palo Alto. Cloud Firewalls (AWS/Azure/GCP). Solid understanding of enterprise security architecture and advanced networking: Routing, Switching, Load Balancing. Routing Protocols: BGP, OSPF. Familiarity with scripting/automation tools: Python, Ansible, Terraform, or similar. Working knowledge of Linux operating systems and CLI. Excellent troubleshooting skills and ability to perform root cause analysis. Preferred Qualities Strong analytical and problem-solving skills. Ability to work independently and collaboratively in a fast-paced environment. Willingness to support off-hours and weekend work as needed.
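For the scripting/automation skills this firewall role lists (Python, Ansible, Terraform), a rule-review helper is a typical small task. The sketch below flags any-any-allow entries in a policy export; the CSV column names are a hypothetical export format, not a Checkpoint, Fortinet, or Palo Alto standard.

```python
"""Illustrative rule-review helper: flag any/any/any allow entries in a firewall policy export."""
import csv

def overly_permissive(path: str) -> list[dict]:
    flagged = []
    with open(path, newline="") as fh:
        for rule in csv.DictReader(fh):  # hypothetical columns: name, source, destination, service, action
            if (rule.get("source", "").lower() == "any"
                    and rule.get("destination", "").lower() == "any"
                    and rule.get("service", "").lower() == "any"
                    and rule.get("action", "").lower() == "allow"):
                flagged.append(rule)
    return flagged

if __name__ == "__main__":
    for rule in overly_permissive("policy_export.csv"):
        print(f"Review rule {rule.get('name', '?')}: any/any/any allow")
```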

Posted 1 day ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Project Role : Cloud Platform Engineer Project Role Description : Designs, builds, tests, and deploys cloud application solutions that integrate cloud and non-cloud infrastructure. Can deploy infrastructure and platform environments, creates a proof of architecture to test architecture viability, security and performance. Must have skills : Data Modeling Techniques and Methodologies Good to have skills : NA Minimum 5 Year(s) Of Experience Is Required Educational Qualification : 15 years full time education Summary: As a Cloud Platform Engineer, you will be responsible for designing, building, testing, and deploying cloud application solutions that seamlessly integrate both cloud and non-cloud infrastructure. Your typical day will involve collaborating with various teams to ensure the architecture's viability, security, and performance while creating proofs of concept to validate your designs. You will engage in hands-on development and troubleshooting, ensuring that the solutions meet the required standards and specifications. Additionally, you will be involved in continuous improvement efforts, optimizing existing systems and processes to enhance efficiency and effectiveness in cloud operations. Roles & Responsibilities: - Expected to be an SME. - Collaborate and manage the team to perform. - Responsible for team decisions. - Engage with multiple teams and contribute on key decisions. - Provide solutions to problems for their immediate team and across multiple teams. - Facilitate knowledge sharing sessions to enhance team capabilities. - Monitor and evaluate the performance of cloud applications to ensure optimal functionality. Professional & Technical Skills: - Must To Have Skills: Proficiency in Data Modeling Techniques and Methodologies. - Good To Have Skills: Experience with cloud service providers such as AWS, Azure, or Google Cloud Platform. - Strong understanding of cloud architecture and deployment strategies. - Experience with infrastructure as code tools like Terraform or CloudFormation. - Familiarity with containerization technologies such as Docker and Kubernetes. Additional Information: - The candidate should have minimum 7.5 years of experience in Data Modeling Techniques and Methodologies. - This position is based in Hyderabad. - A 15 years full time education is required.
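As a small illustration of the data-modeling-plus-automation flavour of this role, here is a hedged Python sketch that turns a dict-based dimensional model into CREATE TABLE DDL; the fact and dimension tables are invented examples, not requirements from the listing.

```python
"""Illustrative sketch: generate CREATE TABLE DDL from a small dict-based dimensional model."""

MODEL = {
    "dim_customer": {"customer_id": "BIGINT PRIMARY KEY", "name": "VARCHAR(200)", "region": "VARCHAR(50)"},
    "fact_orders": {"order_id": "BIGINT PRIMARY KEY", "customer_id": "BIGINT",
                    "amount": "NUMERIC(12,2)", "order_ts": "TIMESTAMP"},
}

def to_ddl(model: dict[str, dict[str, str]]) -> str:
    statements = []
    for table, columns in model.items():
        cols = ",\n  ".join(f"{name} {dtype}" for name, dtype in columns.items())
        statements.append(f"CREATE TABLE {table} (\n  {cols}\n);")
    return "\n\n".join(statements)

if __name__ == "__main__":
    print(to_ddl(MODEL))
```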

Posted 1 day ago

Apply

5.0 years

0 Lacs

Greater Chennai Area

On-site

Project Role : Cloud Platform Engineer Project Role Description : Designs, builds, tests, and deploys cloud application solutions that integrate cloud and non-cloud infrastructure. Can deploy infrastructure and platform environments, creates a proof of architecture to test architecture viability, security and performance. Must have skills : Data Modeling Techniques and Methodologies Good to have skills : NA Minimum 5 Year(s) Of Experience Is Required Educational Qualification : 15 years full time education Summary: As a Cloud Platform Engineer, you will be responsible for designing, building, testing, and deploying cloud application solutions that seamlessly integrate both cloud and non-cloud infrastructure. Your typical day will involve collaborating with various teams to ensure the architecture's viability, security, and performance while creating proofs of concept to validate your designs. You will engage in hands-on development and troubleshooting, ensuring that the solutions meet the required standards and specifications. Additionally, you will be involved in continuous improvement efforts, optimizing existing systems and processes to enhance efficiency and effectiveness in cloud operations. Roles & Responsibilities: - Expected to be an SME. - Collaborate and manage the team to perform. - Responsible for team decisions. - Engage with multiple teams and contribute on key decisions. - Provide solutions to problems for their immediate team and across multiple teams. - Facilitate knowledge sharing sessions to enhance team capabilities. - Monitor and evaluate the performance of cloud applications to ensure optimal functionality. Professional & Technical Skills: - Must To Have Skills: Proficiency in Data Modeling Techniques and Methodologies. - Good To Have Skills: Experience with cloud service providers such as AWS, Azure, or Google Cloud Platform. - Strong understanding of cloud architecture and deployment strategies. - Experience with infrastructure as code tools like Terraform or CloudFormation. - Familiarity with containerization technologies such as Docker and Kubernetes. Additional Information: - The candidate should have minimum 7.5 years of experience in Data Modeling Techniques and Methodologies. - This position is based in Hyderabad. - A 15 years full time education is required.

Posted 1 day ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Project Role : Cloud Platform Engineer Project Role Description : Designs, builds, tests, and deploys cloud application solutions that integrate cloud and non-cloud infrastructure. Can deploy infrastructure and platform environments, creates a proof of architecture to test architecture viability, security and performance. Must have skills : Data Modeling Techniques and Methodologies Good to have skills : NA Minimum 5 Year(s) Of Experience Is Required Educational Qualification : 15 years full time education Summary: As a Cloud Platform Engineer, you will be responsible for designing, building, testing, and deploying cloud application solutions that seamlessly integrate both cloud and non-cloud infrastructure. Your typical day will involve collaborating with various teams to ensure the architecture's viability, security, and performance while creating proofs of concept to validate your designs. You will engage in hands-on development and troubleshooting, ensuring that the solutions meet the required standards and specifications. Additionally, you will be involved in continuous improvement efforts, optimizing existing systems and processes to enhance efficiency and effectiveness in cloud operations. Roles & Responsibilities: - Expected to be an SME. - Collaborate and manage the team to perform. - Responsible for team decisions. - Engage with multiple teams and contribute on key decisions. - Provide solutions to problems for their immediate team and across multiple teams. - Facilitate knowledge sharing sessions to enhance team capabilities. - Monitor and evaluate the performance of cloud applications to ensure optimal functionality. Professional & Technical Skills: - Must To Have Skills: Proficiency in Data Modeling Techniques and Methodologies. - Good To Have Skills: Experience with cloud service providers such as AWS, Azure, or Google Cloud Platform. - Strong understanding of cloud architecture and deployment strategies. - Experience with infrastructure as code tools like Terraform or CloudFormation. - Familiarity with containerization technologies such as Docker and Kubernetes. Additional Information: - The candidate should have minimum 7.5 years of experience in Data Modeling Techniques and Methodologies. - This position is based in Hyderabad. - A 15 years full time education is required.

Posted 1 day ago

Apply

3.0 - 5.0 years

0 Lacs

Navi Mumbai, Maharashtra, India

On-site

Location: Navi Mumbai

SettleMint India: SettleMint India was formed in 2019, with headquarters in Delhi, India. The India team focuses on client deliverables and the development of high-performance low-code Blockchain. We operate from Delhi, along with certain project locations. We are looking for a DevOps Engineer to join our client site at Navi Mumbai.

Responsibilities: Building efficient and reusable applications and abstractions. Driving design, implementation, and support of large-scale infrastructure. You will participate in the design and implementation phases for new and existing products. Dive deep to resolve problems at their root and troubleshoot services related to the big data stack in our Linux infrastructure. Developing policies and procedures that improve overall platform stability and participating in shared on-call schedules. Ensure that post-production operational processes/deliverables are well designed and implemented prior to the project moving into the solution support phase. Enhance and maintain our monitoring infrastructure. Develop automation tools for managing our on-premises infrastructure. Define and create development procedures, processes, and scripts to drive a standard software development lifecycle. Assist in the evaluation, selection, and implementation of new technologies with product teams to ensure adherence to architecture guidelines for new technology introduction. Provide technical leadership in establishing standards and guidelines. Facilitate collaboration between development and operations teams throughout the application lifecycle.

Requirements and Skills: Must have 3-5 years of hands-on experience in the field of DevOps, with working knowledge of Kubernetes. At least 3 years of experience on Jenkins/Azure DevOps and other similar CI/CD platforms such as GitHub. Extensive experience in assessing DevOps maturity for applications, with the ability to define an improvement roadmap. Extensive experience in assessing and designing code branching, merging, and tagging strategies on GitHub, Bitbucket, SVN and Git. Extensive experience in defining and implementing a DevSecOps (security) strategy for customers: authentication, SonarQube, Nexus IQ (Sonatype IQ), Fortify (SAST), Sonatype Firewall and other similar tools in Jenkins/Azure DevOps CI/CD pipelines. Experience deploying APIs and microservices as Docker images/containers packaged as Helm charts (with Terraform) on cloud clusters, i.e. Kubernetes clusters, using CI/CD pipelines. Experience with Terraform/Ansible is a must for infrastructure build (provisioning), configuration, and deployment. Experience with protocols such as HTTPS, TCP, UDP, DNS, TELNET, ICMP, SSH, GOSSIP, etc. Good to have WinSCP/PuTTY tools experience. Experience with Azure/AWS cloud and its PaaS and IaaS services. Experience with monitoring tools like Datadog/New Relic/Dynatrace/Prometheus/Grafana, etc.

Qualifications and Certification: B.E/B.Tech or MCA; CKA/CKAD certified.
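Since this posting centres on deploying Docker images as Helm charts to Kubernetes from CI/CD, here is a minimal, hedged Python sketch that wraps the Helm CLI for that step; the release name, chart path, and namespace are placeholders, not values from the posting.

```python
"""Illustrative CI/CD step: deploy a container image to Kubernetes via the Helm CLI."""
import subprocess

def helm_deploy(release: str, chart: str, namespace: str, image_tag: str) -> None:
    cmd = [
        "helm", "upgrade", "--install", release, chart,
        "--namespace", namespace, "--create-namespace",
        "--set", f"image.tag={image_tag}",
        "--wait",  # block until the rollout succeeds or fails
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    # Placeholder values for illustration only.
    helm_deploy("orders-api", "./charts/orders-api", "staging", "1.4.2")
```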

Posted 1 day ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Line of Service: Advisory | Industry/Sector: Not Applicable | Specialism: Operations | Management Level: Senior Associate

Job Description & Summary: At PwC, our people in business application consulting specialise in consulting services for a variety of business applications, helping clients optimise operational efficiency. These individuals analyse client needs, implement software solutions, and provide training and support for seamless integration and utilisation of business applications, enabling clients to achieve their strategic objectives. As a business application consulting generalist at PwC, you will provide consulting services for a wide range of business applications. You will leverage a broad understanding of various software solutions to assist clients in optimising operational efficiency through analysis, implementation, training, and support.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Job Description & Summary: A career within Enterprise Architecture services will provide you with the opportunity to bring our clients a competitive advantage through defining their technology objectives, assessing solution options, and devising architectural solutions that help them achieve both strategic goals and operational requirements. We help build software and design data platforms, manage large volumes of client data, develop compliance procedures for data management, and continually research new technologies to drive innovation and sustainable change.

Responsibilities:
Design solutions for cloud (e.g. AWS, Azure and GCP) that are optimal, secure, efficient, scalable, resilient and reliable, while remaining compliant with industry cloud standards and policies.
Design strategies and tools to deploy, monitor, and administer cloud applications and the underlying services for cloud (e.g. Azure, AWS, GCP and private cloud).
Should have experience in, and perform, cloud deployment, containerization, movement of applications from on-premise to cloud, cloud migration approaches, and SaaS/PaaS/IaaS.
Should have experience in infra set-up, Availability Zones, cloud services deployment, and connectivity set-up in line with AWS, Azure, GCP and OCI.
Should have a skill set spanning GCP, AWS, Oracle Cloud and Azure, and multi-cloud strategy.
Excellent hands-on experience in implementation and design of cloud infrastructure environments using modern CI/CD deployment patterns with Terraform, Jenkins, and Git.
Strong understanding of application builds and deployments with CI/CD pipelines.
Strong experience in application containerization and orchestration with Docker and Kubernetes on cloud platforms.

Mandatory Skill Sets: Architect and design solutions for cloud (AWS, Azure, GCP and private cloud); experience in cloud deployment, containerization, movement of applications from on-premise to cloud, cloud migration approaches, SaaS/PaaS/IaaS... Design of cloud infrastructure environments...application containerization and orchestration with Docker and Kubernetes in cloud.

Preferred Skill Sets: Certification would be preferred in AWS, Azure, GCP and private cloud, Kubernetes.

Years of Experience Required: 5+ years

Education Qualification: B.E./B.Tech/MCA/M.E/M.Tech/MBA/PGDM/B.Sc (IT). All qualifications should be in regular full-time mode with no extension of course duration due to backlogs.

Education (if blank, degree and/or field of study not specified). Degrees/Field of Study required: Master of Business Administration, Bachelor of Technology, Bachelor of Engineering, Master Degree. Degrees/Field of Study preferred:

Certifications (if blank, certifications not specified)

Required Skills: AWS DevOps, Microsoft Azure DevOps

Optional Skills: Accepting Feedback, Active Listening, Analytical Reasoning, Analytical Thinking, Application Software, Business Data Analytics, Business Management, Business Technology, Business Transformation, Communication, Creativity, Documentation Development, Embracing Change, Emotional Regulation, Empathy, Implementation Research, Implementation Support, Implementing Technology, Inclusion, Intellectual Curiosity, Learning Agility, Optimism, Performance Assessment, Performance Management Software {+ 16 more}

Desired Languages (if blank, desired languages not specified)

Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date

Posted 1 day ago

Apply

15.0 years

0 Lacs

Jaipur, Rajasthan, India

Remote

Job Title: Project Manager I Senior Technical SaaS & Enterprise Architecture (Virtual CTO) Location: Jaipur (Flexible / Remote / Global Travel as Required) Experience: 9 – 15+ Years Industry Domains: GovTech, EdTech, FinTech, InsurTech, Manufacturing, HealthTech, B2B Commerce, AI/ML, Cloud-Native Platforms About the Role: We are looking for a technically hands-on, vision-driven Senior Technical SaaS & Enterprise Architect (Virtual CTO) to drive architectural excellence, digital transformation, and platform innovation for cloud-native SaaS ecosystems. You will be instrumental in designing scalable multi-tenant architectures, building engineering roadmaps, implementing AI/ML modules, and ensuring compliance across critical industries. This role is ideal for a SaaS technology leader with deep domain knowledge, cloud-native expertise, and a demonstrated ability to scale enterprise-grade products globally. Core Responsibilities: 🔧 Platform Architecture & Engineering Leadership Design and implement multi-tenant, event-driven SaaS architectures optimized for scalability, high availability (HA), and global failover. Architect and lead cloud-native deployments using AWS (EKS, Lambda, S3, RDS), Azure, and GCP, integrating CI/CD, service meshes, and service discovery. Adopt containerized microservices with robust orchestration via Kubernetes and advanced ingress/load balancing (NGINX, Istio). ☁️ Cloud & DevOps Strategy Lead DevOps culture and tooling strategy including Docker, Kubernetes, Terraform, Helm, ArgoCD, and GitHub Actions. Implement observability-first platforms leveraging ELK stack, Prometheus/Grafana, OpenTelemetry, and Datadog. Execute cloud cost optimization initiatives to reduce TCO and increase ROI using tagging, FinOps practices, and usage analytics. 🔐 Enterprise Security & Compliance Establish and enforce end-to-end security protocols including IAM, encryption at rest/in-transit, WAFs, API rate limiting, and secure CI/CD pipelines. Ensure compliance with regulatory frameworks (SOC2, HIPAA, ISO 27001, GDPR), performing regular audits and managing access control with OAuth2, SAML, SCIM. 📊 Data, AI/ML & Intelligence Systems Architect streaming data pipelines using Kafka, AWS Kinesis, and Redis Streams for real-time event processing. Develop and embed ML models (using BERT, SpaCy, TensorFlow, SageMaker) for predictive analytics, fraud detection, or personalization. Integrate AI features such as NLP-driven chatbots, recommendation engines, computer vision modules, and OCR workflows into the product stack. 🔗 System Integration & Interoperability Lead third-party integrations including eKYC, payment gateways (Stripe, Razorpay), ERP/CRM systems (SAP, Salesforce), and messaging protocols. Leverage REST, GraphQL, and gRPC APIs with robust versioning and schema validation via OpenAPI/Swagger. 🌍 Team Scaling & Product Lifecycle Build and manage cross-geo engineering pods with Agile best practices (Scrum, SAFe) using tools like Jira, Confluence, Notion, and ClickUp. Enable internationalization and localization: multi-language (i18n), multi-region deployments, and edge caching with Cloudflare/CDNs. Collaborate with Product, Compliance, and Marketing to align tech strategy with go-to-market, growth, and monetization initiatives. 
Required Skills & Expertise: Architecture: Cloud-native, multi-tenant SaaS, microservices, event-driven systems Languages: Java, Python, Go, Kotlin, TypeScript Frameworks: Spring Boot, Node.js, .NET Core, React, Angular, Vue.js, PHP Cloud & Infra: AWS (EKS, Lambda, RDS), Azure, GCP, Terraform, Docker, Kubernetes Security & Compliance: SOC2, ISO 27001, GDPR, HIPAA, OAuth2, SAML, WAF DevOps: CI/CD, ArgoCD, GitHub Actions, Helm, observability tools (Grafana, ELK, Datadog) AI/ML: NLP (BERT, SpaCy), Vision (AWS Rekognition), Predictive Models Messaging & DB: Kafka, RabbitMQ, PostgreSQL, DynamoDB, MongoDB, Redis Education & Certifications: Bachelor of Engineering (Computer Science) Executive Program in Digital Transformation Strategy – London Business School Postgraduate Program in AI/ML – University of Texas, Austin AWS Certified Solutions Architect – Professional Certified Kubernetes Administrator (CKA) ISO 27001 Lead Implementer Certified Scrum Professional (CSP) Key Achievements: Designed and launched 25+ global SaaS platforms, generating $200M+ in ARR. Built and led distributed tech teams across 7+ countries, delivering full SDLC ownership. Drove $80M+ in VC/PE funding through technical validation and due diligence support. Winner of Top 50 CTOs in APAC – 2024, SaaS Product Leader of the Year – 2022. Delivered award-winning solutions across GovTech (CityGridGov), EdTech (EduFlick), InsurTech (VeriSurance), and Manufacturing (ForgePulse). Ideal Candidate Traits: Technical evangelist with a deep product mindset and business acumen. Equally effective in the boardroom and war room—comfortable leading strategy and code reviews. Passionate about developer experience, automation, and scalable, future-proof architectures. Track record of zero-to-one and one-to-N product evolution in high-growth environments. Engagement Models: Full-Time / Part-Time Chief Technology Officer (CTO) Fractional CTO for funded startups or transformation projects SaaS Modernization Consultant for legacy to cloud-native migration Technical Due Diligence Advisor for VCs, M&A, and institutional investors Compliance Strategy Leader: SOC2, ISO, HIPAA implementations We invite you to follow our LinkedIn page for updates on this and other exciting opportunities: Kuchoriya TechSoft LinkedIn. If you're interested in applying, please share your updated CV along with your previous company experience letter at alexmartinitexpert@gmail.com . We look forward to hearing from you!
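Because this role highlights FinOps-style cost optimization through tagging, here is a small, hedged Python sketch that summarizes AWS spend by a cost-allocation tag via the Cost Explorer API; the tag key "team" and the date range are assumed conventions, not details taken from the listing.

```python
"""Illustrative FinOps sketch: summarize monthly AWS spend grouped by a cost-allocation tag."""
import boto3

def monthly_cost_by_tag(start: str, end: str, tag_key: str = "team") -> dict[str, float]:
    ce = boto3.client("ce")  # AWS Cost Explorer
    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": start, "End": end},  # ISO dates, e.g. "2025-07-01"
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "TAG", "Key": tag_key}],
    )
    totals: dict[str, float] = {}
    for period in resp["ResultsByTime"]:
        for group in period["Groups"]:
            label = group["Keys"][0]  # tag groups come back as "key$value"
            totals[label] = totals.get(label, 0.0) + float(group["Metrics"]["UnblendedCost"]["Amount"])
    return totals

if __name__ == "__main__":
    for team, amount in sorted(monthly_cost_by_tag("2025-07-01", "2025-08-01").items()):
        print(f"{team}: ${amount:,.2f}")
```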

Posted 1 day ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Role: AI Architect
Location: Bangalore & Hyderabad

Mandatory Skills:
Agentic frameworks (LangGraph/AutoGen/CrewAI)
Prometheus/Grafana/ELK Stack
Machine Learning/Deep Learning frameworks (TensorFlow/PyTorch/Keras)
Hugging Face Transformers
Cloud computing platforms (AWS/Azure/Google Cloud Platform)
DevOps/MLOps/LLMOps
Docker, Kubernetes, DevOps tools like Jenkins and GitLab CI/CD
Fine-tuning of LLMs or SLMs (PaLM 2, GPT-4, LLaMA, etc.)
Terraform or CloudFormation

Responsibilities:
1. Work on the implementation and solution delivery of AI applications, leading the team across onshore/offshore, and cross-collaborate across all the AI streams.
2. Design end-to-end AI applications, ensuring integration across multiple commercial and open-source tools.
3. Work closely with business analysts and domain experts to translate business objectives into technical requirements and AI-driven solutions and applications. Partner with product management to design agile project roadmaps, aligning technical strategy. Work with data engineering teams to ensure smooth data flows, quality, and governance across data sources.
4. Lead the design and implementation of reference architectures, roadmaps, and best practices for AI applications.
5. Adapt quickly to emerging technologies and methodologies, recommending proven innovations.
6. Identify and define system components such as data ingestion pipelines, model training environments, continuous integration/continuous deployment (CI/CD) frameworks, and monitoring systems.
7. Utilize containerization (Docker, Kubernetes) and cloud services to streamline the deployment and scaling of AI systems. Implement robust versioning, rollback, and monitoring mechanisms that ensure system stability, reliability, and performance.
8. Ensure the implementation supports scalability, reliability, maintainability, and security best practices.
9. Project management: oversee the planning, execution, and delivery of AI and ML applications, ensuring that they are completed within budget and timeline constraints. This includes defining project goals, allocating resources, and managing risks.
10. Oversee the lifecycle of AI application development, from design to development, testing, deployment, and optimization.
11. Enforce security best practices during each phase of development, with a focus on data privacy, user security, and risk mitigation.
12. Provide mentorship to engineering teams and foster a culture of continuous learning.
13. Lead technical knowledge-sharing sessions and workshops to keep teams up to date on the latest advances in generative AI and architectural best practices.

Education:
- Bachelor's/Master's degree in Computer Science
- Certifications in cloud technologies (AWS, Azure, GCP) and TOGAF certification (good to have)

Required Skills:
- Strong background in working with or developing agents using LangGraph, AutoGen, and CrewAI.
- Proficiency in Python, with robust knowledge of machine learning libraries and frameworks such as TensorFlow, PyTorch, and Keras.
- Understanding of deep learning and NLP algorithms: RNN, CNN, LSTM, transformer architectures, etc.
- Proven experience with cloud computing platforms (AWS, Azure, Google Cloud Platform) for building and deploying scalable AI solutions.
- Hands-on skills with containerization (Docker) and orchestration frameworks (Kubernetes), including related DevOps tools like Jenkins and GitLab CI/CD.
- Experience using Infrastructure as Code (IaC) tools such as Terraform or CloudFormation to automate cloud deployments.
- Proficient in SQL and NoSQL databases (e.g., PostgreSQL, MongoDB, Cassandra) to manage structured and unstructured data.
- Expertise in designing distributed systems, RESTful APIs, GraphQL integrations, and microservices architecture.
- Knowledge of event-driven architectures and message brokers (e.g., RabbitMQ, Apache Kafka) to support robust inter-system communications.
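As an illustration of the deployment-and-monitoring side of this role (item 7 above), here is a minimal, hedged Python sketch that smoke-tests a model-serving endpoint after a rollout; the URL, payload shape, and latency budget are hypothetical, not taken from the posting.

```python
"""Illustrative sketch: smoke-test a model-serving endpoint after deployment."""
import time
import requests

ENDPOINT = "http://ml-serving.internal/v1/predict"  # assumed endpoint, not from the listing

def smoke_test(payload: dict, max_latency_s: float = 2.0) -> bool:
    start = time.monotonic()
    resp = requests.post(ENDPOINT, json=payload, timeout=10)
    latency = time.monotonic() - start
    ok = resp.status_code == 200 and latency <= max_latency_s
    print(f"status={resp.status_code} latency={latency:.3f}s ok={ok}")
    return ok

if __name__ == "__main__":
    if not smoke_test({"inputs": ["hello world"]}):  # hypothetical request body
        raise SystemExit(1)
```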

Posted 1 day ago

Apply

14.0 years

0 Lacs

Jaipur, Rajasthan, India

Remote

Job Title: Senior Technical SaaS & Enterprise Architecture
Location: Jaipur (Flexible / Remote / Global Travel as Required)
Experience Required: 10 - 14+ Years
Industry Domains: GovTech, EdTech, FinTech, InsurTech, Manufacturing, HealthTech, B2B Commerce, AI, etc.
Salary: TBD

About the Role: We are seeking a visionary and execution-focused Senior Technical SaaS & Enterprise Architect [Virtual Chief Technology Officer (CTO)] to lead the end-to-end technology strategy, architecture, and innovation for scalable, cloud-native SaaS platforms. This role is ideal for a dynamic leader with a proven track record of delivering high-impact digital transformation, guiding startups through funding rounds, and driving sustainable growth via secure, compliant, and AI-driven technology ecosystems.

Key Responsibilities: Architect and deliver multi-tenant SaaS platforms using cloud-native solutions (AWS, Azure, GCP). Lead cross-functional engineering teams across geographies and mentor senior tech leaders. Drive adoption of microservices, Kubernetes, DevOps, and AI/ML into modern SaaS stacks. Define and execute enterprise security strategies (SOC2, GDPR, HIPAA, ISO 27001). Serve as the technical face to investors, participating in due diligence, fundraising, and M&A processes. Build data engineering pipelines, streaming architectures, and scalable analytics systems. Collaborate with product, compliance, and business teams to align tech roadmaps with business goals. Oversee cost optimization, infrastructure modernization, and technical debt reduction initiatives. Enable international product rollouts including multi-language, multi-region deployments. Lead innovation in AI/ML, IoT, and predictive systems across various industry-specific SaaS offerings.

Required Skills & Expertise: Architecture: Cloud-native, multi-tenant SaaS, microservices, event-driven systems. Languages: Java, Python, Go, Kotlin, TypeScript. Frameworks: Spring Boot, Node.js, .NET Core, React, Angular, Vue.js, PHP. Cloud & Infra: AWS (EKS, Lambda, RDS), Azure, GCP, Terraform, Docker, Kubernetes. Security & Compliance: SOC2, ISO 27001, GDPR, HIPAA, OAuth2, SAML, WAF. DevOps: CI/CD, ArgoCD, GitHub Actions, Helm, observability tools (Grafana, ELK, Datadog). AI/ML: NLP (BERT, SpaCy), Vision (AWS Rekognition), Predictive Models. Messaging & DB: Kafka, RabbitMQ, PostgreSQL, DynamoDB, MongoDB, Redis.

Education & Certifications: B.E. in Computer Science; Executive Program in Digital Transformation; Postgraduate Program in AI/ML; AWS Certified Solutions Architect – Professional; Scrum Alliance.

Key Achievements: Launched SaaS products. Led teams across 7+ countries; helped raise funding. Delivered platforms in GovTech, EdTech, FinTech, InsurTech, Manufacturing.

Ideal Candidate Profile: Deep product mindset with the ability to translate vision into architecture. Hands-on and strategic; excels in startup and scale-up environments. Excellent communicator with board-level presence. Strong understanding of business impact through technology. Immediate joiners needed (please mention joining date, passport number, all education diplomas and degrees, and certifications).

Engagement Types: Full-Time CTO / Fractional CTO / Interim CTO; SaaS Modernization Consultant; Tech Due Diligence Partner for Startups and Investors.

Follow our page for more updates on this and other openings: https://www.linkedin.com/company/kuchoriyatechsoft/?viewAsMember=true and submit your updated CV and previous company experience letter to alexmartinitexpert@gmail.com

Posted 1 day ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies