Job Description

Role & Responsibilities
- Develop, implement, and streamline HR processes and policies to improve operational efficiency.
- Oversee HR systems and databases, ensuring accuracy and data integrity.
- Monitor HR metrics and generate reports to support decision-making and strategic planning.
- Manage employee onboarding and offboarding processes for the UK and India.
- Support employee records management, including updates and maintenance of personal information and employment status.
- Ensure compliance with company policies and with UK employment laws and regulations at the national and local level.
- Serve as a point of contact for HR-related inquiries and issues from employees and management.
- Provide support for performance management, employee relations, and compensation.

Preferred Candidate Profile
- Strong understanding of HR practices, labor laws, and compliance requirements.
- Excellent organizational and project management skills, with the ability to handle multiple priorities and deadlines.
- Strong interpersonal and communication skills, with the ability to work effectively with employees at all levels.
- Proficiency in the Microsoft Office Suite (Word, Excel, PowerPoint) and HR software.
- Ability to analyze HR data and metrics to drive continuous improvement.
- Strong problem-solving skills and attention to detail.
- Experience in a fast-paced or high-growth environment is a plus.

Other Details
- Immediate joiners are preferred.
- Work in the UK shift (8am-4pm UK time).
- Two days of work from the office per week are mandatory.
Job Description

Job Summary
We are seeking a highly experienced DevOps Engineer with a deep focus on AWS and Infrastructure as Code using Terraform. This role requires a self-motivated individual who thrives in a fast-paced, highly technical environment. The ideal candidate can design, implement, and manage scalable cloud infrastructure while taking full ownership of projects from start to finish.

Key Responsibilities
- Design, implement, and manage infrastructure in AWS using Terraform
- Architect and maintain secure, scalable AWS environments, including IAM, EC2, RDS, S3, EKS, and VPCs
- Manage Kubernetes clusters and containerized applications using Docker and EKS
- Support and maintain serverless applications using AWS Lambda, integrated with other AWS services such as S3
- Implement CI/CD pipelines and ensure infrastructure reliability and observability
- Administer Linux systems and create automation scripts using shell scripting
- Develop and manage database infrastructure, especially PostgreSQL, including schema migrations
- Troubleshoot and resolve complex infrastructure and networking issues
- Collaborate with cross-functional teams to deliver secure and robust DevOps solutions

Requirements

Technical Skills
- 2 years of experience in DevOps engineering and AWS
- Expert-level proficiency in Terraform and infrastructure-as-code best practices
- Deep understanding of the AWS ecosystem: IAM roles and permissions; network design and security; EC2, RDS, S3, and EKS
- Strong hands-on experience with Docker and Kubernetes
- Experience building and managing serverless architectures (Lambda, API Gateway, S3)
- Proficiency with Linux, shell scripting, and common DevOps tools
- Familiarity with HTTP(S), DNS, web server configuration, and caching mechanisms
- Solid experience with PostgreSQL and data migration strategies

Soft Skills
- Strong verbal and written communication skills across technical and non-technical stakeholders
- Demonstrated ability to take ownership and drive initiatives independently
- Proactive, self-directed, and highly organized
- Effective problem-solving skills and a detail-oriented mindset

Preferred Qualifications
- Bachelor's degree in Computer Science, Information Security, or a related field, or equivalent practical experience
- AWS certifications (e.g., AWS Certified DevOps Engineer, Solutions Architect)
- Experience with monitoring and logging tools (e.g., CloudWatch, Prometheus, Grafana)
- Familiarity with Agile/Scrum methodologies
- Certifications or experience with ITIL or ISO 20000 frameworks are advantageous
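The role above centres on Terraform-managed AWS infrastructure. As a small illustration of infrastructure as code, Terraform also accepts JSON-syntax configuration files (*.tf.json), so a minimal config can be sketched as a plain Python dict; the bucket name and region below are placeholders, not details from the posting.

```python
import json

# Minimal Terraform config expressed in Terraform's JSON syntax:
# provider block plus one S3 bucket resource. Illustrative only.
config = {
    "terraform": {"required_providers": {"aws": {"source": "hashicorp/aws"}}},
    "provider": {"aws": {"region": "eu-west-2"}},
    "resource": {
        "aws_s3_bucket": {
            "artifacts": {"bucket": "example-artifacts-bucket"}
        }
    },
}

# Writing this dict to main.tf.json yields a file `terraform plan` can read.
print(json.dumps(config, indent=2))
```

Generating config this way is sometimes useful for templating many similar resources, though plain HCL is the more common form.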
Job Information
Date Opened: 30/06/2025
Job Type: Full time
Work Experience: 5+ years
Industry: IT Services
Salary: 40L
City: Bangalore North
Province: Karnataka
Country: India
Postal Code: 560002

Job Description

About the Role:
We are seeking a Senior DevOps Engineer to lead the migration of multiple applications and services into a new AWS environment. This role requires a strategic thinker with hands-on technical expertise, a deep understanding of DevOps best practices, and the ability to guide and mentor other engineers. You will work closely with architects and technical leads to design, plan, and execute cloud-native solutions with a strong emphasis on automation, scalability, security, and performance.

Key Responsibilities:
- Take full ownership of the migration process to AWS, including planning and execution.
- Work closely with architects to define the best approach for migrating applications into Amazon EKS.
- Mentor and guide a team of DevOps Engineers, assigning tasks and ensuring quality execution.
- Design and implement CI/CD pipelines using Jenkins, with an emphasis on security, maintainability, and scalability.
- Integrate static and dynamic code analysis tools (e.g., SonarQube) into the CI/CD process.
- Manage secure access to AWS services using IAM roles, least-privilege principles, and container-based identity (e.g., workload identity).
- Create and manage Helm charts for Kubernetes deployments across multiple environments.
- Conduct data migrations between S3 buckets, PostgreSQL databases, and other data stores, ensuring data integrity and minimal downtime.
- Troubleshoot and resolve infrastructure and deployment issues, both in local containers and Kubernetes clusters.

Required Skills & Expertise:

CI/CD & DevOps Tools:
- Jenkins pipelines (DSL), SonarQube, Nexus or Artifactory
- Shell scripting, Python (with YAML/JSON handling)
- Git and version control best practices

Containers & Kubernetes:
- Docker (multi-stage builds, non-root containers, troubleshooting)
- Kubernetes (services, ingress, service accounts, RBAC, DNS, Helm)

Cloud Infrastructure (AWS):
- AWS services: EC2, EKS, S3, IAM, Secrets Manager, Route 53, WAF, KMS, RDS, VPC, Load Balancers
- Experience with IAM roles, workload identities, and secure AWS access patterns
- Network fundamentals: subnets, security groups, NAT, TLS/SSL, CA certificates, DNS routing

Databases:
- PostgreSQL: pg_dump/pg_restore, user management, RDS troubleshooting

Web & Security Concepts:
- NGINX, web servers, reverse proxies, path-based/host-based routing
- Session handling, load balancing (stateful vs stateless)
- Security best practices, OWASP Top 10, WAF (configuration/tuning), network-level security, RBAC, IAM policies

Candidate Expectations:
The ideal candidate should be able to:
- Explain best practices around CI/CD pipeline design and secure AWS integrations.
- Demonstrate complex scripting solutions and data-processing tasks in Bash and Python.
- Describe container lifecycle, troubleshooting steps, and security-hardening practices.
- Detail Kubernetes architecture, Helm chart design, and access control configurations.
- Show a deep understanding of AWS IAM, networking, service integrations, and cost-conscious design.
- Discuss TLS certificate lifecycle, trusted CA usage, and implementation in cloud-native environments.

Preferred Qualifications:
- AWS Certified DevOps Engineer or equivalent certifications.
- Experience in FinTech, SaaS, or other regulated industries.
- Knowledge of cost optimization strategies in cloud environments.
- Familiarity with Agile/Scrum methodologies.
- Certifications or experience with ITIL or ISO 20000 frameworks are advantageous.
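The posting lists pg_dump/pg_restore for PostgreSQL data migrations. As a hedged sketch, the typical dump-and-restore command lines can be built in Python; this only constructs the argument lists (it does not run anything), and the hostnames and database names are invented placeholders. A custom-format dump (-Fc) is used because pg_restore reads only the custom, directory, and tar formats, not plain SQL.

```python
# Build pg_dump/pg_restore command lines for an RDS-to-RDS migration.
# Illustrative only: hosts and db names are placeholders, and the
# commands are returned, not executed (pass them to subprocess.run).

def dump_cmd(host, db, outfile):
    # -Fc: custom archive format, required for pg_restore; -f: output file
    return ["pg_dump", "-h", host, "-Fc", "-f", outfile, db]

def restore_cmd(host, db, infile):
    # --no-owner avoids failures when the target role names differ
    return ["pg_restore", "-h", host, "-d", db, "--no-owner", infile]

print(dump_cmd("old-db.example.internal", "appdb", "appdb.dump"))
print(restore_cmd("new-db.example.internal", "appdb", "appdb.dump"))
```

In practice one would also handle credentials (e.g., via PGPASSFILE) and verify row counts after the restore to confirm data integrity.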
Job Information
Date Opened: 21/07/2025
Job Type: Permanent
Work Experience: 5+ years
Industry: IT Services
Salary: 25LPA
City: Bangalore North
Province: Karnataka
Country: India
Postal Code: 560002

Job Description

About the Role:
We are seeking a DevOps Engineer to lead the migration of multiple applications and services into a new AWS environment. This role requires a strategic thinker with hands-on technical expertise, a deep understanding of DevOps best practices, and the ability to guide and mentor other engineers. You will work closely with architects and technical leads to design, plan, and execute cloud-native solutions with a strong emphasis on automation, scalability, security, and performance.

Key Responsibilities:
- Take full ownership of the migration process to AWS, including planning and execution.
- Work closely with architects to define the best approach for migrating applications into Amazon EKS.
- Mentor and guide a team of DevOps Engineers, assigning tasks and ensuring quality execution.
- Design and implement CI/CD pipelines using Jenkins, with an emphasis on security, maintainability, and scalability.
- Integrate static and dynamic code analysis tools (e.g., SonarQube) into the CI/CD process.
- Manage secure access to AWS services using IAM roles, least-privilege principles, and container-based identity (e.g., workload identity).
- Create and manage Helm charts for Kubernetes deployments across multiple environments.
- Conduct data migrations between S3 buckets, PostgreSQL databases, and other data stores, ensuring data integrity and minimal downtime.
- Troubleshoot and resolve infrastructure and deployment issues, both in local containers and Kubernetes clusters.

Required Skills & Expertise:

CI/CD & DevOps Tools:
- Jenkins pipelines (DSL), SonarQube, Nexus or Artifactory
- Shell scripting, Python (with YAML/JSON handling)
- Git and version control best practices

Containers & Kubernetes:
- Docker (multi-stage builds, non-root containers, troubleshooting)
- Kubernetes (services, ingress, service accounts, RBAC, DNS, Helm)

Cloud Infrastructure (AWS):
- AWS services: EC2, EKS, S3, IAM, Secrets Manager, Route 53, WAF, KMS, RDS, VPC, Load Balancers
- Experience with IAM roles, workload identities, and secure AWS access patterns
- Network fundamentals: subnets, security groups, NAT, TLS/SSL, CA certificates, DNS routing

Databases:
- PostgreSQL: pg_dump/pg_restore, user management, RDS troubleshooting

Web & Security Concepts:
- NGINX, web servers, reverse proxies, path-based/host-based routing
- Session handling, load balancing (stateful vs stateless)
- Security best practices, OWASP Top 10, WAF (configuration/tuning), network-level security, RBAC, IAM policies

Candidate Expectations:
The ideal candidate should be able to:
- Explain best practices around CI/CD pipeline design and secure AWS integrations.
- Demonstrate complex scripting solutions and data-processing tasks in Bash and Python.
- Describe container lifecycle, troubleshooting steps, and security-hardening practices.
- Detail Kubernetes architecture, Helm chart design, and access control configurations.
- Show a deep understanding of AWS IAM, networking, service integrations, and cost-conscious design.
- Discuss TLS certificate lifecycle, trusted CA usage, and implementation in cloud-native environments.

Preferred Qualifications:
- AWS Certified DevOps Engineer or equivalent certifications.
- Experience in FinTech, SaaS, or other regulated industries.
- Knowledge of cost optimization strategies in cloud environments.
- Familiarity with Agile/Scrum methodologies.
- Certifications or experience with ITIL or ISO 20000 frameworks are advantageous.
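The Kubernetes access-control configurations the posting asks candidates to explain can be illustrated with RBAC. Kubernetes accepts JSON manifests as well as YAML, so a namespaced read-only Role and its RoleBinding can be sketched as plain dicts; the names and namespace below are invented for the example.

```python
import json

# Sketch of Kubernetes RBAC: a Role granting read access to pods in one
# namespace, bound to a ServiceAccount. Names/namespace are illustrative.
role = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "Role",
    "metadata": {"name": "pod-reader", "namespace": "apps"},
    "rules": [{
        "apiGroups": [""],                       # "" = the core API group
        "resources": ["pods", "pods/log"],
        "verbs": ["get", "list", "watch"],       # read-only verbs
    }],
}
binding = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "RoleBinding",
    "metadata": {"name": "ci-pod-reader", "namespace": "apps"},
    "subjects": [{"kind": "ServiceAccount", "name": "ci-bot",
                  "namespace": "apps"}],
    "roleRef": {"apiGroup": "rbac.authorization.k8s.io",
                "kind": "Role", "name": "pod-reader"},
}

# Either dict, written to a file, can be applied with `kubectl apply -f`.
print(json.dumps([role, binding], indent=2))
```

Keeping the Role namespaced (rather than using a ClusterRole) is one way to apply the least-privilege principle the posting emphasises.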
Job Information
Date Opened: 05/08/2025
Job Type: Permanent
Work Experience: 5+ years
Industry: Government & Public Sector
Salary: 35 Lakhs
City: Bangalore North
Province: Karnataka
Country: India
Postal Code: 560002

Job Description

About Scrumconnect Consulting:
Scrumconnect Consulting, a multi-award-winning firm recognized with UKIT awards such as Best Public Sector IT Project, Digital Transformation Project of the Year, and a Special Award for Organisational Excellence during the pandemic, is at the forefront of innovation in tech consulting. Our work impacts over 40 million UK citizens, with successful projects in key government departments such as the Department for Work and Pensions, Ministry of Justice, HM Passport Office, and more.

Overview:
We are seeking a seasoned Data Lead to own and deliver the data strategy for a government client whose services are used by over 18 million citizens, encompassing three newly rebuilt benefits systems. You will define where our data capabilities are today, set the future vision, and architect the roadmap to get there, then design the data team to execute it.

About the Role:
This is a senior, hands-on role for someone who loves building end-to-end AI solutions. You will work closely with the engineering and product teams to develop robust machine learning models and AI-driven features. This role is ideal for candidates who enjoy working in fast-paced environments, taking ownership of technical components, and solving real-world problems with AI.

Responsibilities:

AI Development & Integration:
- Develop, train, and optimize models in NLP, deep learning, or LLMs
- Build full model pipelines: data preprocessing, model training, evaluation, and deployment
- Work with tools like PyTorch, TensorFlow, Hugging Face, or JAX
- Integrate AI models into production environments with a focus on scalability and performance
- Contribute to AI feature prototyping and experimentation

AI Infrastructure & Deployment:
- Use cloud platforms (AWS, GCP, or Azure) to deploy AI solutions
- Utilize Docker and Kubernetes for containerized deployments
- Maintain model reliability, version control, and performance monitoring in production

Data Engineering & MLOps:
- Work on ETL processes and data pipelines to support model development
- Follow MLOps best practices for CI/CD, model tracking, and monitoring
- Ensure performance, scalability, and maintainability of AI systems

Requirements

Minimum Qualifications:
- 7+ years of experience in software or AI/ML engineering roles
- Proven experience building and deploying ML models, especially in NLP or deep learning
- Proficient in Python and familiar with libraries like PyTorch, TensorFlow, or Hugging Face
- Experience with cloud services (AWS, GCP, or Azure) and container tools (Docker, Kubernetes)
- Good understanding of data structures, algorithms, and distributed systems

Preferred Qualifications:
- Experience working with large language models (LLMs) such as GPT, BERT, etc.
- Exposure to MLOps tools (e.g., MLflow, Weights & Biases, Kubeflow)
- Background in legal-tech or regulated domains (a plus, not required)
- Experience working in fast-paced startup environments or early-stage product development
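The "full model pipeline" stages named above (preprocessing, training, evaluation) can be sketched end to end. This is a toy, pure-Python stand-in, a nearest-centroid classifier on made-up data, not the PyTorch/TensorFlow stack the posting actually calls for; every name and number here is illustrative.

```python
# Toy model pipeline: preprocess -> train -> evaluate.
# A nearest-centroid classifier on synthetic data; real pipelines would
# use PyTorch/TensorFlow models and proper train/validation splits.

def preprocess(rows):
    """Scale each feature to [0, 1] (a stand-in for real preprocessing)."""
    cols = list(zip(*rows))
    lo, hi = [min(c) for c in cols], [max(c) for c in cols]
    return [[(v - l) / (h - l) if h > l else 0.0
             for v, l, h in zip(r, lo, hi)] for r in rows]

def train(X, y):
    """'Training' here is computing one centroid per class."""
    centroids = {}
    for label in set(y):
        pts = [x for x, lab in zip(X, y) if lab == label]
        centroids[label] = [sum(c) / len(pts) for c in zip(*pts)]
    return centroids

def predict(centroids, x):
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    return min(centroids, key=lambda lab: dist(centroids[lab], x))

def evaluate(centroids, X, y):
    """Accuracy on a labelled set (the evaluation stage)."""
    hits = sum(predict(centroids, x) == lab for x, lab in zip(X, y))
    return hits / len(y)

X = preprocess([[1, 1], [2, 1], [8, 9], [9, 8]])
y = [0, 0, 1, 1]
model = train(X, y)
print(evaluate(model, X, y))  # trivially separable toy data -> 1.0
```

The deployment stage would then serve `predict` behind an API, which is where the Docker/Kubernetes responsibilities above come in.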
Job Information
Date Opened: 27/08/2025
Job Type: Permanent
Work Experience: 5+ years
Industry: IT Services
Salary: 10-15 LPA
City: Bangalore North
Province: Karnataka
Country: India
Postal Code: 560052

Job Description

Position Overview
We are seeking a passionate AI Automation Test Engineer to join our innovative UK-based stealth startup in Bengaluru. This role offers an exciting opportunity to shape the testing strategy for cutting-edge AI products while working in a fast-paced, dynamic environment. You'll be instrumental in ensuring the quality and reliability of our AI-driven solutions through comprehensive automated testing frameworks.

Key Responsibilities

AI/ML Testing
- Design and implement automated testing frameworks specifically for AI/ML models and systems
- Develop test strategies for model performance, accuracy, bias detection, and robustness
- Create automated pipelines for data validation, model training verification, and inference testing
- Implement A/B testing frameworks for AI model comparison and evaluation

Test Automation Development
- Build and maintain end-to-end test automation suites using modern testing frameworks
- Develop API testing automation for microservices and ML model endpoints
- Create performance and load testing automation for AI inference systems
- Design automated regression testing for continuous integration/deployment pipelines

Quality Assurance
- Collaborate with AI/ML engineers to define testing requirements and acceptance criteria
- Establish quality gates and metrics for AI model deployment
- Conduct exploratory testing of AI features and user experiences
- Monitor and analyse test results, providing actionable insights to development teams

Infrastructure & DevOps
- Set up and maintain testing environments and test data management systems
- Integrate automated tests with CI/CD pipelines using tools like Jenkins, ArgoCD, or GitHub Actions
- Implement monitoring and alerting for test execution and system health
- Manage test data pipelines and synthetic data generation for AI model testing

Required Qualifications

Technical Skills
- Programming: Proficiency in Python, with experience in testing frameworks (pytest, unittest, Robot Framework)
- AI/ML Testing: Understanding of machine learning concepts, model evaluation metrics, and testing methodologies
- Automation Tools: Hands-on experience with Selenium, Appium, or similar web/mobile automation tools
- API Testing: Experience with REST/GraphQL API testing using tools like Postman, Newman, or the requests library
- Version Control: Proficient with Git and collaborative development workflows

Experience Requirements
- 3-6 years of experience in test automation and quality assurance
- 1-2 years of experience testing AI/ML systems or data-driven applications
- Experience with cloud platforms (AWS, Azure, or GCP) and containerization (Docker, Kubernetes)
- Familiarity with databases (SQL and NoSQL) and data validation techniques

Preferred Qualifications
- Experience with MLOps tools and practices (MLflow, Kubeflow, or similar)
- Knowledge of performance testing tools (JMeter, Locust, or K6)
- Understanding of data science workflows and model lifecycle management
- Experience with monitoring tools (Prometheus, Grafana, ELK stack)
- Background in statistics or data analysis

What We Offer

Compensation & Benefits
- Competitive salary up to ₹15,00,000 per annum
- Performance-based bonuses and annual increments
- Comprehensive health insurance for you and your family

Professional Growth
- Opportunity to work with cutting-edge AI technology in stealth mode
- Direct impact on product development and company direction
- Mentorship from experienced AI and engineering leaders
- Learning budget for courses, conferences, and certifications

Work Environment
- Modern office space in Bengaluru with all necessary amenities
- Collaborative, innovation-driven culture
- Regular team events and knowledge-sharing sessions

About the Role
As an early-stage team member, you'll have the unique opportunity to build testing practices from the ground up, directly influence product quality, and grow with the company. This position is ideal for someone who thrives in ambiguous environments, enjoys solving complex technical challenges, and wants to be part of building something revolutionary in the AI space.

Note: Due to our stealth-mode status, specific product details will be shared during the interview process with qualified candidates who sign appropriate confidentiality agreements.

Application Process
Interested candidates should submit their resume along with:
- A brief cover letter explaining their interest in AI testing and startup environments
- Any relevant certifications or achievements in testing or AI/ML domains

We are an equal opportunity employer committed to diversity and inclusion. All qualified applicants will receive consideration regardless of race, gender, age, religion, sexual orientation, or disability status.
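The "quality gates and metrics for AI model deployment" mentioned above can be made concrete with a pytest-style check: compute evaluation metrics for a model's predictions and fail the pipeline when they drop below agreed thresholds. The metrics, data, and thresholds below are invented for the example.

```python
# Sketch of an automated model-quality gate: metric helpers plus a
# pytest-style test that fails if the model regresses below thresholds.
# Labels and thresholds are illustrative, not from any real system.

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def precision(y_true, y_pred, positive=1):
    tp = sum(t == p == positive for t, p in zip(y_true, y_pred))
    predicted_pos = sum(p == positive for p in y_pred)
    return tp / predicted_pos if predicted_pos else 0.0

def test_model_meets_quality_gate():
    # In CI these would come from running the candidate model on a
    # held-out evaluation set; here they are hard-coded toy values.
    y_true = [1, 0, 1, 1, 0, 1, 0, 0]
    y_pred = [1, 0, 1, 0, 0, 1, 0, 1]  # two mistakes out of eight
    assert accuracy(y_true, y_pred) >= 0.7
    assert precision(y_true, y_pred) >= 0.7

test_model_meets_quality_gate()
print("quality gate passed")
```

Run under pytest, a failed assertion here would block the deployment stage of the CI/CD pipeline, which is the behaviour a quality gate is meant to provide.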
Job Information
Date Opened: 27/08/2025
Job Type: Permanent
Work Experience: 5+ years
Industry: IT Services
Salary: 15-20 LPA
City: Bangalore
Province: Karnataka
Country: India
Postal Code: 560052

Job Description

Position Overview
We are seeking an experienced DevOps Engineer to architect and manage the infrastructure backbone of our revolutionary AI startup. This role offers an exceptional opportunity to build scalable, secure, and efficient systems that power next-generation AI applications. You'll work directly with our founding team to establish DevOps practices that will scale from MVP to enterprise-level solutions.

Key Responsibilities

Infrastructure & Cloud Management
- Design and implement scalable cloud infrastructure on AWS, Azure, or GCP for AI/ML workloads
- Architect and manage Kubernetes clusters optimised for ML training and inference
- Build and maintain infrastructure as code using Terraform, CloudFormation, or Pulumi
- Implement auto-scaling solutions for variable AI compute demands
- Manage GPU clusters and specialized hardware for deep learning workloads

MLOps & AI Pipeline Management
- Design and implement CI/CD pipelines specifically for machine learning model deployment
- Build automated model training, validation, and deployment workflows
- Implement model versioning, experiment tracking, and artifact management systems
- Set up monitoring and alerting for ML model performance and data drift detection
- Create disaster recovery and rollback strategies for AI model deployments

Platform Engineering
- Develop internal developer platforms and self-service tools for the engineering team
- Implement secure API gateways and microservices architecture for AI applications
- Build and maintain data pipelines for real-time and batch processing
- Design secrets management and security policies for sensitive AI data and models
- Establish logging, monitoring, and observability across all systems

Security & Compliance
- Implement security best practices for AI systems and sensitive data handling
- Design and maintain network security, firewalls, and VPN configurations
- Establish backup and disaster recovery procedures for critical AI infrastructure
- Ensure compliance with data protection regulations and industry standards
- Conduct regular security audits and vulnerability assessments

Performance Optimization
- Monitor and optimize infrastructure costs, especially for expensive GPU resources
- Implement caching strategies for AI inference and data processing
- Optimize container orchestration for maximum resource utilization
- Performance-tune databases and storage systems for AI workloads
- Establish SLA monitoring and capacity-planning procedures

Required Qualifications

Technical Expertise
- Cloud Platforms: 4+ years of hands-on experience with AWS, Azure, or GCP
- Containerization: Expert-level Docker and Kubernetes skills with production experience
- Infrastructure as Code: Proficiency with Terraform, Ansible, or similar tools
- CI/CD: Experience building robust pipelines using Jenkins, GitLab CI, GitHub Actions, or Azure DevOps
- Programming: Strong scripting skills in Python and Bash, and familiarity with Go or Java

AI/ML Infrastructure Knowledge
- Experience deploying and managing ML models in production environments
- Understanding of GPU computing, CUDA, and specialized AI hardware
- Familiarity with ML frameworks (TensorFlow, PyTorch, Scikit-learn) and their deployment requirements
- Knowledge of data engineering tools and big-data processing (Spark, Kafka, Airflow)
- Experience with ML model serving platforms (MLflow, Kubeflow, Seldon, or TensorFlow Serving)

DevOps Fundamentals
- 5-8 years of DevOps/SRE experience with demonstrated expertise in production systems
- Strong Linux administration skills and system performance optimization
- Experience with monitoring tools (Prometheus, Grafana, ELK/EFK stack, Datadog)
- Database management experience (PostgreSQL, MongoDB, Redis) with backup/recovery
- Network engineering knowledge, including load balancers, CDNs, and service meshes

Preferred Qualifications
- Previous experience in AI/ML startups or high-growth technology companies
- Certifications in cloud platforms (AWS Solutions Architect, Azure DevOps Engineer, etc.)
- Experience with edge computing and distributed AI inference systems
- Knowledge of data privacy frameworks and federated learning infrastructure
- Familiarity with FinOps practices for cloud cost optimization
- Experience with service mesh technologies (Istio, Linkerd, Consul Connect)

What We Offer

Compensation & Benefits
- Competitive salary up to ₹20,00,000 per annum
- Comprehensive health insurance with family coverage and wellness benefits

Technical Growth
- Access to cutting-edge AI infrastructure and the latest cloud technologies
- Opportunity to shape the technical architecture of a groundbreaking AI product
- Direct collaboration with world-class AI researchers and engineers
- Mentorship from experienced startup founders and tech leaders

Work Environment
- Flexible working arrangements with hybrid and remote options
- Modern office in Bengaluru with high-end development workstations
- Unlimited learning resources and access to cloud credits for experimentation
- Fast-paced, innovation-driven culture with direct impact on product success
- Regular tech talks, hackathons, and team-building activities

Career Impact
- Ground-floor opportunity in a stealth-mode AI company with massive potential
- Chance to build infrastructure that will serve millions of users
- Direct reporting to the CTO/founders with significant decision-making authority
- Opportunity to lead and build the DevOps team as the company scales
- Potential for international expansion and technology leadership roles

About This Opportunity
Join us at the most exciting phase of our journey. As one of our first DevOps hires, you'll have unprecedented influence over our technical infrastructure and engineering culture. This role is perfect for someone who wants to combine deep technical expertise with entrepreneurial impact in the rapidly evolving AI landscape.

You'll work on challenging problems like:
- Scaling AI training from single GPUs to multi-node clusters
- Implementing real-time AI inference at global scale
- Building secure, compliant infrastructure for sensitive AI applications
- Optimizing costs while maintaining high performance for variable AI workloads

Required Mindset
- Strong problem-solving skills with the ability to debug complex distributed systems
- Excellent communication skills for cross-functional collaboration
- Passion for automation, efficiency, and engineering excellence
- Interest in AI/ML technology and its infrastructure challenges

Note: Due to our stealth-mode status, specific product and technology details will be shared during the interview process with qualified candidates who execute appropriate NDAs.

Application Requirements
Please submit:
- A detailed resume highlighting relevant DevOps and AI infrastructure experience
- A GitHub/GitLab profile showcasing infrastructure code and automation projects
- A brief cover letter explaining your interest in AI DevOps and startup environments
- Any relevant cloud certifications, case studies, or technical blog posts

We are committed to building a diverse and inclusive team. All qualified applicants will receive equal consideration regardless of race, gender, age, religion, sexual orientation, disability status, or veteran status.
Job Information
- Date Opened: 27/08/2025
- Job Type: Permanent
- Work Experience: 5+ years
- Industry: IT Services
- Salary: 10-15 LPA
- City: Bangalore North
- Province: Karnataka
- Country: India
- Postal Code: 560052

Job Description

Position Overview
We are seeking a passionate AI Automation Test Engineer to join our innovative UK-based stealth startup in Bengaluru. This role offers an exciting opportunity to shape the testing strategy for cutting-edge AI products while working in a fast-paced, dynamic environment. You'll be instrumental in ensuring the quality and reliability of our AI-driven solutions through comprehensive automated testing frameworks.

Key Responsibilities

AI/ML Testing
- Design and implement automated testing frameworks specifically for AI/ML models and systems
- Develop test strategies for model performance, accuracy, bias detection, and robustness
- Create automated pipelines for data validation, model training verification, and inference testing
- Implement A/B testing frameworks for AI model comparison and evaluation

Test Automation Development
- Build and maintain end-to-end test automation suites using modern testing frameworks
- Develop API testing automation for microservices and ML model endpoints
- Create performance and load testing automation for AI inference systems
- Design automated regression testing for continuous integration/deployment pipelines

Quality Assurance
- Collaborate with AI/ML engineers to define testing requirements and acceptance criteria
- Establish quality gates and metrics for AI model deployment
- Conduct exploratory testing of AI features and user experiences
- Monitor and analyse test results, providing actionable insights to development teams

Infrastructure & DevOps
- Set up and maintain testing environments and test data management systems
- Integrate automated tests with CI/CD pipelines using tools like Jenkins, ArgoCD, or GitHub Actions
- Implement monitoring and alerting for test execution and system health
- Manage test data pipelines and synthetic data generation for AI model testing

Required Qualifications

Technical Skills
- Programming: Proficiency in Python, with experience in testing frameworks (pytest, unittest, Robot Framework)
- AI/ML Testing: Understanding of machine learning concepts, model evaluation metrics, and testing methodologies
- Automation Tools: Hands-on experience with Selenium, Appium, or similar web/mobile automation tools
- API Testing: Experience with REST/GraphQL API testing using tools like Postman, Newman, or the requests library
- Version Control: Proficient with Git and collaborative development workflows

Experience Requirements
- 3-6 years of experience in test automation and quality assurance
- 1-2 years of experience testing AI/ML systems or data-driven applications
- Experience with cloud platforms (AWS, Azure, or GCP) and containerization (Docker, Kubernetes)
- Familiarity with databases (SQL and NoSQL) and data validation techniques

Preferred Qualifications
- Experience with MLOps tools and practices (MLflow, Kubeflow, or similar)
- Knowledge of performance testing tools (JMeter, Locust, or K6)
- Understanding of data science workflows and model lifecycle management
- Experience with monitoring tools (Prometheus, Grafana, ELK stack)
- Background in statistics or data analysis

What We Offer

Compensation & Benefits
- Competitive salary up to ₹15,00,000 per annum
- Performance-based bonuses and annual increments
- Comprehensive health insurance for you and your family

Professional Growth
- Opportunity to work with cutting-edge AI technology in stealth mode
- Direct impact on product development and company direction
- Mentorship from experienced AI and engineering leaders
- Learning budget for courses, conferences, and certifications

Work Environment
- Modern office space in Bengaluru with all necessary amenities
- Collaborative, innovation-driven culture
- Regular team events and knowledge-sharing sessions

About the Role
As an early-stage team member, you'll have the unique opportunity to build testing practices from the ground up, directly influence product quality, and grow with the company. This position is ideal for someone who thrives in ambiguous environments, enjoys solving complex technical challenges, and wants to be part of building something revolutionary in the AI space.

Note: Due to our stealth mode status, specific product details will be shared during the interview process with qualified candidates who sign appropriate confidentiality agreements.

Application Process
Interested candidates should submit their resume along with:
- A brief cover letter explaining interest in AI testing and startup environments
- Any relevant certifications or achievements in testing or AI/ML domains

We are an equal opportunity employer committed to diversity and inclusion. All qualified applicants will receive consideration regardless of race, gender, age, religion, sexual orientation, or disability status.
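The model-evaluation responsibilities in this posting (accuracy metrics, quality gates for deployment) can be sketched as a pytest-style test. The `predict` stub, the evaluation set, and the 0.9 threshold below are illustrative assumptions, not details from the posting; in practice the stub would be a call to a real model endpoint:

```python
# Minimal sketch of an automated accuracy gate for an ML model,
# of the kind used as a CI quality gate before deployment.

def predict(features):
    # Stand-in for a real model inference call (e.g. a REST endpoint).
    return 1 if sum(features) > 0 else 0

def accuracy(model, dataset):
    """Fraction of (features, label) examples the model classifies correctly."""
    correct = sum(1 for x, y in dataset if model(x) == y)
    return correct / len(dataset)

# Hypothetical labelled evaluation set: (features, expected_label).
EVAL_SET = [
    ([0.5, 0.7], 1),
    ([-0.2, -0.9], 0),
    ([1.2, -0.1], 1),
    ([-1.0, 0.3], 0),
]

def test_model_meets_accuracy_gate():
    # 0.9 is an assumed release threshold; a real gate would come from
    # the team's agreed acceptance criteria.
    acc = accuracy(predict, EVAL_SET)
    assert acc >= 0.9, f"model accuracy {acc:.2f} below release gate"
```

Run under pytest, such a test fails the pipeline whenever a new model revision drops below the agreed metric, which is one way the "quality gates and metrics for AI model deployment" bullet is commonly realised.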
Job Information
- Date Opened: 27/08/2025
- Job Type: Permanent
- Work Experience: 5+ years
- Industry: IT Services
- Salary: 15-20 LPA
- City: Bangalore
- Province: Karnataka
- Country: India
- Postal Code: 560052

Job Description

Position Overview
We are seeking an experienced DevOps Engineer to architect and manage the infrastructure backbone of our revolutionary AI startup. This role offers an exceptional opportunity to build scalable, secure, and efficient systems that power next-generation AI applications. You'll work directly with our founding team to establish DevOps practices that will scale from MVP to enterprise-level solutions.

Key Responsibilities

Infrastructure & Cloud Management
- Design and implement scalable cloud infrastructure on AWS, Azure, or GCP for AI/ML workloads
- Architect and manage Kubernetes clusters optimised for ML training and inference
- Build and maintain infrastructure as code using Terraform, CloudFormation, or Pulumi
- Implement auto-scaling solutions for variable AI compute demands
- Manage GPU clusters and specialized hardware for deep learning workloads

MLOps & AI Pipeline Management
- Design and implement CI/CD pipelines specifically for machine learning model deployment
- Build automated model training, validation, and deployment workflows
- Implement model versioning, experiment tracking, and artifact management systems
- Set up monitoring and alerting for ML model performance and data drift detection
- Create disaster recovery and rollback strategies for AI model deployments

Platform Engineering
- Develop internal developer platforms and self-service tools for the engineering team
- Implement secure API gateways and microservices architecture for AI applications
- Build and maintain data pipelines for real-time and batch processing
- Design secrets management and security policies for sensitive AI data and models
- Establish logging, monitoring, and observability across all systems

Security & Compliance
- Implement security best practices for AI systems and sensitive data handling
- Design and maintain network security, firewalls, and VPN configurations
- Establish backup and disaster recovery procedures for critical AI infrastructure
- Ensure compliance with data protection regulations and industry standards
- Conduct regular security audits and vulnerability assessments

Performance Optimization
- Monitor and optimize infrastructure costs, especially for expensive GPU resources
- Implement caching strategies for AI inference and data processing
- Optimize container orchestration for maximum resource utilization
- Performance-tune databases and storage systems for AI workloads
- Establish SLA monitoring and capacity planning procedures

Required Qualifications

Technical Expertise
- Cloud Platforms: 4+ years hands-on experience with AWS, Azure, or GCP
- Containerization: Expert-level Docker and Kubernetes skills with production experience
- Infrastructure as Code: Proficiency with Terraform, Ansible, or similar tools
- CI/CD: Experience building robust pipelines using Jenkins, GitLab CI, GitHub Actions, or Azure DevOps
- Programming: Strong scripting skills in Python and Bash, with familiarity with Go or Java

AI/ML Infrastructure Knowledge
- Experience deploying and managing ML models in production environments
- Understanding of GPU computing, CUDA, and specialized AI hardware
- Familiarity with ML frameworks (TensorFlow, PyTorch, Scikit-learn) and their deployment requirements
- Knowledge of data engineering tools and big data processing (Spark, Kafka, Airflow)
- Experience with ML model serving platforms (MLflow, Kubeflow, Seldon, or TensorFlow Serving)

DevOps Fundamentals
- 5-8 years of DevOps/SRE experience with demonstrated expertise in production systems
- Strong Linux administration skills and system performance optimization
- Experience with monitoring tools (Prometheus, Grafana, ELK/EFK stack, Datadog)
- Database management experience (PostgreSQL, MongoDB, Redis) with backup/recovery
- Network engineering knowledge including load balancers, CDNs, and service meshes

Preferred Qualifications
- Previous experience in AI/ML startups or high-growth technology companies
- Certifications in cloud platforms (AWS Solutions Architect, Azure DevOps Engineer, etc.)
- Experience with edge computing and distributed AI inference systems
- Knowledge of data privacy frameworks and federated learning infrastructure
- Familiarity with FinOps practices for cloud cost optimization
- Experience with service mesh technologies (Istio, Linkerd, Consul Connect)

What We Offer

Compensation & Benefits
- Competitive salary up to ₹20,00,000 per annum
- Comprehensive health insurance with family coverage and wellness benefits

Technical Growth
- Access to cutting-edge AI infrastructure and the latest cloud technologies
- Opportunity to shape the technical architecture of a groundbreaking AI product
- Direct collaboration with world-class AI researchers and engineers
- Mentorship from experienced startup founders and tech leaders

Work Environment
- Flexible working arrangements with hybrid and remote options
- Modern office in Bengaluru with high-end development workstations
- Unlimited learning resources and access to cloud credits for experimentation
- Fast-paced, innovation-driven culture with direct impact on product success
- Regular tech talks, hackathons, and team-building activities

Career Impact
- Ground-floor opportunity in a stealth-mode AI company with massive potential
- Chance to build infrastructure that will serve millions of users
- Direct reporting to CTO/Founders with significant decision-making authority
- Opportunity to lead and build the DevOps team as the company scales
- Potential for international expansion and technology leadership roles

About This Opportunity
Join us at the most exciting phase of our journey. As one of our first DevOps hires, you'll have unprecedented influence over our technical infrastructure and engineering culture. This role is perfect for someone who wants to combine deep technical expertise with entrepreneurial impact in the rapidly evolving AI landscape.

You'll work on challenging problems like:
- Scaling AI training from single GPUs to multi-node clusters
- Implementing real-time AI inference at global scale
- Building secure, compliant infrastructure for sensitive AI applications
- Optimizing costs while maintaining high performance for variable AI workloads

Required Mindset
- Strong problem-solving skills with the ability to debug complex distributed systems
- Excellent communication skills for cross-functional collaboration
- Passion for automation, efficiency, and engineering excellence
- Interest in AI/ML technology and its infrastructure challenges

Note: Due to our stealth mode status, specific product and technology details will be shared during the interview process with qualified candidates who execute appropriate NDAs.

Application Requirements
Please submit:
- A detailed resume highlighting relevant DevOps and AI infrastructure experience
- A GitHub/GitLab profile showcasing infrastructure code and automation projects
- A brief cover letter explaining your interest in AI DevOps and startup environments
- Any relevant cloud certifications, case studies, or technical blog posts

We are committed to building a diverse and inclusive team. All qualified applicants will receive equal consideration regardless of race, gender, age, religion, sexual orientation, disability status, or veteran status.
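The auto-scaling and GPU cost-optimization duties in this posting amount to a control loop over utilization metrics. Below is a minimal, self-contained sketch of the decision logic only; the thresholds and node bounds are assumed example values, and in production the inputs would come from a metrics system (CloudWatch, Prometheus) and the output would drive a cluster autoscaler:

```python
# Illustrative scale-up/scale-down decision for a GPU node pool.
# All numbers here are assumptions for the sketch, not values from
# the posting or from any specific autoscaler.

SCALE_UP_THRESHOLD = 0.80    # mean GPU utilization above this -> add a node
SCALE_DOWN_THRESHOLD = 0.30  # mean GPU utilization below this -> drop a node
MIN_NODES, MAX_NODES = 1, 8  # pool bounds that cap GPU spend

def desired_nodes(current_nodes, gpu_utilizations):
    """Return the target node count given per-node GPU utilization (0.0-1.0)."""
    mean_util = sum(gpu_utilizations) / len(gpu_utilizations)
    if mean_util > SCALE_UP_THRESHOLD:
        return min(current_nodes + 1, MAX_NODES)
    if mean_util < SCALE_DOWN_THRESHOLD:
        return max(current_nodes - 1, MIN_NODES)
    return current_nodes

# Example: a hot pool grows, an idle pool shrinks, a balanced pool holds.
grow = desired_nodes(3, [0.90, 0.95, 0.88])   # -> 4
shrink = desired_nodes(3, [0.10, 0.20, 0.15]) # -> 2
hold = desired_nodes(3, [0.50, 0.55, 0.60])   # -> 3
```

Stepping one node at a time with hard bounds is a deliberately conservative policy; real autoscalers add cooldown windows and scale proportionally to demand, but the threshold-plus-bounds shape is the same.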
Job Information
- Date Opened: 11/09/2025
- Job Type: Permanent
- Work Experience: 1-3 years
- Industry: Public Sector and Government
- Salary: 12 LPA
- City: Bangalore North
- Province: Karnataka
- Country: India
- Postal Code: 560002

Job Description

About the Role
We are seeking a DevOps Engineer to lead the migration of multiple applications and services into a new AWS environment. This role requires a strategic thinker with hands-on technical expertise, a deep understanding of DevOps best practices, and the ability to guide and mentor other engineers. You will work closely with architects and technical leads to design, plan, and execute cloud-native solutions with a strong emphasis on automation, scalability, security, and performance.

Key Responsibilities
- Take full ownership of the migration process to AWS, including planning and execution.
- Work closely with architects to define the best approach for migrating applications into Amazon EKS.
- Mentor and guide a team of DevOps Engineers, assigning tasks and ensuring quality execution.
- Design and implement CI/CD pipelines using Jenkins, with an emphasis on security, maintainability, and scalability.
- Integrate static and dynamic code analysis tools (e.g., SonarQube) into the CI/CD process.
- Manage secure access to AWS services using IAM roles, least-privilege principles, and container-based identity (e.g., workload identity).
- Create and manage Helm charts for Kubernetes deployments across multiple environments.
- Conduct data migrations between S3 buckets, PostgreSQL databases, and other data stores, ensuring data integrity and minimal downtime.
- Troubleshoot and resolve infrastructure and deployment issues, both in local containers and Kubernetes clusters.

Required Skills & Expertise

CI/CD & DevOps Tools
- Jenkins pipelines (DSL), SonarQube, Nexus or Artifactory
- Shell scripting, Python (with YAML/JSON handling)
- Git and version control best practices

Containers & Kubernetes
- Docker (multi-stage builds, non-root containers, troubleshooting)
- Kubernetes (services, ingress, service accounts, RBAC, DNS, Helm)

Cloud Infrastructure (AWS)
- AWS services: EC2, EKS, S3, IAM, Secrets Manager, Route 53, WAF, KMS, RDS, VPC, Load Balancers
- Experience with IAM roles, workload identities, and secure AWS access patterns
- Network fundamentals: subnets, security groups, NAT, TLS/SSL, CA certificates, DNS routing

Databases
- PostgreSQL: pg_dump/pg_restore, user management, RDS troubleshooting

Web & Security Concepts
- NGINX, web servers, reverse proxies, path-based/host-based routing
- Session handling, load balancing (stateful vs stateless)
- Security best practices, OWASP Top 10, WAF (configuration/training), network-level security, RBAC, IAM policies

Candidate Expectations
The ideal candidate should be able to:
- Explain best practices around CI/CD pipeline design and secure AWS integrations.
- Demonstrate complex scripting solutions and data processing tasks in Bash and Python.
- Describe container lifecycle, troubleshooting steps, and security hardening practices.
- Detail Kubernetes architecture, Helm chart design, and access control configurations.
- Show a deep understanding of AWS IAM, networking, service integrations, and cost-conscious design.
- Discuss TLS certificate lifecycle, trusted CA usage, and implementation in cloud-native environments.

Preferred Qualifications
- AWS Certified DevOps Engineer or equivalent certifications.
- Experience in FinTech, SaaS, or other regulated industries.
- Knowledge of cost optimization strategies in cloud environments.
- Familiarity with Agile/Scrum methodologies.
- Certifications or experience with ITIL or ISO 20000 frameworks are advantageous.
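The PostgreSQL migration work this posting describes typically runs through pg_dump and pg_restore. A minimal sketch of assembling those commands follows; the hostnames, database name, and file name are hypothetical placeholders, and the commands are built but deliberately not executed so the sketch stays side-effect free:

```python
# Sketch of a pg_dump -> pg_restore migration between two databases.
# Hosts/names below are made up for illustration. In a real migration,
# subprocess.run(cmd, check=True) would execute each command and
# credentials would come from PGPASSFILE or environment variables.

def dump_command(host, dbname, outfile):
    """pg_dump in custom format (-Fc), which pg_restore can consume."""
    return ["pg_dump", "-h", host, "-d", dbname, "-Fc", "-f", outfile]

def restore_command(host, dbname, dumpfile):
    """pg_restore with --clean to drop existing objects before recreating them."""
    return ["pg_restore", "-h", host, "-d", dbname, "--clean", dumpfile]

src = dump_command("old-db.internal", "appdb", "appdb.dump")
dst = restore_command("new-rds.internal", "appdb", "appdb.dump")
# Real run (omitted here):
#   subprocess.run(src, check=True)
#   subprocess.run(dst, check=True)
```

Custom format (`-Fc`) is preferred over plain SQL for migrations because pg_restore can then reorder, parallelise, and selectively restore objects, which helps keep downtime low.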
Job Information
- Date Opened: 12/09/2025
- Job Type: Permanent
- Work Experience: 5+ years
- Industry: IT Services
- Salary: 36 LPA
- City: Bangalore
- Province: Karnataka
- Country: India
- Postal Code: 560002

Job Description

About Us
At Scrumconnect Consulting, we work with clients to deliver high-quality digital solutions using agile methodologies. We pride ourselves on building strong, collaborative teams and leveraging cutting-edge technology to solve complex business challenges.

Position: Technical Lead
We are seeking an experienced Technical Lead to join our growing team. The ideal candidate will bring strong technical expertise, leadership skills, and the ability to guide teams in delivering scalable, high-performing applications.

Key Responsibilities
- Lead and mentor a team of developers, providing technical guidance and problem-solving support.
- Collaborate with stakeholders to gather requirements and translate them into scalable technical solutions.
- Ensure adherence to coding standards, branching strategies, best practices, and project delivery timelines.
- Conduct code reviews and provide constructive feedback to support continuous skill development.
- Stay updated with emerging technologies and recommend best-fit solutions for business needs.
- Participate in project estimation, planning, and risk management activities.
- Work effectively in Agile/Scrum environments and collaborate with cross-functional teams.

Required Skills & Experience
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 10+ years of software development experience, including 5+ years in a senior/lead role.
- Proficiency in C#, ASP.NET Core, Entity Framework, LINQ, Angular, TypeScript, HTML5, and CSS.
- Hands-on experience with Node.js, NestJS, PostgreSQL, and MySQL.
- Strong expertise in microservices architecture.
- In-depth knowledge of RESTful API design, OOP principles, asynchronous programming, and exception handling.
- Familiarity with Git, Hangfire, RabbitMQ, Redis Cache, ANT Design, ng-zorro, ABP Framework, Bootstrap, and Typesense (preferred).
- Proven experience with AWS cloud services (S3, EC2, EKS, Lambda, SQS, etc.).
- Strong understanding of software development methodologies, tools, and processes.
- Excellent communication, leadership, and problem-solving skills.

What We Offer
- Opportunity to lead impactful projects across various industries.
- A collaborative, agile working environment.
- Professional growth through mentoring and exposure to new technologies.
- Competitive salary and benefits package.
Job Information
- Date Opened: 12/09/2025
- Job Type: Permanent
- Work Experience: 5+ years
- Industry: IT Services
- Salary: 40 LPA
- City: Bangalore
- Province: Karnataka
- Country: India
- Postal Code: 560002

Job Description

About Us
At Scrumconnect Consulting, we build high-quality digital solutions using agile methodologies and modern technologies. We are looking for an Engineering Manager to lead our engineering teams, drive architectural decisions, and ensure the delivery of scalable, secure, and high-performing solutions.

Position: Engineering Manager
The Engineering Manager will be responsible for leading cross-functional engineering teams, defining technical strategies, and aligning technology with business objectives. This role requires strong leadership, deep technical expertise, and a passion for mentoring teams while ensuring best practices in software engineering.

Key Responsibilities
- Lead and mentor engineering teams, fostering growth, collaboration, and innovation.
- Partner with stakeholders to translate business needs into technical strategies and solutions.
- Define and communicate best practices, design patterns, and engineering standards.
- Oversee architectural decisions, ensuring scalability, reliability, and security.
- Conduct code reviews, provide technical guidance, and ensure high-quality deliverables.
- Establish and maintain technology roadmaps aligned with company goals.
- Drive adoption of modern engineering practices, tools, and emerging technologies.
- Ensure compliance with security, regulatory, and governance standards.
- Support project planning, estimation, and timely delivery of solutions.

Required Skills & Experience

Cloud & Infrastructure
- Expertise in AWS EKS (Kubernetes) and cloud-native architectures.
- Strong experience with AWS services (S3, EC2, RDS with PostgreSQL, MySQL, SQL).
- Proficiency in serverless architectures (AWS Lambda, Azure Functions).
- Hands-on experience with Docker and Kubernetes.

Software Architecture
- Deep understanding of microservices and event-driven architectures.
- Experience with multi-tenant architectures (SaaS, PaaS, DaaS, hybrid platforms).

Security & Compliance
- Strong knowledge of OAuth2, OpenID Connect, and SAML.
- Experience with IdentityServer4 and AWS Cognito.
- Expertise in API security (rate limiting, throttling).
- Familiarity with GDPR, CCPA, and SOC 2 compliance.
- Experience with data governance and encryption (AWS KMS, CloudTrail).

Data & Storage
- Expertise in MySQL, PostgreSQL, and NoSQL databases (DynamoDB, MongoDB, Cassandra).
- Experience with search engines (Typesense, ElasticSearch) and caching (Redis, Memcached, AWS ElastiCache).

Messaging & Streaming
- Hands-on experience with RabbitMQ, AWS SQS, and Kafka.

DevOps & CI/CD
- Experience building and managing CI/CD pipelines (Jenkins, GitLab, GitHub Actions, AWS CodePipeline).
- Proficiency with Infrastructure as Code (Terraform, AWS CloudFormation).

Monitoring & Observability
- Strong experience with AWS CloudWatch, Grafana, and the ELK Stack.

Qualifications
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Proven experience as an Engineering Manager, Technical Lead, or Technical Architect.
- Strong leadership, communication, and problem-solving skills.

What We Offer
- Opportunity to lead and shape engineering teams working on impactful projects.
- A collaborative and agile work culture.
- Exposure to modern technologies and industry-leading practices.
- Competitive salary and benefits package.
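The API security skills this posting lists include rate limiting and throttling; a common building block for both is the token bucket. A minimal sketch follows, with the capacity and refill rate as assumed example values rather than anything specified in the posting:

```python
# Token-bucket rate limiter sketch: each request consumes one token;
# tokens refill at `rate` per second up to `capacity`. Time is passed
# in explicitly so the logic is deterministic and easy to test.

class TokenBucket:
    def __init__(self, capacity, rate):
        self.capacity = capacity     # maximum burst size, in requests
        self.rate = rate             # tokens replenished per second
        self.tokens = float(capacity)
        self.last = 0.0              # timestamp of the previous check

    def allow(self, now):
        """Return True if a request arriving at time `now` is within the limit."""
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# Assumed policy: bursts of 2 requests, 1 request/second sustained.
bucket = TokenBucket(capacity=2, rate=1.0)
```

A burst of three simultaneous requests would see the first two allowed and the third throttled; after enough idle time the bucket refills and requests pass again. Managed gateways (e.g. AWS API Gateway throttling) expose the same burst/rate pair as configuration rather than code.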