Toku Pte Ltd

3 Job openings at Toku Pte Ltd
Database Engineer | Bengaluru | 5 years | INR 5.25 - 8.0 Lacs P.A. | On-site | Part Time

At Toku, we create bespoke cloud communications and customer engagement solutions to reimagine customer experiences for enterprises. We provide an end-to-end approach to help businesses overcome the complexity of digital transformation in APAC markets and enhance their CX with mission-critical cloud communication solutions. Toku combines local strategic consulting expertise, bespoke technology, regional in-country infrastructure, connectivity and global reach to serve the diverse needs of enterprises operating regionally.

As we continue creating momentum for our products in the APAC region and helping customers with their communications needs, the Database Engineer will be responsible for managing Toku databases overall, and in particular for designing, planning, implementing, protecting, operating and maintaining them.

Expected Collaborations
- Work closely with the Database Manager to align on database strategies and goals for Toku.
- Collaborate with the Product Engineering team on database initiatives, ensuring consistent, optimal delivery throughout ongoing projects to build next-generation software solutions.
- Partner with the Infra team on provisioning, capacity planning, monitoring and maintenance.
- Work with the Security team to implement security policies and address privacy concerns.
- Provide input to business stakeholders on their data requirements.
- Share knowledge and best practices related to database management tools and techniques with fellow team members.

Tasks and Responsibilities

Delivery
- Deploy and Manage Databases: Set up and configure databases, ensuring they are operational and accessible.
- Backup and Recovery: Implement reliable backup and recovery strategies to protect data and enable quick recovery in case of failures (a minimal scripting sketch follows this posting).
- Database Architecture: Design and implement a highly available, scalable and disaster-recovery-ready database architecture that can handle increasing workloads.
- Performance Optimization: Monitor database performance, identify bottlenecks and implement optimizations to improve query response times.
- Database Activity Monitoring: Use monitoring tools to track database activity, identify anomalies and proactively address potential issues.
- Process Improvement: Identify opportunities to automate manual tasks and improve operational efficiency.
- Stakeholder Support: Collaborate with other teams to understand their needs and provide technical assistance.
- Database Security and Governance: Implement strong security measures to protect sensitive data, such as access controls, encryption and vulnerability assessments.
- Compliance: Adhere to relevant industry regulations and compliance standards (e.g. GDPR, HIPAA).
- Disaster Recovery: Develop and test comprehensive disaster recovery plans to minimize downtime in case of major incidents.

Strategy Alignment
- Long-Term Vision: Ensure the database infrastructure can handle future growth and meet the evolving needs of the business.
- Database Security and Governance: Maintain database accuracy, consistency and reliability to support Toku solutions. Adhere to database standards, security protocols and compliance regulations.
- Continuous Improvement: Stay informed about emerging technologies and their potential benefits. Follow industry best practices to optimize databases.

Talent
- Actively participate in knowledge sharing and contribute to the growth and development of the Database Engineering team.
- Provide guidance and mentorship to fellow database engineers, offering support and training to enhance their skills and performance.

Culture
- Maintain excellent interpersonal skills, with strong written and oral communication abilities in English.
- Work independently and in a fast-paced, dynamic startup environment.
- Foster a continuous learning mindset, staying up to date with the latest trends and technologies.

Technical Excellence
- Database Management Systems: Proficiency in SQL, NoSQL and other database systems (preferably MySQL, MariaDB, MongoDB and Oracle).
- Cloud Platforms: Familiarity with cloud platforms (AWS) for deploying and managing database infrastructure.
- Performance Optimization: Understanding of database and query optimization techniques, indexing and performance tuning.
- Backup and Recovery: Knowledge of backup and recovery strategies to ensure data integrity and availability.
- Security: Expertise in implementing security measures to protect sensitive data.
- Scripting: Proficiency in scripting languages like Python, Perl or Shell for automation and administration tasks.

We would love to hear from you if you have
- At least a Bachelor's degree in Computer Science, Information Technology or a relevant field
- At least 5 years of experience in database administration, information technology, database architecture or a related field
- Strong proficiency in both RDBMS (MariaDB/MySQL and Postgres) and NoSQL (MongoDB) database technologies
- Experience with cloud database services (Amazon RDS, Aurora)
- Familiarity with Linux and Windows Server environments
- Experience in designing and implementing HA and DR database setups
- Knowledge of scripting languages (Shell/Python) for automation
- Knowledge of DAM (Database Activity Monitoring) tools, a plus
- Effective communication and collaboration skills to work with diverse teams
- Ability to identify and resolve database issues effectively
- Ability to work under pressure to meet tight deadlines
- Flexibility to adapt to changing requirements and technologies
- Understanding of data protection regulations like the Data Protection Act
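The backup-and-recovery and scripting items above lend themselves to a small illustration. Below is a minimal, hypothetical Python sketch of the kind of automation the posting alludes to: dumping a MySQL/MariaDB database with mysqldump, compressing it, and uploading it to Amazon S3. The host, user, bucket and database names are placeholders, and the sketch assumes mysqldump, boto3, AWS credentials and database credentials (e.g. via ~/.my.cnf or MYSQL_PWD) are already configured.

```python
"""Illustrative nightly MySQL/MariaDB backup-to-S3 sketch (placeholder names throughout)."""
import datetime
import gzip
import os
import shutil
import subprocess

import boto3  # assumes AWS credentials are available in the environment

DB_HOST = os.environ.get("DB_HOST", "db.internal.example")        # hypothetical host
DB_USER = os.environ.get("DB_USER", "backup_user")                # hypothetical user
BUCKET = os.environ.get("BACKUP_BUCKET", "example-db-backups")    # hypothetical bucket


def dump_database(database: str, out_dir: str = "/tmp") -> str:
    """Run mysqldump for one database, gzip the result, and return the archive path."""
    stamp = datetime.datetime.now(datetime.timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    sql_path = os.path.join(out_dir, f"{database}-{stamp}.sql")
    gz_path = sql_path + ".gz"
    with open(sql_path, "wb") as sql_file:
        # --single-transaction takes a consistent InnoDB snapshot without locking tables.
        subprocess.run(
            ["mysqldump", "--single-transaction", "--routines",
             "--host", DB_HOST, "--user", DB_USER, database],
            stdout=sql_file,
            check=True,  # raise immediately if mysqldump fails
        )
    with open(sql_path, "rb") as src, gzip.open(gz_path, "wb") as dst:
        shutil.copyfileobj(src, dst)
    os.remove(sql_path)
    return gz_path


def upload_backup(archive_path: str) -> None:
    """Push the compressed dump to S3 under a simple prefix."""
    key = f"mysql/{os.path.basename(archive_path)}"
    boto3.client("s3").upload_file(archive_path, BUCKET, key)


if __name__ == "__main__":
    upload_backup(dump_database("toku_app"))  # "toku_app" is a placeholder database name
```

In practice a script like this would be scheduled (cron, systemd timers, or a managed scheduler) and paired with periodic restore tests, since a backup strategy is only as good as its verified recovery path.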

Applied AI Engineer - LLM and NLP | Bengaluru | 0 years | INR 3.5 - 8.0 Lacs P.A. | Remote | Part Time

At Toku, we create bespoke cloud communications and customer engagement solutions to reimagine customer experiences for enterprises. We provide an end-to-end approach to help businesses overcome the complexity of digital transformation and deliver mission-critical CX through cloud communication solutions. Toku combines local strategic consulting expertise, bespoke technology, regional in-country infrastructure, connectivity, and global reach to serve the diverse needs of enterprises operating at scale. Headquartered in Singapore, Toku supports customers across APAC and beyond, with a growing footprint across global markets.

As an Applied AI Engineer at Toku, you will focus on building, improving, and deploying real-world AI capabilities across speech-to-text, chatbots, and large language model-driven features used in production. This role combines hands-on model development with applied research: you will evaluate existing approaches, explore new techniques, and translate research insights into practical improvements in live systems. You will work closely with engineering teams to integrate models into production services while maintaining a strong delivery mindset. You will thrive in this role if you enjoy balancing deep technical execution with curiosity-driven, applied research that directly shapes product outcomes.

What you will be doing

Applied AI & Model Development
- Train, fine-tune, evaluate, and improve NLP, speech-to-text, and LLM-based models used in production environments
- Work hands-on with chatbots, summarisation, and language understanding features, including retrieval-augmented generation (RAG) and vector-based retrieval systems (a minimal sketch follows this posting)
- Design and run model evaluations, benchmarking existing approaches and validating improvements before deployment

Applied Research & Experimentation
- Read, assess, and experiment with relevant AI/ML research and emerging techniques, translating promising ideas into practical, production-ready solutions
- Contribute to prompt design, model optimisation, and iterative experimentation to improve accuracy, latency, and reliability of deployed models

Production Integration & Delivery
- Integrate models into existing backend services using Python-based APIs, collaborating closely with backend engineers
- Ensure models are production-ready, maintainable, and resilient when deployed in live customer-facing systems
- Support investigation and resolution of AI-related production issues in collaboration with engineering and platform teams

Collaboration & Ownership
- Work closely with engineering teams to align AI capabilities with product requirements and platform constraints
- Communicate progress, trade-offs, and technical decisions clearly in planning and delivery discussions

We'd love to hear from you if you have

Core AI & LLM Expertise
- Strong hands-on experience with LLMs, NLP, or speech technologies, including training, fine-tuning, and evaluating models in real-world or production contexts
- Practical experience with Python-based AI development (e.g. PyTorch and related ecosystems)

Applied Research & Fundamentals
- Hands-on experience reading, evaluating, and applying AI/ML research (e.g. papers, benchmarks, emerging techniques) and translating those insights into production-ready model improvements
- A strong foundation in AI/ML fundamentals (e.g. mathematics, machine learning concepts, model behaviour and evaluation), typically supported by an academic background in AI, machine learning, computer science, or a closely related field

Production & Integration Experience
- Experience deploying or supporting AI models in production systems, including exposure to monitoring, iteration, and real-world failure modes
- Ability to integrate models into existing backend services via Python APIs and work effectively within a microservices-based environment

Tools & Platform Awareness
- Familiarity with retrieval-augmented generation (RAG), embeddings, and vector-based retrieval systems
- Working knowledge of AWS-based environments and AI tooling (e.g. EC2, SageMaker, MLflow, Docker)

Ways of Working
- A proactive, problem-solving mindset with the ability to identify opportunities for improvement rather than waiting for direction
- Strong collaboration and communication skills when working with engineers across different disciplines

Location: This is a remote role to be based in either the Netherlands (Rotterdam strongly preferred) or Singapore. Hong Kong based candidates can also be considered.

What would you get?
- Training and Development
- Discretionary Yearly Bonus & Salary Review
- Healthcare Coverage based on location
- 20 days Paid Annual Leave (15 days for Malaysia based roles), plus other leave allowances

Toku has been recognised as a LinkedIn Top Startup and by the Financial Times as one of APAC's Top 500 High Growth Companies. If you're looking to be part of a company on a strong growth trajectory while working on meaningful, real-world challenges, we'd love to hear from you.
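Since this posting repeatedly references retrieval-augmented generation (RAG), embeddings, and vector-based retrieval, here is a minimal illustrative sketch of that pattern in Python. It assumes the sentence-transformers library, uses an in-memory cosine-similarity search over a few placeholder documents, and stops at prompt construction; the model name, documents, and downstream LLM call are illustrative assumptions, not a description of Toku's actual stack.

```python
"""Minimal retrieval-augmented generation (RAG) sketch: embed, retrieve, build a grounded prompt."""
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed available

# Example embedding model; any sentence-embedding model could be substituted.
encoder = SentenceTransformer("all-MiniLM-L6-v2")

documents = [
    "Toku provides cloud communications and customer engagement solutions.",
    "Speech-to-text transcripts can be summarised for contact-centre agents.",
    "Retrieval-augmented generation grounds LLM answers in source documents.",
]
doc_vectors = encoder.encode(documents, normalize_embeddings=True)


def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents whose embeddings are most similar to the query."""
    query_vec = encoder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vectors @ query_vec  # cosine similarity, since vectors are normalised
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]


def build_prompt(query: str) -> str:
    """Assemble an LLM prompt that grounds the answer in the retrieved context."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"


if __name__ == "__main__":
    # The resulting prompt would be sent to whichever LLM endpoint the product uses.
    print(build_prompt("How does RAG improve chatbot answers?"))
```

A production system would typically swap the in-memory array for a vector database and add evaluation of retrieval quality, but the embed-retrieve-prompt loop stays the same.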

AI Platform Engineer - ML Ops | Bengaluru | 0 years | INR 3.52 - 6.25 Lacs P.A. | Remote | Part Time

At Toku, we create bespoke cloud communications and customer engagement solutions to reimagine customer experiences for enterprises. We provide an end-to-end approach to help businesses overcome the complexity of digital transformation and deliver mission-critical CX through cloud communication solutions. Toku combines local strategic consulting expertise, bespoke technology, regional in-country infrastructure, connectivity, and global reach to serve the diverse needs of enterprises operating at scale. Headquartered in Singapore, Toku supports customers across APAC and beyond, with a growing footprint across global markets.

In this role, you will focus on enabling AI systems to run reliably, efficiently, and at scale in production. You will manage the platforms, pipelines, and infrastructure that allow applied AI engineers to deploy, monitor, and scale models across cloud environments. Success in this role depends on strong MLOps expertise, comfort with cloud-native AI workloads, and close collaboration with infrastructure and engineering teams.

What you will be doing
- AI platform & MLOps ownership: Design, improve, and operate MLOps pipelines for training, deploying, and managing ML models in production.
- Model deployment pipelines: Build and maintain CI/CD-style workflows for model packaging, versioning, and deployment across environments.
- Cloud infrastructure for AI: Operate and optimise AWS-based infrastructure for AI workloads, including compute, storage, and networking components.
- GPU scaling & performance: Manage GPU-enabled workloads, addressing scalability, reliability, and cost-efficiency for high-load AI applications.
- Monitoring & reliability: Implement monitoring and alerting for deployed models, focusing on system health, performance, and operational stability.
- Tooling & standardisation: Own and evolve shared tooling such as MLflow, Docker-based workflows, and deployment frameworks to improve developer productivity (a minimal MLflow sketch follows this posting).
- Collaboration with infra teams: Work closely with infrastructure, SRE, and engineering teams to align AI platform practices with broader system standards.
- Production support: Support live AI services by diagnosing deployment, scaling, and infrastructure-related issues impacting AI features.
- Lifecycle management: Ensure reproducibility, traceability, and governance across the full ML lifecycle, from experimentation to production.

We'd love to hear from you if you have
- MLOps expertise: Hands-on experience building and operating MLOps pipelines for production ML systems.
- Cloud-native AI experience: Strong experience with AWS services used for AI workloads, including EC2, ECS, and SageMaker.
- Containerisation & orchestration: Practical experience with Docker and container-based deployment of ML workloads.
- ML tooling: Experience with MLflow or similar tools for experiment tracking, model versioning, and lifecycle management.
- Scalability & performance: Experience managing GPU-based workloads and addressing performance and cost challenges at scale.
- Infrastructure mindset: Strong understanding of cloud infrastructure concepts as they apply to ML systems.
- Python for ML systems: Ability to work with Python-based ML codebases to support deployment and lifecycle needs.
- AI awareness: Working familiarity with LLMs, NLP models, and applied ML concepts sufficient to support deployment and monitoring (without owning core model development).
- Production experience: Proven experience supporting live, production ML systems with real customer impact.
- Collaboration skills: Ability to work cross-functionally with applied AI engineers, backend engineers, and infra teams.

Location: This is a remote role to be based in either the Netherlands (Rotterdam strongly preferred) or Singapore. Hong Kong based candidates can also be considered.

What would you get?
- Training and Development
- Discretionary Yearly Bonus & Salary Review
- Healthcare Coverage based on location
- 20 days Paid Annual Leave (15 days for Malaysia based roles), plus other leave allowances

Toku has been recognised as a LinkedIn Top Startup and by the Financial Times as one of APAC's Top 500 High Growth Companies. If you're looking to be part of a company on a strong growth trajectory while working on meaningful, real-world challenges, we'd love to hear from you.
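The experiment tracking, model versioning, and lifecycle items above can be illustrated with a small MLflow sketch. This is a minimal example, assuming a reachable MLflow tracking server and a scikit-learn toy model; the tracking URI, experiment name, and registered model name are placeholders rather than details from the posting.

```python
"""Minimal MLflow experiment-tracking and model-registration sketch (placeholder names)."""
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical tracking server; a real deployment would point at shared infrastructure.
mlflow.set_tracking_uri("http://mlflow.internal.example:5000")
mlflow.set_experiment("demo-classifier")

# Toy dataset and model standing in for a real training job.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

with mlflow.start_run():
    params = {"C": 1.0, "max_iter": 200}
    model = LogisticRegression(**params).fit(X_train, y_train)

    # Record what was trained and how well it performed.
    mlflow.log_params(params)
    mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))

    # Version the artefact in the model registry so deployment pipelines can pull
    # a specific, reproducible model version ("demo-classifier" is a placeholder).
    mlflow.sklearn.log_model(
        model,
        artifact_path="model",
        registered_model_name="demo-classifier",
    )
```

In a platform setup like the one described, a CI/CD pipeline would typically consume a registered model version from this registry, package it in a Docker image, and promote it through environments with monitoring attached.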