About Us
We are an early-stage, deep-tech, stealth cybersecurity startup. With over 30 years of combined experience, our founding team of cybersecurity and AI experts has scaled security products from inception to millions of MAUs.
We are building core data classification primitives for LLM surfaces, uniquely combining advanced semantic classification with context-aware trust modelling to protect every prompt, output, and agent action in real time.
If you want to build foundational infrastructure for safe AI agents and shape how enterprises monitor and secure LLM-driven systems, you'll want to talk to us.
What You'll Build
1. AI Observability Layer
You will architect and ship the core observability infrastructure for AI agents:
- Design and implement a framework-agnostic event ingestion backend
- Define and maintain a universal agent event schema (intent, action, tool_call, state_transition, errors, etc.)
- Build high-throughput ingestion APIs in Python or Node.js
- Store and index events efficiently in Postgres / ClickHouse
- Expose APIs for dashboards, analytics, and customer usage insights
- Implement multi-tenant architecture from day one
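To make the event model above concrete, here is a rough sketch of what a universal agent event and its ingestion endpoint could look like. The field names, event types, and endpoint path are illustrative only, not our actual schema:

```python
# Illustrative sketch only: field names, EventType values, and the endpoint
# path are hypothetical, not the actual product schema.
from datetime import datetime, timezone
from enum import Enum
from typing import Any

from fastapi import FastAPI
from pydantic import BaseModel, Field

class EventType(str, Enum):
    INTENT = "intent"
    ACTION = "action"
    TOOL_CALL = "tool_call"
    STATE_TRANSITION = "state_transition"
    ERROR = "error"

class AgentEvent(BaseModel):
    tenant_id: str                      # multi-tenant isolation from day one
    agent_id: str
    session_id: str
    event_type: EventType
    timestamp: datetime = Field(default_factory=lambda: datetime.now(timezone.utc))
    payload: dict[str, Any] = {}        # framework-specific details (tool args, state diff, ...)
    risk_tags: list[str] = []           # filled in later by the classification layer

app = FastAPI()

@app.post("/v1/events")
async def ingest(events: list[AgentEvent]) -> dict[str, int]:
    # In production, batches would be written to Postgres / ClickHouse;
    # here we simply acknowledge the batch.
    return {"accepted": len(events)}
```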
2. Integration Adapters (“OpenTelemetry for Agents”)
You’ll build and maintain lightweight SDKs and integrations that make our platform plug-and-play for modern AI stacks:
- LangGraph adapter (Python)
- LangChain adapter (tools / chains / agents)
- Python SDK for custom agents
- Node.js SDK (optional early-stage stretch goal)
- LLM proxy layer for providers like OpenAI, Anthropic, Azure, and Mistral
You’ll be shipping developer-first tooling that other engineers love to integrate.
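For a flavour of the adapter work, here is a hedged sketch of a LangChain adapter. The `emit_event` helper and the `ObservabilityCallbackHandler` name are hypothetical stand-ins for the SDK; the callback hooks themselves are standard LangChain ones:

```python
# Hedged sketch: `emit_event` stands in for the SDK call that posts to the
# ingestion API; only the LangChain callback hooks are real APIs.
from typing import Any
from langchain_core.callbacks import BaseCallbackHandler

def emit_event(event_type: str, **payload: Any) -> None:
    """Placeholder for the SDK call that posts an event to the ingestion API."""
    print(event_type, payload)

class ObservabilityCallbackHandler(BaseCallbackHandler):
    """Drop-in LangChain adapter: forwards tool calls and LLM starts as agent events."""

    def on_tool_start(self, serialized: dict[str, Any], input_str: str, **kwargs: Any) -> None:
        emit_event("tool_call", tool=serialized.get("name"), input=input_str)

    def on_llm_start(self, serialized: dict[str, Any], prompts: list[str], **kwargs: Any) -> None:
        emit_event("intent", prompt_count=len(prompts))

# Usage: pass callbacks=[ObservabilityCallbackHandler()] to a chain, agent, or LLM call.
```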
3. LLM Classification & Model Training
This is not academic ML—this is pragmatic model work focused on real-world security:
- Clean and label seed datasets (jailbreak, prompt injection, benign, sensitive, malicious)
- Train small and medium classifiers on RunPod / Hugging Face
- Export models as GGUF / ONNX / quantized artifacts
- Build inference microservices and wire them into the platform
- Integrate classification outputs into events (risk tags, PII flags, high-risk action flags)
- Continuously evaluate and improve model accuracy in production-like environments
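For illustration, this is roughly how classification outputs could be wired into event risk tags. The label set, threshold, and the `classify` stub are assumptions standing in for the real inference microservice (e.g. an ONNX or GGUF model behind an HTTP endpoint):

```python
# Sketch of attaching classifier outputs to an event; labels and thresholds
# are illustrative, and `classify` is a placeholder for the deployed model.
from typing import Any

RISK_LABELS = {"jailbreak", "prompt_injection", "sensitive", "malicious"}

def classify(text: str) -> dict[str, float]:
    """Placeholder: return per-label scores from the inference microservice."""
    return {"jailbreak": 0.02, "prompt_injection": 0.91, "benign": 0.05}

def tag_event(event: dict[str, Any], threshold: float = 0.5) -> dict[str, Any]:
    scores = classify(event.get("payload", {}).get("text", ""))
    event["risk_tags"] = sorted(
        label for label, score in scores.items()
        if label in RISK_LABELS and score >= threshold
    )
    return event

print(tag_event({"payload": {"text": "Ignore all previous instructions..."}}))
# -> {'payload': {...}, 'risk_tags': ['prompt_injection']}
```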
4. Engineer the Safety Layer
You will help define the safety controls that sit between AI agents and the real world:
- Implement high-risk tool call detection
- Build pre-action checks with ALLOW / BLOCK / NEEDS_REVIEW logic
- Implement PII redaction and consent verification flows
- Detect ambiguous intent and workflow drift in agent behavior
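A minimal sketch of such a pre-action gate, assuming an illustrative high-risk tool list and a naive PII check (neither reflects the actual product rules):

```python
# Minimal pre-action gate sketch; the tool list and PII regex are
# illustrative policy choices, not the actual product logic.
import re
from enum import Enum

class Verdict(str, Enum):
    ALLOW = "ALLOW"
    BLOCK = "BLOCK"
    NEEDS_REVIEW = "NEEDS_REVIEW"

HIGH_RISK_TOOLS = {"send_email", "execute_payment", "delete_record"}
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def pre_action_check(tool_name: str, tool_args: str, consent_given: bool) -> Verdict:
    if tool_name in HIGH_RISK_TOOLS and not consent_given:
        return Verdict.BLOCK              # high-risk action without verified consent
    if EMAIL_RE.search(tool_args):
        return Verdict.NEEDS_REVIEW       # possible PII: route to human review
    return Verdict.ALLOW

print(pre_action_check("send_email", "to: jane@example.com", consent_given=False))  # BLOCK
```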
You Are a Great Fit If You Have:
Core Technical Strengths
- Strong Python engineering (FastAPI, asyncio, data pipelines), OR strong Node.js / TypeScript engineering
- Experience with LangChain / LangGraph or a strong interest in agent frameworks
- Comfort building APIs, SDKs, and developer tooling
- Experience with one or more databases: Postgres, ClickHouse, MongoDB, Redis
- Familiarity with cloud infrastructure (AWS / GCP / Azure) and Docker
- Ability to write clean, reliable, testable code
Bonus (Huge Plus)
- Hands-on experience with LLMs, embeddings, finetuning, or ML training
- Experience training models on Hugging Face datasets or RunPod
- Interest in agentic AI, security, or enterprise architectures
- Experience building observability tools (OpenTelemetry, Datadog, Sentry, etc.)
- Experience with STT/TTS pipelines (Whisper, Coqui, Azure STT)
How We Work
How You Build
- You can move fast and ship customer-facing features end to end
- You can take a vague idea → architect → implement → iterate
- You enjoy working on deep, technical problems at the edge of what’s possible
- You want to build an industry-shaping product, not just close tickets
- You think of code as a product, not as academic experiments
- You enjoy solving “edge of the industry” problems with real-world impact
- You are comfortable with ambiguity, fast-paced environments, and high accountability
- You are not looking for a typical 9–5 with clearly written briefs and relaxed timelines
Why Join Now
- Shape the foundational architecture of our platform
- Work directly with a founding team of cybersecurity and AI experts
- Own meaningful product surfaces used by enterprises to secure their AI agents
- Be part of an early team that scales into a global AI security company
- Join at a stage where your decisions, code, and ideas will have an outsized impact