SECURITY
Trust is engineered, not promised
Zero secrets in the binary. OS-encrypted credential storage. Private mesh networking. Defence-in-depth prompt injection controls. Built for IT review.
GOVERNANCE ARCHITECTURE
Policies broadcast out. Agent traffic stays local.
Most enterprise AI governance vendors route every agent message through their cloud so they can monitor and enforce policy. Suquo Systems inverts that topology. Your policies, processes, and tone broadcast outward to every agent in your fleet through an on-prem MCP server. The agents themselves never send their working memory back. Compliance and privacy are achieved by topology, not by trust.
ONE-WAY: OUT
Policies and process
Org tier (fleet-wide) and Team tier (per-group) policies leave the Suquo Context MCP server as standard MCP resources. Every agent on every machine reads the same governance.
STAYS LOCAL
Agent working memory
User-tier context — what an employee actually said to their agent — never travels back to the operations layer. There is no proxy, no central log of agent traffic, no vendor in the data path.
ENFORCED BY TOPOLOGY
Sealing is structural
Read-only mounts and registry-scoped distribution mean an employee cannot accidentally leak User-tier context upward, and an Org cannot accidentally leak Team-tier policy across team lines. The boundary is in the architecture, not the configuration.
SECRETS & ENCRYPTION
Where do secrets live? Nowhere you can reach.
SECRETS PATH
A layered route from config to encrypted local use
Secrets are fetched at runtime, stored locally in encrypted form, and kept away from the binary itself.
ZERO SECRETS IN THE BINARY
The installer ships with no API keys, tokens, or credentials. Secrets are fetched at runtime from a config server and stored in OS-level encrypted storage.
3-TIER CONFIG LOADING
Primary: remote config server. Fallback: OS-encrypted local cache. Emergency: non-sensitive defaults only. Offline operation uses the encrypted cache — secrets never appear in plaintext.
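The fallback chain above can be sketched in a few lines. This is an illustrative sketch only; the function and parameter names are ours, not the product's internals.

```typescript
// Hypothetical sketch of the 3-tier loading order: remote server first,
// then the OS-encrypted local cache, then non-sensitive defaults.
type Config = Record<string, string>;

async function loadConfig(
  fetchRemote: () => Promise<Config>,      // tier 1: config server over the mesh
  readEncryptedCache: () => Config | null, // tier 2: OS-encrypted local cache
  defaults: Config,                        // tier 3: non-sensitive defaults only
): Promise<{ config: Config; tier: 1 | 2 | 3 }> {
  try {
    return { config: await fetchRemote(), tier: 1 };
  } catch {
    const cached = readEncryptedCache();
    if (cached) return { config: cached, tier: 2 }; // offline operation
    return { config: defaults, tier: 3 };           // never contains secrets
  }
}
```

The point of the ordering is that a network outage degrades to the encrypted cache, and a missing cache degrades to defaults that contain nothing sensitive.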
OS-LEVEL ENCRYPTION AT REST
Cached secrets use Windows DPAPI, bound to the OS user account. The encrypted store cannot be decrypted on another machine or by another user — even with direct disk access.
NETWORK ISOLATION VIA TAILSCALE
Config fetching, fleet sync, messaging relay, and SSH tunnelling run over a Tailscale private mesh. No public endpoints. Messaging server binds to 127.0.0.1. Zero internet-facing attack surface.
AUTHENTICATION & ACCESS CONTROL
Who can access the system? Only verified users.
ACCESS GATES
Verification before execution
Every request is checked, every sender is verified, and untrusted access is blocked before work begins.
LICENSE KEY AUTHENTICATION
Every config request includes the license key as a Bearer token, validated server-side. Invalid or revoked keys rejected immediately. The config server restricts access to registered license holders only.
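Server-side, the check reduces to extracting the Bearer token and looking it up against registered keys. A hedged sketch; the helper names and the in-memory set are illustrative, not the actual server code.

```typescript
// Illustrative Bearer-token validation for config requests.
function extractLicenseKey(authHeader: string | undefined): string | null {
  if (!authHeader?.startsWith("Bearer ")) return null;
  const key = authHeader.slice("Bearer ".length).trim();
  return key.length > 0 ? key : null;
}

function authorize(authHeader: string | undefined, registered: Set<string>): boolean {
  const key = extractLicenseKey(authHeader);
  // A revoked key is simply absent from the registered set.
  return key !== null && registered.has(key);
}
```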
HMAC-SHA256 MESSAGE VERIFICATION
Inbound messaging webhooks (WhatsApp, Slack via N8N) are HMAC-SHA256 signed and verified before processing. Telegram and Discord connect via native authenticated SDKs. No unsigned messages enter the pipeline.
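In Node terms, verification looks like the sketch below. The hex encoding is an assumption about the wire format; the constant-time comparison is the part that matters.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Illustrative webhook signing and verification with HMAC-SHA256.
function sign(body: string, secret: string): string {
  return createHmac("sha256", secret).update(body).digest("hex");
}

function verifySignature(body: string, signature: string, secret: string): boolean {
  const expected = Buffer.from(sign(body, secret), "hex");
  const received = Buffer.from(signature, "hex");
  // timingSafeEqual prevents timing side channels; lengths must match first.
  return expected.length === received.length && timingSafeEqual(expected, received);
}
```

A payload whose body was tampered with after signing, or signed with the wrong secret, fails verification before it reaches the pipeline.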
SENDER VERIFICATION WITH PAIRING CODES
Unknown senders cannot issue commands. Each new sender completes a 6-digit one-time pairing challenge to link their account. Unverified messages are dropped.
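The challenge flow can be sketched as below. The five-minute TTL and in-memory store are our illustrative assumptions, not the product's actual values.

```typescript
import { randomInt } from "node:crypto";

// Hypothetical one-time 6-digit pairing challenge with expiry.
const PAIRING_TTL_MS = 5 * 60 * 1000; // assumed TTL for illustration
const pending = new Map<string, { code: string; expires: number }>();

function issuePairingCode(senderId: string, now = Date.now()): string {
  const code = randomInt(0, 1_000_000).toString().padStart(6, "0");
  pending.set(senderId, { code, expires: now + PAIRING_TTL_MS });
  return code;
}

function redeemPairingCode(senderId: string, code: string, now = Date.now()): boolean {
  const entry = pending.get(senderId);
  pending.delete(senderId); // one-time: any attempt consumes the challenge
  return !!entry && entry.expires > now && entry.code === code;
}
```

Because the challenge is consumed on first use, a replayed or guessed code after a successful pairing does nothing.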
RATE LIMITING AT EVERY LAYER
Config server: 10 req/user/hr. Messaging: 10 msg/sender/min. Payload: 4,000 char max. Prevents credential stuffing, abuse, and denial-of-service.
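A fixed-window limiter covering those quotas fits in a few lines. This is a sketch of the general technique, not the deployed implementation.

```typescript
// Illustrative fixed-window rate limiter keyed per user or sender.
class RateLimiter {
  private windows = new Map<string, { start: number; count: number }>();
  constructor(private limit: number, private windowMs: number) {}

  allow(key: string, now = Date.now()): boolean {
    const w = this.windows.get(key);
    if (!w || now - w.start >= this.windowMs) {
      this.windows.set(key, { start: now, count: 1 }); // new window
      return true;
    }
    if (w.count >= this.limit) return false; // over quota: reject
    w.count++;
    return true;
  }
}

const configLimiter = new RateLimiter(10, 60 * 60 * 1000); // 10 req/user/hr
const messageLimiter = new RateLimiter(10, 60 * 1000);     // 10 msg/sender/min
```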
PROMPT INJECTION DEFENCE
What if someone tries to manipulate the AI?
Defence in depth — no single layer is relied upon. Every boundary validates, sanitises, and constrains.
Input Sanitisation
Control characters stripped. Strict length limits enforced. Malformed or oversized payloads rejected at the boundary before reaching any AI model.
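Concretely, a boundary sanitiser of this shape strips control characters and rejects (rather than truncates) oversized payloads. A minimal sketch; the exact character classes the product filters are an assumption.

```typescript
// Illustrative boundary sanitiser: strip C0/C1 control characters
// (except tab, LF, and CR) and enforce the 4,000-character cap.
const MAX_PAYLOAD_CHARS = 4_000;

function sanitizeInput(raw: string): string {
  const cleaned = raw.replace(
    /[\u0000-\u0008\u000B\u000C\u000E-\u001F\u007F-\u009F]/g,
    "",
  );
  if (cleaned.length > MAX_PAYLOAD_CHARS) {
    throw new Error("payload exceeds maximum length"); // rejected, not truncated
  }
  return cleaned;
}
```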
Sender Authentication Gate
Only verified, paired senders can submit prompts via messaging. Anonymous users cannot interact with the AI. External injection surface closed by default.
HMAC-Verified Command Chain
Every message in the pipeline is HMAC-signed. Commands cannot be injected mid-chain without the signing secret, which never appears in the binary or in plaintext.
Scoped Tool Execution
AI tool calls route through a typed IPC protocol with a static allowlist. Only pre-approved operations are available — no arbitrary system commands.
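The allowlist pattern looks like this in sketch form. The tool names below are hypothetical examples; the real allowlist lives behind the typed IPC layer.

```typescript
// Illustrative static allowlist dispatcher: names not on the list never execute.
type ToolHandler = (args: Record<string, unknown>) => unknown;

const TOOL_ALLOWLIST: ReadonlyMap<string, ToolHandler> = new Map<string, ToolHandler>([
  ["read_policy", () => "policy text"], // hypothetical pre-approved operations
  ["list_tasks", () => []],
]);

function invokeTool(name: string, args: Record<string, unknown>): unknown {
  const handler = TOOL_ALLOWLIST.get(name);
  if (!handler) {
    throw new Error(`tool not allowlisted: ${name}`); // no arbitrary commands
  }
  return handler(args);
}
```

Because the map is static and closed, a prompt-injected "tool call" to anything outside it fails before any code runs.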
Session Isolation
Rolling 50-message context window per session. Auto-expiry: 60 min voice, 30 min text. Expired context is purged — no stale injection vectors persist.
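The rolling window and expiry can be sketched together. Data shapes here are illustrative; only the 50-message cap and the 60/30-minute TTLs come from the description above.

```typescript
// Illustrative session store: rolling 50-message window, purged on expiry.
const MAX_CONTEXT = 50;
const TTL_MS = { voice: 60 * 60_000, text: 30 * 60_000 }; // 60 min / 30 min

interface Session { messages: string[]; lastActive: number; mode: "voice" | "text" }
const sessions = new Map<string, Session>();

function addMessage(id: string, mode: "voice" | "text", msg: string, now = Date.now()): void {
  const s = sessions.get(id) ?? { messages: [], lastActive: now, mode };
  if (now - s.lastActive > TTL_MS[s.mode]) s.messages = []; // expired context purged
  s.messages.push(msg);
  if (s.messages.length > MAX_CONTEXT) s.messages.shift(); // rolling window
  s.lastActive = now;
  sessions.set(id, s);
}
```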
Process Sandboxing
Renderer runs with contextIsolation enabled and nodeIntegration disabled. All sensitive operations go through a typed context bridge with explicit permission boundaries.
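In Electron terms, the settings above correspond to a window configuration like this sketch (exact options and the preload path in the product may differ):

```typescript
import { BrowserWindow } from "electron";
import path from "node:path";

// Illustrative hardened window configuration.
const win = new BrowserWindow({
  webPreferences: {
    contextIsolation: true, // renderer cannot reach Electron internals
    nodeIntegration: false, // no Node.js APIs in the renderer
    sandbox: true,
    preload: path.join(__dirname, "preload.js"), // typed context bridge lives here
  },
});
```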
DATA SOVEREIGNTY & COMPLIANCE
Your data. Your machines. Your rules.
DESKTOP-NATIVE
Data stays on your machines. AI calls use your API keys with providers you choose. No intermediary cloud.
DATA SOVEREIGNTY
No vendor lock-in. No data mining. No model training on your content. GDPR-compliant with deletion on request.
FULL AUDIT TRAIL
Every action, command, and data access logged with timestamps and user IDs. Exportable to your SIEM or compliance system.
OPEN ARCHITECTURE
Every config file, log, and data flow is inspectable. Full architecture walkthrough included in client onboarding.
COMMON QUESTIONS FROM IT TEAMS
Questions your IT team will ask
“Does any of our data leave the machine?”
Only when you choose it. AI model calls go to your configured provider using your own API keys. Inter-machine sync runs exclusively over your private Tailscale mesh. No cloud backend touches your data.
“What happens if the config server is compromised?”
The config server serves secrets only to license-key authenticated, registered users. Rate-limited to 10 req/hr per user. Revoking a license key on the server invalidates all cached copies on next fetch.
“Can the AI execute arbitrary commands on our machines?”
No. Tool execution routes through a scoped IPC protocol. The AI can only invoke pre-approved operations. The renderer has no access to Node.js, the filesystem, or system processes.
“How do you handle employee offboarding?”
Revoke their license key on the config server. DPAPI-encrypted cached secrets become useless once the server stops accepting the key. Tailscale device removal cuts network access immediately.
“Is there an audit trail for compliance?”
Every action, tool invocation, and data access is logged with timestamps and user IDs. Logs are stored locally and can be forwarded to your SIEM or compliance system.
“What about GDPR?”
Data is stored locally — you are the data controller. Messaging pairings in Convex use JWT-validated queries with per-user isolation. Deletion: clear the local memory directory and revoke the Convex record.
Want the full security walkthrough?
We walk through the complete architecture during onboarding. Book a demo and bring your IT team.
BOOK A DEMO