Platform Architecture
How the Kapable platform components fit together
Request Flow
```mermaid
graph TB
    User --> Caddy
    %% R1..R5: Caddy domain routes (see the Domain Routing table)
    Caddy --> R1 & R2 & R3 & R4 & R5
    subgraph Services["Rust Services"]
        API["kapable-api :3003/3004<br/>Platform REST API"]
        Proxy["kapable-proxy :3080/3081<br/>Connect App Router"]
        Worker["kapable-worker<br/>Background Jobs"]
        Forge["kapable-forge :3015<br/>Pipeline Engine"]
        K8way["kapable-k8way :3113<br/>AI Gateway"]
        KAIT["kapable-kait :3112<br/>AI IDE Sessions"]
        Konductor["kapable-konductor<br/>Session Orchestrator"]
        Deploy["deploy-daemon :3016<br/>Binary Deploys"]
    end
    subgraph Apps["Incus Containers"]
        C1["console<br/>RR7 + Bun"]
        C2["admin<br/>RR7 + Bun"]
        C3["admin-htmx<br/>Axum + Askama"]
        C4["developer<br/>RR7 + Bun"]
        C5["user apps<br/>6 frameworks"]
    end
    subgraph Data["Data Layer"]
        PG[("PostgreSQL 16<br/>70+ tables, RLS")]
        WAL["WAL Consumer<br/>pg_notify + SSE"]
    end
    R1 --> API
    R2 --> Proxy
    R3 --> Proxy
    R4 --> Proxy
    R5 --> API
    Proxy --> C1 & C2 & C3 & C4 & C5
    API --> PG
    Worker --> PG
    Forge --> PG
    K8way --> PG
    KAIT --> PG
    Konductor --> API
    Deploy --> API
    PG --> WAL
    WAL --> API
    classDef rust fill:#8B3D2B,stroke:#C25D43,color:#fff
    classDef app fill:#2D3748,stroke:#4A5568,color:#E2E8F0
    classDef data fill:#1A365D,stroke:#2B6CB0,color:#BEE3F8
    classDef proxy fill:#553C9A,stroke:#805AD5,color:#E9D8FD
    class API,Proxy,Worker,Forge,K8way,KAIT,Konductor,Deploy rust
    class C1,C2,C3,C4,C5 app
    class PG,WAL data
    class Caddy proxy
```
Rust Services
Core platform binaries, blue-green deployed
kapable-api — Platform REST API. Data CRUD, auth, management, container lifecycle, SSE, deploy triggers. Blue-green port slots A/B.
kapable-proxy — Reverse proxy for all Connect Apps. Routes by subdomain, enforces auth gates, handles blue-green traffic splits. Resolves container IPs from the database.
kapable-worker — 8 background workers, including email delivery, webhook dispatch, WASM function execution, cron scheduling, container reconciliation, and usage aggregation.
kapable-forge — Pipeline engine for API-submitted pipelines. Executes DAG stages with matrix support, approval gates, and event streaming.
deploy-daemon — Binary deployment via the Bootstrap Pipeline. Manages blue-green service slots (A/B ports), health checks, and Caddy config flips. Zero-downtime deploys.
kapable-k8way — AI API gateway. Routes LLM requests, manages OAuth tokens, tracks usage/billing per consumer. BYOK support.
kapable-kait — AI IDE session daemon. Spawns Claude Code in Incus containers with platform awareness. WebSocket terminal access.
kapable-konductor — Session orchestrator. Universal agent primitive: multi-actor, multi-harness sessions with SSE streaming and tag-based swarm management.
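The subdomain routing that kapable-proxy performs can be sketched as a small resolver. This is an illustrative TypeScript sketch, not the proxy's actual Rust implementation; the `Route` shape and the `resolveUpstream` name are assumptions standing in for whatever the proxy loads from the database.

```typescript
// Sketch: resolve an incoming Host header to a container upstream, the way
// kapable-proxy routes *.kapable.run subdomains to Incus container IPs.
type Route = { subdomain: string; containerIp: string; port: number };

function resolveUpstream(host: string, routes: Route[]): string | null {
  // "myapp.kapable.run" -> subdomain "myapp"
  const match = host.match(/^([^.]+)\.kapable\.run$/);
  if (!match) return null;
  const route = routes.find((r) => r.subdomain === match[1]);
  return route ? `http://${route.containerIp}:${route.port}` : null;
}
```

A real lookup would also consult auth gates and blue-green traffic splits before picking the upstream, as described above.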
How Blue-Green Deploy Works
Zero-downtime deployment for both Rust binaries and Connect Apps
1. Provision — A new container (or standby port slot) is provisioned. The old container keeps serving traffic.
2. Build — Source is cloned from GitHub. For Rust apps, an external builder container compiles the release binary; for TypeScript, Bun installs deps and builds.
3. Health check — The new instance starts with env vars from the database, and its health endpoint is polled. If it fails, the deploy is aborted.
4. Flip — Database records are updated to point to the new container IP/port. For Rust services, the Caddy config is flipped to the new port. Traffic instantly routes to the new version.
5. Cleanup — The old container is stopped (on success) or the new container is destroyed (on failure). Auto-rollback restores the previous version if health fails.
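The flip-or-abort logic above can be sketched as a small slot-switching routine. Everything here is illustrative: the real deploy path lives in Rust binaries, and the `deploy`, `healthCheck`, and `flip` names are assumptions, not the platform's API.

```typescript
// Sketch: blue-green slot flip. The standby slot is health-checked; only if
// it reports healthy does traffic flip (e.g. a Caddy config rewrite).
// On failure the active slot keeps serving, matching the abort/rollback step.
type Slot = "A" | "B";

async function deploy(
  active: Slot,
  healthCheck: (slot: Slot) => Promise<boolean>,
  flip: (slot: Slot) => Promise<void>,
): Promise<Slot> {
  const standby: Slot = active === "A" ? "B" : "A";
  if (!(await healthCheck(standby))) return active; // abort: old version keeps serving
  await flip(standby); // point the router at the standby port
  return standby; // standby becomes the new active slot
}
```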
Real-Time Data Flow
How live updates reach the browser
```mermaid
graph LR
    DB[("PostgreSQL<br/>WAL publication")] --> WAL["WAL Consumer<br/>pg_notify"]
    WAL --> BC["BroadcastManager"]
    BC --> SSE["SSE Endpoint<br/>/v1/admin/sse"]
    SSE --> BFF["BFF Proxy<br/>or direct"]
    BFF --> Hook["usePlatformSSE()<br/>or hx-ext=sse"]
    Hook --> UI["UI Revalidation"]
    classDef db fill:#1A365D,stroke:#2B6CB0,color:#BEE3F8
    classDef rust fill:#8B3D2B,stroke:#C25D43,color:#fff
    classDef ts fill:#2D3748,stroke:#4A5568,color:#E2E8F0
    class DB db
    class WAL,BC,SSE rust
    class BFF,Hook,UI ts
```
PostgreSQL WAL — every INSERT/UPDATE/DELETE on published tables emits a change event via logical replication.
BroadcastManager — in-memory fan-out. Each SSE subscriber gets filtered events based on their org_id and table subscriptions.
SSE with morphing — HTMX admin uses hx-ext="sse" + idiomorph for DOM diffing. No page flash on updates.
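The per-subscriber filtering that BroadcastManager performs can be sketched in a few lines. This is a TypeScript illustration of the fan-out idea; the actual component is in-process Rust, and these type and method names are assumptions.

```typescript
// Sketch: in-memory fan-out where each subscriber only receives change
// events matching its org_id and subscribed tables.
type ChangeEvent = {
  table: string;
  orgId: string;
  op: "INSERT" | "UPDATE" | "DELETE";
  row: unknown;
};

type Subscriber = {
  orgId: string;
  tables: Set<string>;
  deliver: (e: ChangeEvent) => void; // e.g. write an SSE frame
};

class BroadcastManager {
  private subscribers: Subscriber[] = [];

  subscribe(sub: Subscriber): () => void {
    this.subscribers.push(sub);
    // The returned closure unsubscribes, e.g. when the SSE connection closes.
    return () => {
      this.subscribers = this.subscribers.filter((s) => s !== sub);
    };
  }

  broadcast(event: ChangeEvent): void {
    for (const sub of this.subscribers) {
      if (sub.orgId === event.orgId && sub.tables.has(event.table)) {
        sub.deliver(event);
      }
    }
  }
}
```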
Container Architecture
How Connect Apps run in isolation
Incus containers — each app environment runs in its own Incus (LXD) container with a dedicated IP on the 10.34.0.0/16 bridge network.
Golden images — pre-built templates for each framework (React Router, Astro, SvelteKit, Hono, Leptos, Bun). Container launch takes ~3s instead of ~60s.
DB access via socat — containers reach PostgreSQL on the host via a socat bridge proxy on 10.34.0.1:5432.
6 supported frameworks: React Router 7, Astro, SvelteKit, Hono, Leptos (Rust), Bun Server. HTMX (Rust Axum) uses the Leptos pipeline with external builder.
Env vars — stored in app_environment_vars table. Written to container .env during deploy. Never hardcoded.
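Writing env vars from the database into a container `.env` at deploy time can be sketched as a simple renderer. The row shape below is an assumption, not the actual `app_environment_vars` schema.

```typescript
// Sketch: render database-stored env var rows into .env file contents.
// JSON.stringify double-quotes and escapes each value, which dotenv-style
// parsers accept.
type EnvVarRow = { key: string; value: string };

function renderDotEnv(rows: EnvVarRow[]): string {
  return rows.map(({ key, value }) => `${key}=${JSON.stringify(value)}`).join("\n") + "\n";
}
```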
Domain Routing
| Domain | Service | Port | Description |
|---|---|---|---|
| api.kapable.dev | kapable-api | 3003/3004 | Platform REST API (blue-green) |
| console.kapable.dev | proxy → console | 3005 | Org console (RR7 + Bun BFF) |
| admin.kapable.dev | proxy → admin | 3007 | Platform admin (RR7, A/B with HTMX) |
| developer.kapable.dev | proxy → developer | 3009 | Developer portal (RR7 + Bun BFF) |
| *.kapable.run | proxy | 3080/3081 | Connect Apps (subdomain routing) |
| *.tunnel.kapable.run | kapable-api | 3003/3004 | Reverse tunnels to on-prem services |
| deploy.kapable.dev | deploy-daemon | 3016 | Binary deploy API (Bootstrap Pipeline) |
| kait.kapable.dev | kapable-kait | 3112 | AI IDE session daemon |