Technical implementation
These pages describe how airlock is put together internally — the virtualization layer, the RPC protocol between host and guest, the guest init sequence, how mounts and networking are wired up, and the on-disk layout of sandbox and cache state.
Most users will never need any of this; it’s documented for contributors, people debugging unusual failures, and anyone evaluating the security model in detail.
Overview
airlock runs untrusted code inside a lightweight Linux VM. A single
airlock binary boots a VM, pulls an OCI container image, assembles
an overlayfs rootfs, and gives the user an interactive shell (or runs
a one-off command) inside the container. The VM provides
hardware-level isolation; the container provides a familiar
image-based environment.
One vsock, one RPC connection
The central design decision: the host process and the in-VM supervisor talk over a single vsock connection carrying a single Cap’n Proto RPC session. Every cross-boundary interaction — booting the container, attaching new processes, forwarding stdio, polling stats, bridging outbound TCP, streaming tracing logs — rides that one session.
Why Cap’n Proto specifically, rather than gRPC, JSON-RPC, or a bespoke framing: its zero-copy wire format keeps the stdio hot path cheap; it treats remote interfaces as first-class values (the supervisor calls outbound TCP through a capability the host handed it, not through a URL it could fabricate); and concurrent calls and streams are interleaved on one socket without any multiplexing glue of our own. See RPC Protocol / Why Cap’n Proto for the detailed rationale.
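The capability-passing style can be illustrated with a plain-Python analogy (StdoutSink and guest_start are hypothetical names, not airlock's API): guest-side code receives an object as an argument and can only call back through it; it never holds an address it could dial on its own.

```python
class StdoutSink:
    """Host-side object standing in for a Cap'n Proto capability."""

    def __init__(self):
        self.buf = bytearray()

    def write(self, data: bytes) -> None:
        self.buf += data


def guest_start(stdout_sink: StdoutSink) -> None:
    """Guest-side code: it holds no host identity or address, only the
    capability it was handed, so that capability is its only way out."""
    stdout_sink.write(b"container booted\n")
```

In the real system the object crosses the vsock as a Cap'n Proto capability rather than a Python reference, but the authority model is the same: no capability, no egress.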
This shapes almost everything else:
- No second transport. There is no virtio console for stdio, no
separate vsock port for networking, no control channel for
signals. Cap’n Proto RPC multiplexes many concurrent calls and
streams over the one connection, so a single
read()/write() loop in the supervisor is all the glue the VM needs.
- Capabilities as plumbing. Streams like stdin, stdout polling, and outbound TCP are modelled as Cap’n Proto capabilities passed in as arguments. The supervisor doesn’t need the host’s identity or address; it just calls back through the capability it was handed. That means the VM has no egress path of any kind: the only way out is an explicit capability the host chose to grant.
- No hidden daemon state. There is no airlock daemon on the host, no shared socket directory, no broker. When the airlock start process dies, the vsock closes, the supervisor exits, and the VM is torn down. airlock exec is a thin client that reaches the same session through a Unix-socket bridge in the running airlock start process.
- Same wire on every platform. macOS uses host TCP (the Apple Virtualization framework’s vsock surfaces that way) and Linux uses real AF_VSOCK, but the RPC schema and the code paths above it are identical.
The RPC protocol page has the full interface list; the rest of this chapter assumes this one-session model.
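A minimal sketch of the platform split, assuming illustrative CID and port values (the real ones live in airlock's configuration): only the transport selection differs per platform; everything layered above the returned socket parameters is identical.

```python
import socket
import sys


def rpc_transport(platform: str = sys.platform):
    """Return (address family, address) for the byte stream that will
    carry the single Cap'n Proto session.

    Linux dials a real AF_VSOCK socket; on macOS the Apple Virtualization
    framework exposes the guest's vsock as host TCP, so AF_INET is used.
    The CID/port values below are illustrative only.
    """
    if platform.startswith("linux") and hasattr(socket, "AF_VSOCK"):
        return socket.AF_VSOCK, (3, 52)          # (guest CID, vsock port)
    return socket.AF_INET, ("127.0.0.1", 52052)  # host-TCP bridge
```

Python's socket module exposes AF_VSOCK only on Linux kernels that support it, hence the hasattr guard.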
Components and channels
The static picture: what runs where, and how the pieces talk.
Channels shown
- vsock · RPC — the single Cap’n Proto RPC connection between the host airlock start process and the in-VM supervisor. Carries the start call (process + mount config + CA), ongoing exec calls, stats polling, deny notifications, stdio, and the NetworkProxy capability the guest uses to dial out.
- cli.sock — Unix-domain Cap’n Proto connection from an airlock exec invocation to the CLI server embedded in the main process. The server merges override env onto the sandbox’s resolved base env and forwards the call onto the existing vsock.
- VirtioFS (not drawn) — each directory/file mount and the per-layer OCI cache are exported as VirtioFS shares, mounted by init at /mnt/<tag>, and bind-mounted into the rootfs.
- TUN → TCP proxy (dashed) — all guest TCP egress routes through airlock0, a TUN device owned by the supervisor. A userspace TCP stack (smoltcp) accepts each flow and dials back through NetworkProxy on the vsock.
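The bridging at the end of that egress path reduces to one copy loop per direction. A simplified, testable sketch (pump is a hypothetical helper name; in the supervisor one end is the smoltcp socket and the other is the stream obtained from NetworkProxy):

```python
def pump(src, dst, bufsize: int = 4096) -> int:
    """Copy bytes from src to dst until EOF; return the byte count.

    The real supervisor runs one such loop per direction for every
    accepted TCP flow: one from the airlock0/smoltcp side to the
    NetworkProxy stream on the vsock, and one back.
    """
    total = 0
    while True:
        chunk = src.read(bufsize)
        if not chunk:
            return total
        dst.write(chunk)
        total += len(chunk)
```

Because the host end is a capability, closing the RPC session tears down every such flow at once.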
Startup flow
The dynamic picture: which component does which step, left-to-right in time.
Once the container is running, airlock exec reuses the same VM: the invocation connects to cli.sock, hands (cmd, args, cwd, env overrides) to the CLI server, which merges the overrides onto the sandbox’s resolved base env and forwards the call over the existing vsock to airlockd, which forks a new process inside the container’s chroot.
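The env merge the CLI server performs is a plain overlay; a sketch (merged_env is a hypothetical name for illustration):

```python
def merged_env(base: dict, overrides: dict) -> dict:
    """Overlay exec-time env overrides onto the sandbox's resolved base
    environment: override keys win, everything else passes through, and
    the base mapping is left untouched."""
    env = dict(base)
    env.update(overrides)
    return env
```

Merging on the host means airlockd only ever sees a fully resolved environment for the new process.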