Technical implementation

These pages describe how airlock is put together internally — the virtualization layer, the RPC protocol between host and guest, the guest init sequence, how mounts and networking are wired up, and the on-disk layout of sandbox and cache state.

Most users will never need any of this; it’s documented for contributors, people debugging unusual failures, and anyone evaluating the security model in detail.

Overview

airlock runs untrusted code inside a lightweight Linux VM. A single airlock binary boots a VM, pulls an OCI container image, assembles an overlayfs rootfs, and gives the user an interactive shell (or runs a one-off command) inside the container. The VM provides hardware-level isolation; the container provides a familiar image-based environment.
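The overlayfs step, for instance, comes down to stacking the pulled image layers under a writable upper directory. A minimal sketch, assuming the nix crate and invented layer paths (the real code derives them from the pulled image and the OCI cache):

```rust
use nix::mount::{mount, MsFlags};

/// Illustrative only: stack read-only OCI layer directories under a writable
/// upper directory with a single overlayfs mount. All paths are invented.
fn mount_rootfs() -> nix::Result<()> {
    // lowerdir is colon-separated, topmost layer first.
    let data = "lowerdir=/layers/2:/layers/1:/layers/0,upperdir=/overlay/upper,workdir=/overlay/work";
    mount(
        Some("overlay"), // source is conventionally "overlay" for overlayfs
        "/rootfs",       // where the merged view appears
        Some("overlay"), // filesystem type
        MsFlags::empty(),
        Some(data),
    )
}
```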

One vsock, one RPC connection

The central design decision: the host process and the in-VM supervisor talk over a single vsock connection carrying a single Cap’n Proto RPC session. Every cross-boundary interaction — booting the container, attaching new processes, forwarding stdio, polling stats, bridging outbound TCP, streaming tracing logs — rides that one session.

Why Cap’n Proto specifically (over gRPC, JSON-RPC, or a bespoke framing)? Its zero-copy wire format keeps the stdio hot path cheap, it treats remote interfaces as first-class values (the supervisor calls outbound TCP through a capability the host handed it, not through a URL it could fabricate), and it interleaves concurrent calls and streams on one socket without any multiplexing glue of our own. See RPC Protocol / Why Cap’n Proto for the detailed rationale.

This shapes almost everything else:

  • No second transport. There is no virtio console for stdio, no separate vsock port for networking, no control channel for signals. Cap’n Proto RPC multiplexes many concurrent calls and streams over the one connection, so a single read()/write() loop in the supervisor is all the glue the VM needs.
  • Capabilities as plumbing. Streams like stdin, stdout polling, and outbound TCP are modelled as Cap’n Proto capabilities passed in as arguments. The supervisor doesn’t need the host’s identity or address — it just calls back through the capability it was handed. That means the VM has no egress path of any kind: the only way out is an explicit capability the host chose to grant.
  • No daemon, no hidden state. There is no airlock daemon on the host, no shared socket directory, no broker. When the airlock start process dies, the vsock closes, the supervisor exits, and the VM is torn down. airlock exec is a thin client that reaches the same session through a Unix-socket bridge in the running airlock start process.
  • Same wire on every platform. macOS uses host TCP (the Apple Virtualization framework’s vsock surfaces that way) and Linux uses real AF_VSOCK, but the RPC schema and the code paths above it are identical.

The RPC protocol page has the full interface list; the rest of this chapter assumes this one-session model.
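As a rough mental model only (the real interfaces live in the Cap’n Proto schema described on the RPC Protocol page; the trait names, methods, and types below are invented, and only a subset of the surface is shown), the session can be pictured as a supervisor interface whose arguments include the capabilities the host chooses to grant:

```rust
/// Conceptual sketch, not the actual schema: the point is that streams and
/// the network proxy arrive as capability arguments, not as addresses.
trait Supervisor {
    /// Boot the container: process + mount config, plus everything the guest
    /// is allowed to reach, handed over as capabilities.
    fn start(
        &self,
        config: StartConfig,
        stdio: Box<dyn ByteStream>,   // host-owned stream for container stdio
        proxy: Box<dyn NetworkProxy>, // the only egress path the guest has
    ) -> ProcessHandle;

    /// Attach an additional process to the running container (airlock exec).
    fn exec(&self, cmd: Vec<String>, cwd: String, env: Vec<(String, String)>) -> ProcessHandle;

    /// Poll resource usage; called periodically by the host.
    fn stats(&self) -> Stats;
}

/// Capability the host hands to the guest; the guest cannot dial anything
/// the host did not wrap in one of these.
trait NetworkProxy {
    fn connect(&self, host: String, port: u16) -> Box<dyn ByteStream>;
}

/// Placeholder types so the sketch is self-contained.
trait ByteStream {
    fn write(&mut self, data: &[u8]);
}
struct StartConfig;
struct ProcessHandle;
struct Stats;
```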

Components and channels

The static picture: what runs where, and how the pieces talk.

HOST (macOS / Linux)

  • airlock start (main process): config + vault + OCI pull; VM boot (hypervisor API); VirtioFS exporter (layers, mounts); network proxy (rules + TLS MITM); CLI server · stdio + signal relay
  • airlock exec (sibling invocation): walks up for .airlock/sandbox/cli.sock; sends (cmd, args, cwd, env overrides); no project load, no vault unlock

VM (Linux, ARM64)

  • init (initramfs): one-shot setup that mounts shares, disk, overlay, and networking, then execs airlockd
  • airlockd (supervisor): vsock server :1024 (Cap'n Proto); spawns + supervises the container; bridges guest TCP ↔ host proxy; admin HTTP at http://admin.airlock/
  • container process: cmd running under chroot + uid/gid

Connecting them: vsock · RPC (airlock start ↔ airlockd), cli.sock · RPC exec (airlock exec → airlock start), chroot + exec (airlockd → container process), and TUN → TCP proxy (container egress back to the host).

Channels shown

  • vsock · RPC — the single Cap’n Proto RPC connection between the host airlock start process and the in-VM supervisor. Carries the start call (process + mount config + CA), ongoing exec calls, stats polling, deny notifications, stdio, and the NetworkProxy capability the guest uses to dial out.
  • cli.sock — Unix-domain Cap’n Proto connection from an airlock exec invocation to the CLI server embedded in the main process. The server merges override env onto the sandbox’s resolved base env and forwards the call onto the existing vsock.
  • VirtioFS (not listed above) — each directory/file mount and the per-layer OCI cache are exported as VirtioFS shares, mounted by init at /mnt/<tag>, and bind-mounted into the rootfs (see the mount sketch after this list).
  • TUN → TCP proxy — all guest TCP egress routes through airlock0, a TUN device owned by the supervisor. A userspace TCP stack (smoltcp) accepts each flow and dials back through NetworkProxy on the vsock.
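Returning to the VirtioFS item: the mount-then-bind step that init performs for each share is roughly the following sketch (assuming the nix crate; the tag and destination paths are invented for illustration):

```rust
use nix::mount::{mount, MsFlags};

/// Sketch of init's handling of one VirtioFS share: mount it by tag under
/// /mnt/<tag>, then bind it to its destination inside the assembled rootfs.
fn mount_share(tag: &str, dest_in_rootfs: &str) -> nix::Result<()> {
    let staging = format!("/mnt/{tag}");
    std::fs::create_dir_all(&staging).ok();

    // Mount the VirtioFS share by its tag.
    mount(
        Some(tag),
        staging.as_str(),
        Some("virtiofs"),
        MsFlags::empty(),
        None::<&str>,
    )?;

    // Bind it into the rootfs.
    mount(
        Some(staging.as_str()),
        dest_in_rootfs,
        None::<&str>,
        MsFlags::MS_BIND,
        None::<&str>,
    )
}
```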

Startup flow

The dynamic picture: which component does which step, left-to-right in time.

Actors, left to right in time: the user, airlock (CLI), the hypervisor, init, airlockd, and the container.

  1. The user runs $ airlock start; the CLI loads config + env and pulls the OCI image.
  2. The CLI boots the VM through the hypervisor, which loads the kernel + initramfs.
  3. init mounts the shares, disk, overlay, and networking, then execs airlockd.
  4. airlockd listens on vsock :1024; the host connects and issues the start RPC.
  5. airlockd chroots + execs the container process, which starts running.
  6. airlockd relays I/O, and stdio flows between the container and the user's terminal.
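For step 4, a minimal sketch of the guest side of listening on vsock :1024, using the libc crate directly; the actual airlockd presumably runs this under its async runtime and layers the Cap'n Proto session on top, which is omitted here:

```rust
use std::io;
use std::mem;
use std::os::fd::{FromRawFd, OwnedFd};

/// Port named in the docs for the supervisor's Cap'n Proto server.
const VSOCK_RPC_PORT: u32 = 1024;

/// Create a listening vsock socket inside the guest. Error handling is
/// minimal and fd cleanup on the error paths is skipped for brevity.
fn listen_vsock(port: u32) -> io::Result<OwnedFd> {
    unsafe {
        let fd = libc::socket(libc::AF_VSOCK, libc::SOCK_STREAM, 0);
        if fd < 0 {
            return Err(io::Error::last_os_error());
        }
        // Bind to any CID (we are the guest) on the fixed port.
        let mut addr: libc::sockaddr_vm = mem::zeroed();
        addr.svm_family = libc::AF_VSOCK as libc::sa_family_t;
        addr.svm_cid = libc::VMADDR_CID_ANY;
        addr.svm_port = port;
        if libc::bind(
            fd,
            &addr as *const libc::sockaddr_vm as *const libc::sockaddr,
            mem::size_of::<libc::sockaddr_vm>() as libc::socklen_t,
        ) < 0
            || libc::listen(fd, 1) < 0
        {
            return Err(io::Error::last_os_error());
        }
        Ok(OwnedFd::from_raw_fd(fd))
    }
}

fn main() -> io::Result<()> {
    let _listener = listen_vsock(VSOCK_RPC_PORT)?;
    // accept() and the RPC session setup would follow here.
    Ok(())
}
```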

Once the container is running, airlock exec reuses the same VM: the invocation walks up to cli.sock, hands (cmd, args, cwd, env overrides) to the CLI server, which merges the overrides onto the sandbox’s base env and forwards the call over the existing vsock to airlockd, which forks a new process inside the container’s chroot.
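A std-only sketch of the two host-side halves of that flow; the socket path is the one documented above, while the function names and env representation are illustrative:

```rust
use std::collections::HashMap;
use std::path::{Path, PathBuf};

/// First, an `airlock exec` invocation walks up from the current directory
/// looking for the sandbox's CLI socket.
fn find_cli_sock(start: &Path) -> Option<PathBuf> {
    start
        .ancestors()
        .map(|dir| dir.join(".airlock/sandbox/cli.sock"))
        .find(|candidate| candidate.exists())
}

/// Second, the CLI server merges the caller's env overrides onto the
/// sandbox's resolved base env before forwarding the exec call over the
/// existing vsock session.
fn merge_env(
    base: &HashMap<String, String>,
    overrides: &HashMap<String, String>,
) -> HashMap<String, String> {
    let mut merged = base.clone();
    merged.extend(overrides.iter().map(|(k, v)| (k.clone(), v.clone())));
    merged
}
```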