Write Path Architecture

This appendix provides a detailed explanation of how commits flow through the Asset Core write daemon, from HTTP request to durable persistence.

Overview

The write path is designed for high throughput while maintaining strict ordering guarantees and durability semantics. It combines a wide, parallel ingress path with serialized single-writer execution and a multi-stage durability pipeline.

Detailed explanation

Pipeline Stages

The write path consists of nine stages:

  1. Admission: HTTP handler assigns sequence number, acquires permits
  2. Precheck: Parallel validation (parse, schema, auth, size, idempotency)
  3. Reorder Buffer: Ensures strict sequence ordering
  4. Single Writer: Acquires runtime lock, executes operations
  5. Sealing: Creates event batch, records undo log
  6. Append Submission: Sends batch to append worker pool
  7. Durability: Workers call fsync, return result
  8. Finalization: Updates state, responds to client
  9. Driver/Reader: Tails log, updates projections
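
As a rough mental model, these stages can be named in code. The sketch below is illustrative only (the enum and variant names are hypothetical); a label like this is handy for tagging metrics and traces with the stage a commit is currently in.

    // Hypothetical stage labels for metrics and tracing; not the real types.
    #[derive(Clone, Copy, Debug, PartialEq, Eq)]
    enum WritePathStage {
        Admission,
        Precheck,
        ReorderBuffer,
        SingleWriter,
        Sealing,
        AppendSubmission,
        Durability,
        Finalization,
        DriverReader,
    }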

Ingress Subsystem

The ingress layer accepts HTTP requests and feeds them into the pipeline:

HTTP POST → Sequence Assignment → Precheck Workers → Reorder Buffer

Key components:

  • Sequence counter: Atomic u64 for strict ordering
  • Reorder semaphore: Bounds buffer size
  • Work queue: Bounded channel to precheck workers

This design provides backpressure when the system is overloaded.
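
A minimal sketch of that wiring, assuming a Tokio-based daemon; the type and field names (Ingress, WorkItem, and so on) are illustrative rather than the real API.

    use std::sync::Arc;
    use std::sync::atomic::{AtomicU64, Ordering};
    use tokio::sync::{mpsc, OwnedSemaphorePermit, Semaphore};

    struct WorkItem {
        seq: u64,
        body: Vec<u8>,
        // Held until the commit leaves the reorder buffer, bounding its size.
        _reorder_permit: OwnedSemaphorePermit,
    }

    struct Ingress {
        sequence: AtomicU64,             // atomic u64 for strict ordering
        reorder_permits: Arc<Semaphore>, // bounds the reorder buffer
        work_tx: mpsc::Sender<WorkItem>, // bounded queue to precheck workers
    }

    impl Ingress {
        async fn admit(&self, body: Vec<u8>) -> Result<u64, &'static str> {
            // Acquiring a permit is the first backpressure point: if the
            // reorder buffer is full, the HTTP handler waits here.
            let permit = self
                .reorder_permits
                .clone()
                .acquire_owned()
                .await
                .map_err(|_| "shutting down")?;
            // Assign the sequence number only once admission succeeds.
            let seq = self.sequence.fetch_add(1, Ordering::SeqCst);
            // Bounded send: waits again if the precheck workers fall behind.
            self.work_tx
                .send(WorkItem { seq, body, _reorder_permit: permit })
                .await
                .map_err(|_| "precheck queue closed")?;
            Ok(seq)
        }
    }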

Precheck Pipeline

Precheck workers execute validation steps in parallel:

  1. Parse: Deserialize JSON, compute canonical hash
  2. Schema: Validate operation structure
  3. Auth: Check permissions (if configured)
  4. Size: Enforce request size limits
  5. Idempotency: Check for duplicate commits

Each step returns Continue or ShortCircuit. A short-circuited request is answered immediately (for example, with the cached response for a duplicate idempotency key) without entering the commit lane.
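
The step contract can be sketched as a small enum plus a runner; the names below (PrecheckOutcome, Response, PrecheckStep) are hypothetical and stand in for whatever the real types are.

    // Sketch of the precheck contract; types are illustrative.
    struct Response {
        status: u16,
        body: Vec<u8>,
    }

    struct Request { /* parsed commit payload, headers, ... */ }

    enum PrecheckOutcome {
        Continue,
        // Answer immediately (e.g. a cached idempotent reply or a validation
        // error) so the request never reaches the commit lane.
        ShortCircuit(Response),
    }

    type PrecheckStep = fn(&Request) -> PrecheckOutcome;

    fn run_precheck(request: &Request, steps: &[PrecheckStep]) -> PrecheckOutcome {
        for step in steps {
            if let PrecheckOutcome::ShortCircuit(resp) = step(request) {
                return PrecheckOutcome::ShortCircuit(resp);
            }
        }
        PrecheckOutcome::Continue
    }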

Single Writer Lane

The single writer serializes all runtime mutations:

Validated Commit → Lock Runtime → Execute Operations → Seal Batch → Submit Append

This guarantees:

  • No race conditions on state
  • Deterministic ordering
  • Clean undo semantics

The runtime lock is held only during execution, not during I/O.
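
A sketch of that lock discipline, using a plain Mutex and hypothetical helpers (apply_operations, seal_batch); the real runtime lock and types will differ.

    use std::sync::Mutex;

    struct Runtime { /* in-memory state */ }
    struct ValidatedCommit { /* output of precheck */ }
    struct SealedBatch { /* events plus undo log */ }

    fn apply_operations(_runtime: &mut Runtime, _commit: &ValidatedCommit) { /* mutate state */ }
    fn seal_batch(_runtime: &Runtime, _commit: &ValidatedCommit) -> SealedBatch { SealedBatch {} }

    fn execute_and_seal(runtime: &Mutex<Runtime>, commit: &ValidatedCommit) -> SealedBatch {
        // The lock is held only while mutating in-memory state and sealing
        // the batch; it is released before any append or fsync I/O.
        let mut guard = runtime.lock().expect("runtime lock poisoned");
        apply_operations(&mut guard, commit);
        let batch = seal_batch(&guard, commit);
        drop(guard);
        // The sealed batch is handed to the append workers without the lock.
        batch
    }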

Durability Pipeline

Sealed batches flow to the append worker pool:

Sealed Batch → Append Queue → Worker Thread → Commit Log → fsync → Result

Append workers are OS threads rather than async tasks, so blocking file writes and fsync calls never stall the async runtime. Results flow back to the single writer for finalization.
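
A minimal sketch of one append worker, assuming a file-backed commit log and std channels; the path, types, and channel shapes are assumptions.

    use std::fs::{File, OpenOptions};
    use std::io::Write;
    use std::sync::mpsc;
    use std::thread;

    struct AppendJob {
        batch_bytes: Vec<u8>,
        // Carries the durability result back toward the single writer.
        result_tx: mpsc::SyncSender<std::io::Result<()>>,
    }

    fn spawn_append_worker(append_rx: mpsc::Receiver<AppendJob>) -> thread::JoinHandle<()> {
        thread::spawn(move || {
            let mut log: File = OpenOptions::new()
                .create(true)
                .append(true)
                .open("commit.log")
                .expect("open commit log");
            while let Ok(job) = append_rx.recv() {
                // Blocking write plus fsync; this is why the workers are OS
                // threads rather than async tasks.
                let result = log
                    .write_all(&job.batch_bytes)
                    .and_then(|_| log.sync_all());
                let _ = job.result_tx.send(result);
            }
        })
    }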

Event Flow

Events carry hybrid payloads:

  • Delta: What changed (for analytics)
  • Post-state: Final value (for replay)

This enables both efficient analytics and idempotent replay.
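
One plausible event shape, with hypothetical field names and a plain-string payload encoding for brevity:

    struct Event {
        sequence: u64,
        entity_id: String,
        // Delta: only what changed, convenient for analytics pipelines.
        delta: Vec<(String, String)>, // (field, new value)
        // Post-state: the full value after the change, so replay can simply
        // overwrite instead of re-applying deltas, which keeps it idempotent.
        post_state: String,
    }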

Implementation notes

Channel Topology

The system uses bounded mpsc channels between stages:

  Channel     Sender          Receiver          Bounds
  work_tx     HTTP handler    Precheck manager  queue_capacity
  result_tx   Workers         Reorder buffer    Bounded
  commit_tx   Reorder buffer  Single writer     append_queue_capacity
  append_tx   Single writer   Append workers    Bounded
  append_rx   Workers         Single writer     Bounded

Bounded queues provide natural backpressure and make each stage's queue depth observable.
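
A sketch of how two of these channels might be built with Tokio's bounded mpsc; only the config knobs named in the table (queue_capacity, append_queue_capacity) come from this document, and the concrete values are placeholders.

    use tokio::sync::mpsc;

    // Placeholder payload types; richer shapes appear in the earlier sketches.
    struct WorkItem;
    struct ValidatedCommit;

    fn main() {
        // Hypothetical capacities standing in for the configured values.
        let queue_capacity = 1024;
        let append_queue_capacity = 256;

        // HTTP handler → precheck manager, bounded by queue_capacity.
        let (work_tx, _work_rx) = mpsc::channel::<WorkItem>(queue_capacity);
        // Reorder buffer → single writer, bounded by append_queue_capacity.
        let (commit_tx, _commit_rx) = mpsc::channel::<ValidatedCommit>(append_queue_capacity);

        // A full bounded sender makes `send(...).await` wait, which is how
        // backpressure propagates upstream toward the HTTP handler.
        let _ = (work_tx, commit_tx);
    }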

Metrics Integration

Every stage records telemetry:

  • Queue depths: Visible pressure points
  • Stage durations: Where time is spent
  • Outcome counters: Success/failure rates

Metrics use consistent naming across all stages.
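
For illustration only, a tiny helper showing one consistent <prefix>_<stage>_<measure> convention; the actual metric names are not specified here.

    // Illustrative naming helper; real metric names may differ.
    fn metric_name(stage: &str, measure: &str) -> String {
        format!("write_path_{stage}_{measure}")
    }

    fn main() {
        // One example from each family described above.
        let depth = metric_name("precheck", "queue_depth");         // gauge
        let duration = metric_name("single_writer", "duration_ms"); // histogram
        let outcomes = metric_name("append", "outcomes_total");     // counter
        println!("{depth} {duration} {outcomes}");
    }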

Rollback Semantics

If append fails:

  1. Single writer applies undo log
  2. Dependent commits are aborted in reverse order
  3. Clients receive a 503 response with an explanation
  4. Metrics record the failure

This ensures the runtime never has orphaned state.
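
A sketch of the reverse-order unwind; representing the undo log as closures keyed by commit sequence is an assumption made for illustration.

    struct Runtime { /* in-memory state */ }

    struct UndoEntry {
        commit_seq: u64,
        // Restores the state this commit touched to its pre-commit value.
        revert: Box<dyn FnOnce(&mut Runtime)>,
    }

    // Returns the aborted commit sequences so callers can answer each client
    // with a 503 and record the failure in metrics.
    fn rollback(runtime: &mut Runtime, mut undo_log: Vec<UndoEntry>) -> Vec<u64> {
        let mut aborted = Vec::new();
        // Newest-first, so dependent commits unwind before the commits
        // they depended on.
        while let Some(entry) = undo_log.pop() {
            (entry.revert)(runtime);
            aborted.push(entry.commit_seq);
        }
        aborted
    }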

Graceful Shutdown

Shutdown proceeds in order:

  1. Stop accepting new requests
  2. Drain precheck workers
  3. Drain reorder buffer
  4. Drain single writer backlog
  5. Wait for append workers to finish
  6. Persist final checkpoints

No work is abandoned mid-flight.
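
A sketch of that ordering, assuming Tokio task handles for the async stages and OS-thread handles for the append workers; every handle and method name here is hypothetical.

    use tokio::task::JoinHandle;

    struct CheckpointStore;
    impl CheckpointStore {
        async fn persist_final(&self) { /* flush final checkpoints */ }
    }

    struct Pipeline {
        work_tx: tokio::sync::mpsc::Sender<Vec<u8>>,
        precheck_task: JoinHandle<()>,
        reorder_task: JoinHandle<()>,
        single_writer_task: JoinHandle<()>,
        append_workers: Vec<std::thread::JoinHandle<()>>,
        checkpoint_store: CheckpointStore,
    }

    async fn shutdown(pipeline: Pipeline) {
        // 1. Stop accepting new requests: dropping the ingress sender closes
        //    the work queue, so precheck workers drain and exit.
        drop(pipeline.work_tx);

        // 2-4. Each async stage exits once its input channel closes; await
        //      them in pipeline order.
        let _ = pipeline.precheck_task.await;
        let _ = pipeline.reorder_task.await;
        let _ = pipeline.single_writer_task.await;

        // 5. Append workers are OS threads; join them off the async runtime.
        for worker in pipeline.append_workers {
            let _ = tokio::task::spawn_blocking(move || worker.join()).await;
        }

        // 6. Persist final checkpoints only after everything upstream is quiet.
        pipeline.checkpoint_store.persist_final().await;
    }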

When to read this

Read this appendix when:

  • Debugging write latency issues
  • Understanding queue depth metrics
  • Modifying the ingress pipeline
  • Investigating durability guarantees

For day-to-day usage, the conceptual docs are sufficient.

See also