Write Path Architecture
This appendix provides a detailed explanation of how commits flow through the Asset Core write daemon, from HTTP request to durable persistence.
Overview
The write path is designed for high throughput while maintaining strict ordering guarantees and durability semantics. It combines wide ingress with serial single-writer execution and multi-stage durability.
Detailed explanation
Pipeline Stages
The write path consists of nine stages:
- Admission: HTTP handler assigns sequence number, acquires permits
- Precheck: Parallel validation (parse, schema, auth, size, idempotency)
- Reorder Buffer: Ensures strict sequence ordering
- Single Writer: Acquires runtime lock, executes operations
- Sealing: Creates event batch, records undo log
- Append Submission: Sends batch to append worker pool
- Durability: Workers call fsync, return result
- Finalization: Updates state, responds to client
- Driver/Reader: Tails log, updates projections
Ingress Subsystem
The ingress layer accepts HTTP requests and feeds them into the pipeline:
HTTP POST → Sequence Assignment → Precheck Workers → Reorder Buffer
Key components:
- Sequence counter: Atomic u64 for strict ordering
- Reorder semaphore: Bounds buffer size
- Work queue: Bounded channel to precheck workers
This design provides backpressure when the system is overloaded.
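The admission step can be sketched as follows. This is a minimal illustration, not the actual Asset Core code: the names (`Ingress`, `admit`, the `(u64, String)` work item) are invented, and the real handler also holds reorder-semaphore permits.

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::mpsc::{sync_channel, SyncSender, TrySendError};

/// Hypothetical ingress handle: assigns a strictly increasing sequence
/// number, then enqueues onto a bounded work queue that applies
/// backpressure when full.
struct Ingress {
    seq: AtomicU64,
    work_tx: SyncSender<(u64, String)>, // (sequence, raw request body)
}

impl Ingress {
    /// Returns the assigned sequence on success, or hands the body back
    /// on overload so the handler can respond with an error status.
    fn admit(&self, body: String) -> Result<u64, String> {
        let seq = self.seq.fetch_add(1, Ordering::SeqCst);
        match self.work_tx.try_send((seq, body)) {
            Ok(()) => Ok(seq),
            Err(TrySendError::Full((_, b))) => Err(b), // backpressure
            Err(TrySendError::Disconnected((_, b))) => Err(b),
        }
    }
}

fn main() {
    let (work_tx, work_rx) = sync_channel(2); // queue_capacity = 2
    let ingress = Ingress { seq: AtomicU64::new(0), work_tx };
    assert_eq!(ingress.admit("a".into()), Ok(0));
    assert_eq!(ingress.admit("b".into()), Ok(1));
    // Queue full: the third request is rejected instead of buffered unboundedly.
    assert!(ingress.admit("c".into()).is_err());
    drop(work_rx);
}
```

The bounded `sync_channel` is what turns an overloaded downstream into an immediate, visible rejection at the edge rather than unbounded memory growth.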
Precheck Pipeline
Precheck workers execute validation steps in parallel:
- Parse: Deserialize JSON, compute canonical hash
- Schema: Validate operation structure
- Auth: Check permissions (if configured)
- Size: Enforce request size limits
- Idempotency: Check for duplicate commits
Each step returns Continue or ShortCircuit. Short-circuits return cached responses without entering the commit lane.
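The Continue/ShortCircuit contract can be sketched like this. The type and step names are illustrative, and for clarity the sketch chains steps sequentially, whereas the daemon runs them in parallel as described above.

```rust
/// Hypothetical precheck outcome: either pass the commit onward, or
/// short-circuit with a ready-made response.
#[derive(Debug, PartialEq)]
enum StepOutcome {
    Continue,
    ShortCircuit(u16, &'static str), // (status, response body)
}

/// Run steps in order; the first short-circuit wins and the commit
/// never reaches the single-writer lane.
fn run_prechecks(body: &str, steps: &[fn(&str) -> StepOutcome]) -> StepOutcome {
    for step in steps {
        if let sc @ StepOutcome::ShortCircuit(..) = step(body) {
            return sc;
        }
    }
    StepOutcome::Continue
}

fn size_check(body: &str) -> StepOutcome {
    if body.len() > 1024 {
        StepOutcome::ShortCircuit(413, "payload too large")
    } else {
        StepOutcome::Continue
    }
}

fn parse_check(body: &str) -> StepOutcome {
    if body.trim_start().starts_with('{') {
        StepOutcome::Continue
    } else {
        StepOutcome::ShortCircuit(400, "not a JSON object")
    }
}

fn main() {
    let steps: &[fn(&str) -> StepOutcome] = &[size_check, parse_check];
    assert_eq!(run_prechecks("{\"op\":\"set\"}", steps), StepOutcome::Continue);
    assert_eq!(
        run_prechecks("not json", steps),
        StepOutcome::ShortCircuit(400, "not a JSON object")
    );
}
```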
Single Writer Lane
The single writer serializes all runtime mutations:
Validated Commit → Lock Runtime → Execute Operations → Seal Batch → Submit Append
This guarantees:
- No race conditions on state
- Deterministic ordering
- Clean undo semantics
The runtime lock is held only during execution, not during I/O.
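The lock-scope discipline can be sketched as follows, with invented `Runtime` and `SealedBatch` types standing in for the real ones: mutations happen inside the guard's scope, and the guard is dropped before the batch goes anywhere near I/O.

```rust
use std::sync::Mutex;

/// Hypothetical runtime state and sealed batch types.
struct Runtime { counter: i64 }
struct SealedBatch { ops_applied: usize, post_counter: i64 }

/// Execute and seal under the lock, but release it before any I/O:
/// the returned batch is handed to the append workers lock-free.
fn execute_and_seal(runtime: &Mutex<Runtime>, deltas: &[i64]) -> SealedBatch {
    let batch = {
        let mut rt = runtime.lock().unwrap(); // lock held only in this scope
        for d in deltas {
            rt.counter += d; // serialized mutation: no races, deterministic order
        }
        SealedBatch { ops_applied: deltas.len(), post_counter: rt.counter }
    }; // guard dropped here, before durability I/O begins
    batch
}

fn main() {
    let runtime = Mutex::new(Runtime { counter: 0 });
    let batch = execute_and_seal(&runtime, &[3, 4]);
    assert_eq!(batch.ops_applied, 2);
    assert_eq!(batch.post_counter, 7);
    // The lock is free again while the batch would be fsynced elsewhere.
    assert!(runtime.try_lock().is_ok());
}
```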
Durability Pipeline
Sealed batches flow to the append worker pool:
Sealed Batch → Append Queue → Worker Thread → Commit Log → fsync → Result
Workers are OS threads (not async tasks), so blocking write-and-fsync calls never stall the async executor. Results flow back to the single writer for finalization.
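A worker of this shape can be sketched with standard-library threads and channels. The function name, channel types, and log path are assumptions for illustration, not the daemon's actual API.

```rust
use std::fs::OpenOptions;
use std::io::Write;
use std::sync::mpsc::{Receiver, SyncSender};
use std::thread;

/// Hypothetical append worker: a dedicated OS thread, so the blocking
/// write + fsync sequence never stalls async executor threads.
fn spawn_append_worker(
    path: std::path::PathBuf,
    append_rx: Receiver<Vec<u8>>,
    result_tx: SyncSender<std::io::Result<u64>>,
) -> thread::JoinHandle<()> {
    thread::spawn(move || {
        let mut log = OpenOptions::new()
            .create(true)
            .append(true)
            .open(&path)
            .expect("open commit log");
        for batch in append_rx {
            let result = log
                .write_all(&batch)
                .and_then(|()| log.sync_all()) // fsync: durable once this returns
                .map(|()| batch.len() as u64);
            let _ = result_tx.send(result); // flows back for finalization
        }
    })
}

fn main() {
    let path = std::env::temp_dir().join("write_path_demo.log");
    let _ = std::fs::remove_file(&path);
    let (append_tx, append_rx) = std::sync::mpsc::sync_channel(8);
    let (result_tx, result_rx) = std::sync::mpsc::sync_channel(8);
    let worker = spawn_append_worker(path.clone(), append_rx, result_tx);
    append_tx.send(b"batch-1\n".to_vec()).unwrap();
    assert_eq!(result_rx.recv().unwrap().unwrap(), 8);
    drop(append_tx); // closing the queue lets the worker exit its loop
    worker.join().unwrap();
    let _ = std::fs::remove_file(&path);
}
```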
Event Flow
Events carry hybrid payloads:
- Delta: What changed (for analytics)
- Post-state: Final value (for replay)
This enables both efficient analytics and idempotent replay.
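The hybrid payload can be sketched as a struct with both halves; the field and type names are invented. Replay applies the post-state, which is what makes duplicate delivery harmless, while an analytics consumer would read the delta instead.

```rust
use std::collections::HashMap;

/// Hypothetical hybrid event payload: the delta feeds analytics, the
/// post-state makes replay idempotent.
struct Event {
    key: &'static str,
    delta: i64,      // what changed
    post_state: i64, // final value after the change
}

/// Replay writes the post-state, so replaying the same event twice
/// (e.g. after a crash mid-replay) converges to the same projection.
fn replay(projection: &mut HashMap<&'static str, i64>, event: &Event) {
    projection.insert(event.key, event.post_state);
}

fn main() {
    let e = Event { key: "inventory", delta: -3, post_state: 7 };
    let mut proj = HashMap::new();
    replay(&mut proj, &e);
    replay(&mut proj, &e); // duplicate delivery: no double-counting
    assert_eq!(proj["inventory"], 7);
    // An analytics consumer sums deltas instead of reading final values.
    assert_eq!(e.delta, -3);
}
```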
Implementation notes
Channel Topology
The system uses bounded mpsc channels between stages:
| Channel | Sender | Receiver | Bounds |
|---|---|---|---|
| work_tx | HTTP handler | Precheck manager | queue_capacity |
| result_tx | Workers | Reorder buffer | Bounded |
| commit_tx | Reorder buffer | Single writer | append_queue_capacity |
| append_tx | Single writer | Append workers | Bounded |
| append_rx | Workers | Single writer | Bounded |
Bounded queues provide backpressure and observability.
Metrics Integration
Every stage records telemetry:
- Queue depths: Visible pressure points
- Stage durations: Where time is spent
- Outcome counters: Success/failure rates
Metrics use consistent naming across all stages.
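One way such a convention could look is a single helper that every stage goes through; the `write_path_` prefix and the metric family names here are assumptions for illustration, not the daemon's actual metric names.

```rust
/// Hypothetical naming helper: every stage emits the same metric
/// families under one prefix, so dashboards can template across stages.
fn metric_name(stage: &str, metric: &str) -> String {
    format!("write_path_{stage}_{metric}")
}

fn main() {
    for stage in ["admission", "precheck", "single_writer", "append"] {
        for metric in ["queue_depth", "stage_duration_seconds", "outcomes_total"] {
            // Same shape for every stage: one prefix, predictable suffixes.
            assert!(metric_name(stage, metric).starts_with("write_path_"));
        }
    }
    assert_eq!(
        metric_name("precheck", "queue_depth"),
        "write_path_precheck_queue_depth"
    );
}
```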
Rollback Semantics
If append fails:
- Single writer applies undo log
- Dependent commits are aborted in reverse order
- Clients receive 503 with explanation
- Metrics record the failure
This ensures the runtime never has orphaned state.
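The reverse-order unwind can be sketched as follows, with a hypothetical `UndoEntry` that records the prior value of each mutated key. Walking the log backwards undoes later writes before the writes they depended on.

```rust
use std::collections::HashMap;

/// Hypothetical undo entry: restore a key to its prior value.
struct UndoEntry { key: &'static str, prior: i64 }

/// On append failure, apply the undo log in reverse so dependent
/// (later) writes are unwound first.
fn rollback(state: &mut HashMap<&'static str, i64>, undo: &[UndoEntry]) {
    for entry in undo.iter().rev() {
        state.insert(entry.key, entry.prior);
    }
}

fn main() {
    let mut state = HashMap::from([("a", 1)]);
    let mut undo = Vec::new();
    // Two dependent mutations, recording priors as we go.
    undo.push(UndoEntry { key: "a", prior: state["a"] });
    state.insert("a", 2);
    undo.push(UndoEntry { key: "a", prior: state["a"] });
    state.insert("a", 3);
    // Append fails: unwind in reverse, restoring the original value.
    rollback(&mut state, &undo);
    assert_eq!(state["a"], 1);
}
```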
Graceful Shutdown
Shutdown proceeds in order:
- Stop accepting new requests
- Drain precheck workers
- Drain reorder buffer
- Drain single writer backlog
- Wait for append workers to finish
- Persist final checkpoints
No work is abandoned mid-flight.
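The drain discipline falls out naturally from channel-disconnect semantics, as this two-stage sketch shows (the real pipeline has more stages; the function and channel names are invented): dropping the upstream sender lets each stage's receive loop run to completion before the next stage is joined.

```rust
use std::sync::mpsc::sync_channel;
use std::thread;

/// Hypothetical two-stage drain: closing the upstream sender lets each
/// stage run to completion, so no in-flight item is abandoned.
fn drain_pipeline(n: u32) -> usize {
    let (ingress_tx, precheck_rx) = sync_channel::<u32>(16);
    let (commit_tx, writer_rx) = sync_channel::<u32>(16);

    // Precheck stage: drains until ingress_tx is dropped, then its own
    // commit_tx drop signals the writer stage to finish.
    let precheck = thread::spawn(move || {
        for item in precheck_rx {
            commit_tx.send(item).unwrap();
        }
    });
    // Writer stage: counts everything that reaches it.
    let writer = thread::spawn(move || writer_rx.iter().count());

    for i in 0..n {
        ingress_tx.send(i).unwrap();
    }
    drop(ingress_tx);         // stop accepting new requests
    precheck.join().unwrap(); // precheck stage fully drained
    writer.join().unwrap()    // downstream drained; returns committed count
}

fn main() {
    assert_eq!(drain_pipeline(5), 5); // nothing abandoned mid-flight
}
```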
When to read this
Read this appendix when:
- Debugging write latency issues
- Understanding queue depth metrics
- Modifying the ingress pipeline
- Investigating durability guarantees
For day-to-day usage, the conceptual docs are sufficient.
See also
- Runtime Model - Conceptual overview
- Freshness and Replay - How data reaches readers
- Health and Metrics - Monitoring the pipeline