Production Design

PoC 4 is the production candidate. It combines heartbeat chains and merkle epochs into a three-layer architecture that compresses 24 hours of uptime proof into ~200 bytes on-chain.

Architecture Overview

Layer 3: Chained Claims
┌─────────────────────────────────────────────┐
│  24 epoch roots → merkle tree → claim root  │
│  Claims chain via prevClaimHash             │
│  ~200 bytes on-chain per claim              │
└─────────────────────────┬───────────────────┘
Layer 2: Merkle Epochs    │
┌─────────────────────────┴───────────────────┐
│  60 heartbeats → merkle tree → epoch root   │
│  Epochs chain via prevEpochHash             │
│  32 bytes per epoch                         │
└─────────────────────────┬───────────────────┘
Layer 1: Heartbeat Chain  │
┌─────────────────────────┴───────────────────┐
│  H0 ← H1 ← ... ← H59 per epoch              │
│  Dual-signed, ~250 bytes each               │
│  ~60 second interval                        │
└─────────────────────────────────────────────┘

Layer 1: Heartbeat Chain

The foundation. Every ~60 seconds, the provider and consumer produce a dual-signed heartbeat that chains to the previous one via SHA-256.

H0 ← H1 ← H2 ← ... ← H59

Each heartbeat contains:

  • Lease ID, sequence number, timestamp
  • Previous heartbeat hash
  • Provider ed25519 signature
  • Consumer ed25519 countersignature
  • Final hash (chains everything)

See Heartbeat Chain for the full signing protocol.

Layer 2: Merkle Epochs

Every 60 heartbeats (~1 hour), the heartbeat chain is rolled up into an epoch. The epoch is a merkle tree of heartbeat hashes, yielding a single 32-byte root.

Heartbeats H0...H59 → Merkle Tree → Epoch Root (32 bytes)

Epochs chain via prevEpochHash:

E0 ← E1 ← E2 ← ... ← E23

See Merkle Epochs for tree construction and selective disclosure.

Layer 3: Chained Claims

Every 24 epochs (~1 day), epoch roots are grouped into a claim. The claim is another merkle tree -- this time of epoch roots -- yielding a single claim root.

Epoch Roots E0...E23 → Merkle Tree → Claim Root (32 bytes)

Claims chain via prevClaimHash:

C0 ← C1 ← C2 ← ... ← Cn

Claim Structure

type Claim struct {
    LeaseID       string   // lease block hash
    ClaimIndex    uint64   // monotonic claim counter
    StartEpoch    uint64   // first epoch index in this claim
    EndEpoch      uint64   // last epoch index in this claim
    MerkleRoot    []byte   // root of epoch-root merkle tree
    PrevClaimHash []byte   // SHA-256 of previous claim
    Timestamp     int64    // unix nanos
    ProviderSig   []byte   // provider signs the claim
    ConsumerSig   []byte   // consumer countersigns
    Hash          []byte   // SHA-256 of all fields
}

The full claim (~200 bytes including signatures and metadata) is submitted on-chain as part of lease settlement.

Compression

Metric               Value
Heartbeat interval   60 seconds
Heartbeats per day   1,440
Raw heartbeat data   718.9 KB
Epochs per day       24
Epoch root data      768 bytes
Claim root on-chain  ~200 bytes
Compression ratio    3,681x

24 hours of uptime in 200 bytes

A full day of continuous, dual-signed uptime proof compresses to roughly the size of a single tweet on-chain. The raw data is retained off-chain by both parties for dispute resolution.

Dispute Proof Path

If a dispute arises -- for example, the consumer claims the provider was down during hour 15 -- the full proof path can be disclosed:

Step 1: Locate the claim containing hour 15
        claim.startEpoch <= 15 <= claim.endEpoch

Step 2: Provide merkle proof from epoch 15's root to claim root
        epoch root + log2(24) sibling hashes ≈ 5 hashes

Step 3: Provide merkle proof from heartbeat to epoch root
        heartbeat + log2(60) sibling hashes ≈ 6 hashes

Step 4: Verify heartbeat signatures
        Check provider and consumer ed25519 signatures

Heartbeat → Epoch Root (merkle proof) → Claim Root (merkle proof) → Ledger

The total dispute proof for a single heartbeat is approximately:

Component           Size
Heartbeat data      ~250 bytes
Epoch merkle proof  6 x 32 = 192 bytes
Claim merkle proof  5 x 32 = 160 bytes
Total               ~602 bytes

Design Decision: No VDF

Economics over cryptography

PoC 4 deliberately omits Verifiable Delay Functions from the base protocol. The reasoning is that economics solves the collusion problem more effectively than computation.

The Economic Argument

For provider-consumer collusion to be profitable:

XE emission from fake uptime > XUSD cost of the lease

Network parameters are set so this inequality never holds. The emission rate is calibrated below the lease cost, making collusion a net loss.

Additionally:

  • Collateral enforcement. Providers stake collateral when accepting leases. Slashing on dispute makes ghost nodes unprofitable.
  • Emission caps. Per-lease emission is bounded, preventing runaway extraction.
  • Network monitoring. Sentinel nodes and fisherman protocols provide additional detection layers.

When VDF Might Return

VDFs remain available as an optional hardening layer, activatable via state chain governance. Scenarios where VDF activation might be warranted:

  • Emission parameters change such that the economic argument weakens
  • A novel attack is discovered that bypasses economic disincentives
  • High-value leases require additional assurance beyond economics

Session Model

Designed but not yet implemented

The session model is specified in PoC 5 but has not been built. It is included here as the planned production behaviour.

The Problem

The dual-signature requirement assumes both parties are online simultaneously. In practice, consumers may go offline -- for maintenance, network issues, or simply because the workload doesn't require active monitoring.

Proposed Solution

When the consumer is offline, the provider continues generating single-signed heartbeats. These are a weaker proof tier but still valuable -- they demonstrate the provider's signing key was active and producing heartbeats at the expected interval.

When the consumer comes back online, it reviews the single-signed heartbeats retroactively and can confirm or dispute them.

Proof Tiers

Tier    Signatures                          Weight  Description
Strong  Provider + Consumer                 100%    Both parties signed in real time
Medium  Provider, consumer confirmed later  75%     Provider signed, consumer reviewed retroactively
Weak    Provider only, unchallenged         50%     Provider signed, consumer never reviewed

Emission rates scale with proof tier. Strong proofs earn full emission; weaker tiers earn proportionally less.

Staged Rollout

The uptime system is designed for incremental deployment:

Stage  Name                    Description
0      Heartbeat + Collateral  Basic dual-signed heartbeats with economic enforcement. Minimum viable uptime proof.
1      Sessions                Proof tiers for offline consumers. Retroactive confirmation.
2      Sentinel Nodes          Network-operated nodes that independently verify provider uptime.
3      Fisherman Protocol      Incentivised third-party verification. Anyone can challenge a provider and earn a bounty.
4      TEE                     Trusted Execution Environment attestation where available. Hardware-backed proof.
5      Confidential VMs        Full confidential computing. Workload integrity verified by hardware.

Current status

Stage 0 is implemented in the PoC code. Stages 1-5 are designed but not yet built. Each stage adds defence-in-depth without requiring changes to earlier stages.