Threat Analysis¶
Red team analysis of the proof-of-uptime system. Each threat is assessed for feasibility, impact, and available mitigations.
Threat Summary¶
| Threat | Severity | Status |
|---|---|---|
| Ghost Node Attack | Critical | Mitigated (multi-layer) |
| Collusion | High | Mitigated (economics) |
| Key Compromise | Critical | Must-fix before production |
| Timestamp Manipulation | Medium | Must-fix before production |
| Performance Degradation | Medium | Out of scope (consumer responsibility) |
| Overcommitment | Medium | Not yet addressed |
Ghost Node Attack¶
Scenario: Provider accepts a lease, extracts the VM's signing key, shuts down the actual VM, and runs a lightweight sentinel process (e.g., on a Raspberry Pi) that generates heartbeats using the extracted key. The provider earns emissions without providing any compute.
Normal: Provider VM ←──heartbeats──→ Consumer
Ghost: Raspberry Pi ←──heartbeats──→ Consumer (no real VM)
Defences¶
Filesystem challenges. The consumer can issue random filesystem challenges -- requesting the hash of a specific file or directory in the VM. A ghost node without the actual filesystem cannot respond correctly.
Filesystem challenges are application-level
These are not part of the heartbeat protocol itself. They would be implemented as part of the consumer's monitoring tooling and could trigger a dispute.
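A filesystem challenge of this kind can be sketched as follows. This is a minimal illustration, not the actual monitoring tooling: the `respond_to_challenge` helper and the in-memory dict (standing in for the VM's real disk) are hypothetical, and the nonce stops a ghost node from replaying a precomputed hash.

```python
import hashlib
import secrets

def respond_to_challenge(fs: dict, path: str, nonce: bytes) -> bytes:
    """Hash the requested file together with a fresh nonce.
    A ghost node without the real filesystem cannot produce this value."""
    return hashlib.sha256(nonce + fs[path]).digest()

# Simulated VM filesystem (in a real lease this is the VM's actual disk).
vm_fs = {"/etc/app/config.yaml": b"threads: 8\n"}

# Consumer side: it knows the expected contents because it deployed them.
nonce = secrets.token_bytes(16)
expected = hashlib.sha256(nonce + b"threads: 8\n").digest()

assert respond_to_challenge(vm_fs, "/etc/app/config.yaml", nonce) == expected
```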
Compute challenges. Similar to filesystem challenges but targeting CPU/memory. The consumer sends a computation that requires real resources (e.g., hash a large random buffer). Response time reveals whether real hardware is backing the VM.
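A compute challenge could be sketched like this. The `compute_challenge` helper and buffer size are illustrative assumptions: the seed is expanded deterministically so the consumer can recompute the answer locally, while the response time reveals the hardware backing the VM.

```python
import hashlib
import time

def compute_challenge(seed: bytes, size_mb: int = 4) -> bytes:
    """Expand the seed into a large pseudorandom buffer and hash it.
    Weak ghost hardware answers noticeably slower than a real VM."""
    buf = bytearray()
    block = seed
    target = size_mb * 1024 * 1024
    while len(buf) < target:
        block = hashlib.sha256(block).digest()
        buf.extend(block)
    return hashlib.sha256(bytes(buf)).digest()

# Consumer side: recompute the answer and time the provider's reply.
start = time.monotonic()
answer = compute_challenge(b"challenge-nonce")
elapsed = time.monotonic() - start
assert answer == compute_challenge(b"challenge-nonce")  # deterministic
```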
Fisherman verification. Third-party verifiers independently probe provider VMs. See Stage 3 of the rollout plan.
Collateral slashing. If a ghost node is detected and a dispute succeeds, the provider's staked collateral is slashed. The economic loss from slashing must exceed the potential gain from running ghost nodes.
Collusion¶
Scenario: Provider and consumer agree to fabricate heartbeats without any real compute occurring. They split the XE emissions between them.
Defence: Economics¶
The primary defence is economic design. For collusion to be profitable:
- Consumer pays XUSD for the lease.
- Provider earns XE emissions.
- Total XE earned must exceed total XUSD paid.
Network parameters ensure this inequality never holds. The colluding pair loses money on every fabricated heartbeat.
Why this works
Unlike proof-of-work where mining rewards can exceed costs, XE's emission rate is deliberately set below the lease cost. There is no profit to split -- collusion is pure loss.
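To make the inequality concrete, here is an illustrative calculation. The numbers are made up for the example and are not actual network parameters.

```python
# Illustrative numbers only -- not actual network parameters.
lease_cost_xusd_per_day = 10.0   # what the colluding consumer pays
emission_xe_per_day = 8.0        # what the colluding provider earns
xe_price_in_xusd = 1.0           # assumed exchange rate

daily_profit = emission_xe_per_day * xe_price_in_xusd - lease_cost_xusd_per_day
assert daily_profit < 0  # collusion is pure loss while emissions < lease cost
```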
Defence: VDF (Optional)¶
If economic parameters change or an edge case is discovered, VDF hardening can be activated. With VDF enabled:
- Fabricating 1 day of heartbeats takes 1 day of computation.
- The colluding pair cannot batch-generate a month of proofs in seconds.
- Even with collusion, real wall-clock time must be spent.
VDF is available as a governance-activated option via the state chain.
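The sequentiality property can be illustrated with an iterated hash chain. This is only a toy stand-in: production VDF constructions allow fast verification, which plain iterated SHA-256 does not, and the step count here is arbitrary.

```python
import hashlib

def sequential_proof(seed: bytes, steps: int) -> bytes:
    """Toy sequential function: each step depends on the previous output,
    so `steps` iterations cannot be parallelised or batch-generated."""
    h = seed
    for _ in range(steps):
        h = hashlib.sha256(h).digest()
    return h

# Each heartbeat's proof chains from the previous one, so a month of
# proofs costs a month's worth of sequential work (with calibrated steps).
p1 = sequential_proof(b"heartbeat-0", 10_000)
p2 = sequential_proof(p1, 10_000)
assert p1 != p2
```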
Key Compromise¶
Scenario: An attacker obtains a provider's or consumer's ed25519 private key. They can now sign heartbeats on behalf of the compromised party.
Current Status¶
Must-fix before production
There is no key rotation mechanism in the current implementation. A compromised key remains valid indefinitely. This is identified as a critical gap.
Required Mitigations¶
Key rotation protocol. Both provider and consumer must be able to rotate their signing keys. The new key must be registered on-chain (via a block or state chain operation) and the old key invalidated.
Rotation triggers:
- Scheduled rotation (e.g., every 30 days)
- Manual rotation on suspected compromise
- Forced rotation by governance in emergency
Backward compatibility. Old heartbeats signed with the previous key must remain verifiable. The chain of trust should record which key was active during which period.
HSM support. For providers running high-value leases, hardware security module integration would prevent key extraction entirely.
Timestamp Manipulation¶
Scenario: A provider manipulates the timestamp field in heartbeats to claim uptime during periods when the VM was actually down.
Real: H1(t=100) H2(t=160) [gap: VM down] H3(t=500)
Faked: H1(t=100) H2(t=160) H3(t=220) H4(t=280) ...
Current Status¶
Must-fix before production
There is no timestamp validation in the current implementation. Heartbeats are accepted regardless of their claimed timestamp.
Required Mitigations¶
Tolerance windows. Each heartbeat's timestamp must be within an acceptable range:
expected = prevTimestamp + interval
tolerance = 30 seconds
assert abs(heartbeat.timestamp - expected) < tolerance
Monotonicity enforcement. Timestamps must be strictly increasing:
assert heartbeat.timestamp > prevTimestamp
Clock drift handling. Allow for reasonable NTP drift between provider and consumer clocks. A tolerance of 5-10 seconds handles normal drift without opening manipulation windows.
Attestation timestamps. Timekeeper attestations provide an independent timestamp that can be compared against heartbeat timestamps for consistency.
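Putting the tolerance window and monotonicity checks together, a minimal validator could look like this. The function name and the interval/tolerance defaults are illustrative, not the specified parameters.

```python
def validate_timestamp(prev_ts: float, ts: float,
                       interval: float = 60.0, tolerance: float = 30.0) -> bool:
    """Reject heartbeats whose timestamp is non-increasing or falls
    outside the expected interval plus the tolerance window."""
    if ts <= prev_ts:                 # monotonicity: time never goes backwards
        return False
    expected = prev_ts + interval     # where the next heartbeat should land
    return abs(ts - expected) < tolerance

assert validate_timestamp(100, 160)        # on schedule
assert not validate_timestamp(100, 90)     # time went backwards
assert not validate_timestamp(160, 270)    # expected ~220, outside window
```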
Performance Degradation¶
Scenario: Provider runs the VM but at reduced capacity -- fewer CPU cores, throttled memory, or overloaded disk I/O. Heartbeats continue normally because the signing process requires minimal resources.
Current Status¶
This attack is not addressed at the heartbeat level. Heartbeats prove presence, not performance.
Mitigation Approach¶
Performance verification is the consumer's responsibility:
- Application-level health checks
- Compute benchmarks run inside the VM
- Resource monitoring agents
- SLA-based dispute triggers
Separation of concerns
Heartbeats prove the VM exists and both parties are communicating. Performance verification is a separate problem that belongs at the application layer, not the uptime proof layer.
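As one example of an application-level check, the consumer's agent inside the VM could time a fixed CPU-bound workload and compare it against a baseline taken at lease start. The helper, iteration count, and slowdown threshold are all hypothetical.

```python
import hashlib
import time

def benchmark_ms(iterations: int = 200_000) -> float:
    """Time a fixed CPU-bound workload; a throttled VM runs it slower."""
    start = time.perf_counter()
    h = b"\x00" * 32
    for _ in range(iterations):
        h = hashlib.sha256(h).digest()
    return (time.perf_counter() - start) * 1000.0

# Baseline measured when the lease started (hypothetical SLA check):
baseline_ms = benchmark_ms()
assert benchmark_ms() < baseline_ms * 5  # e.g. flag a dispute if far slower
```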
Overcommitment¶
Scenario: Provider accepts more leases than its hardware can support. Each individual lease appears healthy (heartbeats continue), but all VMs are degraded.
Hardware: 8 vCPU total
Lease A: 4 vCPU ─┐
Lease B: 4 vCPU ├── 12 vCPU committed, 8 available
Lease C: 4 vCPU ─┘
Current Status¶
There is no cross-lease awareness in the current system. Each lease is verified independently, with no mechanism to detect that a provider has overcommitted its resources.
Potential Mitigations¶
Resource attestation. Providers could be required to attest to their total hardware capacity. Accepting leases beyond capacity would be detectable.
Cross-lease monitoring. Sentinel nodes could track the total committed resources per provider and flag overcommitment.
Provider reputation. A reputation system based on historical performance could penalise providers that consistently underdeliver.
Hardware proofs. TEE attestation (Stage 4) could cryptographically prove the hardware resources available to a provider.
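The cross-lease check is simple once attested capacity exists; a sentinel could flag overcommitment by summing committed resources per provider. This sketch uses the scenario's numbers and a hypothetical `overcommitted` helper.

```python
def overcommitted(capacity_vcpu: int, leases: dict[str, int]) -> bool:
    """Flag a provider whose committed vCPUs exceed attested capacity."""
    return sum(leases.values()) > capacity_vcpu

# The scenario above: three 4-vCPU leases on an 8-vCPU host.
leases = {"A": 4, "B": 4, "C": 4}
assert overcommitted(8, leases)       # 12 committed > 8 attested
assert not overcommitted(16, leases)  # a 16-vCPU host would be fine
```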
Must-Fix Items¶
The following issues must be resolved before production deployment:
| Item | Description | Severity |
|---|---|---|
| Merkle domain separation | Merkle tree nodes should be domain-separated to prevent second-preimage attacks. Leaf nodes should be prefixed with 0x00, internal nodes with 0x01. | High |
| Timestamp validation | Add tolerance windows and monotonicity checks to heartbeat timestamps. | Critical |
| Transport layer security | Heartbeat exchange between provider and consumer must use authenticated, encrypted channels. Currently no transport security is specified. | High |
| Key rotation | Implement key rotation for both provider and consumer signing keys. | Critical |
| Dispute arbitration logic | Define the on-chain logic for evaluating dispute proofs and determining outcomes. Currently only the proof format is specified, not the arbitration. | High |
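The domain-separation item can be sketched directly from the table: leaf hashes are prefixed with 0x00 and internal node hashes with 0x01, so an internal node can never be replayed as a leaf. The helper names and the duplicate-last-node padding rule are illustrative choices for this sketch.

```python
import hashlib

LEAF, NODE = b"\x00", b"\x01"   # domain-separation prefixes

def leaf_hash(data: bytes) -> bytes:
    return hashlib.sha256(LEAF + data).digest()

def node_hash(left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(NODE + left + right).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    level = [leaf_hash(l) for l in leaves]
    while len(level) > 1:
        if len(level) % 2:             # pad odd levels (illustrative rule)
            level.append(level[-1])
        level = [node_hash(level[i], level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# The prefixes make the second-preimage trick impossible: hashing the same
# bytes as a leaf and as internal-node input yields different digests.
assert leaf_hash(b"x") != hashlib.sha256(NODE + b"x").digest()
root = merkle_root([b"h1", b"h2", b"h3"])
```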
Defence in depth
No single defence is sufficient. The production system relies on layered security: economic incentives, cryptographic proofs, active monitoring, and governance-activated hardening. Each layer compensates for weaknesses in the others.
Related Pages¶
- Production Design -- the three-layer architecture and staged rollout
- Heartbeat Chain -- VDF hardening details
- Merkle Epochs -- merkle proof construction
- Compute Leasing -- lease lifecycle and collateral
- Attestations -- timekeeper attestations