# Architecture
The XE node is structured as a set of Go packages with clear dependency boundaries. The node package ties everything together; all other packages are independently testable.
## Package map
```
xe-poc-2/core/
├── cmd/
│   ├── node/      Entry point — interactive CLI, HTTP server, flag parsing
│   └── cli/       Standalone HTTP client for remote node interaction
├── core/          Domain logic — ledger, crypto, encoding, PoW, voting, quorum
├── store/         Pluggable storage — MemStore (testing), BadgerStore (production)
├── net/           libp2p networking — gossip, sync, DHT, marketplace, messaging
├── node/          Orchestration — ties all packages together into a running node
├── api/           HTTP REST API — handler, routes, CORS
├── statechain/    Deterministic state machine — DAO governance, KV store, sync
├── vm/            VM abstraction — manager interface, mock, credentials
├── directory/     P2P account directory — registration, verification, gossip
├── chat/          P2P messaging — envelope format, chat store
└── scripts/       Test and utility scripts — e2e, stress, genesis generation
```
## Package details
### core/ -- Domain logic
The heart of the system. Contains no I/O, no networking, no persistence implementation -- only pure domain logic and interfaces.
| File | Purpose |
|---|---|
| types.go | Block, Vote, Conflict, Lease, PendingSend structs; BlockType constants; ValidAssets map |
| ledger.go | Ledger struct — validates and adds blocks, manages per-account locking, delegation tracking, asset balances |
| crypto.go | KeyPair, GenerateKeyPair, KeyPairFromSeed, HashBlock (SHA-256), SignBlock, VerifyBlock (ed25519) |
| encoding.go | MarshalBlockCanonical (binary encoding for hashing), MarshalBlock (with PoW nonce), UnmarshalBlock; vote encoding |
| pow.go | blake2b proof-of-work — ComputePoW, ComputePoWConcurrent, ComputePoWWithContext, ValidatePoW |
| vote.go | VoteManager — casts and validates votes for conflict resolution |
| quorum.go | QuorumManager — tallies votes, confirms/rejects blocks at the 67% weight threshold |
| conflict.go | Conflict detection — equivocation checks when two blocks share the same Previous hash |
| attestation.go | Timekeeper attestation validation for lease blocks |
| store.go | Store interface and optional interfaces (VoteStore, ConflictStore, QuorumStore, LeaseStore, etc.) |
### store/ -- Pluggable storage
Two implementations of the core.Store interface:
- MemStore -- in-memory maps, used in tests. Implements all optional interfaces (VoteStore, ConflictStore, QuorumStore, LeaseStore, DelegationStore, AtomicBlockStore).
- BadgerStore -- production storage using BadgerDB. Atomic block commits via BadgerDB transactions. Implements all optional interfaces.
Both stores are interchangeable. The node selects BadgerStore by default; tests inject MemStore via Config.Store.
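The selection rule can be sketched with stand-in types (Store, MemStore, and BadgerStore here are trimmed stand-ins, not the real implementations):

```go
package main

import "fmt"

// Store is a trimmed stand-in for core.Store.
type Store interface {
	Name() string
}

type MemStore struct{}

func (MemStore) Name() string { return "mem" }

type BadgerStore struct{ dir string }

func (b BadgerStore) Name() string { return "badger:" + b.dir }

type Config struct {
	DataDir string
	Store   Store // nil means "use the default"
}

// openStore mirrors the node's selection rule: use the injected store if set,
// otherwise fall back to a BadgerStore under the data directory.
func openStore(cfg Config) Store {
	if cfg.Store != nil {
		return cfg.Store
	}
	return BadgerStore{dir: cfg.DataDir + "/ledger"}
}

func main() {
	fmt.Println(openStore(Config{DataDir: "/tmp/xe"}).Name()) // badger:/tmp/xe/ledger
	fmt.Println(openStore(Config{Store: MemStore{}}).Name())  // mem
}
```

Tests construct the node with `Config{Store: ...}` and never touch disk; production leaves the field nil.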
### net/ -- Networking
Built on libp2p. Each concern has its own gossip topic or protocol.
| File | Purpose |
|---|---|
| host.go | Creates and configures the libp2p host (TCP transport, noise security, yamux muxer) |
| gossip.go | Block gossip — publishes new blocks to the xe-blocks topic, deduplicates, validates, adds to ledger |
| msg.go | Vote gossip, marketplace gossip, directory gossip, state chain gossip — separate pubsub topics |
| sync.go | Frontier sync protocol — on peer connect, exchange frontier hashes and fetch missing blocks |
| dht.go | Kademlia DHT setup for peer discovery and routing |
| messages.go | Direct messaging — P2P stream protocol for request/response patterns (lease negotiation, chat) |
### node/ -- Orchestration
The Node struct holds references to every subsystem and coordinates startup, background goroutines, and shutdown.
```go
type Node struct {
	Ledger           *core.Ledger
	Host             host.Host
	Gossip           *xenet.Gossip
	VoteGossip       *xenet.VoteGossip
	MarketGossip     *xenet.MarketplaceGossip
	StateChain       *statechain.Chain
	StateChainGossip *xenet.StateChainGossip
	DirGossip        *xenet.DirectoryGossip
	Msg              *xenet.Messenger
	Directory        *directory.Directory
	DHT              *dht.IpfsDHT
	ChatStore        *chat.ChatStore
	KeyPair          *core.KeyPair
	VoteMgr          *core.VoteManager
	QuorumMgr        *core.QuorumManager
	VMManager        vm.Manager
	// ...
}
```
### api/ -- HTTP REST API
A standard net/http handler with routes for:
- Account balances and chains
- Block submission and lookup
- Pending sends
- Lease management
- State chain queries
- Directory lookups
- Chat messages
- Node info and peers
See API Reference for endpoint documentation.
### statechain/ -- DAO governance
A linear chain of signed blocks that form a deterministic state machine. Each block contains an operation (set/delete key, update config) signed by authorized keys. The state chain has its own gossip topic and sync protocol, separate from the block lattice.
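A minimal sketch of the deterministic replay idea, with a hypothetical Op type standing in for the real signed, chained blocks:

```go
package main

import "fmt"

// Op is one state chain operation; real blocks are signed by authorized keys
// and linked by hash, which this sketch omits.
type Op struct {
	Kind  string // "set" or "delete"
	Key   string
	Value string
}

// Apply replays operations in order over a key/value map. Because every node
// applies the same ordered log, all nodes converge on the same state.
func Apply(state map[string]string, ops []Op) {
	for _, op := range ops {
		switch op.Kind {
		case "set":
			state[op.Key] = op.Value
		case "delete":
			delete(state, op.Key)
		}
	}
}

func main() {
	state := map[string]string{}
	Apply(state, []Op{
		{Kind: "set", Key: "sys.timekeepers", Value: "pk1,pk2"},
		{Kind: "set", Key: "motd", Value: "hello"},
		{Kind: "delete", Key: "motd"},
	})
	fmt.Println(state) // map[sys.timekeepers:pk1,pk2]
}
```

Determinism is the whole point: the chain carries operations, not state, so a syncing node can rebuild the state by replaying from genesis.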
### vm/ -- Compute abstraction
Defines the vm.Manager interface for creating, inspecting, and destroying virtual machines. A mock implementation is used in testing; production providers plug in real hypervisor backends.
### directory/ -- Account directory
A decentralized name-to-address registry. Accounts register signed entries that are propagated via gossip. Entries have a TTL and must be periodically refreshed.
### chat/ -- P2P messaging
Envelope-based messaging between accounts. Messages are delivered via libp2p direct streams and stored in a bounded in-memory ring buffer per conversation.
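A minimal sketch of the bounded per-conversation buffer; the real chat store may use a true circular index, but the eviction behavior is the same idea:

```go
package main

import "fmt"

// Ring is a bounded message buffer: once full, the oldest message is
// dropped to make room, so memory per conversation stays constant.
type Ring struct {
	msgs []string
	max  int
}

func NewRing(max int) *Ring { return &Ring{max: max} }

func (r *Ring) Append(msg string) {
	if len(r.msgs) == r.max {
		r.msgs = r.msgs[1:] // evict oldest
	}
	r.msgs = append(r.msgs, msg)
}

func (r *Ring) Messages() []string { return r.msgs }

func main() {
	r := NewRing(3)
	for _, m := range []string{"a", "b", "c", "d"} {
		r.Append(m)
	}
	fmt.Println(r.Messages()) // [b c d]
}
```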
## Startup sequence
When node.New(ctx, cfg) is called:
1. Open or create key pair -- loads key.seed from the data directory, or generates a new one.
2. Open store -- creates a BadgerStore at {dataDir}/ledger (or uses the injected store).
3. Create libp2p host -- TCP transport on the configured port, noise encryption, yamux multiplexing.
4. Setup pubsub -- GossipSub protocol for topic-based message propagation.
5. Create gossip layers -- block gossip, vote gossip, marketplace gossip, directory gossip, state chain gossip.
6. Setup mDNS -- local peer discovery (unless disabled).
7. Create ledger -- wraps the store with validation logic, rebuilds delegation and balance maps from existing data.
8. Wire voting -- VoteManager and QuorumManager for conflict resolution; the conflict callback triggers automatic voting.
9. Setup frontier sync -- registers the sync protocol handler so peers exchange frontiers on connect.
10. Setup DHT -- Kademlia distributed hash table for peer routing.
11. Create messenger -- direct P2P streams for request/response patterns.
12. Initialize state chain -- load the genesis block, create the chain instance, register the sync protocol.
13. Wire timekeeper config -- connects the state chain's sys.timekeepers key to the ledger's attestation validation.
14. Register gossip handlers -- incoming blocks are validated and added to the ledger; votes are processed; directory entries are stored.
15. Dial bootstrap peers -- connects to the addresses specified in -dial.
16. Start background goroutines -- lease settlement loop, stale conflict sweep, provider advertisement.
## Interface-driven design
The core package defines a set of interfaces in store.go that storage backends may implement:
| Interface | Purpose |
|---|---|
| Store | Required — block, chain, pending send, frontier CRUD |
| VoteStore | Optional — vote persistence |
| ConflictStore | Optional — conflict and staged block persistence |
| QuorumStore | Optional — block confirmation status and heights |
| LeaseStore | Optional — lease record persistence |
| DelegationStore | Optional — account-to-representative mapping persistence |
| DelegationIterator | Optional — enumerate all delegations for rebuild on startup |
| FrontierLister | Optional — enumerate all frontiers |
| AtomicBlockStore | Required — commit block + side effects in a single transaction |
The ledger requires AtomicBlockStore at construction time and panics if the store does not implement it. The remaining optional interfaces are checked at runtime using type assertions. Both MemStore and BadgerStore implement all interfaces.
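The optional-interface pattern looks like this in miniature (the interfaces here are trimmed stand-ins for the real ones in store.go):

```go
package main

import "fmt"

// Store is the required interface; VoteStore is an optional extension.
type Store interface {
	PutBlock(hash string) error
}

type VoteStore interface {
	PutVote(hash string) error
}

type memStore struct{} // implements both interfaces

func (memStore) PutBlock(string) error { return nil }
func (memStore) PutVote(string) error  { return nil }

type minimalStore struct{} // implements only the required interface

func (minimalStore) PutBlock(string) error { return nil }

// persistVote uses a runtime type assertion, as the ledger does for its
// optional interfaces: a backend that can't persist votes is simply skipped.
func persistVote(s Store, hash string) bool {
	if vs, ok := s.(VoteStore); ok {
		_ = vs.PutVote(hash)
		return true
	}
	return false
}

func main() {
	fmt.Println(persistVote(memStore{}, "abc"))     // true
	fmt.Println(persistVote(minimalStore{}, "abc")) // false
}
```

This keeps the required Store surface small while letting full-featured backends opt in to extra persistence without changing the ledger's signature.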
## Block validation pipeline
The Ledger.AddBlock() method is the critical path — every block (whether from the local node, gossip, or sync) passes through this pipeline:
```
AddBlock(b *Block)
│
├── 1. Normalize hex fields (lowercase)
├── 2. VerifyBlock — recompute hash + check ed25519 signature
├── 3. ValidatePoW — blake2b(nonce || hash) >= difficulty
├── 4. Timestamp check — within ±1 hour of local time
├── 5. Duplicate check — block hash not already in store
├── 6. Conflict detection — check if Previous hash is shared
│   ├── No conflict → continue on main chain
│   └── Conflict → stage block, fire callback, return
│
├── 7. Type-specific validation (per-account lock held)
│   ├── send → balance sufficient, frontier matches, amount > 0
│   ├── receive → pending send exists, destination matches, balance correct
│   ├── claim → balance = old + 1
│   ├── lease → XUSD only, cost formula correct, balance sufficient
│   ├── lease_accept → lease exists, stake = cost/5, attestations valid
│   └── lease_settle → lease expired, XE emission formula, attestations valid
│
├── 8. Update in-memory state (asset balances, delegation weights)
└── 9. Write to store (atomic commit via AtomicBlockStore)
```
Each account has its own lock, so blocks for different accounts are validated concurrently. The per-account lock serialises operations within a single account to prevent race conditions.
## Config struct
```go
type Config struct {
	Port         int               // libp2p TCP port
	DialAddrs    []string          // bootstrap peer multiaddrs
	DataDir      string            // persistent storage directory
	Difficulty   uint64            // PoW threshold (0 = disabled)
	DisableMDNS  bool              // skip local discovery
	Store        core.Store        // injected store (nil = BadgerStore)
	Version      string            // node version string
	Provide      bool              // enable compute provider mode
	VCPUs        uint64            // provider: vCPUs to offer
	MemoryMB     uint64            // provider: memory in MB
	DiskGB       uint64            // provider: disk in GB
	GenesisBlock *statechain.Block // state chain genesis (nil = embedded)
	MsgTTL       time.Duration     // directory registration TTL
	LimactlPath  string            // path to limactl binary (empty = "limactl")
}
```
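As an illustration of the zero-value semantics noted in the field comments, a hypothetical helper (the real node may well apply defaults inline rather than through a function like this):

```go
package main

import "fmt"

// Config mirrors a couple of the node's fields; the full struct has more.
type Config struct {
	Port        int
	LimactlPath string
}

// applyDefaults shows the documented zero-value behavior: an empty
// LimactlPath means "limactl" resolved from PATH.
func applyDefaults(cfg Config) Config {
	if cfg.LimactlPath == "" {
		cfg.LimactlPath = "limactl"
	}
	return cfg
}

func main() {
	fmt.Println(applyDefaults(Config{Port: 4001}).LimactlPath) // limactl
}
```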