Introduction
ethlambda is a minimalist, fast, and modular Lean Ethereum consensus client, written in Rust.
This book collects the design notes and operator-facing references for ethlambda. It is split into two parts:
- Consensus explains the algorithms ethlambda implements: the 3SF-mini justification and finalization rules, and the LMD-GHOST fork choice algorithm. Both documents are implementation-agnostic; ethlambda-specific behaviour is called out in blockquotes.
- Operations documents observable surfaces of a running node: Prometheus metrics, checkpoint sync, and the fork choice visualization served by the API.
For build and contribution instructions, see the README and CONTRIBUTING.md in the repository.
Visual references
Two standalone HTML infographics ship alongside this book and are copied verbatim into the rendered output.
Related projects
ethlambda is one of several Lean Ethereum consensus clients under active development; the other clients are useful for comparison and cross-client testing.
3SF-mini: Justification & Finalization
ethlambda uses 3SF-mini (Three-Stage Finality, minimal version) for justification and finalization. Unlike the Ethereum Beacon Chain’s epoch-based Casper FFG, 3SF-mini operates at the slot level: any slot can be justified, not just epoch boundaries.
Quick Example: Three Slots to Finality
4 validators, slot N already finalized and justified.
source target
│ │
▼ ▼
Slot N ──[ N-2 ]──[ N-1 ]──[ N ]
F J H
source target
│ │
▼ ▼
Slot N+1 ──[ N-2 ]──[ N-1 ]──[ N ]────[ N+1 ]
F F J H
source target
│ │
▼ ▼
Slot N+2 ──[ N-2 ]──[ N-1 ]──[ N ]────[ N+1 ]────[ N+2 ]
F F F J H
H = head J = justified F = finalized
At each slot, validators vote for the newest block as their target, citing the latest justified checkpoint as their source:
- Slot N+1: Votes `source=N, target=N+1`. Three of four vote (3×3=9 >= 2×4=8), so N+1 is justified.
- Slot N+2: Votes `source=N+1, target=N+2`. Three of four vote, so N+2 is justified. N+1 and N+2 are consecutive justifiable slots and both are justified, so N+1 is finalized.
In the ideal case, each block carries attestations that justify the parent slot and finalize the one before it. In practice, forks, missed slots, and delayed votes can break this cadence. The rest of this document explains the rules that make this work, and what happens when things go wrong.
Concepts
| Term | Meaning |
|---|---|
| Justified | A checkpoint backed by at least two-thirds of validator votes |
| Finalized | A checkpoint that can never be reverted |
| Source | The latest justified checkpoint (vote origin) |
| Target | The checkpoint being voted for (vote destination) |
| Justifiable | A slot that could become justified (per the 3SF-mini schedule) |
Justification via Supermajority
A checkpoint becomes justified when at least two-thirds of validators attest to it as a target:
JUSTIFICATION
─────────────
Validators: V0 V1 V2 V3 V4 V5 V6 V7 V8
│ │ │ │ │ │ │
└───┴───┴───┴───┴───────┴───┘
│
7 out of 9 votes
(3×7=21 >= 2×9=18) ✓
│
▼
┌──────────────┐
│ Checkpoint C │
│ JUSTIFIED ✓ │
└──────────────┘
The threshold is computed as: `3 × vote_count >= 2 × validator_count`
> **In ethlambda:** Justification and finalization are processed inside `process_attestations()` in `crates/blockchain/state_transition/src/lib.rs`, called from `process_block()`. The supermajority check is `3 * vote_count >= 2 * validator_count`.
Attestations must also pass validity checks before they count:
- Source checkpoint must already be justified
- Target must not already be justified
- Neither source nor target may have a zero-hash root
- Source slot < Target slot (time flows forward)
- Both checkpoints must reference known blocks
- Target slot must be justifiable per the 3SF-mini schedule (see below)
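As a hedged sketch of how these rules compose (Rust, with illustrative names; ethlambda's actual API differs, and the justifiability lookup is described in the next section):

```rust
/// Illustrative checkpoint type: a block root plus its slot.
struct Checkpoint {
    root: [u8; 32],
    slot: u64,
}

/// Supermajority in integer arithmetic: 3·votes >= 2·validators avoids
/// floating-point division entirely.
fn is_supermajority(vote_count: u64, validator_count: u64) -> bool {
    3 * vote_count >= 2 * validator_count
}

/// The validity checks listed above. `source_justified`, `target_justified`,
/// and `target_slot_justifiable` are assumed to be looked up by the caller
/// (the last one via the 3SF-mini schedule in the next section).
fn attestation_counts(
    source: &Checkpoint,
    target: &Checkpoint,
    source_justified: bool,
    target_justified: bool,
    target_slot_justifiable: bool,
    block_known: impl Fn(&[u8; 32]) -> bool,
) -> bool {
    source_justified                    // source must already be justified
        && !target_justified            // target must not already be justified
        && source.root != [0u8; 32]     // no zero-hash roots
        && target.root != [0u8; 32]
        && source.slot < target.slot    // time flows forward
        && block_known(&source.root)    // both checkpoints reference known blocks
        && block_known(&target.root)
        && target_slot_justifiable      // per the 3SF-mini schedule
}
```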
The Justifiability Schedule
Not every slot can be justified, only slots at specific distances from the last finalized slot. This is the novel part of 3SF-mini.
A slot is justifiable if `delta = slot - finalized_slot` matches any rule:
> **In ethlambda:** The function `slot_is_justifiable_after(slot, finalized_slot)` in `crates/blockchain/state_transition/src/lib.rs` implements this check. It uses `isqrt()` for perfect-square detection and the identity `4n(n+1) + 1 = (2n+1)²` for pronic-number detection.
┌───────────────────────────────────────────────────────┐
│ JUSTIFIABILITY RULES │
│ │
│ Rule 1: delta ≤ 5 (always justifiable) │
│ │
│ Rule 2: delta = n² (perfect squares) │
│ 1, 4, 9, 16, 25, 36, 49, 64, 81, 100, ... │
│ │
│ Rule 3: delta = n(n+1) (pronic numbers) │
│ 2, 6, 12, 20, 30, 42, 56, 72, 90, 110, ... │
│ │
└───────────────────────────────────────────────────────┘
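A self-contained Rust sketch of the rule (illustrative code mirroring the `isqrt()` and pronic-identity approach described in the note above; `u64::isqrt` requires Rust 1.84+, and slots before finalization are out of scope):

```rust
/// Is `slot` justifiable, given the last finalized slot?
/// Assumes `slot >= finalized_slot`.
fn slot_is_justifiable_after(slot: u64, finalized_slot: u64) -> bool {
    let delta = slot - finalized_slot;

    // Rule 1: the first five slots after finalization are always justifiable.
    if delta <= 5 {
        return true;
    }

    // Rule 2: perfect squares, delta = n².
    let r = delta.isqrt();
    if r * r == delta {
        return true;
    }

    // Rule 3: pronic numbers, delta = n(n+1), detected via the identity
    // 4n(n+1) + 1 = (2n+1)²: delta is pronic iff 4·delta + 1 is a square.
    let x = 4 * delta + 1;
    let s = x.isqrt();
    s * s == x
}

fn main() {
    // Reproduces the justifiable deltas visualized below:
    // [0, 1, 2, 3, 4, 5, 6, 9, 12, 16, 20, 25, 30, 36]
    let justifiable: Vec<u64> = (0..=40)
        .filter(|&d| slot_is_justifiable_after(d, 0))
        .collect();
    println!("{justifiable:?}");
}
```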
Visualizing the first 40 slots after finalization (✓ = justifiable):
delta: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
✓ ✓ ✓ ✓ ✓ ✓ ✓ · · ✓ · · ✓ · · · ✓ · · · ✓
╰─ delta ≤ 5 ──╯ 2×3 3² 3×4 4² 4×5
delta: 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40
· · · · ✓ · · · · ✓ · · · · · ✓ · · · ·
5² 5×6 6²
| delta | Rule | Formula | Gap since previous |
|---|---|---|---|
| 0–5 | 1 | ≤ 5 | - |
| 6 | 3 | 2×3 | 1 |
| 9 | 2 | 3² | 3 |
| 12 | 3 | 3×4 | 3 |
| 16 | 2 | 4² | 4 |
| 20 | 3 | 4×5 | 4 |
| 25 | 2 | 5² | 5 |
| 30 | 3 | 5×6 | 5 |
| 36 | 2 | 6² | 6 |
Key property: Gaps between justifiable slots grow, but never become infinite. As more time passes since finalization, the network gets progressively wider windows to accumulate votes. This creates a natural backpressure: if the network is struggling to reach a two-thirds majority (e.g., due to partitions or validator dropouts), the increasing gaps give more time for the supermajority to form.
Finalization
A justified checkpoint becomes finalized when it is the source of a justification whose target is the next justifiable slot. In other words, there must be no justifiable slots between source and target: the two must be consecutive entries in the justifiability schedule.
> **In ethlambda:** The `try_finalize()` function iterates over the slots between source and target and calls `slot_is_justifiable_after` on each. If any slot is justifiable, finalization fails (source and target aren't consecutive). The check uses `original_finalized_slot` (the finalized slot at the start of block processing), not the current one, since finalization can advance mid-processing.
FINALIZATION CHECK
──────────────────
Example 1: Finalization FAILS
Finalized=10 Source=13 (justified) Target=16 (justified)
[ 10 ] · · · [ 13 ] 14 15 [ 16 ]
▲ ▲
│ └── delta=5 ≤ 5 → justifiable!
└────── delta=4 ≤ 5 → justifiable!
Justifiable slots exist between S and T → NOT FINALIZED ✗
(13 and 16 are not consecutive justifiable slots)
Example 2: Finalization SUCCEEDS
Finalized=10 Source=16 (justified) Target=19 (justified)
[ 10 ] · · · [ 16 ] 17 18 [ 19 ]
▲ ▲
│ └── delta=8 → not justifiable ✓
└────── delta=7 → not justifiable ✓
No justifiable slots between S and T → S is FINALIZED ✓
(16 and 19 are consecutive: delta=6=2×3, then delta=9=3²)
The reasoning: if a justifiable slot exists between source and target, validators could have directed their votes to that intermediate slot instead, potentially on a different fork. By requiring source and target to be consecutive justifiable slots, the protocol ensures that no alternative justification path can exist between them.
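Under the same assumptions as the sketches above, the consecutiveness check reduces to a few lines (illustrative, not ethlambda's actual `try_finalize()`):

```rust
/// Sketch: `source` can be finalized when no slot strictly between source
/// and target is justifiable, i.e. the two are consecutive entries in the
/// schedule. `finalized_slot` is the finalized slot at the start of block
/// processing, mirroring `original_finalized_slot` described above.
fn source_and_target_consecutive(
    source_slot: u64,
    target_slot: u64,
    finalized_slot: u64,
) -> bool {
    (source_slot + 1..target_slot)
        .all(|s| !slot_is_justifiable_after(s, finalized_slot))
}
```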
Justifiable Slot Backoff
The justifiability schedule acts as a backoff mechanism to increase the finalization rate during periods of asynchrony. By “diluting” the possible targets of a justification vote (via the `slot_is_justifiable_after` function), the protocol increases the window during which votes for a given slot can be included, improving the chances of achieving the required two-thirds majority.
Since finalization requires two consecutively justifiable slots to both be justified, this backoff isn’t immediately reset after finalization occurs; it only lowers over time when synchrony is restored.
Example: Extended asynchrony with gradual recovery.
F=0. Justifiable slots grow sparser as delta increases:
delta ≤ 5: 0 1 2 3 4 5 (gap = 1)
delta 6–20: 6 9 12 16 20 (gap = 3–4)
delta 20–36: 20 25 30 36 (gap = 5–6)
...
delta ~1000: 900 930 961 992 1024 (gap = 30–32)
30² 30×31 31² 31×32 32²
Phase 1: Long asynchrony, slow progress.
Validators vote, but with many justifiable targets, votes scatter
and no single slot reaches >=2/3. As gaps widen, votes concentrate.
Near slot 1000, the 32-slot gap between 992 and 1024 means
no competing justifiable target exists for 32 slots after 992.
All votes funnel toward 1024 once it is built.
Phase 2: Slot 992 finalized.
Slot 992 justified (source = earlier justified slot).
Slot 1024 justified (source = 992).
slot: 0 ... 992 1024
F J ······· J
▲ ▲
source ──────────▶ target
Slots 993–1023: any justifiable from F=0?
Perfect squares? 31²=961 (before), 32²=1024 (boundary). None.
Pronic? 31×32=992 (boundary), 32×33=1056 (after). None.
No justifiable slots between them → slot 992 FINALIZED ✓
Phase 3: Partial reset. Backoff shrinks but doesn’t vanish.
New F=992. Justifiable slots shift:
slot: 992 993 994 995 996 997 998 ··· 1001 ··· 1004 ··· 1008 ··· 1022 ··· 1028
F ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓
╰── delta ≤ 5 ──╯ 2×3 3² 3×4 4² 5×6 6²
Dense slots 993–998 are already in the past!
Near the current slot (~1024), justifiable slots are ~6 apart:
... 1022 1028 1034 1041 ...
δ=30 δ=36 δ=42 δ=49
5×6 6² 6×7 7²
└──6──┘ └──6──┘ └──7──┘
Gaps shrank from 32 → 6, but didn't reset to 1.
Phase 4: Further finalization closes the gap.
Justify 1022 and 1028, finalize 1022. New F=1022.
From F=1022, at slot ~1028 (delta = 6):
slot: 1022 1023 1024 1025 1026 1027 1028
F ✓ ✓ ✓ ✓ ✓ ✓
╰────── delta ≤ 5 ──────╯ 2×3
Gaps are back to 1. Fast finalization resumes.
Summary of gradual recovery:
┌───────────────────┬──────┬───────┬───────┬──────────────┐
│ Finalization step │ F │ Head │ Delta │ Nearby gaps │
├───────────────────┼──────┼───────┼───────┼──────────────┤
│ Before any │ 0 │ ~1000 │ ~1000 │ 31–32 │
│ After 1st (992) │ 992 │ ~1024 │ ~32 │ 6–7 │
│ After 2nd (1022) │ 1022 │ ~1028 │ ~6 │ 1 │
└───────────────────┴──────┴───────┴───────┴──────────────┘
Each finalization step reduces the delta between the finalized
slot and the chain head, progressively tightening the gaps.
When finalization advances, the following cleanup occurs:
- `justified_slots` window shifts forward (old slots pruned)
- `LiveChain` entries for finalized slots are pruned
- Gossip signatures and aggregation proofs for finalized blocks are cleaned up
- Future fork choice runs start from the finalized slot’s successor
> **In ethlambda:** The `justified_slots` bitlist uses relative indexing (index 0 = `finalized_slot + 1`). When finalization advances, `shift_window()` in `crates/blockchain/state_transition/src/justified_slots_ops.rs` drops the now-finalized prefix. The attestation target is also walked back to the nearest justifiable slot via `slot_is_justifiable_after` in `crates/blockchain/src/store.rs`.
End-to-End: From Head Selection to Finalization
This section connects LMD-GHOST fork choice with 3SF-mini. The quick example above showed the happy path; here we focus on what happens when things go wrong.
Recap: Attestation Anatomy
Each attestation carries three checkpoints, each determined by a different mechanism:
┌────────────────────────────────────────────────────────────────┐
│ ATTESTATION │
│ │
│ head Newest block the validator sees │
│ ← LMD-GHOST with min_score = 0 │
│ │
│ target Block the validator wants justified next │
│ ← Derived from safe target, walked back to nearest │
│ justifiable slot (feeds into 3SF-mini) │
│ │
│ source Latest justified checkpoint │
│ ← Read from store state │
└────────────────────────────────────────────────────────────────┘
The safe target is computed by running LMD-GHOST with a two-thirds vote threshold. Only blocks backed by a supermajority qualify, so the safe target is always at or behind the head. The attestation target is derived by walking back from the head toward the safe target (max 3 steps), then to the nearest justifiable slot. See Safe Target Selection for details.
> **In ethlambda:** `get_attestation_target()` in `crates/blockchain/src/store.rs` implements this walk-back. `JUSTIFICATION_LOOKBACK_SLOTS = 3` provides a liveness guarantee: even if the safe target is stuck, the target eventually advances once the head moves far enough ahead.
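A minimal sketch of the walk-back, assuming illustrative `Root`/`Block` types (the block tree maps root → (slot, parent)) and reusing the `slot_is_justifiable_after` sketch from earlier; the lookups panic on unknown roots for brevity:

```rust
use std::collections::HashMap;

type Root = [u8; 32];
struct Block {
    slot: u64,
    parent: Root,
}

const JUSTIFICATION_LOOKBACK_SLOTS: u64 = 3;

fn get_attestation_target(
    head: Root,
    safe_target: Root,
    finalized_slot: u64,
    blocks: &HashMap<Root, Block>,
) -> Root {
    // Step 1: walk back from the head toward the safe target, at most 3 steps.
    let mut target = head;
    for _ in 0..JUSTIFICATION_LOOKBACK_SLOTS {
        if target == safe_target {
            break;
        }
        target = blocks[&target].parent;
    }
    // Step 2: keep walking back until the slot is justifiable per 3SF-mini.
    // Terminates because every delta <= 5 is justifiable.
    while !slot_is_justifiable_after(blocks[&target].slot, finalized_slot) {
        target = blocks[&target].parent;
    }
    target
}
```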
Lagging Safe Target (Fork with Delayed Convergence)
When validators disagree about the head, the safe target lags behind: no single branch has two-thirds support. This delays justification until the fork resolves.
Setup: 9 validators, finalized=100, justified=101
Safe target threshold: >=6 votes (2/3 of 9)
Slots 102–103: Fork splits votes. No progress.
┌──[ B102a ]──[ B103a ] V0–V4 (5)
[ F=100 ]──[ J=101 ]─┤
└──[ B102b ]──[ B103b ] V5–V8 (4)
Neither branch clears two-thirds → safe target stuck at B101. Walk-back from head always lands on source (B101). No attestation can advance justification.
Slot 104: V7 and V8 switch sides. Fork resolves.
V7 and V8 receive B102a (delayed by the partition) and switch to the a-branch.
┌──[ B102a ]──[ B103a ]──[ B104a ] V0–V4, V7, V8 (7)
[ F=100 ]──[ J=101 ]─┤
└──[ B102b ]──[ B103b ]──[ B104b ] V5–V6 (2)
B102a subtree now has 7 votes >= 6 → safe target = B102a. Walk-back from B104a lands on B102a (2 steps). Slot 102 is justifiable (delta=2 ≤ 5).
source=101 ──▶ target=102 7/9 votes → 3×7=21 >= 2×9=18 → JUSTIFIED ✓
Finalization: no slots between 101 and 102 → 101 FINALIZED ✓
After slot 104: finalized=101, justified=102.
Slots 105–106: Full convergence and recovery.
All 9 validators on the a-branch. Slot 105: target=B104a → B104a JUSTIFIED. But finalization fails: slot 103 (between source=102 and target=104) is justifiable but was never justified (lost in the fork).
Slot 106: target=B105a → B105a JUSTIFIED. No justifiable slots between 104 and 105 → 104 FINALIZED. Finalization jumped from 101 to 104, skipping 102 and 103.
FORK WITH DELAYED CONVERGENCE
═════════════════════════════
Slot: 100 101 102 103 104 105 106
Status: F J · · · · ·
fork ──────┤
resolves
Head: · B101 B102a B103a B104a B105a B106a
Safe: · B100 B101 B101 B102a B104a B105a
stuck ─────┘ ▲
│
V7+V8 switch, safe target unsticks
Justified: · 101 ─ ─ 102 104 105
Finalized: · · ─ ─ 101 ─ 104
▲
finalization jumps ──┘
(102,103 skipped; 103 was never justified)
Comparison with Casper FFG
Both 3SF-mini and Casper FFG are finality gadgets built on the same foundation: supermajority links between checkpoints. They differ fundamentally in their unit of time and what that implies for validator participation. For a thorough treatment of Casper FFG as used in Ethereum, see the eth2book chapter on Casper FFG.
Slots vs Epochs: The Core Architectural Split
3SF-mini: Every Validator, Every Slot
In 3SF-mini, all validators vote in every slot. A checkpoint can be justified at any slot (subject to the justifiability schedule), and finalization can happen as soon as two consecutive justifiable slots are both justified.
3SF-mini (4-second slots, 4 validators)
Slot 100 Slot 101 Slot 102 Slot 103
┌───────┐ ┌───────┐ ┌───────┐ ┌───────┐
│V0 V1 │ │V0 V1 │ │V0 V1 │ │V0 V1 │
│V2 V3 │ │V2 V3 │ │V2 V3 │ │V2 V3 │
└───┬───┘ └───┬───┘ └───┬───┘ └───┬───┘
│ │ │ │
4 votes 4 votes 4 votes 4 votes
per slot per slot per slot per slot
Every validator participates in every slot.
>=2/3 threshold checked per-slot → can justify any slot.
This is simple and fast, but it means every validator must produce and verify a vote
every slot. The total message load scales as validators × slots.
Casper FFG: Validators Split Across an Epoch
Ethereum’s beacon chain has on the order of a million active validators (~900,000 in the figures below). Having all of them vote every 12-second slot would be unmanageable. Instead, Casper FFG groups 32 slots into an epoch, and splits the validator set across the slots within it:
Casper FFG (12-second slots, 32 per epoch, ~900k validators)
Epoch N
┌─────────────────────────────────────────────────────────────┐
│ Slot 0 Slot 1 Slot 2 ... Slot 30 Slot 31 │
│ ┌──────┐ ┌──────┐ ┌──────┐ ┌──────┐ ┌──────┐ │
│ │~28125│ │~28125│ │~28125│ ... │~28125│ │~28125│ │
│ │valids│ │valids│ │valids│ │valids│ │valids│ │
│ └──┬───┘ └──┬───┘ └──┬───┘ └──┬───┘ └──┬───┘ │
│ │ │ │ │ │ │
└────┼───────────┼───────────┼───────────────┼───────────┼────┘
└───────────┴───────────┴───┬───────────┴───────────┘
│
All ~900k votes
collected over 32 slots
│
▼
Epoch checkpoint
(first slot of epoch)
Each validator attests exactly ONCE per epoch.
The full >=2/3 tally is only meaningful at epoch boundaries.
Each validator is shuffled into a committee assigned to one specific slot. Within that slot, the committee may be further split (up to 64 sub-committees) for parallel aggregation. The result: each validator only attests once per epoch, and the network processes ~28,000 attestations per slot instead of ~900,000.
The trade-off:
| | 3SF-mini | Casper FFG |
|---|---|---|
| Who votes when | All validators, every slot | Each validator once per epoch (in its assigned slot) |
| Messages per slot | N (all validators) | N / 32 (one committee) |
| Supermajority known after | 1 slot (all votes in) | 1 epoch (need all 32 committees) |
| Fastest finalization | 2 slots = 8 seconds | 2 epochs = ~12.8 minutes |
| Practical validator limit | Hundreds–thousands | Millions |
Epochs exist because of a scalability constraint, not a protocol-theory preference. If you could process a million votes per slot, Casper FFG wouldn’t need epochs at all. 3SF-mini sidesteps this by targeting a smaller validator set, which lets it operate at slot granularity.
Finalization Logic
Both require a chain of justified checkpoints, but the rules differ in what they check.
Casper FFG uses k-finality. The original rule (k=1) requires a direct supermajority link from a checkpoint to its immediate successor: justify epoch N+1 with source=N, and N is finalized. Ethereum generalizes this to k=2, which handles the case where the network falls slightly behind:
Casper FFG — 1-finality (ideal case):
Epoch N Epoch N+1
┌─────┐ ┌─────┐
│ CP │══════▶│ CP │ Supermajority link N → N+1
│ J ✓ │ │ │
└─────┘ └─────┘
Processing this link:
1. Epoch N+1 becomes JUSTIFIED (target of a supermajority link)
2. Epoch N becomes FINALIZED (direct successor justified)
Casper FFG — 2-finality (one epoch behind):
Epoch N Epoch N+1 Epoch N+2
┌─────┐ ┌─────┐ ┌─────┐
│ CP │ │ CP │ │ CP │
│ J ✓ │ │ J ✓ │ │ │
└─────┘ └─────┘ └─────┘
│ │
└══════ supermajority ══════┘
link N → N+2
The direct link N→N+1 didn't form in time.
Instead, a link forms from N→N+2. Processing this link:
1. Epoch N+2 becomes JUSTIFIED (target of a supermajority link)
2. Epoch N becomes FINALIZED (all intermediates are justified)
The 2-finality rule is a recovery mechanism: even if the network missed the ideal one-epoch finalization window, it gets a second chance. Ethereum tracks the justification status of the last 4 epoch boundaries to detect both cases. In practice, most finalization happens via 1-finality during normal operation; 2-finality kicks in during brief network hiccups.
3SF-mini takes a different approach entirely:
Slot S Slot T
┌─────┐ ┌─────┐
│ CP │──────▶│ CP │ No justifiable slots exist
│ J ✓ │ │ J ✓ │ between S and T
└─────┘ └─────┘
∴ Slot S is FINALIZED
Rule: Finalized when NO intermediate checkpoints could exist
Instead of checking that intermediate checkpoints are justified, 3SF-mini checks that no intermediate checkpoints could exist at all. This is a stronger guarantee: validators’ votes between source and target could only have gone to the target, since there’s nowhere else to direct them. This structural property is also why 3SF-mini doesn’t need Casper’s surround-vote slashing condition.
Casper’s k-finality is essentially a tolerance parameter: “how many epochs behind can we be and still finalize?” Ethereum chose k=2, meaning it tolerates one missed epoch. 3SF-mini doesn’t need this concept because the justifiability schedule itself adapts. Instead of tolerating missed windows, it makes the windows wider when the network is struggling.
Adaptive Backoff (unique to 3SF-mini)
Casper FFG has a fixed checkpoint every epoch, regardless of network conditions. 3SF-mini’s justifiability schedule adapts: gaps between justifiable slots grow under prolonged asynchrony (via the perfect square and pronic number rules), creating natural vote concentration when the network is struggling to reach a two-thirds majority. Casper FFG has no equivalent; its epoch spacing is the same whether the network is healthy or partitioned. See Justifiable Slot Backoff for a detailed walkthrough.
👻 LMD-GHOST fork choice algorithm
A deep dive into how the LMD-GHOST (Latest Message Driven, Greedy Heaviest Observed SubTree) fork choice algorithm works. LMD-GHOST is the fork choice rule used by Ethereum’s consensus layer and its derivatives. Each validator’s latest attestation is their single active vote, and the algorithm follows the heaviest branch at every fork.
This document is implementation-agnostic, with ethlambda-specific details called out in blockquotes marked “In ethlambda”.
Much of the conceptual framing in this document is inspired by Ben Edgington’s Eth2 Book, particularly the LMD GHOST chapter. Highly recommended reading for anyone interested in Ethereum consensus.
Background & History
The GHOST protocol was introduced by Sompolinsky and Zohar in a 2013 paper. Its core idea: instead of choosing the heaviest chain, we choose the heaviest subtree, counting orphaned blocks as evidence of support for their ancestors.
The “LMD” in LMD-GHOST stands for Latest Message Driven: only each validator’s most recent attestation counts, preventing vote amplification. LMD-GHOST is the fork choice rule used by the Ethereum Beacon Chain and Lean Ethereum.
Why Fork Choice?
In a distributed system where validators propose blocks concurrently, the blockchain can fork: two valid blocks may appear at the same slot, creating competing chains. The fork choice rule answers a critical question:
Which chain tip should I follow?
┌──────────┐
┌────▶│ Block C │ ← Chain tip 1
│ │ slot 5 │
┌──────────┐ │ └──────────┘
│ Block A │─┤
│ slot 3 │ │ ┌──────────┐
└──────────┘ └────▶│ Block D │ ← Chain tip 2
│ slot 5 │
└──────────┘
Which tip should validators follow?
Every node in the network must be able to independently arrive at the same answer using only its local view of blocks and attestations. The fork choice rule is what makes this possible. It is a deterministic function from a node’s observed state to a single chain tip.
From Heaviest Chain to Heaviest Subtree
The simplest fork choice rule is heaviest chain: follow the chain tip with the most accumulated weight. This works when fork rates are low, but breaks down when honest validators fork within a common branch:
HEAVIEST CHAIN vs HEAVIEST SUBTREE
──────────────────────────────────
An attacker with 40% of stake forks at A.
The honest majority (60%) builds on B but forks into C and D:
┌───B──┬──C V0, V1, V2 vote for C (30%)
A ────┤ └──D V3, V4, V5 vote for D (30%)
│
└───X──Y──Z V6, V7, V8, V9 vote for Z (40%)
Heaviest chain:
Z has 40% of votes, C and D each have 30%.
Attacker wins! ✗
Heaviest subtree (LMD-GHOST):
At A: B subtree has 60% (C + D), X subtree has 40%.
Pick B. Then at B: C has 30%, D has 30% (tiebreaker).
Honest majority wins. ✓
LMD-GHOST is strictly better when honest validators fork within a common subtree. Instead of requiring all honest validators to agree on a single chain tip (which is impossible under network delay), it aggregates their support at each level of the tree.
How Subtree Weight Works (the “GHOST” Part)
The key insight behind the “Heaviest Observed SubTree” part of LMD-GHOST: a vote for a block is implicitly a vote for all its ancestors.
When a validator attests to block F as their head, they are also expressing support for every block on the path from the root to F:
Validator attests: head = F
A ── B ── C ── D ── E ── F
▲ ▲ ▲ ▲ ▲ ▲
│ │ │ │ │ │
└────┴────┴────┴────┴────┘
All ancestors implicitly supported
This is why LMD-GHOST counts the subtree weight: a block’s weight includes every attestation for any of its descendants, because those attestations implicitly endorse the ancestor too. The algorithm exploits this by walking backward from each attested head and incrementing every block along the path.
LMD: Why Only the Latest Message?
The “LMD” in LMD-GHOST stands for Latest Message Driven. Each validator’s most recent attestation is their only vote. All previous attestations are discarded.
Validator 7's attestation history:
Slot 10: attests to head = B ← discarded
Slot 11: attests to head = C ← discarded
Slot 12: attests to head = E ← THIS is the active vote
Only the slot 12 attestation counts for fork choice.
Why only the latest? Two reasons:
- Prevents double-voting. If all messages counted, a validator could cast many attestations and amplify their influence. With LMD, each validator gets exactly one active vote regardless of how many attestations they’ve broadcast.
- Reflects current knowledge. A validator’s latest attestation reflects their most recent view of the chain. Older attestations may reference blocks that are no longer on the best chain. Keeping only the latest ensures fork choice uses the most up-to-date information.
The fork choice store maintains a mapping of `validator_index → latest attestation`. When a new attestation arrives from a validator, it replaces their previous entry:
Fork choice store (latest messages):
┌──────────────┬──────────────────────────────┐
│ Validator │ Latest Attestation │
├──────────────┼──────────────────────────────┤
│ 0 │ head=E, target=C, source=A │
│ 1 │ head=D, target=C, source=A │
│ 2 │ head=E, target=C, source=A │
│ 3 │ head=F, target=D, source=A │
│ ... │ ... │
└──────────────┴──────────────────────────────┘
One row per validator. New attestation → overwrite row.
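In code, this store can be as simple as a map keyed by validator index. A minimal Rust sketch, with an `AttestationData` shape following the attestation anatomy above (head, target, source, slot); the staleness guard is an assumption of this sketch, not ethlambda's exact logic:

```rust
use std::collections::HashMap;

type Root = [u8; 32];

/// Illustrative attestation payload: the three checkpoints plus the slot.
#[derive(Clone)]
struct AttestationData {
    head: Root,
    target: Root,
    source: Root,
    slot: u64,
}

/// One row per validator; a newer attestation overwrites the old row.
struct LatestMessages(HashMap<u64, AttestationData>);

impl LatestMessages {
    fn record(&mut self, validator_index: u64, att: AttestationData) {
        match self.0.get(&validator_index) {
            // Stale message (older slot than the stored one): ignore it.
            Some(prev) if prev.slot > att.slot => {}
            // Otherwise overwrite: only the latest message counts.
            _ => {
                self.0.insert(validator_index, att);
            }
        }
    }
}
```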
LMD-GHOST Step by Step
The algorithm takes a set of inputs and produces a single block root: the head of the chain.
Inputs
| Input | Purpose |
|---|---|
| Start root | The justified checkpoint (root of the subtree to search) |
| Block tree | The set of known blocks: root → (slot, parent) |
| Attestations | Latest message per validator: validator_index → attestation |
| Min score | Minimum weight for a branch to be considered (0 = follow any branch; higher = conservative) |
> **In ethlambda:** The function is `compute_lmd_ghost_head()` in `crates/blockchain/fork_choice/src/lib.rs`. The block tree comes from the `LiveChain` storage index, and `min_score` is 0 for head selection or ⌈2V/3⌉ for safe target computation.
The Algorithm
First, accumulate weights. Each attestation “paints” the path from its head back to the start root. In the simplest form (equal-weight validators), this adds +1 to every block on the path. In systems with balance-weighted voting, the validator’s effective balance is added instead.
Validator 0 attests to head = F
J ─ A ─ B ─ C ─ D ─ E ─ F (J = justified root)
+1 +1 +1 +1 +1 +1 J is at start_slot, not counted
Validator 1 attests to head = D
J ─ A ─ B ─ C ─ D
+1 +1 +1 +1
Accumulated weights:
Block: J A B C D E F
Weight: ─ 2 2 2 2 1 1
│
└ start_root (not weighted, used as the descent origin)
> **In ethlambda:** All validators have equal weight (+1 per vote). The Ethereum Beacon Chain instead weights votes by effective balance (up to 2048 ETH).
Then, greedily descend. Starting from the start root, at each node pick the child with the most weight. Repeat until reaching a leaf:
J ──┬── B (5) ← pick B (higher weight)
└── G (2)
B ──┬── C (3) ← pick C (higher weight)
└── H (2)
C ──── D (3) ← only child, continue
D ── (no children) → HEAD = D!
Children below `min_score` are ignored during the descent. With `min_score = 0` (normal head selection) all children are visible. With a higher threshold, only branches with strong support are followed. This is used for safe target selection.
The Tiebreaker
When two children have exactly equal weight, a deterministic tiebreaker is needed. Without one, different nodes could pick different heads from the same data, breaking consensus. The tiebreaker is lexicographically higher block root hash, i.e., higher hash value wins.
Equal weight scenario:
Parent
│
┌───┴───┐
B (3) C (3) ← Equal weight!
root: root:
0x3a.. 0x7f.. ← 0x7f > 0x3a, so pick C
The choice of “higher hash wins” is a convention. Any deterministic rule would work; what matters is that all nodes apply the same one.
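Putting weights, descent, `min_score`, and the tiebreaker together, here is a compact Rust sketch (illustrative types and names, not ethlambda's API; equal-weight validators assumed, and the linear child scan favors simplicity over speed):

```rust
use std::collections::HashMap;

type Root = [u8; 32];
struct Block {
    slot: u64,
    parent: Root,
}

fn compute_lmd_ghost_head(
    start_root: Root,
    blocks: &HashMap<Root, Block>,
    latest_heads: &[Root], // latest attested head per validator
    min_score: u64,
) -> Root {
    // Phase 1: each vote paints +1 on every block from its head back to
    // (but not including) the start root.
    let start_slot = blocks[&start_root].slot;
    let mut weight: HashMap<Root, u64> = HashMap::new();
    for head in latest_heads {
        let mut cur = *head;
        while let Some(block) = blocks.get(&cur) {
            if block.slot <= start_slot {
                break; // reached the start root (or something older)
            }
            *weight.entry(cur).or_default() += 1;
            cur = block.parent;
        }
    }

    // Phase 2: greedy descent. Pick the heaviest child meeting min_score;
    // ties go to the lexicographically higher block root (tuple ordering
    // compares weight first, then root).
    let mut head = start_root;
    loop {
        let best_child = blocks
            .iter()
            .filter(|(_, block)| block.parent == head)
            .map(|(root, _)| (weight.get(root).copied().unwrap_or(0), *root))
            .filter(|(w, _)| *w >= min_score)
            .max();
        match best_child {
            Some((_, child)) => head = child,
            None => return head, // leaf, or no child passes min_score
        }
    }
}
```

Running this sketch on the worked example that follows (five validators attesting to D, D, E, E, E) returns E.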
Worked Example: Head Selection
Consider a network with 5 validators (indices 0–4) and the following block tree
rooted at the justified checkpoint J at slot 10:
BLOCK TREE
──────────
Slot 10 ┌──────┐
(justified) │ J │ ← Justified checkpoint (start_root)
└──┬───┘
│
Slot 11 ┌──┴───┐
│ A │
└──┬───┘
┌──┴────────┐
│ │
Slot 12 ┌──┴───┐ ┌──┴───┐
│ B │ │ C │
└──┬───┘ └──┬───┘
│ │
Slot 13 ┌──┴───┐ ┌──┴───┐
│ D │ │ E │
└──────┘ └──────┘
Latest attestations (one per validator):
| Validator | Attested Head | Path back from head to J |
|---|---|---|
| 0 | D | D → B → A → (J) |
| 1 | D | D → B → A → (J) |
| 2 | E | E → C → A → (J) |
| 3 | E | E → C → A → (J) |
| 4 | E | E → C → A → (J) |
Accumulate weights by walking backward from each attested head, adding +1 per block (stopping at J’s slot):
V0 (head=D): D+1 B+1 A+1
V1 (head=D): D+1 B+1 A+1
V2 (head=E): E+1 C+1 A+1
V3 (head=E): E+1 C+1 A+1
V4 (head=E): E+1 C+1 A+1
| Block | Weight | Explanation |
|---|---|---|
| A | 5 | On path of all 5 validators |
| B | 2 | On path of V0, V1 |
| C | 3 | On path of V2, V3, V4 |
| D | 2 | Head of V0, V1 |
| E | 3 | Head of V2, V3, V4 |
Greedily descend from J, always picking the heaviest child:
Start at J
└─▶ A (only child, weight 5)
├── B (weight 2)
└── C (weight 3) ← Pick C (3 > 2)
└─▶ E (only child, weight 3)
└─▶ No children → HEAD = E ✓
Result: The canonical head is Block E. Even though both branches have the same depth, the C→E branch has 3 votes vs B→D’s 2 votes.
RESOLVED HEAD
─────────────
Slot 10 ┌──────┐
│ J │
└──┬───┘
│
Slot 11 ┌──┴───┐
│ A │ ✓ canonical
└──┬───┘
┌──┴────────┐
│ │
Slot 12 ┌──┴───┐ ┌──┴───┐
│ B │ │ C │ ✓ canonical (weight 3 > 2)
└──┬───┘ └──┬───┘
│ │
Slot 13 ┌──┴───┐ ┌──┴───┐
│ D │ │ E │ ★ HEAD
└──────┘ └──────┘
What If a Vote Changes?
Suppose validator 1 now sees block E and switches their attestation from D to E:
Before: V0=D, V1=D, V2=E, V3=E, V4=E → Head = E (3 vs 2)
After: V0=D, V1=E, V2=E, V3=E, V4=E → Head = E (4 vs 1)
The head didn't change, but the margin increased from 1 to 3.
If instead V2 and V3 had switched to D:
After: V0=D, V1=D, V2=D, V3=D, V4=E → Head = D (4 vs 1)
The head reorgs from E to D.
Fork Choice vs Finality
An important conceptual distinction: LMD-GHOST provides fork choice, not finality.
LMD-GHOST gives the network a way to agree on the current head of the chain at any moment, but the head can change. A block selected by fork choice today could be reorged away tomorrow if attestations shift. LMD-GHOST alone provides no guarantee that any block is permanent.
Finality, the guarantee that a block can never be reverted, comes from a separate mechanism called a finality gadget. LMD-GHOST is designed to compose with any finality gadget (e.g., Casper FFG in the Ethereum Beacon Chain, or 3SF-mini in Lean Ethereum).
┌────────────────────────────────────────────────────┐
│ CONSENSUS = TWO LAYERS │
│ │
│ ┌─────────────┐ ┌──────────────────────┐ │
│ │ LMD-GHOST │ │ Finality Gadget │ │
│ │ │ │ │ │
│ │ "Which tip │ │ "Which blocks are │ │
│ │ is best │ │ permanent and can │ │
│ │ right now?"│ │ never be reverted?" │ │
│ │ │ │ │ │
│ │ Dynamic, │ │ Monotonic, only │ │
│ │ can reorg │ │ moves forward │ │
│ └──────┬──────┘ └──────────┬───────────┘ │
│ │ │ │
│ └──────────┬───────────────┘ │
│ ▼ │
│ ┌──────────────────┐ │
│ │ Full Consensus │ │
│ └──────────────────┘ │
└────────────────────────────────────────────────────┘
> **In ethlambda:** The finality gadget is 3SF-mini, which operates at the slot level rather than epoch boundaries.
The two layers interact: LMD-GHOST runs its greedy descent starting from the latest justified checkpoint (not genesis). This means finality constrains fork choice: once a checkpoint is finalized, no fork choice run will ever consider blocks before it.
┌─────────┐ ┌─────────┐ ┌──── ...
│FINALIZED│────────▶│JUSTIFIED│────────▶│ fork choice
│ slot 50 │ │ slot 55 │ │ runs here
└─────────┘ └─────────┘ └──── ...
│ │
│ └── start_root for LMD-GHOST
│
└── everything before this is permanent
This has a major practical benefit: finality allows aggressive pruning of the block tree. Without finality, fork choice would need to consider every block since genesis, and the tree would grow without bound. With finality, all blocks at or before the finalized checkpoint can be discarded from the fork choice’s working set.
> **In ethlambda:** The `LiveChain` index (the in-memory block tree used by fork choice) is pruned every time finalization advances, keeping it bounded to only the non-finalized portion of the chain.
Attestation Pipeline
In a naive implementation, every attestation would influence fork choice the instant it arrives. This creates problems: validators with faster network connections see different heads than slower ones, and the proposer’s view of the chain could shift mid-block-construction.
Lean Ethereum solves this with a staged promotion pipeline: attestations are collected into a pending set and only promoted to the active fork choice set at designated moments. This ensures all validators operate on a consistent view.
ATTESTATION LIFECYCLE
─────────────────────
┌──────────────┐ ┌──────────────────┐ ┌──────────────────┐
│ Network │ │ Pending │ │ Active │
│ (gossip) │──────▶│ Attestations │──────▶│ Attestations │
│ │ │ │ │ │
└──────────────┘ └──────────────────┘ └──────────────────┘
│ │
NOT used for Used for fork choice
fork choice weight calculations
│ │
Promoted at ─────────────▶ designated intervals
fixed points
> **In ethlambda:** The two stages are called “new” and “known” attestations, stored in the `LatestNewAttestations` and `LatestKnownAttestations` tables respectively. Promotion happens at tick intervals 0 (if proposing) and 3 (end of slot).
Why Staged Promotion?
The staged design serves two purposes:
- Consistency: All validators promote attestations at the same moments, reducing divergence in head selection. Without batching, validators with faster network connections would see different heads than slower ones.
- Proposer fairness: The proposer computes the block against a known, fixed set of attestations. If new attestations could influence fork choice mid-computation, different validators might disagree on the head.
On-Chain vs Off-Chain Attestations
Attestations arrive from two sources, and how they enter the pipeline matters:
| Source | Enters As | Reason |
|---|---|---|
| Network gossip | Pending | Must wait for promotion window |
| Block body (on-chain) | Active | Already consensus-validated |
| Proposer’s own attestation | Pending | Prevents proposer weight advantage |
The proposer’s own attestation enters as pending (not active) deliberately. If it were immediately active, the proposer would gain an unfair weight advantage for their own block: a circular dependency where proposing a block gives you an extra vote toward making that block canonical.
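A minimal sketch of the two pools, reusing the `AttestationData` type from the latest-message sketch earlier (field and method names are illustrative, not ethlambda's `LatestNewAttestations`/`LatestKnownAttestations` API):

```rust
use std::collections::HashMap;

/// Staged pools: "new" (pending) attestations are invisible to fork choice
/// until promoted into "known" (active) at designated tick intervals.
struct AttestationPools {
    new: HashMap<u64, AttestationData>,
    known: HashMap<u64, AttestationData>,
}

impl AttestationPools {
    /// Gossip and the proposer's own attestation land here first.
    fn on_gossip(&mut self, validator_index: u64, att: AttestationData) {
        self.new.insert(validator_index, att);
    }

    /// On-chain attestations are already consensus-validated: straight to known.
    fn on_block_attestation(&mut self, validator_index: u64, att: AttestationData) {
        self.known.insert(validator_index, att);
    }

    /// Called at interval 0 (when proposing) and interval 3 (end of slot).
    fn promote(&mut self) {
        for (validator_index, att) in self.new.drain() {
            self.known.insert(validator_index, att);
        }
    }
}
```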
Safe Target Selection
The safe target is a conservative head computed with a high weight threshold.
It constrains the target field in attestations, which feeds into
3SF-mini for justification and finalization decisions. Validators
still vote for the newest head they see (regular LMD-GHOST with min_score = 0)
in the head field. The safe target only affects which blocks can progress
toward finality. It is computed by running the same LMD-GHOST algorithm but with
a non-zero min_score in the filtering phase.
SAFE TARGET vs HEAD
────────────────────
Regular head (min_score = 0):
Follow heaviest branch, even with a slim margin
┌── B (3 votes) ← HEAD (3 > 2)
J ── A ──┤
└── C (2 votes)
Safe target (min_score = ⌈2V/3⌉):
Only follow branches with supermajority support
V = 5 validators, threshold = ⌈10/3⌉ = 4
┌── B (3 votes) ← Below threshold (3 < 4), pruned
J ── A ──┤
└── C (2 votes) ← Below threshold (2 < 4), pruned
Safe target = A (no children pass threshold)
This means the safe target lags behind the head. It only advances when a branch accumulates overwhelming support, making it resistant to temporary fluctuations:
Timeline of safe target vs head:
Slot: 10 11 12 13 14 15 16
Head: J A B D D E F
Safe: J J J A A A D
│
Safe target is always ────┘
at or behind the head
The safe target prevents 3SF-mini from finalizing unstable branches: without it, a slim-majority fork could reach justification and finalization before the network converges. By requiring supermajority support for the target, only branches with strong consensus can progress toward finality, even though validators’ head votes freely follow the newest chain tip.
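Reusing the `compute_lmd_ghost_head` sketch from earlier, the only difference between head selection and safe target computation is the threshold; ⌈2V/3⌉ in integer arithmetic looks like this:

```rust
/// Sketch: safe target = the same LMD-GHOST run with a supermajority
/// min_score instead of 0 (types reused from the earlier sketch).
fn safe_target(
    justified_root: Root,
    blocks: &HashMap<Root, Block>,
    latest_heads: &[Root],
    validator_count: u64,
) -> Root {
    // ⌈2V/3⌉ without floating point: e.g. V = 5 → (2·5).div_ceil(3) = 4,
    // matching the example above.
    let min_score = (2 * validator_count).div_ceil(3);
    compute_lmd_ghost_head(justified_root, blocks, latest_heads, min_score)
}
```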
Reorgs
A reorg (reorganization) occurs when the fork choice head switches from one branch to another. This happens when a competing branch accumulates more attestation weight than the current head’s branch.
REORG SCENARIO
──────────────
Before (head = D):
┌── B ── D ★ HEAD (weight 4)
J ── A ──┤
└── C ── E (weight 3)
New attestations arrive, 3 validators switch to E:
┌── B ── D (weight 2)
J ── A ──┤
└── C ── E ★ HEAD (weight 5) ← REORG!
The canonical chain changed from J─A─B─D to J─A─C─E
Blocks B and D are no longer canonical (but remain in the block tree).
Reorgs are normal during transient network conditions but should be rare in stable operation. They cannot cross a finalization boundary: once a block is finalized, it is permanently part of the canonical chain.
> **In ethlambda:** Reorgs are detected by checking whether the old and new heads share a common prefix, and tracked via Prometheus metrics (`lean_fork_choice_reorgs_total`).
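One way to sketch the detection (illustrative, reusing the `Root`/`Block` types from the LMD-GHOST sketch; ethlambda's actual check may differ in detail): a head switch is a reorg when the old head is not an ancestor of, or equal to, the new head.

```rust
fn is_reorg(blocks: &HashMap<Root, Block>, old_head: Root, new_head: Root) -> bool {
    let old_slot = blocks[&old_head].slot;
    let mut cur = new_head;
    // Walk the new head's ancestry back to the old head's slot.
    while let Some(block) = blocks.get(&cur) {
        if cur == old_head {
            return false; // old head still on the canonical path: extension
        }
        if block.slot < old_slot {
            break; // walked past the old head's slot without meeting it
        }
        cur = block.parent;
    }
    true // the chains diverged: this head switch is a reorg
}
```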
LMD-GHOST Variants
LMD-GHOST is one of several variants that have been proposed and studied. Understanding the design space helps explain why LMD was chosen.
| Variant | Full Name | What Counts | Trade-off |
|---|---|---|---|
| IMD | Immediate Message Driven | All attestations ever | Maximizes data but creates unbounded storage and is vulnerable to long-range rewriting |
| LMD | Latest Message Driven | Only each validator’s most recent attestation | Good balance: one vote per validator, reflects current view, bounded storage |
| FMD | Fresh Message Driven | Only attestations from current/previous epoch | Prevents very old attestations from influencing fork choice, but validators who go offline lose influence immediately |
| RLMD | Recent Latest Message Driven | Latest attestation, but only if within N epochs | Parameterized compromise between LMD and FMD; tunable staleness threshold |
The Ethereum consensus mini-spec originally used IMD-GHOST but switched to LMD in November 2018 due to superior stability properties.
IMD: All attestations count LMD: Only latest counts
V0: slot 5 → head B V0: slot 5 → head B (overwritten)
V0: slot 8 → head C V0: slot 8 → head C ← active
V0: slot 11 → head E V0: slot 11 → head E ← active
V0 contributes 3 votes! V0 contributes 1 vote.
Validators who attest more Equal influence regardless
often have outsized influence. of attestation frequency.
ethlambda Implementation Reference
This section covers ethlambda-specific details: scheduling, Beacon Chain differences, source code locations, and performance.
Tick-Based Scheduling
ethlambda divides time into 4-second slots, each split into 4 intervals (1 second each). Fork choice operations are scheduled at specific intervals:
ONE SLOT (4 seconds)
┌──────────────┬──────────────┬──────────────┬──────────────┐
│ Interval 0 │ Interval 1 │ Interval 2 │ Interval 3 │
│ (t+0s) │ (t+1s) │ (t+2s) │ (t+3s) │
├──────────────┼──────────────┼──────────────┼──────────────┤
│ │ │ │ │
│ IF PROPOSER: │ NON-PROPOSER:│ update_safe │ accept_new │
│ accept new │ produce │ _target() │ _attestations│
│ attestations│ attestation │ │ () │
│ + propose │ │ (2/3 vote │ │
│ block │ │ threshold) │ update_head()│
│ │ │ │ │
│ update_head()│ │ │ │
│ │ │ │ │
└──────────────┴──────────────┴──────────────┴──────────────┘
◄─────────────── Slot N ──────────────────────────────────────►
Detailed sequence:
Interval 0 ─ Slot boundary
│
├── Am I the proposer for this slot?
│ ├── YES: promote new → known attestations
│ │ run fork choice → update_head()
│ │ build block using known attestations
│ │ publish block to network
│ └── NO: (wait for block from proposer)
│
Interval 1 ─ Attestation production
│
├── Non-proposers:
│ └── Create attestation with:
│ • head = current fork choice head (newest head)
│ • target = derived from safe_target (for 3SF-mini)
│ • source = latest_justified checkpoint
│ Publish attestation to gossipsub
│
Interval 2 ─ Safe target update
│
├── Recalculate safe_target using 2/3 supermajority threshold
│ └── Only blocks with ≥ ⌈2V/3⌉ attestation weight qualify
│ (V = total validators)
│
Interval 3 ─ End of slot
│
├── Promote new → known attestations
└── Run fork choice → update_head()
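A sketch of this dispatch, with a hypothetical `ChainOps` trait standing in for the blockchain actor (method names are illustrative, not ethlambda's API):

```rust
trait ChainOps {
    fn promote_attestations(&mut self); // new -> known
    fn update_head(&mut self);          // LMD-GHOST, min_score = 0
    fn propose_block(&mut self);
    fn produce_attestation(&mut self);  // head, target, source
    fn update_safe_target(&mut self);   // LMD-GHOST, min_score = ceil(2V/3)
}

fn on_interval<C: ChainOps>(interval: u8, is_proposer: bool, chain: &mut C) {
    match interval {
        0 if is_proposer => {
            chain.promote_attestations();
            chain.update_head();
            chain.propose_block();
        }
        0 => {} // non-proposers wait for the block from the proposer
        1 if !is_proposer => chain.produce_attestation(),
        2 => chain.update_safe_target(),
        3 => {
            chain.promote_attestations();
            chain.update_head();
        }
        _ => {}
    }
}
```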
Differences from the Ethereum Beacon Chain
ethlambda is a lean consensus client with several simplifications compared to the Ethereum Beacon Chain:
| Aspect | ethlambda | Ethereum Beacon Chain |
|---|---|---|
| Vote weight | Equal: 1 vote per validator | Proportional to effective balance (up to 2048 ETH) |
| Proposer boost | None | Yes: newly proposed blocks get temporary bonus weight |
| Equivocation handling | Not in fork choice | Equivocating validators’ weight excluded |
| Attestation frequency | Every slot | Once per epoch |
| Committee structure | All validators attest each slot | Validators split into per-slot committees |
| Slot duration | 4 seconds | 12 seconds |
No proposer boost. The Beacon Chain adds a “proposer boost”, a temporary weight bonus given to newly proposed blocks to prevent balancing attacks. ethlambda does not implement this. Instead, proposer fairness is handled through the two-stage attestation pipeline (the proposer’s own attestation enters as “new”, not “known”).
No balance weighting. In the Beacon Chain, a validator with 32 ETH of effective balance has more fork choice weight than one with 16 ETH. In ethlambda, every validator has exactly equal weight (1 vote = 1 unit of weight), simplifying the algorithm and analysis.
No equivocation discounting. The Beacon Chain’s fork choice detects validators who equivocate (attest to conflicting blocks in the same slot) and excludes their weight. This addresses the “nothing at stake” problem where validators can costlessly vote for multiple forks. ethlambda does not implement this in its fork choice.
Key Files
| File | Component |
|---|---|
| `crates/blockchain/fork_choice/src/lib.rs` | Core LMD-GHOST algorithm (`compute_lmd_ghost_head`) |
| `crates/blockchain/src/store.rs` | Store: head update, safe target, attestation promotion |
| `crates/blockchain/src/lib.rs` | `BlockChain` actor: tick scheduling, interval dispatch |
| `crates/common/types/src/attestation.rs` | `AttestationData` type (head, target, source, slot) |
| `crates/common/types/src/state.rs` | `Checkpoint` (root + slot), `State` |
| `crates/storage/src/api/` | `LiveChain` table, `StorageBackend` trait |
Data Flow Summary
┌───────────┐ ┌──────────────┐ ┌───────────────┐
│ Gossipsub │────────▶│ New │──(promote)─▶│ Known │
│ (network) │ │ Attestations │ │ Attestations │
└───────────┘ └──────────────┘ └───────┬───────┘
│
┌───────────┐ │
│ LiveChain │──── { root → (slot, parent) } ───────────────┤
│ (index) │ │
└───────────┘ │
▼
┌─────────────────┐
┌───────────┐ │ compute_lmd_ │
│ Justified │──── start_root ───────────────▶│ ghost_head() │
│Checkpoint │ │ │
└───────────┘ └────────┬────────┘
│
┌──────┴──────┐
│ │
▼ ▼
┌──────────┐ ┌───────────┐
│ HEAD │ │ SAFE │
│ (min=0) │ │ TARGET │
└──────────┘ │ (min=2V/3)│
└───────────┘
Performance Characteristics
| Operation | Time Complexity | Description |
|---|---|---|
| Weight accumulation | O(A × D) | A = attestations, D = max chain depth from justified root |
| Greedy descent | O(D × B) | D = depth, B = max branching factor |
| Attestation promotion | O(V) | V = total validators |
| LiveChain lookup | O(B) | B = non-finalized blocks |
In practice, with a small validator set and a bounded non-finalized chain length, all operations complete in sub-millisecond time. The `// TODO: add proto-array implementation` comment in the source indicates a future optimization path: proto-array is an amortized O(1) fork choice algorithm used by most Beacon Chain clients.
Metrics
We collect various metrics and serve them via a Prometheus-compatible HTTP endpoint at `http://<http_address>:<metrics_port>/metrics` (default: `http://127.0.0.1:5054/metrics`).
A ready-to-use Grafana + Prometheus monitoring stack with pre-configured leanMetrics dashboards is available in lean-quickstart.
The exposed metrics follow the leanMetrics specification, with some metrics not yet implemented. The tables below list each metric, with a checkbox indicating whether it is currently supported.
Node Info Metrics
| Name | Type | Usage | Sample collection event | Labels | Supported |
|---|---|---|---|---|---|
| `lean_node_info` | Gauge | Node information (always 1) | On node start | name, version | ✅ |
| `lean_node_start_time_seconds` | Gauge | Start timestamp | On node start | | ✅ |
PQ Signature Metrics
| Name | Type | Usage | Sample collection event | Labels | Buckets | Supported |
|---|---|---|---|---|---|---|
| `lean_pq_sig_attestation_signatures_total` | Counter | Total number of individual attestation signatures | On each attestation signing | | | ✅ |
| `lean_pq_sig_attestation_signatures_valid_total` | Counter | Total number of valid individual attestation signatures | On each attestation signature verification | | | ✅ |
| `lean_pq_sig_attestation_signatures_invalid_total` | Counter | Total number of invalid individual attestation signatures | On each attestation signature verification | | | ✅ |
| `lean_pq_sig_attestation_signing_time_seconds` | Histogram | Time taken to sign an attestation | On each attestation signing | | 0.005, 0.01, 0.025, 0.05, 0.1, 1 | ✅ |
| `lean_pq_sig_attestation_verification_time_seconds` | Histogram | Time taken to verify an attestation signature | On each attestation signature verification | | 0.005, 0.01, 0.025, 0.05, 0.1, 1 | ✅ |
| `lean_pq_sig_aggregated_signatures_total` | Counter | Total number of aggregated signatures | On aggregated signature production | | | ✅ |
| `lean_pq_sig_aggregated_signatures_valid_total` | Counter | Total number of valid aggregated signatures | On aggregated signature verification | | | ✅ |
| `lean_pq_sig_aggregated_signatures_invalid_total` | Counter | Total number of invalid aggregated signatures | On aggregated signature verification | | | ✅ |
| `lean_pq_sig_attestations_in_aggregated_signatures_total` | Counter | Total number of attestations included into aggregated signatures | On aggregated signature production | | | ✅ |
| `lean_pq_sig_aggregated_signatures_building_time_seconds` | Histogram | Time taken to build an aggregated attestation signature | On aggregated signature production | | 0.1, 0.25, 0.5, 0.75, 1, 1.25, 1.5, 2, 4 | ✅ |
| `lean_pq_sig_aggregated_signatures_verification_time_seconds` | Histogram | Time taken to verify an aggregated attestation signature | On aggregated signature verification | | 0.1, 0.25, 0.5, 0.75, 1, 1.25, 1.5, 2, 4 | ✅ |
Fork-Choice Metrics
| Name | Type | Usage | Sample collection event | Labels | Buckets | Supported |
|---|---|---|---|---|---|---|
| `lean_head_slot` | Gauge | Latest slot of the lean chain | On get fork choice head | | | ✅ |
| `lean_current_slot` | Gauge | Current slot of the lean chain | On scrape | | | ✅(*) |
| `lean_safe_target_slot` | Gauge | Safe target slot | On safe target update | | | ✅ |
| `lean_fork_choice_block_processing_time_seconds` | Histogram | Time taken to process block | On fork choice process block | | 0.005, 0.01, 0.025, 0.05, 0.1, 1, 1.25, 1.5, 2, 4 | ✅ |
| `lean_attestations_valid_total` | Counter | Total number of valid attestations | On validate attestation | | | ✅ |
| `lean_attestations_invalid_total` | Counter | Total number of invalid attestations | On validate attestation | | | ✅ |
| `lean_attestation_validation_time_seconds` | Histogram | Time taken to validate attestation | On validate attestation | | 0.005, 0.01, 0.025, 0.05, 0.1, 1 | ✅ |
| `lean_fork_choice_reorgs_total` | Counter | Total number of fork choice reorgs | On fork choice reorg | | | ✅ |
| `lean_fork_choice_reorg_depth` | Histogram | Depth of fork choice reorgs (in blocks) | On fork choice reorg | | 1, 2, 3, 5, 7, 10, 20, 30, 50, 100 | ✅ |
| `lean_gossip_signatures` | Gauge | Number of gossip signatures in fork-choice store | On gossip signatures update | | | ✅ |
| `lean_latest_new_aggregated_payloads` | Gauge | Number of new aggregated payload items | On latest_new_aggregated_payloads update | | | ✅ |
| `lean_latest_known_aggregated_payloads` | Gauge | Number of known aggregated payload items | On latest_known_aggregated_payloads update | | | ✅ |
| `lean_committee_signatures_aggregation_time_seconds` | Histogram | Time taken to aggregate committee signatures | On committee signatures aggregation | | 0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 0.75, 1 | ✅ |
State Transition Metrics
| Name | Type | Usage | Sample collection event | Labels | Buckets | Supported |
|---|---|---|---|---|---|---|
| `lean_latest_justified_slot` | Gauge | Latest justified slot | On state transition | | | ✅ |
| `lean_latest_finalized_slot` | Gauge | Latest finalized slot | On state transition | | | ✅ |
| `lean_finalizations_total` | Counter | Total number of finalization attempts | On finalization attempt | result=success,error | | ✅ |
| `lean_state_transition_time_seconds` | Histogram | Time to process state transition | On state transition | | 0.25, 0.5, 0.75, 1, 1.25, 1.5, 2, 2.5, 3, 4 | ✅ |
| `lean_state_transition_slots_processed_total` | Counter | Total number of processed slots | On state transition process slots | | | ✅ |
| `lean_state_transition_slots_processing_time_seconds` | Histogram | Time taken to process slots | On state transition process slots | | 0.005, 0.01, 0.025, 0.05, 0.1, 1 | ✅ |
| `lean_state_transition_block_processing_time_seconds` | Histogram | Time taken to process block | On state transition process block | | 0.005, 0.01, 0.025, 0.05, 0.1, 1 | ✅ |
| `lean_state_transition_attestations_processed_total` | Counter | Total number of processed attestations | On state transition process attestations | | | ✅ |
| `lean_state_transition_attestations_processing_time_seconds` | Histogram | Time taken to process attestations | On state transition process attestations | | 0.005, 0.01, 0.025, 0.05, 0.1, 1 | ✅ |
Validator Metrics
| Name | Type | Usage | Sample collection event | Labels | Buckets | Supported |
|---|---|---|---|---|---|---|
| `lean_validators_count` | Gauge | Number of validators managed by a node | On scrape | | | ✅(*) |
| `lean_is_aggregator` | Gauge | Validator’s `is_aggregator` status. True=1, False=0 | On node start | | | ✅ |
| `lean_attestations_production_time_seconds` | Histogram | Time taken to produce attestation | On attestation production | | 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 0.75, 1 | ✅ |
Network Metrics
| Name | Type | Usage | Sample collection event | Labels | Supported |
|---|---|---|---|---|---|
| `lean_attestation_committee_count` | Gauge | Number of attestation committees | On node start | | ✅ |
| `lean_attestation_committee_subnet` | Gauge | Node’s attestation committee subnet | On node start | | ✅ |
| `lean_connected_peers` | Gauge | Number of connected peers | On scrape | client=ethlambda,grandine,lantern,lighthouse,qlean,ream,zeam | ✅(*) |
| `lean_peer_connection_events_total` | Counter | Total number of peer connection events | On peer connection | direction=inbound,outbound; result=success,timeout,error | ✅ |
| `lean_peer_disconnection_events_total` | Counter | Total number of peer disconnection events | On peer disconnection | direction=inbound,outbound; reason=timeout,remote_close,local_close,error | ✅ |
✅(*) Partial support: These metrics are implemented but not collected “on scrape” as the spec requires. They are updated on specific events (e.g., on tick, on block processing) rather than being computed fresh on each Prometheus scrape.
Troubleshooting
Docker Desktop on macOS
lean-quickstart uses the host network mode for Docker containers, which is a problem on macOS. To work around this, enable the “Enable host networking” option in Docker Desktop settings under Resources > Network.
Checkpoint Sync
Overview
Checkpoint sync allows a new consensus node to skip replaying the entire chain from genesis. Instead, it downloads a recent finalized state from a running peer and starts from there. This mitigates long-range attacks by starting from a recent trusted checkpoint.
Usage
Checkpoint sync still requires a full network config directory (`--custom-network-config-dir`). The genesis config is needed to verify the downloaded state: checkpoint sync only replaces the starting state, not node configuration.
Pass the `--checkpoint-sync-url` flag when starting ethlambda:
ethlambda \
--checkpoint-sync-url <URL> \
--custom-network-config-dir ./network-config \
--node-key ./node.key \
--node-id ethlambda_0
Where `<URL>` is the address of a checkpoint source (see Checkpoint Sources below).
When `--checkpoint-sync-url` is omitted, the node initializes from genesis.
Checkpoint Sources
Direct peer
Any running node that serves the finalized state as SSZ can be used as a checkpoint source, not just ethlambda. For ethlambda nodes, the endpoint is `/lean/v0/states/finalized`.
This is the simplest option, with no additional infrastructure needed. The trade-off is that you trust a single peer to provide a correct finalized state.
Leanpoint
Leanpoint is a dedicated checkpoint sync provider. It polls multiple nodes and only serves state when 50%+ agree on finality, adding a layer of consensus validation.
This is the recommended option for production deployments since it reduces trust in any single peer.
How It Works
- Fetch and verify: The node sends an HTTP GET to the provided URL requesting the SSZ-encoded finalized state. Once downloaded, the state is decoded and verified against the local genesis config (see Verification Checks below). Two timeouts apply:
  - Connect: 15 seconds (fail fast if the peer is unreachable)
  - Read: 15 seconds of inactivity, reset on each successful read, so large states can download as long as data keeps flowing
- Initialize: The node stores the block header and the full state from the checkpoint. No block body is stored since it isn’t available from the checkpoint. The node does not need the anchor block body to participate from this point forward.
Failure and success
If any step fails (network error, decoding error, verification failure), the node logs the error and exits. There is no automatic retry; restart the node to try again. The database is not modified until verification succeeds, so a failed checkpoint sync leaves the data directory clean.
After successful initialization, the node starts normally: it connects to the P2P network and begins participating from the checkpoint slot.
If the data directory (`./data`) already contains state from a previous run, checkpoint sync writes the new anchor state on top without clearing existing data. For a clean checkpoint sync, remove the data directory first.
Verification Checks
All checks are performed before the state is accepted:
| Check | What it catches |
|---|---|
| Slot > 0 | Checkpoint state cannot be genesis (slot 0) |
| Validators non-empty | State must contain validators |
| Genesis time matches | Wrong network or misconfigured peer |
| Validator count matches | Validator set size differs from genesis config |
| Sequential validator indices | Indices must be 0, 1, 2, … in order |
| Validator pubkeys match | Validator identity differs from genesis config |
| Finalized slot <= state slot | Finalized checkpoint cannot be in the future |
| Justified slot >= finalized slot | Justified must be at or after finalized |
| Same-slot checkpoints have matching roots | If justified and finalized are at the same slot, they must agree on the root |
| Block header slot <= state slot | Block header cannot be ahead of the state |
| Block header root matches finalized | If header is at finalized slot, its root must match the finalized root |
| Block header root matches justified | If header is at justified slot, its root must match the justified root |
HTTP errors and SSZ decoding failures are caught before verification runs.
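A hedged sketch of a few of these checks in Rust (illustrative `CheckpointStateSummary` type and field names; the real verification covers the full table above):

```rust
/// Minimal summary of the fields the sketch below inspects.
struct CheckpointStateSummary {
    slot: u64,
    genesis_time: u64,
    validator_count: u64,
    justified_slot: u64,
    finalized_slot: u64,
}

fn verify_checkpoint_state(
    state: &CheckpointStateSummary,
    expected_genesis_time: u64,
    expected_validator_count: u64,
) -> Result<(), &'static str> {
    if state.slot == 0 {
        return Err("checkpoint state cannot be genesis");
    }
    if state.validator_count == 0 {
        return Err("state must contain validators");
    }
    if state.genesis_time != expected_genesis_time {
        return Err("genesis time mismatch: wrong network or misconfigured peer");
    }
    if state.validator_count != expected_validator_count {
        return Err("validator set size differs from genesis config");
    }
    if state.finalized_slot > state.slot {
        return Err("finalized checkpoint cannot be in the future");
    }
    if state.justified_slot < state.finalized_slot {
        return Err("justified must be at or after finalized");
    }
    Ok(())
}
```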
Security Considerations
Trust model
Checkpoint sync operates under a weak subjectivity assumption. In proof of work, any node can objectively determine the canonical chain by verifying the most cumulative work. Proof of stake doesn’t have this property: validators can costlessly sign multiple forks, so a node that wasn’t online to observe the chain in real time cannot distinguish the real chain from a fabricated one using protocol rules alone.
Weak subjectivity resolves this: a new node obtains a recent trusted state through a social channel (a peer, a checkpoint provider, a block explorer) and starts from there. Nodes that are always online are unaffected because they continuously track the chain and don’t need external trust.
What you are trusting:
- The checkpoint source is honest about which state is finalized
- The state hasn’t been crafted to put you on a fork that diverged within the weak subjectivity period
What verification does protect against:
- Wrong network (genesis time mismatch)
- Wrong validator set (pubkey or count mismatch)
- Structurally invalid states (impossible slot orderings, inconsistent checkpoints)
- Corrupted data (SSZ decode failures)
What verification does not protect against:
- A checkpoint source that serves a structurally valid state on a minority fork. It will pass all checks but put you on the wrong chain. This is why the choice of checkpoint source matters.
Fork Choice Visualization
A browser-based real-time visualization of the LMD GHOST fork choice tree, served from the existing RPC server with no additional dependencies.
Endpoints
| Endpoint | Description |
|---|---|
| `GET /lean/v0/fork_choice/ui` | Interactive D3.js visualization page |
| `GET /lean/v0/fork_choice` | JSON snapshot of the fork choice tree |
Both endpoints are served on the API port (`--api-port`, default 5052).
Quick Start
Local devnet
make run-devnet
The local devnet runs 3 ethlambda nodes with API ports 8085, 8086, and 8087. Open any of them:
- http://localhost:8085/lean/v0/fork_choice/ui
- http://localhost:8086/lean/v0/fork_choice/ui
- http://localhost:8087/lean/v0/fork_choice/ui
Standalone node
cargo run --release -- \
--custom-network-config-dir ./config \
--node-key ./keys/node.key \
--node-id 0 \
--api-port 5052
Then open http://localhost:5052/lean/v0/fork_choice/ui.
Visualization Guide
Color coding
| Color | Meaning |
|---|---|
| Green | Finalized block |
| Blue | Justified block |
| Yellow | Safe target block |
| Orange | Current head |
| Gray | Default (no special status) |
Layout
- Y axis: slot number (time flows downward)
- X axis: fork spreading — branches appear when competing chains exist
- Circle size: scaled by `weight / validator_count`; larger circles have more attestation support
Interactive features
- Tooltips: hover any block to see root hash, slot, proposer index, and weight
- Auto-polling: the page fetches fresh data every 2 seconds
- Auto-scroll: the view follows the head as the chain progresses
What to look for
- Single vertical chain: healthy consensus, no forks
- Horizontal branching: competing chains — check attestation weights to see which branch validators prefer
- Color transitions: blocks turning green as finalization advances
- Stalled finalization: if justified/finalized slots stop advancing, check validator attestation activity
JSON API
curl -s http://localhost:5052/lean/v0/fork_choice | jq .
Response schema:
{
"nodes": [
{
"root": "0x...",
"slot": 42,
"parent_root": "0x...",
"proposer_index": 3,
"weight": 5
}
],
"head": "0x...",
"justified": { "root": "0x...", "slot": 10 },
"finalized": { "root": "0x...", "slot": 5 },
"safe_target": "0x...",
"validator_count": 8
}
| Field | Description |
|---|---|
| `nodes` | All blocks in the live chain (from the finalized slot onward) |
| `nodes[].weight` | Number of latest-message attestations whose target is this block or a descendant |
| `head` | Current fork choice head root |
| `justified` | Latest justified checkpoint |
| `finalized` | Latest finalized checkpoint |
| `safe_target` | Block root selected with a 2/3 validator threshold |
| `validator_count` | Total validators in the head state |