How this works under the hood
In this section we go over how a few things in the `prove` and `verify` functions are implemented. If you just need to use the prover, then you probably don't need to read this. If you're going through the code to try to understand it, read on.
We will once again use the fibonacci example as an illustration. Recall from the recap that the main steps for the prover and verifier are the following:
Prover side
- Compute the trace polynomial `t` by interpolating the trace column over a set of $2^n$-th roots of unity $\{g^i : 0 \leq i < 2^n\}$.
- Compute the boundary polynomial `B`.
- Compute the transition constraint polynomial `C`.
- Construct the composition polynomial `H` from `B` and `C`.
- Sample an out of domain point `z` and provide the evaluation $H(z)$ and all the necessary trace evaluations to reconstruct it. In the fibonacci case, these are $t(z)$, $t(zg)$, and $t(zg^2)$.
- Sample a domain point `x_0` and provide the evaluations $H(x_0)$ and $t(x_0)$.
- Construct the deep composition polynomial `Deep(x)` from `H`, `t`, and the evaluations from the item above.
- Do `FRI` on `Deep(x)` and provide the resulting FRI commitment to the verifier.
- Provide the merkle root of `t` and the merkle proof of $t(x_0)$.
Verifier side
- Take the evaluation $H(z)$ along with the trace evaluations the prover provided.
- Reconstruct the evaluations $B(z)$ and $C(z)$ from the trace evaluations. Check that the claimed evaluation $H(z)$ the prover gave us actually satisfies the equation defining `H` in terms of `B` and `C`.
- Take the evaluations $H(x_0)$ and $t(x_0)$.
- Check that the claimed evaluation of `Deep(x)` at `x_0` the prover gave us actually satisfies its definition in terms of `H`, `t`, and the out of domain evaluations.
- Take the provided `FRI` commitment and check that it verifies.
- Using the merkle root and the merkle proof the prover provided, check that $t(x_0)$ belongs to the trace.
Following along the code in the `prove` and `verify` functions, most of it maps pretty well to the steps above. The main things that are not immediately clear are:
- How we take the constraints defined in the `AIR` through the `compute_transition` method and map them to transition constraint polynomials.
- How we then construct `H` from them and the boundary constraint polynomials.
- What the composition polynomial even/odd decomposition is.
- What an `ood` frame is.
- What the transcript is.
Reconstructing the transition constraint polynomials
This is possibly the most complex part of the code, so what follows is a long explanation for it.
In our fibonacci example, after obtaining the trace polynomial `t` by interpolating, the transition constraint polynomial is

$$C(x) = \dfrac{t(xg^2) - t(xg) - t(x)}{\prod_{i=0}^{5} (x - g^i)}$$

In our `prove` code, if someone passes us a fibonacci `AIR` like the one we showed above, used in one of our tests, we somehow need to construct $C(x)$. However, what we are given is not a polynomial, but rather this method:
```rust
fn compute_transition(
    &self,
    frame: &air::frame::Frame<Self::Field>,
) -> Vec<FieldElement<Self::Field>> {
    let first_row = frame.get_row(0);
    let second_row = frame.get_row(1);
    let third_row = frame.get_row(2);

    vec![third_row[0] - second_row[0] - first_row[0]]
}
```
So how do we get to $C(x)$ from this? The answer is interpolation. What the method above is doing is the following: if you pass it a frame that looks like this

$$\begin{pmatrix} t(x) \\ t(xg) \\ t(xg^2) \end{pmatrix}$$

for any given point $x$, it will return the value

$$t(xg^2) - t(xg) - t(x)$$

which is the numerator in $C(x)$. Using the `transition_exemptions` field we defined in our `AIR`, we can also compute evaluations in the denominator, i.e. the zerofier evaluations. This is done under the hood by the `transition_divisors()` method.

The above means that even though we don't explicitly have the polynomial $C(x)$, we can evaluate it on points given an appropriate frame. If we can evaluate it on enough points, we can then interpolate them to recover $C(x)$. This is exactly how we construct both the transition constraint polynomials and, subsequently, the composition polynomial `H`.
The job of evaluating `H` on enough points so we can then interpolate it is done by the `ConstraintEvaluator` struct. You'll notice `prove` does the following:
```rust
let constraint_evaluations = evaluator.evaluate(
    &lde_trace,
    &lde_roots_of_unity_coset,
    &alpha_and_beta_transition_coefficients,
    &alpha_and_beta_boundary_coefficients,
);
```
This function call will return the evaluations of the boundary terms and the transition constraint terms of `H` (each already weighted by its corresponding $\alpha$ and $\beta$ coefficients) for every point in the LDE domain. The `constraint_evaluations` value returned is a `ConstraintEvaluationTable` struct, which is nothing more than a big list of evaluations of each polynomial required to construct `H`.
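As a mental model, the table can be pictured roughly like the sketch below; the struct and field names here are illustrative, not the library's actual definition.

```rust
// A rough sketch of what the evaluation table holds: one row per LDE domain
// point, one entry per term of H, so that summing a row gives the evaluation
// of H at that point. Field names and types here are illustrative.
struct ConstraintEvaluationTableSketch {
    // evaluations[i][j] = j-th boundary/transition term of H evaluated at the
    // i-th LDE domain point, already weighted by its alpha/beta coefficients.
    evaluations: Vec<Vec<u64>>,
}

fn main() {
    // A 16-point domain with, say, three terms per point (placeholder zeros).
    let table = ConstraintEvaluationTableSketch {
        evaluations: vec![vec![0; 3]; 16],
    };
    assert_eq!(table.evaluations.len(), 16);
}
```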
With this in hand, we just call
```rust
let composition_poly =
    constraint_evaluations.compute_composition_poly(&lde_roots_of_unity_coset);
```
which simply interpolates the sum of all evaluations to obtain `H`.
Let's go into more detail on how the `evaluate` method reconstructs $C(x)$ in our fibonacci example. It receives the `lde_trace` as an argument, which is this:

$$\{t(\omega^0), t(\omega^1), \dots, t(\omega^{15})\}$$

where $\omega$ is the primitive root of unity used for the `LDE`, that is, $\omega$ satisfies $\omega^2 = g$. We need to recover $C(x)$, a polynomial whose degree can't be more than $t(x)$'s. Because $t(x)$ was built by interpolating `8` points (the trace), we know we can recover $C(x)$ by interpolating it on 16 points. We choose these points to be the LDE roots of unity

$$\{\omega^0, \omega^1, \dots, \omega^{15}\}$$
Remember that to evaluate $C(x)$ on these points, all we need are the evaluations of the polynomial

$$t(xg^2) - t(xg) - t(x)$$

as the zerofier ones we can compute easily. These become:

$$t(\omega^i g^2) - t(\omega^i g) - t(\omega^i) \quad \text{for } 0 \leq i < 16$$

If we remember that $g = \omega^2$, this is

$$t(\omega^{i + 4}) - t(\omega^{i + 2}) - t(\omega^i) \quad \text{for } 0 \leq i < 16$$

and we can compute each evaluation here by calling `compute_transition` on the appropriate frame built from the `lde_trace`. Specifically, for the first evaluation we can build the frame:

$$\begin{pmatrix} t(\omega^0) \\ t(\omega^2) \\ t(\omega^4) \end{pmatrix}$$
Calling `compute_transition` on this frame gives us the first evaluation. We can get the rest in a similar fashion, which is what this piece of code in the `evaluate` method does:
```rust
for (i, d) in lde_domain.iter().enumerate() {
    let frame = Frame::read_from_trace(
        lde_trace,
        i,
        blowup_factor,
        &self.air.context().transition_offsets,
    );

    let mut evaluations = self.air.compute_transition(&frame);
    ...
}
```
Each iteration builds a frame as above and computes one of the evaluations needed. The rest of the code just adds the zerofier evaluations, along with the alphas and betas. It then also computes boundary polynomial evaluations by explicitly constructing them.
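To make this concrete, here is a minimal, self-contained sketch of the same loop, assuming a blowup factor of `2` and using plain integers modulo a toy prime instead of the library's field elements; all names and values here are illustrative.

```rust
// Toy sketch: evaluate the fibonacci transition numerator t(x g^2) - t(x g) - t(x)
// at every LDE domain point, by reading a "frame" of three rows out of the LDE
// trace at offsets of the blowup factor. Constants and names are illustrative.
const P: u64 = 17; // toy prime field modulus

fn sub_mod(a: u64, b: u64) -> u64 {
    (a % P + P - b % P) % P
}

// The frame for the fibonacci AIR is just three rows of the LDE trace.
fn compute_transition_numerator(frame: &[u64; 3]) -> u64 {
    sub_mod(sub_mod(frame[2], frame[1]), frame[0])
}

fn main() {
    // Placeholder values standing in for t(omega^0), ..., t(omega^15).
    let lde_trace: Vec<u64> = (0..16u64).map(|i| (i * i + 3) % P).collect();
    let blowup_factor = 2;

    // One numerator evaluation per LDE domain point; indices wrap around the domain.
    let numerator_evaluations: Vec<u64> = (0..lde_trace.len())
        .map(|i| {
            let frame = [
                lde_trace[i],
                lde_trace[(i + blowup_factor) % lde_trace.len()],
                lde_trace[(i + 2 * blowup_factor) % lde_trace.len()],
            ];
            compute_transition_numerator(&frame)
        })
        .collect();

    // Dividing each of these by the zerofier evaluated at the same point (and
    // then interpolating) would give back C(x).
    assert_eq!(numerator_evaluations.len(), 16);
}
```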
Verifier
The verifier employs the same trick to reconstruct the evaluations on the out of domain point for the consistency check.
Even/odd decomposition for H
At the end of the recap we talked about how in our code we don't actually commit to `H`, but rather to an even/odd decomposition of it. These are two polynomials `H_1` and `H_2` that satisfy

$$H(x) = H_1(x^2) + x H_2(x^2)$$

This all happens in this piece of code:
```rust
let composition_poly =
    constraint_evaluations.compute_composition_poly(&lde_roots_of_unity_coset);

let (composition_poly_even, composition_poly_odd) =
    composition_poly.even_odd_decomposition();

// Evaluate H_1 and H_2 in z^2.
let composition_poly_evaluations = vec![
    composition_poly_even.evaluate(&z_squared),
    composition_poly_odd.evaluate(&z_squared),
];
```
After this, we don't really use `H` anymore, but rather `H_1` and `H_2`. There's not that much to say other than that.
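For intuition, splitting a polynomial into its even and odd parts is just a matter of splitting its coefficients. Here is a minimal sketch of that idea (not the library's implementation), using plain integers as coefficients:

```rust
// A minimal sketch of the even/odd decomposition: given the coefficients of H,
// split them into the coefficients of H_1 and H_2 so that
// H(x) = H_1(x^2) + x * H_2(x^2). Illustrative only.
fn even_odd_decomposition(h_coeffs: &[u64]) -> (Vec<u64>, Vec<u64>) {
    let even = h_coeffs.iter().step_by(2).copied().collect();
    let odd = h_coeffs.iter().skip(1).step_by(2).copied().collect();
    (even, odd)
}

fn main() {
    // H(x) = 1 + 2x + 3x^2 + 4x^3  =>  H_1(y) = 1 + 3y, H_2(y) = 2 + 4y
    let (h1, h2) = even_odd_decomposition(&[1, 2, 3, 4]);
    assert_eq!(h1, vec![1, 3]);
    assert_eq!(h2, vec![2, 4]);
}
```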
Out of Domain Frame
As part of the consistency check, the prover needs to provide evaluations of the trace polynomials in all the points needed by the verifier to check that `H` was constructed correctly. In the fibonacci example, these are $t(z)$, $t(zg)$, and $t(zg^2)$. In code, the prover passes these evaluations as a `Frame`, which we call the out of domain (`ood`) frame.
The reason we do this is simple: with the frame in hand, the verifier can reconstruct the evaluations of the constraint polynomials by calling the `compute_transition` method on the ood frame and then adding the alphas, betas, and so on, just like we explained in the section above.
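As a toy illustration of that last point, this is roughly what the verifier-side reconstruction looks like, with plain integers standing in for field elements; all names and values here are placeholders, not the library's code.

```rust
// Toy sketch: the verifier re-runs the same transition computation the prover
// used, over the provided evaluations t(z), t(z g), t(z g^2).
fn compute_transition(frame: &[i64; 3]) -> i64 {
    frame[2] - frame[1] - frame[0]
}

fn main() {
    // Hypothetical ood frame sent by the prover: [t(z), t(z g), t(z g^2)].
    // Since z is a random out of domain point, this is generally nonzero.
    let ood_frame = [7i64, 11, 25];
    let numerator = compute_transition(&ood_frame);

    // The verifier would then divide by the zerofier evaluated at z and add the
    // alphas and betas to rebuild C(z), and from it the claimed H(z).
    assert_eq!(numerator, 25 - 11 - 7);
}
```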
Transcript
Throughout the protocol, there are a number of times where the verifier randomly samples some values that the prover needs to use (think of the alphas and betas used when constructing `H`). Because we don't actually have an interaction between prover and verifier, we emulate it by using a hash function, which we assume is a source of randomness the prover can't control.
The job of providing these samples for both prover and verifier is done by the `Transcript` struct, which you can think of as a stateful `rng`; whenever you call `challenge()` on a transcript you get a random value and the internal state gets mutated, so the next time you call `challenge()` you get a different one. You can also call `append` on it to mutate its internal state yourself. This is done a number of times throughout the protocol to keep the prover honest so it can't predict or manipulate the outcome of `challenge()`.
Notice that to sample the same values, both prover and verifier need to call `challenge` and `append` in the same order (and with the same values in the case of `append`) and the same number of times.
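To make that ordering requirement concrete, here is a toy transcript sketch; it is not the library's `Transcript` (and uses a non-cryptographic hasher purely for illustration), but it shows the append/challenge mechanics.

```rust
// A toy transcript: the state is hashed to produce challenges; appending data
// mutates the state. std's DefaultHasher is used only for illustration.
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

struct ToyTranscript {
    state: Vec<u8>,
}

impl ToyTranscript {
    fn new() -> Self {
        Self { state: Vec::new() }
    }

    // Absorb data (e.g., a commitment) into the transcript.
    fn append(&mut self, data: &[u8]) {
        self.state.extend_from_slice(data);
    }

    // Squeeze a "random" value; also mutate the state so the next call differs.
    fn challenge(&mut self) -> u64 {
        let mut hasher = DefaultHasher::new();
        self.state.hash(&mut hasher);
        let value = hasher.finish();
        self.state.extend_from_slice(&value.to_le_bytes());
        value
    }
}

fn main() {
    // Prover and verifier must append the same data in the same order
    // to derive the same challenges.
    let mut prover = ToyTranscript::new();
    let mut verifier = ToyTranscript::new();
    prover.append(b"trace merkle root");
    verifier.append(b"trace merkle root");
    assert_eq!(prover.challenge(), verifier.challenge());
}
```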
The idea explained above is called the Fiat-Shamir heuristic, or just *Fiat-Shamir*, and is more generally used throughout proving systems to remove interaction between prover and verifier. Though the concept is very simple, getting it right so the prover can't cheat is not, but we won't go into that here.
Proof
The generated proof contains all the information needed for the verifier to verify it (a simplified sketch of this data follows the list):
- Trace length: The number of rows of the trace table, needed to know the max degree of the polynomials that appear in the system.
- LDE trace commitments.
- DEEP composition polynomial out of domain even and odd evaluations.
- DEEP composition polynomial root.
- FRI layers merkle roots.
- FRI last layer value.
- Query list.
- DEEP composition poly openings.
- Nonce: Proof of work setting used to generate the proof.
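To tie the list together, here is a simplified sketch of a struct carrying that information; the field names and types are illustrative and do not match the library's actual proof struct.

```rust
// Illustrative only: a simplified picture of the data listed above.
#[allow(dead_code)]
struct StarkProofSketch {
    trace_length: usize,                        // number of rows in the trace table
    lde_trace_merkle_roots: Vec<[u8; 32]>,      // LDE trace commitments
    composition_poly_ood_evaluations: Vec<u64>, // H_1 and H_2 evaluated out of domain
    deep_poly_merkle_root: [u8; 32],            // DEEP composition polynomial root
    fri_layers_merkle_roots: Vec<[u8; 32]>,     // one merkle root per FRI layer
    fri_last_value: u64,                        // FRI last layer value
    query_indexes: Vec<usize>,                  // query list
    deep_poly_openings: Vec<(u64, Vec<[u8; 32]>)>, // value + merkle path per query
    nonce: u64,                                 // proof of work nonce
}

fn main() {}
```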
Special considerations
FFT evaluation and interpolation
When evaluating or interpolating a polynomial, if the input (be it coefficients or evaluations) size isn't a power of two then the FFT API will extend it with zero padding until this requirement is met. This is because the library currently only uses a radix-2 FFT algorithm.
Also, right now the FFT only supports inputs up to a certain maximum number of elements.
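Here is a minimal sketch of the padding rule described above; the helper name is hypothetical (the library does this internally).

```rust
// If the input length is not a power of two, extend it with zeros up to the
// next power of two before running a radix-2 FFT. Illustrative only.
fn zero_pad_to_power_of_two(mut input: Vec<u64>) -> Vec<u64> {
    let padded_len = input.len().next_power_of_two();
    input.resize(padded_len, 0);
    input
}

fn main() {
    assert_eq!(zero_pad_to_power_of_two(vec![1, 2, 3]), vec![1, 2, 3, 0]);
    assert_eq!(zero_pad_to_power_of_two(vec![1, 2, 3, 4, 5]).len(), 8);
}
```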
Other
Why use roots of unity?
Whenever we interpolate or evaluate trace, boundary and constraint polynomials, we use some $n$-th roots of unity. There are a few reasons for this:
- Using roots of unity means we can use the Fast Fourier Transform and its inverse to evaluate and interpolate polynomials. This method is much faster than the naive Lagrange interpolation one. Since a huge part of the STARK protocol involves both evaluating and interpolating, this is a huge performance improvement.

- When computing boundary and constraint polynomials, we divide them by their *zerofiers*, polynomials that vanish on a few points (the trace elements where the constraints do not hold). These polynomials take the form

  $$Z(X) = \prod_{i} (X - x_i)$$

  where the $x_i$ are the points where we want it to vanish.

  When implementing this, evaluating this polynomial can be very expensive as it involves a huge product. However, if we are using roots of unity, we can use the following trick. The vanishing polynomial for all the $n$-th roots of unity is

  $$X^n - 1$$

  Instead of expressing the zerofier as a product of the places where it should vanish, we express it as the vanishing polynomial above divided by the *exemptions* polynomial, i.e. the polynomial whose roots are the places where constraints don't need to hold:

  $$Z(X) = \dfrac{X^n - 1}{\prod_{j} (X - e_j)}$$

  where the $e_j$ are now the points where we don't want it to vanish. This *exemptions* polynomial in the denominator is usually much smaller, and because the vanishing polynomial in the numerator is only two terms, evaluating it is really fast (see the sketch after this list).
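Here is a toy numeric check of this trick over a small prime field; the constants (a field of 17 elements, an 8-element trace with two exemptions) are illustrative and only chosen to mirror the fibonacci example.

```rust
// Toy check: instead of evaluating prod_{i=0}^{5} (x - g^i) directly, evaluate
// (x^8 - 1) divided by the exemptions polynomial (x - g^6)(x - g^7).
const P: u64 = 17; // toy prime; 8 divides P - 1, so 8th roots of unity exist

fn pow_mod(mut base: u64, mut exp: u64) -> u64 {
    let mut acc = 1;
    base %= P;
    while exp > 0 {
        if exp & 1 == 1 {
            acc = acc * base % P;
        }
        base = base * base % P;
        exp >>= 1;
    }
    acc
}

fn inv_mod(a: u64) -> u64 {
    pow_mod(a, P - 2) // Fermat's little theorem
}

fn main() {
    let g = pow_mod(3, (P - 1) / 8); // a primitive 8th root of unity mod 17
    let x = 5u64; // some evaluation point outside the 8th roots of unity

    // Naive evaluation: product over the 6 points where the constraint holds.
    let naive: u64 = (0..6).fold(1, |acc, i| acc * ((x + P - pow_mod(g, i)) % P) % P);

    // Fast evaluation: (x^8 - 1) / ((x - g^6)(x - g^7)).
    let numerator = (pow_mod(x, 8) + P - 1) % P;
    let exemptions = (x + P - pow_mod(g, 6)) % P * ((x + P - pow_mod(g, 7)) % P) % P;
    let fast = numerator * inv_mod(exemptions) % P;

    assert_eq!(naive, fast);
}
```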
What is a primitive root of unity?
The $n$-th roots of unity are the numbers $x$ that satisfy

$$x^n = 1$$

There are $n$ such numbers, because they are the roots of the polynomial $X^n - 1$. The set of $n$-th roots of unity always has a *generator*, a root that can be used to obtain every other root of unity by exponentiating. What this means is that the set of $n$-th roots of unity is

$$\{g, g^2, g^3, \dots, g^{n-1}, g^n = 1\}$$

Any such generator `g` is called a primitive root of unity. It's called primitive because it allows us to recover any other root.
Here are a few important things to keep in mind, some of which we use throughout our implementation:
- There are always several primitive roots. If $g$ is primitive, then any power $g^k$ with $k$ coprime with $n$ is also primitive. As an example, if $g$ is a primitive $8$-th root of unity, then $g^3$ is also primitive.

- We generally will not care about which primitive root we choose; what we do care about is being consistent. We should always choose the same one throughout our code, otherwise computations will go wrong.

- Because $g^n = 1$, the powers of $g$ wrap around. This means

  $$g^{n + 1} = g, \quad g^{n + 2} = g^2,$$

  and so on.

- If $\omega$ is a primitive $2n$-th root of unity, then $\omega^2$ is a primitive $n$-th root of unity. In general, if $\omega$ is a primitive $2^k n$-th root of unity, then $\omega^{2^k}$ is a primitive $n$-th root of unity (a quick numeric check of this follows the list).
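A quick numeric check of the last point over a toy prime field; the constants are illustrative.

```rust
// Toy check: 3 is a primitive 16th root of unity modulo 17, so 3^2 should be a
// primitive 8th root of unity.
const P: u64 = 17;

fn pow_mod(mut base: u64, mut exp: u64) -> u64 {
    let mut acc = 1;
    base %= P;
    while exp > 0 {
        if exp & 1 == 1 {
            acc = acc * base % P;
        }
        base = base * base % P;
        exp >>= 1;
    }
    acc
}

fn main() {
    let g = 3u64;
    assert_eq!(pow_mod(g, 16), 1);    // g is a 16th root of unity...
    assert_eq!(pow_mod(g, 8), P - 1); // ...and primitive, since g^8 != 1

    let h = pow_mod(g, 2);            // h = g^2
    assert_eq!(pow_mod(h, 8), 1);     // h is an 8th root of unity...
    assert_eq!(pow_mod(h, 4), P - 1); // ...and primitive: its order divides 8 but is not 4 or less
}
```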
Why use Cosets?
When we perform `FRI` on the `DEEP` composition polynomial, the low degree extension we use is not actually over a set of higher roots of unity than the ones used for the trace, but rather a coset of it. A coset is simply a set of numbers all multiplied by the same element. We call said element the *offset*. In our case, a coset of the $2^n$-th roots of unity with primitive root $\omega$ and offset `h` is the set

$$\{h \omega^i : 0 \leq i < 2^n\}$$
So why not just do the LDE without the offset? The problem lies in how we construct and evaluate the composition polynomial `H`. Let's say our trace polynomial was interpolated over the $n$-th roots of unity with primitive root $g$, and we are doing the LDE over the $2n$-th roots of unity with primitive root $\omega$, so that $\omega^2 = g$ (i.e. the blowup factor is `2`).
Recall that `H` is a sum of terms that include boundary and transition constraint polynomials, and each one of them includes a division by a *zerofier*; a polynomial that vanishes on some of the roots of unity $g^i$. This is because the zerofier is what tells us which rows of the trace our constraint should apply on.
When doing `FRI`, we have to provide evaluations over the LDE domain we are using. If we don't include the offset, our domain is

$$\{1, \omega, \omega^2, \dots, \omega^{2n - 1}\}$$
Note that, because $\omega^2 = g$, some of the elements of this set (actually, half of them) are powers of $g$. If while doing `FRI` we evaluate `H` on them, the zerofier could vanish and we'd be dividing by zero. We introduce the offset to make sure this can't happen.
NOTE: a careful reader might note that we can actually evaluate `H` on the elements $g^i$, since on a valid trace the zerofiers will actually divide the polynomials in their numerator. The problem still remains, however, because of performance: we don't want to do a polynomial division if we don't need to, as it's much cheaper to just evaluate numerator and denominator and then divide. Of course, this only works if the denominator doesn't vanish; hence, cosets.
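As a final toy check of why the offset matters, here is a sketch over a field of 97 elements (where 16th roots of unity exist); the constants are illustrative.

```rust
// Toy check: without an offset, the zerofier x^8 - 1 vanishes on half of the
// 16-point domain {omega^i}; with the offset h it vanishes nowhere.
const P: u64 = 97;

fn pow_mod(mut base: u64, mut exp: u64) -> u64 {
    let mut acc = 1;
    base %= P;
    while exp > 0 {
        if exp & 1 == 1 {
            acc = acc * base % P;
        }
        base = base * base % P;
        exp >>= 1;
    }
    acc
}

fn main() {
    let omega = 8u64; // a primitive 16th root of unity modulo 97
    let h = 5u64;     // offset: a generator of the whole multiplicative group

    // x^8 - 1 vanishes exactly when x is an 8th root of unity, i.e. x^8 == 1.
    let vanishing_without_offset = (0..16u64)
        .filter(|&i| pow_mod(pow_mod(omega, i), 8) == 1)
        .count();
    assert_eq!(vanishing_without_offset, 8); // half the domain: division by zero risk

    let vanishing_with_offset = (0..16u64)
        .filter(|&i| pow_mod(h * pow_mod(omega, i) % P, 8) == 1)
        .count();
    assert_eq!(vanishing_with_offset, 0); // the coset avoids the zerofier's roots
}
```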
on the elements , since on a valid trace the zerofiers will actually divide the polynomials on their numerator. The problem still remains, however, because of performance. We don't want to do polynomial division if we don't need to, it's much cheaper to just evaluate numerator and denominator and then divide. Of course, this only works if the denominator doesn't vanish; hence, cosets.