Noir
For users of Noir and Nargo.
This tutorial targets the v0.17.0 release of Noir with Aztec's barretenberg proving scheme.
Sindri's proving API has been tested against other versions of Noir, with moderate compatibility at v0.10.0 and above.
The quickest method to verify compatibility with your circuit is to attempt compilation by mimicking the API tutorial.
Introduction​
The promise of ZKML is that we can augment a machine learning model with the guarantee that it was (1) faithfully used to make a prediction and/or (2) ethically trained. The ZKML community has made swift advances toward making the former, so-called verifiable inference, feasible. However, the latter, so-called verifiable training, is generally a much greater lift. In this tutorial, we demonstrate a simple example of verifiable training in which an unsupervised machine learning algorithm is used to find the most influential nodes in a network.
PageRank was developed by the co-founders of Google in order to solve the following problem: given a graph made up of websites and the hyperlinks between them, which sites are of most central appeal? Because the results of this algorithm are so pivotal to a website's success, it has been heavily scrutinized for vulnerabilities. From these vulnerabilities, new strategies for dishonest link manipulation have emerged.
This is exactly the paradigm where verifiable training provides a solution by bridging a trust gap. Consider the following scenario: you are a participant in a network and want the results of PageRank but do not have the compute power or all of the requisite data to produce this ranking. A third party could provide this ranking for you, but perhaps you do not trust them because they are also participants within the network and would benefit by artificially raising the importance of their own nodes. Codifying PageRank in a ZK circuit allows the third party to produce the ranking along with a proof of its integrity.
The complete circuit code as well as the Sindri API script can be found here along with other example projects in our sindri-resources repository. You can also check out the Noir documentation for more general background information on using Noir.
PageRank Algorithm​
PageRank is modelled after a person randomly following hyperlinks as they search for their desired content. At each iteration of random surfing, a person will jump to a random page with probability 1 - d and will click on a link from the current page with probability d (the damping factor, here 0.85). All of the outbound links from the current page will receive equal "boosts" from the score of the current page. After several iterations of this message passing, the most central nodes should have received the greatest boosts.
More formally, we will implement PageRank via the following concepts:
- Instead of expressing the network as a series of nodes and edges, we will use a normalized version of its adjacency matrix so that iterations of PageRank are accomplished by matrix multiplication.
- The running tally of PageRank scores is stored as a probability vector with length equal to the number of nodes. Its initialization is uniform, meaning every node in the network has the same score in the beginning.
- As a final post-processing step, we will sort the probability vector so that the output of our circuit is a list of nodes ordered by PageRank scores (rather than revealing the raw scores).
// Set some hyperparameters and initialize structures
d = 0.85 // damping factor
num_iterations = 10
probabilities = [1/N, 1/N, ..., 1/N] // N := number of nodes
M = adjacency_matrix(input_edgelist)
M = normalize_column_sums(M) // special steps are taken for isolated nodes
M = d*M + (1-d)/N // i.e. apply formula to every matrix entry
for i in 0..num_iterations:
    probabilities = M @ probabilities // matrix-vector multiplication
return argsort(probabilities)
Circuit​
Similar to our Halo2 API Tutorial, the principal issue with implementing the algorithm described in the previous section is that non-integer values are involved whereas proving must take place over a finite field.
Several excellent Noir libraries exist for quantized and floating point arithmetic, but for our purposes we do not need to implement arbitrary operations and can save a large number of constraints by performing our own quantization (which we never have to dequantize). In particular, we scale up all of our integer arithmetic by a factor of SCALE = 20000, so that overflow issues due to quantized field arithmetic performed inside the circuit are virtually nonexistent. This choice of scale factor would necessarily increase if we decided to increase our network size from its current configuration with NUM_NODES = 20.
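To see how the scaling works, here is a small illustrative model of the fixed-point arithmetic using JavaScript BigInts (the circuit itself operates on field elements; the specific values below are examples, not taken from the circuit's input):

```javascript
// Fixed-point sketch: rational values are represented as integers scaled by
// SCALE, so only integer arithmetic ever happens "inside the circuit".
const SCALE = 20000n;
const N = 20n; // NUM_NODES

// The uniform initial probability 1/N quantizes to SCALE / N.
const uniformScore = SCALE / N; // 1000n represents 1/20 = 0.05

// A damped matrix entry d*m + (1-d)/N with d = 0.85 and m = 0.5 becomes:
const d = 17000n; // 0.85 * SCALE
const m = SCALE / 2n; // 0.5 after column normalization
const damped = (d * m) / SCALE + (SCALE - d) / N; // integer result: 8650n
console.log(uniformScore, damped); // → 1000n 8650n
```

Here 8650n represents 0.85 * 0.5 + 0.15 / 20 = 0.4325 after multiplying by SCALE, so the quantized arithmetic tracks the real-valued formula.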
With the current choices of global variables, the result of compiling the circuit code indicates that our circuit is already fairly sizable:
+----------+------------------------+--------------+----------------------+
| Package | Language | ACIR Opcodes | Backend Circuit Size |
+----------+------------------------+--------------+----------------------+
| pagerank | PLONKCSat { width: 3 } | 344141 | 349595 |
+----------+------------------------+--------------+----------------------+
Our circuit code prioritizes understandability over (constraint) efficiency. We leave it as an exercise for budding Noir developers to implement a better sorting algorithm or sparse matrix operations.
Circuit Configuration​
Our circuit directory looks very similar to the standard Noir project.
During local development, you may generate files such as Prover.toml, Verifier.toml, targets/*.json, etc. from interacting with Nargo. It is okay to include those with your upload, though they aren't used by Sindri during circuit compilation.
📦circuits
┣ 📂src
┃ ┗ 📜main.nr
┣ 📜Nargo.toml
┗ 📜sindri.json
For any circuit, you always need a sindri.json in the top level of your upload. There are a few key pieces of information here that are required for Noir circuits:
- The circuitType tells the Sindri API circuit compiler which framework your uploaded circuit is written in.
- The provingScheme specifies that our proofs will be generated with Aztec's barretenberg backend.
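As a concrete illustration, a minimal sindri.json might look like the following. This is a sketch: the exact key names beyond circuitType and provingScheme (for example, how the circuit name is specified) are assumptions, so consult the actual file in the sindri-resources repository:

```json
{
  "circuitType": "noir",
  "circuitName": "pagerank",
  "provingScheme": "barretenberg"
}
```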
The Nargo.toml configuration file gives our circuit the package name pagerank. Later in this tutorial, we will assign the same name to our circuit, but that is not a Sindri API requirement. We also specify that our circuit is written according to Noir v0.17.0 (see the note at the top of this page for compatibility information). Finally, our package does not require any external Noir dependencies, although those are allowed when using Sindri's API.
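A minimal Nargo.toml matching the description above might look like the following sketch (the exact field set may differ in the repository; check sindri-resources for the real manifest):

```toml
[package]
name = "pagerank"
compiler_version = "0.17.0"

[dependencies]
```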
The main function of our circuit follows the same steps we provided in the pseudocode block. We omit the auxiliary function definitions here, but full details are available by following the GitHub link to our sindri-resources repo.
Sindri API​
We'll walk through the steps to upload the packaged circuit to Sindri, generate proofs, and verify those proofs locally with Nargo v0.17.0. The following codeblock will pull the necessary code, install any dependencies, and run the proving script. You will need to replace <my-key> with your Sindri API key to authenticate all requests to the API (see #api-authentication).
# Clone the sindri-resources repository and move into the `pagerank` tutorial.
git clone https://github.com/Sindri-Labs/sindri-resources.git
cd sindri-resources/circuit_tutorials/noir/pagerank/
# Install dependencies from `package.json`
npm install
# Replace <my-key> with your own Sindri API key
SINDRI_API_KEY=<my-key> node compile.mjs
The first chunk of the script simply imports the dependencies and configures axios to authenticate with and use the Sindri API.
Upload & Compile​
To upload and compile our circuit, we first need to create a gzipped tar archive of our circuit directory and upload it to the /circuit/{circuit_id}/create endpoint as multipart form data. Our sindri.json file will be included in the tarball and indicates that the circuitType is noir and that pagerank is the name of the circuit. Uploading this tarball will automatically trigger circuit compilation.
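The multipart form can be assembled with the FormData/Blob globals available in Node 18+. This is a hedged sketch: the form field name "files" and the placeholder tarball bytes are assumptions, so check compile.mjs in sindri-resources for the exact request:

```javascript
// Assemble the multipart form body for the circuit upload.
// In the real script, tarballBytes would be the gzipped tar archive of the
// circuit directory; here we use placeholder bytes (the gzip magic number).
const tarballBytes = new Uint8Array([0x1f, 0x8b]);

const form = new FormData();
form.append(
  "files", // field name is an assumption
  new Blob([tarballBytes], { type: "application/gzip" }),
  "pagerank.tar.gz",
);

// The form would then be POSTed, with the API key header, to the create endpoint.
console.log(form.get("files").name); // → "pagerank.tar.gz"
```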
Output - Create Circuit
Circuit ID: 10cbd235-5745-4907-9ef1-0af9dd57115c
The returned circuit ID can then be used to perform additional operations related to the circuit such as fetching its current state or generating proofs once it has compiled successfully.
First, we will poll the /circuit/${circuitId}/detail endpoint until the circuit status is Ready.
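The polling pattern can be sketched as follows. The detail-fetching function is injected so the loop can be exercised without network access; in the real script it would be an authenticated GET to the detail endpoint (the interval and attempt limits here are illustrative choices, not values from compile.mjs):

```javascript
// Poll until the reported status settles at "Ready" or "Failed".
async function pollUntilReady(getDetail, { intervalMs = 1000, maxAttempts = 600 } = {}) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const detail = await getDetail();
    if (detail.status === "Ready" || detail.status === "Failed") return detail;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("Timed out waiting for status to settle");
}

// Simulate a circuit that becomes Ready on its third poll.
const statuses = ["Queued", "In Progress", "Ready"];
const result = await pollUntilReady(async () => ({ status: statuses.shift() }), { intervalMs: 1 });
console.log(result.status); // → "Ready"
```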
Output - Circuit Polling
Polling succeeded after 12.9 seconds.
Output - Circuit Detail
{
"circuit_id": "10cbd235-5745-4907-9ef1-0af9dd57115c",
"circuit_name": "pagerank",
"date_created": "2023-11-16T15:19:43.587Z",
"status": "Ready",
"compute_time": "P0DT00H00M10.588363S",
"compute_times": {
"total": 11.73930806107819,
"queued": 3.317454,
"clean_up": 0.0012714648619294167,
"nargo_checks": 10.459533462300897,
"extract_code_tarfile": 1.278257817029953
},
"file_sizes": {
"total": 1908348,
"total_gb": 0.001908348,
"total_mb": 1.908348,
"code.tar.gz": 1908348
},
"metadata": { "api_version": "v1.5.9", "prover_backend_version": "v0.3.0" },
"worker_hardware": {
"CPU": {
"arch": "X86_64",
"bits": 64,
"count": 28,
"model": 1,
"family": 25,
"stepping": 1,
"brand_raw": "AMD EPYC 7443P 24-Core Processor",
"l2_cache_size": 12582912,
"l3_cache_size": 524288,
"vendor_id_raw": "AuthenticAMD",
"python_version": "3.10.12.final.0 (64 bit)",
"arch_string_raw": "x86_64",
"hz_actual_friendly": "2.3953 GHz",
"l1_data_cache_size": 786432,
"l2_cache_line_size": 512,
"cpuinfo_version_string": "9.0.0",
"hz_advertised_friendly": "2.3953 GHz",
"l2_cache_associativity": 6,
"l1_instruction_cache_size": 786432
},
"GPU": { "gpus": [], "num_gpus": 0 },
"RAM": { "virtual_memory_total_gb": 64 }
},
"verification_key": null,
"error": null,
"circuit_type": "Noir",
"nargo_package_name": "pagerank",
"acir_opcodes": 344141,
"circuit_size": 349595
}
After circuit compilation has completed, we're ready to generate proofs for arbitrary circuit inputs.
Prove​
We can generate new proofs using the /circuit/${circuitId}/prove endpoint and then follow a similar polling pattern with the /proof/${proofId}/detail endpoint until the status is Ready.
Output - Polling
Proof ID: 5222ef98-b6b7-4433-8874-6322d72cb989
Polling succeeded after 17.4 seconds.
Output - Proof
{
"proof": "2d2ed78f8f57aec311d74b2c4d86bb25a91f919c81a324c8aa86b651d94e7a821e2bf1d349a13f03699f16c706a4731bdede81a4ca9224b489e5b04ab43ea15a1b47bdcaa9f3823fc9e2fedeba9a7d35b84132ca85e16471499d990dc71f2b3a05667ed70a8fb419158b875d18fefba5e10a513b75ee9978f0a08b3bf037bc4310eb15039b0866627c481b372873fbe8c60828d03fb762b0e98054fea5853d5a0cead05c0e290b475284eb57bacca6f2c1cea7b15696286ea62e47c58035312a219b6f13aa93188337b21547bbf8c4c1cd277387662afa10ebe3d164bef229ef033548602502ef2d13418d54233aeb2191a9e96ce49820ea4bdb7388a468457a036baa70063421737dbc6da656e6b47888f4ed0240f8ecf52ca427c315d084ab1b02347ead3fcb04a750e17c516eb8e587add7e9d8b69bfb99fba41ea7b3ee571c238c2ad858aa81c29023219e4e8f5467220d4b2707bcaa51035278117e197326b63ccd8999474e1348e493a722efbb1e7f183b95369f40d9e391f5c76a2a7f12b583c0bc58f35c8e13251111a524bdc598b74d3a182de7782266beab5a128b2abb6d6c62e933914fa0565224987c447997efa3ab383ceff5c231d782933089205620b3c82e2e6a269ae64c01ea840b8b53026d3a982d025ad9a72703be45a2249388b998385fb6431cad68c5130bfe031cd7f3c052dccd64505bb302b7b3202093722dd7ffa4ca6a6717ac1b2c623004ffb567729e0ffd0233b5f29695c59c2b9912c71cb283fc39ed6244b7926e1787b60974d9ef3bb58bbd45d25e8d118409b1a651564de193ccf38c2b248d52399f654e0c6bce9e608e4c444059b89cc11aab2c053207eb633bc1ac4e4ba1ea3c9d0253d9bd2d43831abff3ab88c55cb108fd6374c212187268f1f6faf583b5222f55d9c7310f65628e5815197d1fce500ab3b1ed9d54fa9e43aaf61570d5bef233734aa1df9f2ad95d0e65fdba5ef22a18d5246f7fc05c58f52e198f0ed996dee3346744005d63db79ef7843846d74a707141daafb618703bb1e077c674f4ef59efe93f9256bfa2a4ad4dec95e93553e2e898bb6fba66797c46ecb6e78159d86380504b9e51c624340b8583080b4853426013ff069b600e24e398a487c3f08a3e47b068c621047d66d0c51c3260dcd05107daadc649b12372bf10df96fc78b34c447ac201725e1a2032bb169bb3c0df229c3d62f94401711bf1378af293a0e74d4c6fa964c2cf0608e41e48f6d34b2931d159b858dda965cbaddb33fa3e15ba936e7ffeaabccd367e96200f2a0998ec523989a6e4051be3400a5a481a472a49377eda1de47f75c3791fb3b258851ec1c08ce4ce064ca8bdb6be4f9bfd3a33c1dbfa7e7311a6e52d1d7029e1b87bbff3313345c
6d8d9c5ee45d6e6745fb620cab0ede9c61906548b40f01827279d5323109fde0bb00a935ee94045b168d059937cd8bfdd04ed3882549e40e55521f5f37255f25391d2f01eff545558f81545f53efd0799fc82485ed86ad6944476f903923f9e2fb448e7f630bbc5dcb498c747f6ea8937b7d1074a2a0de98b3d42f54a507a30164c49283e0b15da1c143303abc4325c82a688c05b69b9a52e1c237f0de10c67179e7f833ffaeb83fd891b10ee24ced0100c7baab378f8433a189e7e7352b6bfa2281d2516bc3564ac3b04a09b82db7d689f2898b5731550f1f21662fd2053b41f91d683de0dfdfa353a6182e080f0fe70384d78c18a523e12d4fc6dfcc1609ab805c9b079c770f76ff4c936eb65fb61e9da42f4810111494d171ae31a824c6e5ddd52aa1d98913c9b6b4d4dac50fd2c9ff59c4d8449677fb81d78a2dac186570e4953d7353153262ee19962afce0635394d8fda13cd726c7454c604deb0a2d43133812e59645ed8b373c85eb75e59fc6184bb7b22cf58b120766d415f819a26e8c16f7571e8992dfe094b727622227d8879ce66ebe148570718c883fc223d604d593bee3bd5e6c7e270c06a40f2bb3d149a8ede010cc363813aae8efbd2e099b1f1086705c33461c6d835620bc353fca0bb4f5516383e6ffb5c9499fb807d8e2f5ac1c5cd14fcf74fd7924450c1697da8547435224f7b5d1c3f7aa4fb20f6ed8429a2fca7fb4b9419a1d67aab5189bdfc590defd6b5cd4a8cf6e278fc70f2c5883a38fce34c2cb9f548821b4e458788a2461dbeb6880806973abcbf7711f1c16f77a5f894a0dcd3cb2ab09c520c3a06f039f7f1c79542214fcece770671f8bde82874f460c9c3a58e29877bc57e430092a11b2468bd080c48e2d5ca9d220c969d840b2209e245a815499db31b49735dba5e30ecb212c0b3f7a7c3bca682c0c32cea61a77d23050eb793821bdfe955a8308aba22bb7d62643ee04cd68231dcd9e42895d3842791851ee9beb747aeeede9bcb10e35487920cb53fa723fdc052163ab3eff67f7073a95ed1e88094d7bff8ec3f86322df9eb9ef96da636a4e1a4b83cd9887a6d91a3a704ee77d811f9a7412f1c836cfcc758c7bfb70bdebba13eb87e7acebdb2bcdbc20c9df9d784cc2a3b29a01dcf7ab8c055ca4cf4712551874ef12f85a6f3353ed854dd1dc9c6ba92a4f91c2e391c284362642e6d06e3d1bb19c51230b9498ca9953bffa675a2d37da079f5d75fdafbf304d3ffe5f02872706fb84fb1bacd7e8341a72fa5b7cf2cac364d652397fff3b8104b3a858ace1263f349bd299d89116dc6c2d59c2ffff42fa048cceeed69dbace45f64d0abb4925776db2aa18044a4584bde7b92a830bbb30a4434ba42d3c3a1b8738f1bcc9b124afa6c981963003742d0f
a218920618336743f9c85983dab968c87b966ed81915a41c473c1709e0341d7f4c4c8137b98149dfba453f30bd4d51dbf8319b15c90ce18bbe98b1a41b989dd31a2d939347c65484e7a39712313a6836a96dacb30e09fa22335d510f4f94794ed220ad674a2e44112b83d6ff36beb1cd2d90bfdf6909248760f1b6ed1303c2fdba8ebec401735e71b078180c83cd16bba65b8c609a"
}
Output - Public Inputs
{
'Verifier.toml': 'return = ["0x0000000000000000000000000000000000000000000000000000000000000005", "0x0000000000000000000000000000000000000000000000000000000000000011", "0x0000000000000000000000000000000000000000000000000000000000000012", "0x000000000000000000000000000000000000000000000000000000000000000c", "0x0000000000000000000000000000000000000000000000000000000000000006", "0x0000000000000000000000000000000000000000000000000000000000000004", "0x0000000000000000000000000000000000000000000000000000000000000001", "0x0000000000000000000000000000000000000000000000000000000000000009", "0x000000000000000000000000000000000000000000000000000000000000000b", "0x0000000000000000000000000000000000000000000000000000000000000000", "0x0000000000000000000000000000000000000000000000000000000000000010", "0x0000000000000000000000000000000000000000000000000000000000000007", "0x000000000000000000000000000000000000000000000000000000000000000a", "0x000000000000000000000000000000000000000000000000000000000000000d", "0x0000000000000000000000000000000000000000000000000000000000000003", "0x0000000000000000000000000000000000000000000000000000000000000013", "0x000000000000000000000000000000000000000000000000000000000000000f", "0x0000000000000000000000000000000000000000000000000000000000000008", "0x0000000000000000000000000000000000000000000000000000000000000002", "0x000000000000000000000000000000000000000000000000000000000000000e"]\n'
}
The proof detail response contains everything that we need to verify the proof locally. Our final act within the script is to route the relevant fields from the proof detail response into the usual locations of a Nargo project so that we can verify locally.
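The routing step can be sketched like so. The stand-in values mirror the outputs shown above (truncated), and the sketch writes into a temporary directory so it stands alone; the real script writes into the circuit project itself:

```javascript
import { mkdtempSync, mkdirSync, writeFileSync, readFileSync } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

// Stand-ins for fields of the proof detail response (truncated/illustrative).
const proofHex = "2d2ed78f8f57aec311d74b2c4d86bb25";
const verifierToml = 'return = ["0x05", "0x11"]\n';

// nargo verify expects ./proofs/<package>.proof and ./Verifier.toml.
const projectDir = mkdtempSync(join(tmpdir(), "pagerank-"));
mkdirSync(join(projectDir, "proofs"));
writeFileSync(join(projectDir, "proofs", "pagerank.proof"), proofHex);
writeFileSync(join(projectDir, "Verifier.toml"), verifierToml);

const written = readFileSync(join(projectDir, "Verifier.toml"), "utf8");
console.log(written.startsWith("return")); // → true
```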
Verify​
If you do not already have Nargo, you can follow Noir's installation instructions to download and install the command line utility that will verify our proof.
Once we have moved into the appropriate directory, we use Nargo to read the contents of ./proofs/pagerank.proof and Verifier.toml and perform verification.
cd circuits/
nargo verify
# Print the return code of the previous command
echo $?
The output of the codeblock above is intended to be 0 for a successful proof. If you alter the contents of either the proof or the public data, then you will see the specific error as well as a nonzero return code.
Output - Verification
0
While the verification route provided above is expedient for learning and developing against Sindri's API, it is not recommended that you follow this step when building an end-to-end ZK app. For that use case, you may find the NoirJS package useful.