# Sagittal MSM: A Streamlined MSM Implementation

*TL;DR — Our streamlined approach uses a deterministic ordering of bucket updates and three orders of magnitude fewer buckets, and removes the logic for double/double-add, queuing, scheduling, and conflict detection/resolution. The result is at least a 90% reduction in the number of EC point additions compared to the pseudo-naive approach.*

Sindri Labs has developed a streamlined implementation of multi-scalar multiplication (MSM), the computation of the inner product of a vector of large (e.g. 256-bit) scalars with a vector of points on an elliptic curve (EC) whose coordinates lie in a finite field $\mathbb{F}_q$ with $q$ roughly 384 bits in size. Our improvements target GPU and FPGA implementations.

The well-known algorithm of Pippenger (the Bucket Algorithm) allows the computation to use a remarkably small number of EC point additions, but at a cost: proposed implementations must maintain on the order of $2^{20}$ partial sums (buckets) of EC points. Moreover, the bucket updates arrive in a random order and can stall the EC point addition pipeline whenever two additions to the same bucket are not separated in time by at least the length of the pipeline. The conflict-resolution strategies this requires involve relatively complex logic.
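To make the bucket method concrete, here is a minimal sketch in Python. Ordinary integer addition stands in for EC point addition (so the arithmetic can be checked directly), and the scalar and window sizes are shrunk to toy scale; the control flow — digit extraction, per-window buckets, suffix-sum aggregation — mirrors the real algorithm. All names are illustrative.

```python
# Toy-group sketch of Pippenger's bucket method: integer addition stands
# in for EC point addition, so results can be verified directly.

WINDOW_BITS = 4       # real implementations use larger windows (~2^16+ buckets)
SCALAR_BITS = 16      # toy scalars; the post's scalars are 253 bits

def pippenger_msm(scalars, points):
    """Compute sum(s_k * G_k) with the bucket method."""
    num_windows = SCALAR_BITS // WINDOW_BITS
    total = 0
    # Process windows from most to least significant.
    for w in reversed(range(num_windows)):
        # One bucket per nonzero digit value in this window.
        buckets = [0] * (1 << WINDOW_BITS)
        for s, g in zip(scalars, points):
            digit = (s >> (w * WINDOW_BITS)) & ((1 << WINDOW_BITS) - 1)
            if digit:
                buckets[digit] += g   # bucket updates hit random buckets
        # Aggregate sum_d (d * buckets[d]) with a running suffix sum.
        running, window_sum = 0, 0
        for d in reversed(range(1, 1 << WINDOW_BITS)):
            running += buckets[d]
            window_sum += running
        # Shift prior windows up by 2^WINDOW_BITS (doublings on a real curve).
        total = (total << WINDOW_BITS) + window_sum
    return total

scalars = [13, 200, 31337, 7]
points = [3, 5, 11, 2]
assert pippenger_msm(scalars, points) == sum(s * g for s, g in zip(scalars, points))
```

Note that the inner bucket updates are exactly the randomly ordered writes described above: two nearby iterations may target the same bucket, which is the hazard a pipelined hardware implementation must detect and resolve.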

Our proposed solution falls short of the theoretical minimum number of EC point additions required for an MSM with vectors longer than $2^{20}$, but it offers several advantages that more than make up the difference.

Our solution:

- Has a simple, deterministic ordering of bucket updates. This guarantees no conflicts and obviates the need for logic to detect and resolve them (queues and schedulers) that the Pippenger approach requires.
- Removes the double/double-add logic used to perform an efficient scalar multiply with additions.
- Utilizes projective coordinates in Montgomery form.
- Reduces the memory footprint for the buckets by three orders of magnitude; it uses $2^{8}$ buckets instead of $2^{20}$ buckets. With so few buckets, aggregating the buckets in the final phase is of negligible cost. Therefore, we can move our finalized buckets back to the host (CPU) and aggregate them there. On an FPGA implementation, this frees block RAM because we do not have to implement the aggregation algorithm on the device.
- Allows for the precomputation of EC point additions. This is a one-time computation for all proofs that use the same common reference string (CRS). It increases the size of the CRS, so it comes at the expense of either a higher data transfer rate to the device or a larger amount of device RAM. The number of precomputed EC point additions is an adjustable parameter.
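As an illustration only — the post does not spell out the exact scheme — the list above is consistent with a digit-decomposition design in which the CRS is expanded with precomputed shifted copies of each base point, so that every 8-bit scalar digit lands in one of just $2^8$ buckets. The sketch below models that reading, again with integer addition standing in for EC point addition; the function names are ours, and the sketch does not model the hardware scheduling that makes the update order conflict-free.

```python
# Hedged sketch of one possible reading of the streamlined scheme: the CRS
# is expanded with precomputed points 2^(8w) * G_k, so each scalar digit
# targets one of only 2^8 buckets. Integer addition stands in for EC point
# addition; names are illustrative, not Sindri's actual API.

DIGIT_BITS = 8
SCALAR_BITS = 256     # padded up; the post's scalars are 253 bits

def precompute_crs(points):
    """One-time CRS expansion: shifted copies of every base point."""
    windows = SCALAR_BITS // DIGIT_BITS
    # On a real curve this would be (2^(8w)) * G_k, built by doublings.
    return [[g << (DIGIT_BITS * w) for w in range(windows)] for g in points]

def streamlined_msm(scalars, shifted_points):
    buckets = [0] * (1 << DIGIT_BITS)           # just 2^8 buckets in total
    for s, shifts in zip(scalars, shifted_points):
        for w, g_shifted in enumerate(shifts):  # fixed, deterministic order
            digit = (s >> (DIGIT_BITS * w)) & ((1 << DIGIT_BITS) - 1)
            if digit:
                buckets[digit] += g_shifted     # one addition per nonzero digit
    # Aggregating 256 buckets is cheap enough to hand off to the host CPU.
    return sum(d * b for d, b in enumerate(buckets))

shifted = precompute_crs([7, 3])
assert streamlined_msm([5, 300], shifted) == 5 * 7 + 300 * 3
```

Under this reading, each scalar costs at most one addition per 8-bit digit, and the precomputed shifts are what eliminate the per-window doublings and the double/double-add logic.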

The combined effect of a reduced memory footprint and simpler control logic offers an opportunity to realize multiple EC point addition pipelines on the FPGA, with a consequent improvement from parallelization. We anticipate that our EC point addition pipeline will be shorter than the schedule of partial-sum updates, which gives us the freedom to break up particular pipeline bottlenecks wherever doing so allows a higher clock rate.

Furthermore, this streamlined approach adapts well to GPUs and may open the door to more performant GPU zk-prover systems: by omitting queues, scheduling functions, conflict-resolution functions, caching, and double/double-add logic, it allows for much greater GPU utilization.

# Comparison Against the Pseudo-Naive Approach

**Pseudo-naive approach problem definition**

Each three-dimensional point, $G_k$, of an array of points $G$ must be scaled, i.e., added to itself a corresponding number of times $s_k$, where $s$ is an array of the same length, $N$, as $G$. The sum of all the scaled points, $R_f$, must be computed and returned.

Let

$$\large R_f = \sum_{k=0}^{N-1} s_k * G_k$$

where $N$ is on the order of 100 million, $G_k$ is a three-element array $(x, y, z)$ of 377-bit integers, $s_k$ is a single 253-bit integer, and $s_k * G_k$ is an elliptic curve point scalar multiply that utilizes the double/double-add method, which uses at most two EC point additions per bit of $s_k$ ($1.5$ per bit on average), i.e. at most $2 \cdot 253$ additions per term.
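For reference, the double/double-add loop looks like the following (with integers again standing in for EC points). Counting operations confirms the cost: one doubling per scalar bit plus one addition per set bit, i.e. at most two group operations per bit, roughly 1.5 per bit on average for random scalars.

```python
# Double-and-add scalar multiplication, with integers standing in for
# EC points so the result can be checked directly.

def double_and_add(s, g):
    """Return (s * g, number of group operations used)."""
    acc, ops = 0, 0
    for bit in bin(s)[2:]:                 # most significant bit first
        acc, ops = acc + acc, ops + 1      # always double
        if bit == "1":
            acc, ops = acc + g, ops + 1    # add only when the bit is set
    return acc, ops

result, ops = double_and_add(0b101101, 7)  # 45 * 7: 6 doublings + 4 adds
assert result == 45 * 7
assert ops <= 2 * 6                        # at most 2 ops per scalar bit
```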

**Comparison**

Our approach results in at least a 90% reduction in the number of EC point additions when compared to the pseudo-naive approach described above.
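A back-of-envelope check of that figure, assuming 253-bit scalars at roughly 1.5 additions per bit for double/double-add, and — as our own assumption, since the post does not state its per-scalar cost — one addition per 8-bit digit for the streamlined approach:

```python
# Back-of-envelope check of the ">= 90% reduction" claim. The streamlined
# cost of one addition per 8-bit digit is our assumption, not a figure
# stated in the post.

SCALAR_BITS = 253
pseudo_naive_per_scalar = 1.5 * SCALAR_BITS    # ~380 additions per scalar
streamlined_per_scalar = -(-SCALAR_BITS // 8)  # 32 digits, ceil division

reduction = 1 - streamlined_per_scalar / pseudo_naive_per_scalar
print(f"reduction: {reduction:.1%}")           # comfortably above 90%
```

Both costs scale linearly in $N$, so the per-scalar ratio is also the ratio for the full MSM.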

Want to chat? Give us a visit at Sindri Labs.

# References

Aasaraai, Kaveh, et al. *FPGA Acceleration of Multi-Scalar Multiplication: CycloneMSM*. 2022, https://eprint.iacr.org/2022/1396.

Lu, Tao, et al. *CuZK: Accelerating Zero-Knowledge Proof with a Faster Parallel Multi-Scalar Multiplication Algorithm on GPUs*. 2022, https://eprint.iacr.org/2022/1321.

Xavier, Charles F. *pipeMSM: Hardware Acceleration for Multi-Scalar Multiplication*. 2022, https://eprint.iacr.org/2022/999.