This commit is contained in:
jude 2023-04-17 13:30:34 +01:00
parent 1bd15f36e7
commit 659fcc389a
4 changed files with 45 additions and 41 deletions

View File

@@ -1,6 +1,6 @@
import { mod_exp } from "./math.js";
-export const KEY_SIZE = 2048;
+export const KEY_SIZE = 512;
export function cryptoRandom(bits) {
    if (bits === undefined) {
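
As context for the excerpt above, a minimal sketch of how a cryptoRandom-style function might be built on the Web Crypto API (the name cryptoRandomSketch and its default of KEY_SIZE bits are assumptions for illustration, not necessarily this file's implementation):

// Sketch: generate a random BigInt of `bits` bits via the Web Crypto API.
export function cryptoRandomSketch(bits = KEY_SIZE) {
    const bytes = new Uint8Array(Math.ceil(bits / 8));
    crypto.getRandomValues(bytes);
    let out = 0n;
    for (const byte of bytes) {
        out = (out << 8n) | BigInt(byte);
    }
    return out & ((1n << BigInt(bits)) - 1n); // truncate to exactly `bits` bits
}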

View File

@@ -21,7 +21,7 @@ function cryptoShuffle(l) {
return out;
}
-const ROUNDS = 48;
+const ROUNDS = 24;
function getCoins(text) {
// Construct verifier coins
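
A plausible shape for the body of getCoins, shown only as a hedged sketch (deriving each coin from one bit of a hex digest of the text is an assumption; the real scheme may differ):

// Sketch: derive ROUNDS single-bit verifier coins from a hex hash digest.
function getCoinsSketch(hexDigest) {
    const coins = [];
    for (let i = 0; i < ROUNDS; i++) {
        const nibble = parseInt(hexDigest[Math.floor(i / 4)], 16);
        coins.push((nibble >> (i % 4)) & 1); // one bit per coin
    }
    return coins;
}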

Binary file not shown.

View File

@@ -45,13 +45,15 @@
\maketitle
-\chapter*{}
-\begin{center}
-With thanks to Dr. Jim Laird and Dr. Guy McCusker.
-\end{center}
-\clearpage
+%\chapter*{}
+%
+%\begin{center}
+% With thanks to Dr. Jim Laird and Dr. Guy McCusker.
+%
+% "Never create anything, it will be misinterpreted, it will chain you and follow you for the rest of your life." - Hunter S. Thompson
+%\end{center}
+%
+%\clearpage
\section{Outline}
@@ -127,7 +129,7 @@ If both parties were to collude and generate non-randomly, this protocol falls t
\subsection{Zero-knowledge proofs}
-Zero-knowledge proofs form a subset of minimum disclosure proofs, and beyond that, a subset of interactive proofs. Zero-knowledge proofs are typically defined by three properties: \begin{itemize} %todo ref
+Zero-knowledge proofs form a subset of minimum disclosure proofs, and beyond that, a subset of interactive proofs. Zero-knowledge proofs are defined by three properties: \begin{itemize}
\item \textbf{Completeness.} If the conjecture is true, an honest verifier will be convinced of its truth by a prover.
\item \textbf{Soundness.} If the conjecture is false, a cheating prover cannot convince an honest verifier (except with some small probability).
\item \textbf{Zero-knowledge.} This is the condition for a minimum disclosure proof to be considered zero-knowledge. If the conjecture is true, the verifier cannot learn any other information besides the truthfulness.
@@ -149,7 +151,7 @@ The final point describes a proof as being \textit{computationally zero-knowledg
\item \textbf{Statistical.} A simulator produced by a verifier gives transcripts distributed identically, except for some constant number of exceptions.
\end{itemize}
-Some proofs described are \emph{honest-verifier} zero-knowledge proofs. In these circumstances, the verifier is required to act in accordance with the protocol for the simulator distribution to behave as expected. We consider verifiers as honest, as it appears they may only impede themselves by acting dishonestly.
+Some proofs described are \emph{honest-verifier} zero-knowledge proofs. In these circumstances, the verifier is required to act in accordance with the protocol for the simulator distribution to behave as expected.
\subsubsection{Games as graphs}
@@ -228,6 +230,8 @@ This reduces the number of communications to a constant, even for varying number
The Fiat-Shamir heuristic \cite{fiatshamir} provides another method to reduce communication by constructing non-interactive zero-knowledge proofs using a random oracle. For ledgers, non-interactive zero-knowledge proofs are necessary, as the ledger must be resilient to a user going offline. This is not the same in our case; however, non-interactive zero-knowledge proofs are still beneficial. The amount of communications can be reduced significantly, and it is easier to write code that can settle a non-interactive proof than an interactive proof.
+The downside of using the Fiat-Shamir heuristic in our implementation is that any third party can verify proofs. In some situations, we do not want this to be the case.
\subsubsection{Set membership proofs}
Another approach to the problem is to use set membership, which is a widely considered problem in zero-knowledge literature. In this case, each region would be associated with a set of units from a public "pool" of units. Then, a player needs to prove the cardinality of a set, and the uniqueness/distinctness of its members. A number of constructs exist for analysing and proving in obscured sets.
@@ -256,7 +260,7 @@ Players trust RSA keys on a trust-on-first-use (TOFU) basis. TOFU is the same pr
\subsection{Paillier cryptosystem}
-Paillier requires the calculation of two large primes for the generation of public and private key pairs. ECMAScript typically stores integers as floating point numbers, giving precision up to $2^{53}$. This is clearly inappropriate for the generation of sufficiently large primes.
+Similar to RSA, Paillier requires the calculation of two large primes for the generation of public and private key pairs. ECMAScript typically stores integers as floating point numbers, giving precision up to $2^{53}$. This is clearly inappropriate for the generation of sufficiently large primes.
In 2020, ECMAScript introduced \texttt{BigInt} \cite{tc39}, which are, as described in the spec, "arbitrary precision integers". Whilst this does not hold true in common ECMAScript implementations (such as Chrome's V8), these "big integers" still provide sufficient precision for the Paillier cryptosystem, given some optimisations and specialisations are made with regards to the Paillier algorithm and in particular the modular exponentiation operation.
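One such specialisation is a square-and-multiply modular exponentiation that keeps every intermediate \texttt{BigInt} reduced. A minimal sketch (assuming this is roughly the role of \texttt{mod\_exp} in \texttt{math.js}; the actual implementation may differ):
\begin{minted}{js}
// Sketch: right-to-left square-and-multiply, reducing mod `mod` at each
// step so intermediate values never grow far beyond mod^2 in magnitude.
export function mod_exp(base, exp, mod) {
    let result = 1n;
    base %= mod;
    while (exp > 0n) {
        if (exp & 1n) result = (result * base) % mod;
        base = (base * base) % mod;
        exp >>= 1n;
    }
    return result;
}
\end{minted}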
@@ -270,9 +274,7 @@ The number of operations is dependent primarily on the size of the exponent. For
\subsection{Generating large primes}
-I chose to use primes of length 2048 bits. This is a typical prime size for public-key cryptography, as this generates a modulus $n = pq$ of length 4096 bits.
-Generating these primes is a basic application of the Rabin-Miller primality test \cite{RABIN1980128}. This produces probabilistic primes, however upon completing sufficiently many rounds of verification, the likelihood of these numbers actually not being prime is dwarfed by the likelihood of some other failure, such as hardware failure.
+Generating primes is a basic application of the Rabin-Miller primality test \cite{RABIN1980128}. This produces probabilistic primes; however, upon completing sufficiently many rounds of verification, the likelihood of these numbers actually not being prime is dwarfed by the likelihood of some other failure, such as hardware failure.
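As a sketch of the structure of such a test over \texttt{BigInt} (probablyPrime is a hypothetical helper; it assumes the cryptoRandom and mod\_exp functions from the code above):
\begin{minted}{js}
// Sketch: Rabin-Miller. Write n - 1 = 2^s * d with d odd, then test
// `rounds` random bases; any witness proves n composite.
function probablyPrime(n, rounds) {
    if (n < 4n) return n === 2n || n === 3n;
    let s = 0n, d = n - 1n;
    while (d % 2n === 0n) { d /= 2n; s += 1n; }
    for (let i = 0; i < rounds; i++) {
        const a = 2n + cryptoRandom(512) % (n - 3n); // base in [2, n - 2]
        let x = mod_exp(a, d, n);
        if (x === 1n || x === n - 1n) continue;
        let witness = true;
        for (let r = 1n; r < s; r++) {
            x = (x * x) % n;
            if (x === n - 1n) { witness = false; break; }
        }
        if (witness) return false; // definitely composite
    }
    return true; // probably prime
}
\end{minted}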
\subsection{Public key}
@@ -288,7 +290,7 @@ The Paillier cryptosystem is otherwise generic over the choice of primes $p, q$.
Without loss of generality, assume $p > q$. Suppose $\gcd(pq, (p - 1)(q - 1)) \neq 1$. Then, $q \mid p - 1$. However, the bit-lengths of $p, q$ are identical. So $\frac{1}{2}(p - 1) < q$. This is a contradiction to $q \mid p - 1$ (as 2 is the smallest possible divisor), and so we must have $\gcd(pq, (p - 1)(q - 1)) = 1$ as required.
\end{proof}
-As the prime generation routine generates primes of equal length, this property is therefore guaranteed. The next optimisation is to select $g = 1 + n$.
+As the prime generation routine generates primes of equal length, this property is therefore guaranteed. The next optimisation is to select the public parameter $g = 1 + n$.
\begin{proposition}
$1 + n \in \mathbb{Z}^*_{n^2}$.
@@ -298,7 +300,7 @@ As the prime generation routine generates primes of equal length, this property
We see that $(1 + n)^n \equiv 1 \mod n^2$ from binomial expansion. So $1 + n$ is invertible as required.
\end{proof}
-The selection of such $g$ is ideal, as the binomial expansion property helps to optimise exponentiation. Clearly, from the same result, $g^m = 1 + mn$. This operation is far easier to perform, as it can be performed without having to take the modulus to keep the computed value within range.
+The selection of such $g$ is ideal, as the binomial expansion property reduces the number of calculations needed during modular exponentiation. Clearly, from the same result, $g^m = 1 + mn$. This operation is far easier to perform than a general modular exponentiation, as it fits within the valid range of the \texttt{BigInt} type, so no intermediary steps are needed to perform the computation.
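To make the binomial step explicit (a short derivation, implied by but not spelled out in the text above): every term of the expansion beyond the first two contains a factor of $n^2$,
\[
  g^m = (1 + n)^m = \sum_{k=0}^{m} \binom{m}{k} n^k = 1 + mn + \binom{m}{2} n^2 + \dots \equiv 1 + mn \pmod{n^2}.
\]
Since $0 \le m < n$ gives $1 + mn < n^2$, the representative $1 + mn$ is already fully reduced, so no modular reduction is needed at all.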
\subsection{Encryption}
@@ -391,7 +393,7 @@ As this protocol must run many times during a game, we consider each operation o
\subsection{Proof system}
-The first proof to discuss is that of \cite[Section~5.2]{damgard2003}, in which the authors give a protocol to prove knowledge that a ciphertext is an encryption of zero. The importance of using a zero-knowledge method for this is that it verifies knowledge to a single party. This party should be an honest verifier: this is an assumption we have made of the context, but in general this is not true, and so this provides an attack surface for colluding parties.
+The first proof to discuss is that of \cite[Section~5.2]{damgard2003}, in which the authors give an honest-verifier protocol to prove knowledge that a ciphertext is an encryption of zero.
The proof system presented is a Schnorr-style interactive proof for a given ciphertext $c$ being an encryption of zero.
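For orientation, one standard shape for such a Schnorr-style proof (an illustrative sketch only; the precise protocol is given in \cite[Section~5.2]{damgard2003}): with $g = 1 + n$, an encryption of zero is $c = r^n \bmod n^2$. The prover samples random $s$ and commits $a = s^n \bmod n^2$; the verifier issues a challenge $e$; the prover responds with $z = s r^e \bmod n$; and the verifier accepts if
\[
  z^n \equiv a c^e \pmod{n^2},
\]
which holds for an honest prover since $z^n = s^n (r^n)^e = a c^e \bmod n^2$. This matches the sizes used later: two ciphertexts ($c$ and $a$), a challenge $e$, and a proof statement $z$.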
@@ -456,7 +458,7 @@ Players should prove a number of properties of their game state to each other to
\subsection{Range proof}
-\cite{Boudot2000EfficientPT}'s proof is a multi-round proof more similar in structure to the graph isomorphism proof presented in \cite{10.1145/116825.116852}. We select public parameter $\ell$ to be some sufficiently high value that a player's unit count should not exceed during play: an appropriate choice may be 1000. Select $n$ as the number of units that the player is defending with, or in the case of attacking, let $n$ be the number of units that the player is attacking with plus 1 (as is required by the rules of Risk).
+\cite{Boudot2000EfficientPT}'s proof is a multi-round proof more similar in structure to the graph isomorphism proof presented in \cite{10.1145/116825.116852}. We select public parameter $\ell$ to be some sufficiently high value that a player's unit count should not exceed during play: an appropriate choice may be 1000. Select $n$ as the number of units that the player is defending with, or in the case of attacking, let $n$ be the number of units that the player is attacking with plus 1 (as is required by the rules of Risk). %todo
\subsection{Cheating with negative values}
@@ -497,6 +499,7 @@ Additionally, we can consider this protocol perfect zero-knowledge.
In the random oracle model, \hyperref[protocol1]{Protocol~\ref*{protocol1}} is perfect zero-knowledge.
\end{proposition}
+% todo this is not correct: the verifiers are not honest
\begin{proof}
To prove perfect zero-knowledge, we require a polynomial-time algorithm $T^*$ such that for all verifiers and for all valid sets $S$, the set of transcripts $T(P, V, S) = T^*(S)$, and the distributions are identical.
@@ -515,44 +518,43 @@ Additionally, we can consider this protocol perfect zero-knowledge.
It is preferred that these proofs can be performed with only a few communications: this issue is particularly prevalent here as this protocol requires multiple rounds to complete. The independence of each round from the next is a beneficial property, as it means the proof can be performed in parallel, so the prover transmits \textit{all} of their $\psi$'s, then the verifier transmits all of their challenges. However, there still remains the issue of performing proofs of zero.
-We can apply the Fiat-Shamir heuristic \cite{fiatshamir} to make proofs of zero non-interactive. In place of a random oracle, we use a cryptographic hash function. %todo cite & discuss
-We take the hash of some public parameters to prevent cheating by searching for some values that hash in a preferable manner. In this case, selecting $e = H(g, m, a)$ is a valid choice. To get a hash of desired length, an extendable output function such as SHAKE256 \cite{FIPS202} could be used.
-The library jsSHA \cite{jssha} provides an implementation of SHAKE256 that works within a browser.
+We can apply the Fiat-Shamir heuristic \cite{fiatshamir} to make proofs of zero non-interactive. In place of a random oracle, we use a cryptographic hash function. We take the hash of some public parameters to prevent cheating by searching for some values that hash in a preferable manner. In this case, selecting $e = H(g, m, a)$ is a valid choice. To get a hash of desired length, an extendable output function such as SHAKE256 \cite{FIPS202} could be used. The library jsSHA \cite{jssha} provides an implementation of SHAKE256 that works within a browser.
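A hedged sketch of deriving such a challenge with jsSHA (the serialisation of $g$, $m$, $a$ into the hash input is an assumption for illustration; the constructor, update, and getHash calls are jsSHA's public API):
\begin{minted}{js}
import jsSHA from "jssha";

// Sketch: Fiat-Shamir challenge e = H(g, m, a) with a 2048-bit output.
// g, m, a are BigInts; hashing their hex forms is an assumed encoding.
function challenge(g, m, a) {
    const shake = new jsSHA("SHAKE256", "TEXT");
    shake.update(g.toString(16));
    shake.update(m.toString(16));
    shake.update(a.toString(16));
    return BigInt("0x" + shake.getHash("HEX", { outputLen: 2048 }));
}
\end{minted}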
\section{Review}
\subsection{Random oracles}
-Various parts of the implementation use the random oracle model: in particular, the zero-knowledge proof sections.
+Various parts of the implementation use the random oracle model: in particular, the zero-knowledge proof sections. The random oracle model is theoretical, as according to the Church-Turing hypothesis, a machine cannot produce infinite truly random output with only finite input.
The random oracle model is used for two guarantees. The first is in the construction of truly random values that will not reveal information about the prover's state. In practice, a cryptographically secure pseudo-random number generator will suffice for this application, as CSPRNGs typically incorporate environmental data to ensure outputs are unpredictable \cite{random(4)}.
-The second is to associate a non-random value with a random value. In practice, a cryptographic hash function such as SHA-3 is used. This gives appropriately pseudo-random outputs that appear truly random, and additionally are assumed to be preimage resistant: a necessary property when constructing non-interactive proofs in order to prevent a prover manipulating the signature used to derive the proof.
+The second is to associate a non-random value with a random value. In practice, a cryptographic hash function such as SHAKE is used. This gives appropriately pseudo-random outputs that appear truly random, and additionally are assumed to be preimage resistant: a necessary property when constructing non-interactive proofs in order to prevent a prover manipulating the signature used to derive the proof.
\subsection{Efficiency}
\subsubsection{Storage complexity}
-Paillier ciphertexts are constant size, each $\sim$1.0kB in size (as they are taken modulo $n^2$, where $n$ is the product of two 2048 bit primes). This is small enough for the memory and network limitations of today.
-The proof of zero uses two Paillier ciphertexts, a challenge of size 2048 bits, and a proof statement of size 4096 bits. In total, this is a constant size of $\sim$2.8kB.
-On the other hand, \hyperref[protocol1]{Protocol~\ref*{protocol1}} requires multiple rounds. Assume that we use 42 rounds: this provides an acceptable level of soundness, with a cheat probability of $\left(\frac{1}{2}\right)^{-42} \approx 2.3 \times 10^{-13}$. Additionally, assume that there are 10 regions to verify. Each round then requires ten Paillier ciphertexts alongside ten proofs of zero. This results in a proof size of $\sim$1.7MB. Whilst this is still within current memory limitations, the network cost is extreme; and this value may exceed what can be reasonably operated on within a processor's cache.
+In this section, let $N$ be the size in bits of the modulus $n$. This is likely one of 1024, 2048, or 4096, depending on the size of the primes used to form the modulus.
+Paillier ciphertexts are constant size, each $2N$ bits (as they are taken modulo $n^2$). This is small enough for the memory and network limitations of today.
+The interactive proof of zero uses two Paillier ciphertexts (each of size $2N$), a challenge of size $N$, and a proof statement of size $N$. In total, this is a constant size of $6N$.
+On the other hand, the non-interactive variant need not communicate the challenge (as it is computed as a function of other variables). So the non-interactive proof size is $5N$.
+The non-interactive \hyperref[protocol1]{Protocol~\ref*{protocol1}} requires multiple rounds. Assume that we use 48 rounds: this provides a good level of soundness, with a cheat probability of $\left(\frac{1}{2}\right)^{48} \approx 3.6 \times 10^{-15}$. Additionally, assume that there are five regions to verify. Each prover round then requires five Paillier ciphertexts, and each verifier round five non-interactive proofs of zero, plus some negligible amount of additional storage for the bijection.
+This results in a proof size of $(10N + 10N) \times 48 = 960N$ bits. For key size $N = 2048$, this is 240kB. This is a fairly reasonable size for memory and network, but this value may exceed what can be placed within a processor's cache, leading to potential slowdown during verification.
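Spelling out the arithmetic behind that figure as a quick check:
\[
  960N \text{ bits} = 960 \times 2048 \text{ bits} = 1\,966\,080 \text{ bits} = 245\,760 \text{ bytes} \approx 240\text{kB}.
\]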
-This could be overcome by reducing the number of rounds, which comes at the cost of increasing the probability of cheating. In a protocol designed to only facilitate a single game session, this may be acceptable to the parties involved. For example, reducing the number of rounds to 19 will increase the chance of cheating to $\left(\frac{1}{2}\right)^{-19} \approx 1.9 \times 10^{-6}$, but the size would reduce considerably to $\sim$770kB.
-The size of the proof of zero communication is, in total, $3290 + 1744 + 2243$ characters, i.e $\sim$7.3kB. This is about 2-3 times larger than the ideal size. I discuss some solutions to this.
+This could be overcome by reducing the number of rounds, which comes at the cost of increasing the probability of cheating. In a protocol designed to only facilitate a single game session, this may be acceptable to the parties involved. For example, reducing the number of rounds to 24 will increase the chance of cheating to $\left(\frac{1}{2}\right)^{24} \approx 6.0 \times 10^{-8}$, but the size would reduce by approximately half.
-This is all in an ideal situation without compression or signatures: in the implementation presented, the serialisation of a ciphertext is larger than this, since it serialises to a string of the hexadecimal representation and includes a digital signature for authenticity.
+This is all in an ideal situation without compression or signatures: in the implementation presented, the serialisation of a ciphertext is larger than this, since it serialises to a string of the hexadecimal representation and includes a digital signature for authenticity. In JavaScript, encoding a byte string as hexadecimal should yield approximately a four times increase in size, as one byte uses two hexadecimal characters, which are encoded as UTF-16. Results for this are shown in \hyperref[table3]{Table~\ref*{table3}}. Some potential solutions are discussed here.
-\textbf{Compression.} One solution is to use string compression. String compression can reduce the size considerably, as despite the ciphertexts being random, the hex digits only account for a small amount of the UTF-8 character space. LZ-String, a popular JavaScript string compression library, can reduce the size of a single hex-encoded ciphertext to about 35\% of its original size. The consideration
+\textbf{Compression.} One solution is to use string compression. String compression can reduce the size considerably, as despite the ciphertexts being random, the hex digits only account for a small amount of the UTF-8 character space. LZ-String, a popular JavaScript string compression library, can reduce the size of a single hex-encoded ciphertext to about 35\% of its original size. This will result in some slowdown due to compression time, but this is somewhat negligible in the face of the time taken to produce and verify proofs in the first place.
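As a hedged sketch of the compression approach (compressToUTF16 and decompressFromUTF16 are LZ-String's public API; the variable ciphertextHex is hypothetical):
\begin{minted}{js}
import LZString from "lz-string";

// Sketch: compress a hex-encoded ciphertext before transmission, and
// decompress on receipt. The hex alphabet is small, so ratios around
// 35% of the original size are plausible despite the random content.
const compressed = LZString.compressToUTF16(ciphertextHex);
const restored = LZString.decompressFromUTF16(compressed);
console.assert(restored === ciphertextHex);
\end{minted}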
\textbf{Message format.} Another solution is to use a more compact message format, for example msgpack \cite{msgpack} (which also has native support for binary literals).
\textbf{Smaller key size.} The size of ciphertexts depends directly on the size of the key. Using a smaller key will reduce the size of the ciphertexts linearly.
+This only considers the network footprint. The other consideration is the memory footprint. The proof of zero requires auxiliary memory beyond the new values communicated. In particular, it must clone the ciphertext being proven, in order to prevent mutating the original ciphertext when multiplying by $g^{-m}$. %todo
\subsubsection{Time complexity}
It is remarked that Paillier encryption performs considerably slower than RSA on all key sizes. \cite{paillier1999public} provides a table of theoretic results, suggesting that Paillier encryption can be over 1,000 times slower than RSA for the same key size.
@@ -577,12 +579,10 @@ Timing results versus RSA are backed experimentally by my implementation. The fo
console.log(performance.measure("duration", "start", "end").duration)
\end{minted}
-Performing 250 Paillier encrypts required 47,000ms. On the other hand, performing 250 RSA encrypts required just 40ms.
+Performing 250 Paillier encrypts required 47,000ms. On the other hand, performing 250 RSA encrypts required just 40ms. Results are shown in \hyperref[table1]{Table~\ref*{table1}}.
The speed of decryption is considerably less important in this circumstance, as Paillier ciphertexts are not decrypted during the execution of the program.
-There is little room for optimisation of the mathematics in Paillier encryption. Some possibilities are discussed below.
\textbf{Public parameter.} The choice of the public parameter $g$ can improve the time complexity by removing the need for some large modular exponentiation. Selection of $g = n + 1$ is good in this regard, as binomial theorem allows the modular exponentiation $g^m \mod n^2$ to be reduced to the computation $1 + nm \mod n^2$.
\textbf{Caching.} As the main values being encrypted are 0 or 1, a peer could maintain a cache of encryptions of these values and transmit these instantly. Caching may be executed in a background "web worker". A consideration is whether a peer may be able to execute a timing-related attack by first exhausting a peer's cache of a known plaintext value, and then requesting an unknown value and using the time taken to determine if the value was sent from the exhausted cache or not.
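A hedged sketch of how such a cache might be structured (the pool size, names, and the encrypt parameter are assumptions for illustration, not the implementation's design):
\begin{minted}{js}
// Sketch: keep a pool of pre-computed encryptions of 0 and 1, refilled in
// the background, so requests can be served without encryption latency.
const cache = { 0: [], 1: [] };
const TARGET = 32; // pool size per plaintext; an arbitrary choice

function refill(encrypt) {
    for (const bit of [0, 1]) {
        while (cache[bit].length < TARGET) {
            cache[bit].push(encrypt(BigInt(bit)));
        }
    }
}

function takeEncryption(bit, encrypt) {
    // Fall back to a fresh encryption when the pool is exhausted; note the
    // timing-attack consideration raised above.
    return cache[bit].pop() ?? encrypt(BigInt(bit));
}
\end{minted}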
@@ -595,6 +595,8 @@ Using this scheme alone reduced the time to encrypt by a half. Greater optimisat
In practice gains were closer to a reduction by a third, since in the modified scheme additional computation must be performed to attain the $r$ that would work with normal Paillier, in order to perform the zero-knowledge proofs from before.
+Other research such as \cite{10.1145/3485832.3485842} suggests ways to speed up encryption further that could be utilised.
\textbf{Smaller key size.} The complexity of Paillier encryption increases with key size. Using a smaller key could considerably reduce the time taken \cite{paillier1999public}.
I tested this on top of the alternative Paillier scheme from above. This resulted in linear reductions in encryption time: encryption under a 1024-bit modulus took a sixth of the amount of time as under a 2048-bit modulus, and encryption under a 2048-bit modulus took a sixth of the amount of time as under a 4096-bit modulus.
@@ -614,6 +616,7 @@ All measurements were taken on Brave 1.50.114 (Chromium 112.0.5615.49) 64-bit, u
\begin{table}[htp]
\fontsize{10pt}{10pt}\selectfont
\caption{Time to encrypt}
+\label{table1}
\begin{tabularx}{\textwidth}{c *3{>{\Centering}X}}
\toprule
Modulus size & Na\"ive encrypt & Jacobi encrypt & RSA encrypt \\
@@ -644,6 +647,7 @@ All measurements were taken on Brave 1.50.114 (Chromium 112.0.5615.49) 64-bit, u
\begin{table}[htp]
\fontsize{10pt}{10pt}\selectfont
\caption{Byte size\parnote{1 UTF-16 character, as used by ECMAScript \cite[Section~6.1.4]{ecma2024262}, is 2 or more bytes.} of encoded proofs}
+\label{table3}
\begin{tabularx}{\textwidth}{c *6{>{\Centering}X}}
\toprule
\multirow{2}{*}{Modulus size} & \multicolumn{2}{c}{Proof-of-zero non-interactive} & \multicolumn{2}{c}{\hyperref[protocol1]{Protocol~\ref*{protocol1}} with $t = 24$} & \multicolumn{2}{c}{\hyperref[protocol1]{Protocol~\ref*{protocol1}} with $t = 48$} \tabularnewline
@@ -662,13 +666,13 @@ All measurements were taken on Brave 1.50.114 (Chromium 112.0.5615.49) 64-bit, u
Paillier is broken if factoring large numbers is computationally feasible \cite[Theorem~9]{paillier1999public}. Therefore, it is vulnerable to the same quantum threat as RSA is, which is described by \cite{shor_1997}. Alternative homomorphic encryption schemes are available, which are believed to be quantum-resistant, as they are based on lattice methods (e.g., \cite{fhe}).
-\subsection{Side-channels}
-The specific implementation is likely vulnerable to side-channel attacks. The specification for BigInt does not specify that operations should be constant-time, and variation between browser engines could lead to timing attacks. %todo check as this might not be true since key sizes are the same
+\subsection{Honest-verifier}
+\cite[Section~5.2]{damgard2003} is honest-verifier. However, applying the Fiat-Shamir heuristic converts such a proof into a general zero-knowledge proof \cite[Section~5]{fiatshamir}. This means that, supposing the choice of transform used is appropriate, \hyperref[protocol1]{Protocol~\ref*{protocol1}} should also be general zero-knowledge. However, the interactive proofs performed as part of the game are still only honest-verifier, and a malicious verifier may be able to extract additional information from the prover (such as the blinding value used: this is stated in \cite[Section~5.2]{damgard2003}).
\section{Wider application}
-Peer-to-peer software is an area of software that has fallen somewhat out of interest in more recent years, as companies can afford to run their own centralised servers (although no doubt interest still exists: many users are preferring federated services over centralised services, such as Mastodon, Matrix, XMPP). %todo cite
+Peer-to-peer software is an area of software that has fallen somewhat out of interest in more recent years, as online service providers can afford to run their own centralised servers (although no doubt interest still exists: some users prefer federated services over centralised services, such as Mastodon, Matrix, XMPP).
However, peer-to-peer solutions still have many benefits to end users: mainly greater user freedom. I believe that the content presented here shows clear ways to expand peer-to-peer systems, and reduce dependence on centralised services.
I propose some ideas which could build off the content here.
@@ -681,7 +685,7 @@ This is not without its downsides: I found that the complexity of P2P networking
\subsection{Decentralised social media}
-The schemes presented here and in \cite{damgard2003} could be applies to the concept of a decentralised social media platform. Such a platform may use ZKPs as a way to allow for "private" profiles: the content of a profile may stay encrypted, but ZKPs could be used as a way to allow certain users to view private content in a manner that allows for repudiation, and disallows one user from sharing private content to unauthorised users.
+The schemes presented here and in \cite{damgard2003} could be applied to the concept of a decentralised social media platform. Such a platform may use zero-knowledge proofs as a way to allow for "private" profiles: the content of a profile may stay encrypted, but zero-knowledge proofs could be used as a way to allow certain users to view private content in a manner that allows for repudiation, and disallows one user from sharing private content with unauthorised users.
The obvious issue is P2P data storage. Users could host their own platforms, but this tends to lead to low adoption due to complexity for normal people. IPFS is a P2P data storage protocol that could be considered. This poses an advantage that users can store their own data, if they have a large amount, but other users can mirror data effectively to protect against outages. The amount of storage can grow effectively as more users join the network.