Fix bug where ciphertexts could end up negative
parent 474fe9f70a
commit 9d1d64f1d9
@@ -28,6 +28,14 @@ class Cyphertext {
         this.cyphertext = (this.cyphertext * c.cyphertext) % this.pubKey.n ** 2n;
         this.r = (this.r * c.r) % this.pubKey.n ** 2n;
         this.plainText += c.plainText;
+
+        // Force into range
+        while (this.cyphertext < 0n) {
+            this.cyphertext += this.pubKey.n ** 2n;
+        }
+        while (this.r < 0n) {
+            this.r += this.pubKey.n ** 2n;
+        }
     }

     toString() {
@@ -88,6 +96,11 @@ export class ReadOnlyCyphertext {

     update(c) {
         this.cyphertext = (this.cyphertext * c.cyphertext) % this.pubKey.n ** 2n;
+
+        // Force into range
+        while (this.cyphertext < 0n) {
+            this.cyphertext += this.pubKey.n ** 2n;
+        }
     }

     prove(plainText, a) {
@@ -106,14 +119,16 @@ class ProofSessionVerifier {

     verify(proof) {
         // check coprimality
-        if (gcd(proof, this.cipherText.pubKey.n) !== 1n) return false;
-        if (gcd(this.cipherText.cyphertext, this.cipherText.pubKey.n) !== 1n)
-            return false;
-        if (gcd(this.a, this.cipherText.pubKey.n) !== 1n) return false;
+        if (gcd(proof, this.cipherText.pubKey.n) !== 1n) return -1;
+        if (gcd(this.cipherText.cyphertext, this.cipherText.pubKey.n) !== 1n) return -2;
+        if (gcd(this.a, this.cipherText.pubKey.n) !== 1n) return -3;

         // check exp
-        return (
-            mod_exp(proof, this.cipherText.pubKey.n, this.cipherText.pubKey.n ** 2n) ===
+        return mod_exp(
+            proof,
+            this.cipherText.pubKey.n,
+            this.cipherText.pubKey.n ** 2n
+        ) ===
             (this.a *
                 mod_exp(
                     this.cipherText.cyphertext,
@@ -121,7 +136,8 @@ class ProofSessionVerifier {
                     this.cipherText.pubKey.n ** 2n
                 )) %
                 this.cipherText.pubKey.n ** 2n
-        );
+            ? 1
+            : -4;
     }
 }

@@ -76,12 +76,12 @@ class Strength {
             const data = ev.detail;

             if (data.region === region && data.stage === "PROOF") {
-                if (proofSessionVerifier.verify(BigInt(data.z))) {
-                    console.log("verified");
+                let result = proofSessionVerifier.verify(BigInt(data.z));
+                if (result > 0) {
                     this.assumedStrength = plainText;
                     controller.abort();
                 } else {
-                    console.warn("Failed to verify ciphertext!");
+                    console.warn(`Failed to verify ciphertext! ${result}`);
                 }
             }
         },
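Note: the root cause of the fix above is that JavaScript's % operator is a remainder, not a true modulus. For BigInt, as for Number, the result takes the sign of the dividend, so a product involving a negative operand can reduce to a negative value. A quick illustration:

// JavaScript remainder semantics for BigInt
console.log(-7n % 3n);               // -1n, not 2n
console.log(((-7n % 3n) + 3n) % 3n); // 2n: the usual normalisation trick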
Binary file not shown.
@@ -331,6 +331,10 @@ We are also interested in the ability to compute $\mu = \lambda^{-1} \mod n$ as

 Let $c$ be the ciphertext. The corresponding plaintext is computed as $m = L(c^\lambda \mod n^2) \cdot \mu \mod n$, where $L(x) = \frac{x - 1}{n}$. This is relatively simple to compute in JavaScript.

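As a sketch, that decryption could look like the following in JavaScript, reusing the mod_exp helper seen in the diff above; the field names on the private key are assumptions, not the project's actual API:

// Sketch: m = L(c^lambda mod n^2) * mu mod n, with L(x) = (x - 1) / n.
// `priv.lambda` and `priv.mu` are hypothetical field names.
function decrypt(priv, c) {
    const nSquared = priv.pubKey.n ** 2n;
    const x = mod_exp(c, priv.lambda, nSquared);
    const L = (x - 1n) / priv.pubKey.n; // exact: x - 1 is divisible by n by construction
    return (L * priv.mu) % priv.pubKey.n;
}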
+\subsection{Implementation details}
+
+Paillier is implemented by four classes: \texttt{PubKey}, \texttt{PrivKey}, \texttt{Ciphertext}, and \texttt{ReadOnlyCiphertext}. \texttt{PubKey.encrypt} converts a \texttt{BigInt} into either a \texttt{Ciphertext} or a \texttt{ReadOnlyCiphertext} by the encryption function above. The distinction between these is that a \texttt{ReadOnlyCiphertext} does not know the random $r$ that was used to form it, and so is created by deserialising a ciphertext that originated with another peer. A regular \texttt{Ciphertext} maintains knowledge of $r$ and the plaintext it enciphers. This makes it capable of producing proofs by the scheme presented below.
+
 \subsection{Proof system}

 The proof system is that of \cite{damgard2003}. The authors give a method to prove knowledge of an encrypted value. The importance of using a zero-knowledge method for this is that it verifies knowledge to a single party. This party should be an honest verifier: this is an assumption we have made of the context, but in general this is not true, and so this provides an attack surface for colluding parties.
@@ -369,6 +373,14 @@ A proof for the following homologous problem can be trivially constructed: given

+\subsection{Implementation details}
+
+Proofs of zero use messages labelled \texttt{"PROOF"}, and resolve between two parties. The proof is initialised by the prover as part of the game protocol, who sends an initial message containing the fields \texttt{conjecture: int} and \texttt{a: str} (where \texttt{a} is the serialisation of the \texttt{BigInt} commitment $a$, and \texttt{conjecture} is the proposed plaintext).
+
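The opening message might look as follows; \texttt{send} is a hypothetical transport helper, and the \texttt{region} and \texttt{stage} fields mirror those checked by the listener in the JavaScript diff above:

// Sketch of the prover's opening message (shape assumed, not verbatim).
send({
    region: region,
    stage: "PROOF",
    conjecture: 2,       // proposed plaintext
    a: a.toString(),     // serialised BigInt commitment
});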
+The prover then registers a new event listener to respond to the verifier's challenge in a non-blocking way when it is received.
+
+The verifier receives the message above, and responds with a random challenge, selected by generating a cryptographically secure pseudorandom number of 2048 bits and then dropping the LSB. Using 2047 bits guarantees that the challenge is smaller than $p$ or $q$, as is suggested in the original paper. %todo why?
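A sketch of that challenge generation using the Web Crypto API (the function name is an assumption):

// Draw 2048 cryptographically secure random bits, then drop the LSB
// so the challenge is at most 2047 bits (below p and q).
function generateChallenge() {
    const bytes = new Uint8Array(256); // 256 bytes = 2048 bits
    crypto.getRandomValues(bytes);
    let challenge = 0n;
    for (const byte of bytes) challenge = (challenge << 8n) | BigInt(byte);
    return challenge >> 1n; // discard the LSB
}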
+The verifier then registers a new event listener to receive the prover's proof.

 Verifying the proof is a simple application of the extended Euclidean algorithm to check coprimality, and a modular exponentiation and reduction to check the final equivalence. The ciphertext on the verifier's instance is then tagged with the proven plaintext (should the proof succeed). This tag is removed in the case that the ciphertext is updated.
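For the coprimality checks, the gcd itself suffices (the extended algorithm is only needed when Bézout coefficients are wanted, e.g. for modular inverses); a minimal BigInt version:

// Euclidean algorithm over BigInt, as used by the coprimality checks
// in verify() above.
function gcd(a, b) {
    while (b !== 0n) [a, b] = [b, a % b];
    return a < 0n ? -a : a;
}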

 \subsection{Application to domain}

@@ -377,24 +389,26 @@ Players should prove a number of properties of their game state to each other to

 \item The number of units on a region neighbouring another player.

-\item The number of units lost during an attack/defence.
+\item The number of units available for an attack/defence.

+\item The number of units lost during an attack/defence (including total depletion of units and loss of the region).

 \item The number of units moved when fortifying.
 \end{enumerate}

-(4) and (5) can be generalised further as range proofs.
+(2) and (4) are both covered by the proof above. (3) is acceptable between two players, as it is a subcase of (2). But in the case of more players, the availability of units should be proven. One way to achieve this is with a range proof.

 For (1), we propose the following communication sequence. The player submits pairs $(R, c_R)$ for each region they control, where $R$ is the region and $c_R$ is a ciphertext encoding the number of reinforcements to add to the region (which may be 0). Each player computes $c_{R_1} \cdot \ldots \cdot c_{R_n}$.
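This product encrypts the total number of reinforcements placed, by the additive homomorphic property; assuming the standard Paillier encryption function $E(m, r) = g^m r^n \bmod n^2$, we have
\begin{align*}
    E(m_1, r_1) \cdot E(m_2, r_2) \equiv g^{m_1 + m_2} (r_1 r_2)^n \equiv E(m_1 + m_2, r_1 r_2) \mod n^2.
\end{align*}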
+\cite{Boudot2000EfficientPT} demonstrates a proof that some given ciphertext lies within an interval $[-\ell, 2\ell]$, where $\ell$ is some public value. This proof can easily be manipulated into a proof that a value lies within the interval $[n, 3\ell + n]$ from the additive homomorphic property. By selecting a sufficiently high $\ell$ and an appropriate $n$, this proof is suitable for proving to other players that the number of units being used in an attack is valid.

 \subsection{Range proof}

 \cite{Boudot2000EfficientPT}'s proof is a multi-round proof more similar to %todo

 \subsection{Cheating with negative values}

-By using negative values, a player can cheat their unit count during reinforcements. This is a severe issue, as potentially the cheat could be completely unnoticed even in the conclusion of the game. To overcome this, we apply a range proof for each committed value.
+By using negative values, a player can cheat stage (1) of the above. This is a severe issue, as potentially the cheat could go completely unnoticed even at the conclusion of the game. To overcome this, we apply proofs on each committed value that are verified by all players.

 \cite{Boudot2000EfficientPT} demonstrates a proof that some given ciphertext lies within an interval $[-\ell, 2\ell]$, where $\ell$ is some public value. This proof can easily be manipulated into a proof that a value lies within the interval $[0, 3\ell]$ from the additive homomorphic property.

-The combination of proofs is then to prove that the sum of all ciphertexts is 1, and the range of each ciphertext is as tight as possible, which is within the range $[0, 3]$ from the proof above. This is acceptable in the specific application, however we can achieve a better proof that is similar in operation to \cite{Boudot2000EfficientPT}.
+One consideration is to use a range proof as above. The full proof would then be the combination of a proof that the sum of all ciphertexts is 1, and a proof that the range of each ciphertext is as tight as possible, which is $[0, 3]$ from the proof above. This is acceptable in the specific application; however, we can achieve a better proof that is similar in operation to \cite{Boudot2000EfficientPT}.

 Instead of proving a value is within a range, the prover will demonstrate that a bijection exists between the elements in the reinforcement set and a challenge set.

@@ -402,15 +416,15 @@ Instead of proving a value is within a range, the prover will demonstrate that a

 The prover transmits the set \begin{align*}
     S = \{ (R_1, E(n_1, r_1)), \dots, (R_N, E(n_N, r_N)) \}
 \end{align*} as their reinforcement step. The verifier wants to be convinced that exactly one element of the second projection encrypts 1, and the rest encrypt 0.
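As a sketch, the prover's side of this step might be assembled as follows, using the \texttt{PubKey.encrypt} described earlier (the surrounding variable names are assumptions):

// Sketch: encrypt 1 reinforcement for the chosen region, 0 elsewhere.
// `regions` and `target` are hypothetical variables.
const reinforcements = regions.map((region) => [
    region,
    pubKey.encrypt(region === target ? 1n : 0n),
]);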

 Run $t$ times in parallel:

 \begin{enumerate}
     \item Prover transmits $\{ (\psi(R_i), E(n_i, r_i^*)) \mid 0 < i \leq N \}$, where $\psi$ is a random bijection on the regions.

     \item Verifier chooses a random $c \in \{0, 1\}$. \begin{enumerate}
         \item If $c = 0$, the verifier requests the definition of $\psi$, to indeed see that it is a valid bijection. They then compute the products $E(n_i, r_i) \cdot E(n_i, r_i^*)^{-1}$ and verify proofs that each of these encrypts zero.

         \item If $c = 1$, the verifier requests a proof that each $E(n_i, r_i^*)$ is as claimed.
     \end{enumerate}
 \end{enumerate}
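A cheating prover can prepare a shuffle that satisfies only one of the two challenges, and so survives each round only by guessing $c$ in advance; over $t$ independent rounds, the probability of an undetected cheat is therefore at most
\begin{align*}
    \left(\frac{1}{2}\right)^t = 2^{-t}.
\end{align*}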
@@ -431,7 +445,7 @@ Additionally, we can consider this protocol perfect zero-knowledge.

 \begin{proof}
     To prove perfect zero-knowledge, we require a polynomial-time algorithm $T^*$ such that, for all verifiers and for all valid sets $S$, the distributions of the transcripts $T(P, V, S)$ and $T^*(S)$ are identical.

     Such a $T^*$ can be defined for any $S$. \begin{enumerate}
         \item Choose a random $\psi'$ from the random oracle.
         \item Choose random $(r_i^*)'$ from the random oracle.
@@ -453,23 +467,23 @@ This is achieved through bit-commitment and properties of $\mathbb{Z}_n$. The pr

 \begin{tikzpicture}[
     every node/.append style={very thick,rounded corners=0.1mm}
 ]

     \node[draw,rectangle] (A) at (0,0) {Peer A};
     \node[draw,rectangle] (B) at (6,0) {Peer B};

     \node[draw=blue!50,rectangle,thick,text width=4cm] (NoiseA) at (0,-1.5) {Generate random noise $N_A$, random key $k_A$};
     \node[draw=blue!50,rectangle,thick,text width=4cm] (NoiseB) at (6,-1.5) {Generate random noise $N_B$, random key $k_B$};

     \draw [->,very thick] (0,-3)--node [auto] {$E_{k_A}(N_A)$}++(6,0);
     \draw [<-,very thick] (0,-4)--node [auto] {$E_{k_B}(N_B)$}++(6,0);

     \draw [->,very thick] (0,-5)--node [auto] {$k_A$}++(6,0);
     \draw [<-,very thick] (0,-6)--node [auto] {$k_B$}++(6,0);

     \node[draw=blue!50,rectangle,thick] (CA) at (0,-7) {Compute $N_A + N_B$};
     \node[draw=blue!50,rectangle,thick] (CB) at (6,-7) {Compute $N_A + N_B$};

     \draw [very thick] (A)-- (NoiseA)-- (CA)-- (0,-7);
     \draw [very thick] (B)-- (NoiseB)-- (CB)-- (6,-7);
 \end{tikzpicture}
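One peer's side of this exchange might look like the following sketch; \texttt{encrypt}, \texttt{decrypt}, \texttt{randomBits}, \texttt{send} and \texttt{receive} are hypothetical helpers (e.g. wrapping an authenticated cipher such as AES-GCM):

// Sketch of the commit-reveal flow for shared randomness (peer A's side).
const ell = 64n;                                        // agreed bit length
const noise = randomBits(ell);                          // N_A
const key = crypto.getRandomValues(new Uint8Array(32)); // k_A

send(encrypt(key, noise));                              // commit: E_kA(N_A)
const theirCommitment = await receive();                // E_kB(N_B)

send(key);                                              // reveal k_A only after committing
const theirKey = await receive();                       // k_B
const theirNoise = decrypt(theirKey, theirCommitment);

const shared = (noise + theirNoise) % (1n << ell);      // N_A + N_B mod 2^ell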
@@ -485,13 +499,13 @@ The encryption function used must also guarantee the integrity of decrypted ciph

 \begin{proof}
     Suppose $P_1, \dots, P_{n-1}$ are honest participants, and $P_n$ is a cheater with a desired outcome.

     In step 1, each participant $P_i$ commits $E_{k_i}(N_i)$. The cheater $P_n$ commits a constructed noise $E_{k_n}(N_n)$.

     The encryption function $E_k$ holds the confidentiality property: that is, without $k$, $P_i$ cannot retrieve $m$ given $E_k(m)$. So $P_n$'s choice of $N_n$ cannot be directed by other commitments.

     The final value is dictated by the sum of all decrypted values. $P_n$ is therefore left in a position of choosing $N_n$ to control the outcome of $a + N_n$, where $a$ is selected uniformly at random from the abelian group $\mathbb{Z}_{2^\ell}$, for $\ell$ the agreed-upon bit length.

     As addition by a fixed element is a bijection on this group, the distribution of $a + N_n$ is uniform no matter the choice of $N_n$. So $P_n$ maintains no control over the outcome of $a + N_n$.
 \end{proof}

@@ -508,7 +522,9 @@ Random values are used in two places. \begin{itemize}
     \item Rolling dice.
 \end{itemize}

+\section{Implementation review}
+\subsection{Web as a platform}

 \section{Review}

 \subsection{Random oracles}

@@ -526,7 +542,14 @@ On the other hand, \hyperref[protocol1]{Protocol~\ref*{protocol1}} requires mult

 This could be overcome by reducing the number of rounds, which comes at the cost of increasing the probability of cheating. In a protocol designed to only facilitate a single game session, this may be acceptable to the parties involved. For example, reducing the number of rounds to 19 will increase the chance of cheating to $\left(\frac{1}{2}\right)^{19} \approx 1.9 \times 10^{-6}$, but the size would reduce considerably to $\sim$770kB.

-Another potential solution is to change the second challenge's structure to only verify a single random ciphertext each turn. This would approximately half the amount of data required at the expense of soundness. However, the advantage over lowering the number of rounds is that the change in soundness is dependent on the number of items in the set being verified. By a tactful selection, a high probability of soundness can still be maintained. %todo
+Another potential solution is to change the second challenge's structure to only verify a single random ciphertext each turn. This would approximately halve the amount of data required, at the expense of soundness. However, the advantage over lowering the number of rounds is that the change in soundness depends on the number of items in the set being verified. By a tactful selection, a high probability of soundness can still be maintained.

+To see this, we consider the ways in which a prover may try to "cheat" a proof. The prover wishes to submit a set $S$ which contains negative values, so that the sum of the values is still 1 but multiple non-zero values exist. %todo

 \subsubsection{Time complexity}

 It is generally remarked that Paillier performs considerably slower than RSA on all key sizes. %todo cite

 \subsection{Quantum resistance}

@@ -538,7 +561,49 @@ The specific implementation is likely vulnerable to side-channel attacks. The sp

 \section{Wider application}

-Peer-to-peer software is a slowly growing field.
+Peer-to-peer software is an area that has fallen somewhat out of interest in recent years, as companies can afford to run their own centralised servers (although interest certainly remains: many users now prefer federated services, such as Mastodon, Matrix, and XMPP, over centralised ones). %todo cite
+However, peer-to-peer solutions still have many benefits to end users, chief among them greater user freedom. I believe that the content presented here shows clear ways to expand peer-to-peer systems, and to reduce dependence on centralised services.
+
+I propose some ideas which could build off the content here.
+
+\subsection{Larger scale P2P games}
+
+Presented here was a basic implementation of a reduced rule-set version of the board game Risk. However, many other games exist that the same transformation could be applied to. Games of larger scale with a similar structure, such as Unciv, could benefit from peer-to-peer networking implemented in a similar manner.
+
+This is not without its downsides: I found that the complexity of P2P networking is far greater than that of a standard centralised model. This would be a considerable burden on developers, and could hurt the performance of such a game. The time taken to process and verify proofs also makes this inapplicable to real-time games.
+
+\subsection{Decentralised social media}
+
+\cite{damgard2003} uses Paillier to implement electronic voting. Whilst electronic voting for political candidates has fallen out of favour, %todo cite
+the schemes could still be useful in the context of a decentralised social media platform. Such a platform may use ZKPs as a way to allow for "private" profiles: the content of a profile may stay encrypted, but ZKPs could be used as a way to allow certain users to view private content in a manner that allows for repudiation, and prevents one user from sharing private content with unauthorised users.
+
+The obvious issue is P2P data storage. Users could host their own platforms, but this tends to lead to low adoption due to the complexity involved for ordinary users. IPFS is a P2P data storage protocol that could be considered. It has the advantage that users with large amounts of data can store it themselves, while other users can mirror data to protect against outages. The available storage grows as more users join the network.
+
+\subsection{Handling of confidential data}
+
+The ability to prove the contents of a dataset to a second party without guaranteeing authenticity to a third party is another potential application of the protocol presented. Handling of confidential data is a critical concern for pharmaceutical companies, where a data leak imposes serious legal and competitive consequences for the company. A second party does however need some guarantee that the data received is correct. Proofs are one way of achieving this, although other techniques such as keyed hashing may be more effective.
+
+Another consideration in this domain is the use of fully-homomorphic encryption schemes to allow a third party to process data without actually viewing it.
+
+\section{Limitations}
+
+Finally, I present the limitations that I encountered.
+
+\subsection{JavaScript related}
+
+To summarise, JavaScript was the incorrect choice of language for this project. Whilst the event-based methodology was useful, I believe overall that JavaScript hampered development.
+
+JavaScript is a slow language. Prime generation takes a considerable amount of time, and encryption and decryption are likewise slower than they would be in an optimising compiled language.
+
+JavaScript's type system makes debugging difficult. It is somewhat obvious that this problem is far worse in systems with more interacting parts, which this project certainly was. TypeScript may have been a suitable alternative, but most likely the easiest solution was to avoid both and use a language designed with stronger typing in mind from the outset (even Python would likely have been easier, as there is at least no issue of \texttt{undefined}ness, and the language was designed with objects in mind from the start).
+
+\subsection{General programming}
+
+Peer-to-peer programming requires far more care than client-server programming. This makes development slower and more bug-prone. As a simple example, consider the action of taking a turn in Risk. In the peer-to-peer implementation presented, each separate peer must keep track of how far into a turn a player is, check whether a certain action would end their turn (or whether it is invalid), contribute to verifying proofs, and contribute to generating randomness for dice rolls. In a client-server implementation, the server alone could handle a turn, and could then propagate the results to the other clients in a single predictable request.
+
+\subsection{Resources}
+
+The peer-to-peer implementation requires more processing power and more bandwidth on each peer than a client-server implementation would. This is its main limitation. The program ran in a reasonable time, using a reasonable amount of resources, on the computers I had access to, but these are not representative of the majority of users. Greater processing power means greater power consumption, which is undesirable. In a client-server implementation, even with an extra computer, I predict that the total power consumption would be lower than that of the peer-to-peer implementation presented. %todo justify

 \bibliography{Dissertation}