Some verification working

This commit is contained in:
jude 2023-03-24 16:53:02 +00:00
parent 07b1080b3d
commit 931b669529
6 changed files with 50 additions and 14 deletions

View File

@@ -53,6 +53,10 @@ class ProofSessionProver {
}
}
get a() {
return mod_exp(this.rp, this.cipherText.pubKey.n, this.cipherText.pubKey.n ** 2n);
}
noise() {
return mod_exp(this.rp, this.cipherText.pubKey.n, this.cipherText.pubKey.n ** 2n);
}
@@ -86,7 +90,7 @@ export class ReadOnlyCyphertext {
this.cyphertext = (this.cyphertext * c.cyphertext) % this.pubKey.n ** 2n;
}
prove(tag, plainText, a) {
prove(plainText, a) {
return new ProofSessionVerifier(this, plainText, a);
}
}

View File

@@ -164,7 +164,7 @@ document.addEventListener("PROOF", async (ev) => {
// find the relevant entity
let region = Region.getRegion(data.region);
region.prove(data.plainText, data.noise());
region.verify(BigInt(data.plainText), BigInt(data.a));
}
});

View File

@@ -1,3 +1,4 @@
import { socket } from "./main.js";
import { Packet } from "./packet.js";
const REGIONS = {};
@@ -39,9 +40,12 @@ class Strength {
const data = ev.detail;
if (data.region === region && data.stage === "CHALLENGE") {
let z = proofSessionProver.prove(data.challenge);
let z = proofSessionProver.prove(BigInt(data.challenge));
socket.emit("message", Packet.createProof(region, z));
socket.emit(
"message",
Packet.createProof(region, "0x" + z.toString(16))
);
controller.abort();
}
},
@@ -52,8 +56,8 @@ class Strength {
"message",
Packet.createProofConjecture(
region,
this.cipherText.plainText,
proofSessionProver.a
"0x" + this.cipherText.plainText.toString(),
"0x" + proofSessionProver.a.toString(16)
)
);
}
@@ -72,7 +76,7 @@ class Strength {
const data = ev.detail;
if (data.region === region && data.stage === "PROOF") {
if (proofSessionVerifier.verify(data.z)) {
if (proofSessionVerifier.verify(BigInt(data.z))) {
console.log("verified");
this.assumedStrength = plainText;
controller.abort();
@@ -86,7 +90,10 @@ class Strength {
socket.emit(
"message",
Packet.createProofChallenge(region, proofSessionVerifier.challenge)
Packet.createProofChallenge(
region,
"0x" + proofSessionVerifier.challenge.toString(16)
)
);
}
}
@@ -147,7 +154,9 @@ export class Region {
}
}
prove() {}
prove() {
this.strength.prove(this.name);
}
verify(plainText, a) {
this.strength.verify(this.name, plainText, a);

View File

@@ -154,6 +154,11 @@ export class Player {
this.totalStrength += 1;
// send proofs
for (let region of this.getRegions()) {
region.prove();
}
this.endTurn();
}

Binary file not shown.

View File

@@ -385,6 +385,10 @@ Players should prove a number of properties of their game state to each other to
For (1), we propose the following communication sequence. The player submits pairs $(R, c_R)$ for each region they control, where $R$ is the region and $c_R$ is a ciphertext encoding the number of reinforcements to add to the region (which may be 0). Each player computes $c_{R_1} \cdot \ldots \cdot c_{R_n}$.
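As an illustrative sketch of this accumulation (written in the BigInt style of the accompanying cyphertext code, and assuming a Paillier-style scheme in which multiplying ciphertexts modulo $n^2$ adds the underlying plaintexts; the function name is hypothetical):

// Illustrative only: fold the reinforcement ciphertexts c_{R_1} ... c_{R_n}
// into one ciphertext encoding the player's total committed reinforcements.
function combineReinforcements(cipherTexts, pubKey) {
    let total = 1n; // 1n stands in for a trivial encryption of zero
    for (const c of cipherTexts) {
        // Multiplying Paillier ciphertexts mod n^2 adds their plaintexts.
        total = (total * c.cyphertext) % pubKey.n ** 2n;
    }
    return total;
}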
\subsection{Cheating with negative values}
A serious concern is the ability to cheat with negative values.
\subsection{Shared random values}
A large part of Risk involves random behaviour dictated by rolling some number of dice. To achieve this, a fair protocol must be used to generate random values consistently across all peers, without any single peer being able to manipulate the outcome.
@@ -417,26 +421,40 @@ This is achieved through bit-commitment and properties of $\mathbb{Z}_n$. The pr
\end{tikzpicture}
\end{center}
Depending on how $N_A + N_B$ is then turned into a random value within a range, this system may be manipulated by an attacker who has some knowledge of how participants are generating their noise. As a basic example, suppose a random value within range is generated by taking $N_A + N_B \mod 3$, and participants are producing 2-bit noises. An attacker could submit a 3-bit noise with the most-significant bit set, in which case the odds of getting a 1 are significantly higher than the odds of a 0 or a 2. To avoid this problem, peers should agree beforehand on the number of bits to transmit, and truncate any values in the final stage that exceed this limit.
Depending on how $N_A + N_B$ is then turned into a random value within a range, this system may be manipulated by an attacker who has some knowledge of how participants generate their noise. As a basic example, suppose a random value within range is generated by taking $N_A + N_B \mod 3$, and participants are producing 2-bit noise values. An attacker could submit a 3-bit noise value with the most-significant bit set, in which case the probability of the final result being a 1 is significantly higher than the probability of a 0 or a 2. This is a typical example of modular bias. To avoid this problem, peers should agree beforehand on the number of bits to transmit. Addition of noise then operates modulo $2^\ell$, where $\ell$ is the agreed-upon number of bits.
The encryption function used must also guarantee the integrity of decrypted ciphertexts, to prevent a malicious party from creating a ciphertext which decrypts to different valid values under different keys.
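As a minimal sketch of the noise-combination side of this protocol, assuming an agreed bit length and hypothetical helpers (randomBigInt for noise generation; the commitment exchange itself is omitted):

const ELL = 64n; // agreed-upon noise bit length, in bits

function generateNoise() {
    // Hypothetical helper: an ELL-bit cryptographically random BigInt,
    // committed to (encrypted) before any peer reveals theirs.
    return randomBigInt(ELL);
}

function combineNoise(revealedNoises) {
    // Once every commitment has been opened, sum the revealed noise values
    // modulo 2^ELL, so an over-long value cannot widen the output range.
    let sum = 0n;
    for (const n of revealedNoises) {
        sum = (sum + n) % 2n ** ELL;
    }
    return sum;
}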
\begin{proposition}
The scheme shown is not manipulable by a single cheater.
With the above considerations, the scheme shown is not manipulable by a single cheater.
\end{proposition}
\begin{proof}
Suppose $P_1, \dots, P_{n-1}$ are honest participants, and $P_n$ is a cheater with desired outcome $O$.
Suppose $P_1, \dots, P_{n-1}$ are honest participants, and $P_n$ is a cheater with a desired outcome.
The encryption function $E_k$ holds the confidentiality property: that is, without $k$, $P_i$ cannot retrieve $m$ given $E_k(m)$.
In step 1, each participant $P_i$ commits $E_{k_i}(N_i)$. The cheater $P_n$ commits a constructed noise $E_{k_n}(N_n)$.
Each participant $P_i$ commits $N_i$. Then, the final value is $N_1 + \dots + N_{n-1} + N_n$.
The encryption function $E_k$ holds the confidentiality property: that is, without $k$, $P_i$ cannot retrieve $m$ given $E_k(m)$. So $P_n$'s choice of $N_n$ cannot be directed by other commitments.
The final value is dictated by the sum of all decrypted values. $P_n$ is therefore left choosing $N_n$ to control the outcome of $a + N_n$, where $a$ is selected uniformly at random from the abelian group $\mathbb{Z}_{2^\ell}$, with $\ell$ the agreed-upon bit length.
Addition of any fixed $N_n$ is a bijection on this group, so for every $c \in \mathbb{Z}_{2^\ell}$ we have $\Pr[a + N_n = c] = \Pr[a = c - N_n] = 2^{-\ell}$: the distribution of $a + N_n$ is uniform regardless of the choice of $N_n$. So $P_n$ has no control over the outcome of $a + N_n$.
\end{proof}
This extends inductively to cover up to $n-1$ cheating participants, even if they collude. Finally, we must consider how to reduce the shared random noise to useful values.
\subsection{Avoiding modular bias}
The typical way to avoid modular bias is by resampling. To avoid excessive communication, resampling can be performed within the bit sequence by partitioning it into blocks of a fixed number of bits and taking blocks until one falls within range. This is appropriate in the presented use case, as random values need only range up to 6, so the probability of consuming over 63 bits of noise when resampling for a value in the range 0 to 5 is $\left(\frac{1}{4}\right)^{21} \approx 2.3 \times 10^{-13}$.
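A minimal sketch of this resampling for a die roll, assuming the combined noise is available as a BigInt of the agreed bit length (the function name and parameters are illustrative):

// Consume the shared noise in 3-bit blocks, rejecting blocks of 6 or 7,
// so the accepted value is uniform over 0..5 with no modular bias.
function rollFromNoise(noise, bits) {
    for (let used = 0n; used + 3n <= bits; used += 3n) {
        const block = noise & 0b111n; // lowest 3 bits
        noise >>= 3n;
        if (block < 6n) {
            return block + 1n; // map 0..5 onto a die face 1..6
        }
        // block was 6 or 7: reject it and take the next block
    }
    // Exhausting 63 bits happens with probability (1/4)^21, as noted above;
    // peers could fall back to generating fresh shared noise here.
    throw new Error("shared noise exhausted");
}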
\subsection{Application to domain}
Random values are used in two places. \begin{itemize}
\item Selecting the first player.
\item Rolling dice.
\end{itemize}
\bibliography{Dissertation}
\end{document}