This commit is contained in:
jude 2023-04-14 10:55:20 +01:00
parent d3e309c1e3
commit 0f8ad2a0a8
4 changed files with 48 additions and 6 deletions

View File

@@ -70,6 +70,10 @@ class Ciphertext {
return "0x" + this.cipherText.toString(16);
}
toJSON() {
return "0x" + this.cipherText.toString(16);
}
prove() {
return new ValueProofSessionProver(this);
}
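Since `toJSON` is a standard `JSON.stringify` hook, adding it means ciphertexts nested inside objects serialise to their hex form automatically. A minimal sketch of that behaviour, with the `Ciphertext` class reduced to just the fields used here:

```javascript
// Reduced sketch of the Ciphertext class above: JSON.stringify calls
// toJSON() on any object that defines it, so nested ciphertexts
// serialise to their "0x..." hex form without special handling.
class Ciphertext {
  constructor(cipherText) {
    this.cipherText = cipherText; // BigInt
  }
  toString() {
    return "0x" + this.cipherText.toString(16);
  }
  toJSON() {
    return "0x" + this.cipherText.toString(16);
  }
}

console.log(JSON.stringify({ a: new Ciphertext(255n) })); // {"a":"0xff"}
```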

View File

@@ -23,12 +23,14 @@ function cryptoShuffle(l) {
window.cryptoShuffle = cryptoShuffle;
const ROUNDS = 24;
function proveRegions(regions) {
// Construct prover coins
let coins = [];
let regionNames = Object.keys(regions);
- for (let x = 0; x < 40; x++) {
+ for (let x = 0; x < ROUNDS; x++) {
let psi = cryptoShuffle(structuredClone(regionNames)).join("");
let newRegions = structuredClone(regions);
// rearrange keys
@@ -36,7 +38,7 @@ function proveRegions(regions) {
let c = regions[psi[index]].clone();
// re-blind
c.update(c.pubKey.encrypt(0n));
- newRegions[regionNames[index]] = c.toString();
+ newRegions[regionNames[index]] = c;
}
coins.push(newRegions);
}
@@ -44,10 +46,25 @@ function proveRegions(regions) {
// Construct verifier coins
let hasher = new jsSHA("SHA3-256", "TEXT");
hasher.update(JSON.stringify(coins));
+ let hash = hasher.getHash("UINT8ARRAY");
- console.log(hasher.getHash("UINT8ARRAY"));
let verifierCoins = [];
for (let i = 0; i < ROUNDS / 8; i++) {
let v = hash[i];
for (let j = 0; j < 8; j++) {
verifierCoins.push(v & 1);
v >>= 1;
}
}
// Construct prover proofs
for (let coin of verifierCoins) {
if (coin === 1) {
// Reveal bijection and proof for zero
} else {
// Reveal proof for plaintext
}
}
}
window.proveRegions = proveRegions;
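The verifier-coin derivation added above (24 coins from the first three bytes of the SHA3-256 digest, least-significant bit first) can be factored into a small helper. This is a sketch of the same bit extraction, not part of the committed code:

```javascript
// Expand the first (rounds / 8) bytes of a digest into single-bit
// verifier coins, least-significant bit first, as in proveRegions.
function coinsFromHash(hash, rounds) {
  const coins = [];
  for (let i = 0; i < rounds / 8; i++) {
    let v = hash[i];
    for (let j = 0; j < 8; j++) {
      coins.push(v & 1);
      v >>= 1;
    }
  }
  return coins;
}

console.log(coinsFromHash([0b10110100], 8)); // [0, 0, 1, 0, 1, 1, 0, 1]
```

With `ROUNDS = 24`, exactly three digest bytes are consumed, so the challenge space is $2^{24}$ per proof.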

Binary file not shown.

View File

@@ -530,11 +530,17 @@ On the other hand, \hyperref[protocol1]{Protocol~\ref*{protocol1}} requires mult
This could be overcome by reducing the number of rounds, which comes at the cost of increasing the probability of cheating. In a protocol designed to facilitate only a single game session, this may be acceptable to the parties involved. For example, reducing the number of rounds to 19 will increase the chance of cheating to $\left(\frac{1}{2}\right)^{19} \approx 1.9 \times 10^{-6}$, but the size would reduce considerably to $\sim$770kB.
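For clarity, the round-count trade-off follows directly from the single-bit challenge in each round: a cheating prover survives one round with probability $\frac{1}{2}$, so over $r$ independent rounds,

```latex
% Soundness error after r rounds of a single-bit challenge:
\[
  \Pr[\text{undetected cheating}] = \left(\tfrac{1}{2}\right)^{r},
  \qquad
  \left(\tfrac{1}{2}\right)^{19} = \frac{1}{524288} \approx 1.9 \times 10^{-6}.
\]
```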
- This is all in an ideal situation without compression or signatures: in the implementation presented, the serialisation of a ciphertext is larger than this, since it serialises to a string of the hexadecimal representation and includes a digital signature for authenticity. Compression shouldn't be expected to make a considerable difference, as the ciphertexts should appear approximately random.
+ This is all in an ideal situation without compression or signatures: in the implementation presented, the serialisation of a ciphertext is larger than this, since it serialises to a string of the hexadecimal representation and includes a digital signature for authenticity.
- The size of the proof of zero communication is, in total, $3290 + 1744 + 2243$ characters, i.e.\ $\sim$7.3kB. This is about 2--3 times larger than the ideal size. A solution to this is to use a more compact format, for example msgpack \cite{msgpack} (which also has native support for binary literals).
+ The size of the proof of zero communication is, in total, $3290 + 1744 + 2243$ characters, i.e.\ $\sim$7.3kB. This is about 2--3 times larger than the ideal size. I discuss some solutions to this below.
- This only considers the network footprint. The other consideration is the memory footprint. The proof of zero requires auxiliary memory beyond the new values communicated. In particular, it must clone the ciphertext being proven, in order to prevent mutating the original ciphertext when multiplying by $g^{-m}$.
\textbf{Compression.} One solution is to use string compression. String compression can reduce the size considerably: although the ciphertexts are approximately random, their hex digits occupy only a small fraction of the UTF-8 character space. LZ-String, a popular JavaScript string compression library, can reduce the size of a single hex-encoded ciphertext to about 35\% of its original size.
\textbf{Message format.} Another solution is to use a more compact message format, for example msgpack \cite{msgpack} (which also has native support for binary literals).
\textbf{Smaller key size.} The size of ciphertexts depends directly on the size of the key. Using a smaller key will reduce the size of the ciphertexts linearly.
+ This only considers the network footprint. The other consideration is the memory footprint. The proof of zero requires auxiliary memory beyond the new values communicated. In particular, it must clone the ciphertext being proven, in order to prevent mutating the original ciphertext when multiplying by $g^{-m}$. %todo
\subsubsection{Time complexity}
@@ -588,6 +594,21 @@ The other proofs do not translate so trivially to this structure however. In fac
\textbf{Optimising language.} An optimising language may reduce the time taken to encrypt. In the browser, this could involve using WASM as a way to execute compiled code, although WASM does not always outperform JavaScript.
\subsection{Complexity results}
All measurements were taken on Brave 1.50.114 (Chromium 112.0.5615.49) 64-bit, using a Ryzen 5 3600 CPU.
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
Modulus size & Na\"ive encrypt & Jacobi encrypt \\\hline
$n = 1024$ & cell5 & 4ms \\
$n = 2048$ & cell8 & 22ms \\
$n = 4096$ & cell8 & 128ms \\
\hline
\end{tabular}
\end{center}
\subsection{Quantum resistance}
The security of Paillier relies upon the difficulty of factoring large numbers \cite{paillier1999public}. It is therefore vulnerable to the same quantum threat as RSA, which is described in \cite{shor_1997}. Alternative homomorphic encryption schemes are available which are widely believed to be quantum-resistant, as they are based on lattice methods (e.g.\ \cite{fhe}).