update copyright notice on lzstring

2023-05-02 14:39:05 +01:00
parent b59ced8fa6
commit e53a4405bc
4 changed files with 25 additions and 321 deletions

Binary file not shown.


@@ -878,7 +878,7 @@ Validating $E(m)$ is done with the proof of zero. Then it remains to prove that
\end{enumerate}
\end{protocol}
-The downside of this proof compared to the BCDG proof \cite{bcdg1987} is that the time to perform and verify it grows linearly with $|m|$. However, in most cases $|m|$ should be ``small'': i.e., $|m| \leq 6$, as Risk unit counts rarely exceed 64 on a single region.
+The downside of this proof compared to the BCDG proof \cite{bcdg1987} is that the time to perform and verify it grows linearly with $|m|$. However, in most cases $|m|$ should be ``small'': i.e., $|m| \leq 6$, as Risk unit counts rarely exceed 64 on a single region. In fact, to prevent the bit length from revealing additional information, in practice this protocol pads numbers out to a minimum of 8 bits, which in our application makes it constant-time.
The range proof is used in points (3), (4), and (5). In (3), this is to convince other players that the number of units is sufficient for the action. In (4), this is to show that the region is not totally depleted. In (5), this is to ensure the number of units being fortified is less than the strength of the region. All of these are performed using \hyperref[protocol4]{Protocol~\ref*{protocol4}}, after first using the additive homomorphic property to subtract the lower bound of the range from $m$.
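As a purely illustrative sketch of why the 8-bit padding makes the proof constant-time (the exact steps are those of \hyperref[protocol4]{Protocol~\ref*{protocol4}}; the values below are invented): to show that an encrypted count $m = 5$ is at least $a = 2$, the additive homomorphic property first yields $E(m - a) = E(m) \cdot E(a)^{-1} \bmod n^2$, and the padded difference is then handled bit by bit,
\[
m - a = 3 = \sum_{i=0}^{7} b_i 2^i, \qquad (b_7, b_6, \ldots, b_0) = (0, 0, 0, 0, 0, 0, 1, 1),
\]
so every in-range value costs exactly eight bit proofs, independent of $m$.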
@@ -1093,7 +1093,7 @@ The interactive proof of zero uses two Paillier ciphertexts (each size $2|n|$),
On the other hand, the non-interactive variant need not communicate the challenge (as it is computed as a function of other variables). The non-interactive proof size is therefore $5|n|$.
The non-interactive \hyperref[protocol1]{Protocol~\ref*{protocol1}} requires multiple rounds. Assume that we use 48 rounds: this provides a good level of soundness, with a cheating probability of $\left(\frac{1}{2}\right)^{48} \approx 3.6 \times 10^{-15}$. Additionally, assume that there are five regions to verify. Each prover round then requires five Paillier ciphertexts, and each verifier round five non-interactive proofs of zero plus a negligible amount of additional storage for the bijection.
-This results in a proof size of $(10|n| + 10|n|) \times 48 = 960|n|$. For key size $|n| = 2048$, this is $240kB$. This is a fairly reasonable size for memory and network, but risks exceeding what can be placed within a processor's cache, leading to potential slowdown during verification.
+This results in a proof size of $(10|n| + 10|n|) \times 48 = 960|n|$. For key size $|n| = 2048$, this is 240kB. This is a fairly reasonable size for memory and network, but risks exceeding what can be placed within a processor's cache, leading to potential slowdown during verification.
This could be overcome by reducing the number of rounds, which comes at the cost of increasing the probability of cheating. In a protocol designed to facilitate only a single game session, this may be acceptable to the parties involved. For example, reducing the number of rounds to 24 would increase the chance of cheating to $\left(\frac{1}{2}\right)^{24} \approx 6.0 \times 10^{-8}$, but the size would be approximately halved.
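Spelling out the arithmetic behind these figures (using $1~\text{kB} = 1024$ bytes, which matches the 240kB stated above):
\[
960|n| = 960 \times 2048~\text{bits} = 1{,}966{,}080~\text{bits} = 245{,}760~\text{bytes} = 240~\text{kB},
\]
while halving the rounds to 24 gives $480|n| = 983{,}040$ bits $= 122{,}880$ bytes $= 120$ kB.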
@@ -1103,7 +1103,7 @@ Each of these calculations is in an ideal situation without compression or signa
\textbf{Message format.} Another solution is to use a more compact message format, for example msgpack \cite{msgpack} (which also has native support for binary literals).
-\textbf{Smaller key size.} The size of ciphertexts depends directly on the size of the key. Using a smaller key will reduce the size of the ciphertexts linearly.
+\textbf{Smaller key size.} The size of ciphertexts depends directly on the size of the key. Using a shorter key will reduce the size of the ciphertexts linearly.
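As a rough sketch of the message-format point (this is not the project's code; it assumes the @msgpack/msgpack npm package and an invented one-field message holding a single ciphertext under a 2048-bit key):

```typescript
import { encode } from "@msgpack/msgpack";
import { randomBytes } from "node:crypto";

// A Paillier ciphertext under a 2048-bit key is an element of Z_{n^2},
// i.e. 4096 bits = 512 bytes of raw data; random bytes stand in for a real one.
const ciphertext = randomBytes(512);

// JSON has no binary type, so the ciphertext must be text-encoded (hex doubles its size).
const jsonBytes = Buffer.byteLength(JSON.stringify({ c: ciphertext.toString("hex") }));

// msgpack stores the same payload as a native binary literal plus a few framing bytes.
const msgpackBytes = encode({ c: ciphertext }).byteLength;

console.log({ jsonBytes, msgpackBytes }); // roughly 1032 vs 518 bytes
```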
\subsection{Time complexity}
@@ -1293,7 +1293,7 @@ Using a language that can interact with the operating system would have further
The P2P implementation requires more processing power and more bandwidth on each peer than a client-server implementation would; this is its main limitation. The program ran in a reasonable time, using a reasonable amount of resources, on the computers I had access to, but these are not representative of the majority of computers in use today. Using greater processing power increases power consumption, which is undesirable. In a client-server implementation, the power consumption should be lower than in the P2P implementation presented here, as no processing time is spent validating proofs or using the Paillier cryptosystem, which is less efficient than the hybrid cryptosystems used in standard online communication.
-\emph{Final word count: 9,088}
+\emph{Final word count: 9,166}
\bibliography{Dissertation}