This commit is contained in:
jude 2023-04-23 17:23:42 +01:00
parent 07269df66d
commit 88cf76f815
3 changed files with 50 additions and 40 deletions

View File

@ -4,7 +4,7 @@ import { RsaPubKey } from "../crypto/rsa.js";
import { PaillierPubKey, ReadOnlyCiphertext } from "../crypto/paillier.js";
import { Region } from "./map.js";
import { showDefenseDom } from "./dom.js";
- import { proveRegions, verifyRegions } from "./proofs.js";
+ import { proveRange, proveRegions, verifyRegions } from "./proofs.js";
// Timeout to consider a player disconnected
const TIMEOUT = 30_000;
@ -306,10 +306,9 @@ export class Player {
// Handle region loss
} else {
// Prove we still control the region
+ let proof = proveRange(defender.strength.cipherText, 2n ** 32n);
}
- }
- if (this === game.us) {
+ } else if (this === game.us) {
if (defender.strength.assumedStrength === 0n) {
// Handle region gain
defender.owner = this;
@ -318,9 +317,9 @@ export class Player {
defender.name
);
}
- }
+ } else {
await defender.resolveConflict();
+ }
// Reset the promises in case they attack again.
defender.owner.defenderPromise = null;

Binary file not shown.

View File

@ -300,18 +300,19 @@ Another approach to the problem is to use set membership, which is a widely cons
The implementation provided uses WebSockets as the communication primitive. This is therefore a centralised implementation. However, no verification occurs in the server code, which instead simply "echoes" messages received to all connected clients.
Despite this approach being centralised, it does emulate a fully peer-to-peer environment, and has notable benefits: \begin{itemize}
- \item It is faster to develop, use, and test than using a physical system such as mail;
- \item There is no need for hole-punching or port-forwarding;
+ \item There is no need for hole-punching or port-forwarding.
\item WebSockets are highly flexible in how data is structured and interpreted.
\end{itemize}
In particular, the final point allows for the use of purely JSON messages, which are readily parsed and processed by the client-side JavaScript.
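The echo behaviour described above can be sketched with the transport abstracted away (this is an illustrative sketch, not the project's actual server code; `makeRelay` and the fake clients are assumed names). The relay performs no verification: it only checks that a message is well-formed JSON, then forwards it to every connected client.

```javascript
// Minimal sketch of the "echo" relay: no verification, just fan-out of
// JSON messages to all connected clients.
function makeRelay() {
  const clients = new Set();
  return {
    connect(client) { clients.add(client); },
    disconnect(client) { clients.delete(client); },
    broadcast(message) {
      JSON.parse(message); // relay only well-formed JSON; throws otherwise
      for (const client of clients) client.send(message);
    },
  };
}

// Two fake clients that simply record what they receive.
const received = [];
const relay = makeRelay();
relay.connect({ send: (m) => received.push(m) });
relay.connect({ send: (m) => received.push(m) });
relay.broadcast(JSON.stringify({ type: "READY" }));
```

In the real implementation the `send` calls would go over WebSocket connections; the relay itself never interprets game state.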
+ The game is broken down into three main stages, each of which handles events in a different way. These are shown below.
\begin{landscape}\begin{tikzpicture}[every node/.style={anchor=north west}]
% Create outlines
\node[
rectangle,
- dashed,
+ dotted,
draw,
minimum width=0.5\hsize-4pt,
minimum height=0.5\textheight-4pt,
@ -322,7 +323,7 @@ In particular, the final point allows for the use of purely JSON messages, which
\node[
rectangle,
- dashed,
+ dotted,
draw,
minimum width=0.5\hsize-4pt,
minimum height=0.5\textheight-4pt,
@ -333,7 +334,7 @@ In particular, the final point allows for the use of purely JSON messages, which
\node[
rectangle,
- dashed,
+ dotted,
draw,
minimum width=0.5\hsize-4pt,
minimum height=\textheight-2pt,
@ -375,6 +376,16 @@ In particular, the final point allows for the use of purely JSON messages, which
% Player ready handling
\node[draw=blue!50,rectangle,very thick,rounded corners=0.1mm,anchor=north] (Ready) at (170pt, 80pt) {Player becomes ready};
+ \node[draw=black!50,rectangle,fill=white,very thick,rounded corners=0.1mm,anchor=north] (MoveStage1) at (170pt, 10pt) {Update game stage};
+ \node[draw=green!50,rectangle,very thick,rounded corners=0.1mm,anchor=north] (Random1) at (170pt, -22pt) {Decide first player};
+ \draw[very thick,dashed,->] (Ready) -- node[right] {All players ready} (MoveStage1);
+ \draw[very thick,->] (MoveStage1) -- (Random1);
+ % Player connect handling
+ \node[draw=blue!50,rectangle,very thick,rounded corners=0.1mm,anchor=north] (Act1) at (56pt, -50pt) {Player acts};
\end{tikzpicture}\end{landscape}
\subsection{Message structure}
@ -510,7 +521,7 @@ A large part of Risk involves random behaviour dictated by rolling some number o
This is achieved through bit-commitment and properties of $\mathbb{Z}_n$. The protocol for two peers is as follows, and generalises to $n$ peers.
- \begin{protocol}[Shared random values]
+ \begin{protocol}[Shared random values]\label{protocol2}
\begin{center}
\begin{tikzpicture}[
every node/.append style={very thick,rounded corners=0.1mm}
@ -617,8 +628,7 @@ The prover responds with the fields \texttt{conjecture: int} and \texttt{a: str}
The prover then waits on an event listener to respond to the verifier's challenge in a non-blocking way when received.
- The verifier receives the message above, and responds with a random challenge selected by generating a cryptographically secure pseudorandom number of 2048 bits, and then dropping the LSB. Using 2047 bits guarantees that the challenge is smaller than $p$ or $q$, as is suggested in the original paper. %todo why?
- The verifier then waits on an event listener to receive the prover's proof.
+ The verifier receives the message above, and responds with a random challenge selected by generating a cryptographically secure pseudorandom number of 2048 bits. The verifier then waits on an event listener to receive the prover's proof.
Verifying the proof is a simple application of the extended Euclidean algorithm to check coprimality, and a modular exponentiation and reduction to check the final equivalence. The ciphertext on the verifier's instance is then tagged with the proven plaintext (should the proof succeed). This tag is removed in the case that the ciphertext is updated.
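The two checks described, coprimality and modular exponentiation, reduce to a pair of BigInt helpers. These are illustrative implementations (not the project's actual code) of the standard Euclidean algorithm and square-and-multiply exponentiation.

```javascript
// Coprimality check: gcd(a, b) === 1n means a and b share no factor.
const gcd = (a, b) => (b === 0n ? a : gcd(b, a % b));

// Modular exponentiation by repeated squaring, avoiding huge intermediates.
function modPow(base, exp, mod) {
  let result = 1n;
  base %= mod;
  while (exp > 0n) {
    if (exp & 1n) result = (result * base) % mod; // multiply in set bits
    base = (base * base) % mod;                   // square for the next bit
    exp >>= 1n;
  }
  return result;
}
```

For example, `gcd(35n, 64n)` is `1n` (coprime), and `modPow(4n, 13n, 497n)` is `445n`; the verifier performs checks of exactly this shape against the Paillier modulus.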
@ -636,17 +646,19 @@ Players should prove a number of properties of their game state to each other to
\item The number of units moved when fortifying.
\end{enumerate}
- (2) and (4) are both covered by the proof above. (3) is okay between two players, as it is a subcase of (2). But in the case of more players, the availability of units should be proven. One way to achieve this is with a range proof.
+ (2) and (4) are both covered by the proof above. (3) is okay between two players, as it is a subcase of (2). But in the case of more players, the availability of units should be proven. One way to achieve this is with a range proof. Similarly, (5) requires guarantees that the number of units moved is valid, which can be performed as a range proof.
- \cite[Section~2]{bcdg1987} demonstrates a proof that some given ciphertext lies within an interval $[-\ell, 2\ell]$, where $\ell$ is some public value. This proof can easily be manipulated into a proof that a value lies within the interval $[n, 3\ell + n]$ from the additive homomorphic property. By selecting a sufficiently high $\ell$ and appropriate $n$, this proof is appropriate for proving to other players that the number of units being used in an attack is valid.
\subsection{Range proof}
- \cite{bcdg1987}'s proof is a multi-round proof more similar in structure to the graph isomorphism proof presented in \cite{10.1145/116825.116852}. We select public parameter $\ell$ to be some sufficiently high value that a player's unit count should not exceed during play: an appropriate choice may be 1000. Select $n$ as the number of units that the player is defending with, or in the case of attacking, let $n$ be the number of units that the player is attacking with plus 1 (as is required by the rules of Risk). %todo
+ \cite[Section~2]{bcdg1987} demonstrates a proof that an encryption of a plaintext in the interval $[0, \ell]$ lies within the interval $[-\ell, 2\ell]$, where $\ell$ is some well-known value. So, the soundness and completeness intervals of this proof are not the same.
+ Through selection of specific private inputs, a prover can create a proof for a plaintext $m$ in the soundness interval and not the completeness interval. In this case, the proof is also not in zero-knowledge, as the verifier can infer more specific information on the value of $m$.
+ An alternative approach that is in zero-knowledge with acceptable soundness/completeness is to use a set membership proof for a set of all allowable values. However, this requires too much processing to be effective in this application.
\subsection{Cheating with negative values}
- Using just the additive homomorphic property to guarantee (1) opens up the ability for a player to cheat by using negative values. This is a severe issue, as potentially the cheat could be completely unnoticed even in the conclusion of the game. To overcome this, we need a new protocol that is still in zero-knowledge, but proves a different property of a player's move.
+ Using just the additive homomorphic property to guarantee (1) opens up the ability for a player to cheat by using negative values. This is a severe issue, as the cheat could potentially go completely unnoticed even at the conclusion of the game. To overcome this, we want a new protocol that is still in zero-knowledge, but proves a different property of a player's move.
One consideration is to use a range proof as above. The full proof would then be the combination of a proof that the sum of all ciphertexts is 1, and that the range of each ciphertext is as tight as possible, which is within the range $[0, 3]$. This is acceptable in the specific application; however, we can achieve a better proof that is similar in operation to \cite{Boudot2000EfficientPT}.
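The property being established can be stated in the clear for illustration (the actual protocol proves it over Paillier ciphertexts without revealing any plaintext; `isValidMove` is an assumed name): the plaintexts must sum to 1 and each must lie in $[0, 3]$, which rules out the negative-value cheat.

```javascript
// In-the-clear statement of the proven property: exactly one unit is
// committed overall, and no individual value is negative.
const isValidMove = (plaintexts) =>
  plaintexts.reduce((acc, x) => acc + x, 0n) === 1n &&
  plaintexts.every((x) => x >= 0n && x <= 3n);

const honest = isValidMove([0n, 1n, 0n]);
// The cheat: [2, -1, 0] also sums to 1, so a sum check alone passes it.
const cheat = isValidMove([2n, -1n, 0n]);
```

This shows why the sum-is-1 proof alone is insufficient: the second vector satisfies the sum constraint but fails the range constraint.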
@ -691,7 +703,7 @@ Additionally, we can consider this protocol perfect zero-knowledge.
\item Choose random $(r_i^*)'$ from the random oracle.
\item Encrypt under $P$'s public key.
\item Verifier picks $c$ as before.
- \item Perform proofs of zero, which are also perfect zero-knowledge \cite{damgard2003}.
+ \item Perform proofs of zero, which are also perfect zero-knowledge under the honest-verifier assumption \cite[Lemma~3]{damgard2003}.
\end{enumerate}
This gives $T^*$ such that $T^*(S) = T(P, V, S)$, and the output distributions are identical. Hence, this proof is perfect zero-knowledge under the random oracle model.
@ -709,7 +721,7 @@ Firstly, the set being proven on changes form to $k, -k, 0, \dots, 0$, for a mov
It is preferred that these proofs can be performed with only a few communications: this issue is particularly prevalent here, as this protocol requires multiple rounds to complete. The independence of each round from the next is a beneficial property, as it means the proof can be performed in parallel: the prover transmits \textit{all} of their $\psi$'s, then the verifier transmits all of their challenges. However, there remains the issue of performing proofs of zero.
- We can apply the Fiat-Shamir heuristic to make proofs of zero non-interactive \cite{fiatshamir}. In place of a random oracle, we use a cryptographic hash function. We take the hash of some public parameters to prevent cheating by searching for some values that hash in a preferable manner. In this case, selecting $e = H(g, m, a)$ is a valid choice. To get a hash of desired length, an extendable output function such as SHAKE256 could be used \cite{FIPS202}. The library jsSHA \cite{jssha} provides an implementation of SHAKE256 that works within a browser.
+ We can apply the Fiat-Shamir heuristic to make proofs of zero non-interactive \cite{fiatshamir}. In place of a random oracle, we use a cryptographic hash function. We take the hash of some public parameters to prevent cheating by searching for some values that hash in a preferable manner. In this case, selecting $e = H(g, m, a)$ is a valid choice. To get a hash of the desired length, an extendable-output function such as SHAKE256 can be used \cite{FIPS202}. The library jsSHA \cite{jssha} provides an implementation of SHAKE256 that works within a browser.
\chapter{Review}
@ -735,16 +747,16 @@ The proof of zero is honest-verifier \cite[Section~5.2]{damgard2003}. However, a
\subsection{Storage complexity}
- In this section, let $N = |n|$. This is likely one of 1024, 2048, or 4096; depending on the size of the primes used to form the modulus.
+ Let $n$ be the Paillier modulus.
- Paillier ciphertexts are constant size, each $2N$ in size (as they are taken modulo $n^2$). This is small enough for the memory and network limitations of today.
+ Paillier ciphertexts are constant size, each $2|n|$ in size (as they are taken modulo $n^2$). This is small enough for the memory and network limitations of today.
- The interactive proof of zero uses two Paillier ciphertexts (each size $2N$), a challenge of size $N$, and a proof statement of size $N$. In total, this is a constant size of $6N$.
+ The interactive proof of zero uses two Paillier ciphertexts (each size $2|n|$), a challenge of size $|n|$, and a proof statement of size $|n|$. In total, this is a constant size of $6|n|$.
- On the other hand, the non-interactive variant needs not communicate the challenge (as it is computed as a function of other variables). So the non-interactive proof size is $5N$.
+ On the other hand, the non-interactive variant need not communicate the challenge (as it is computed as a function of other variables). So the non-interactive proof size is $5|n|$.
The non-interactive \hyperref[protocol1]{Protocol~\ref*{protocol1}} requires multiple rounds. Assume that we use 48 rounds: this provides a good level of soundness, with a cheat probability of $\left(\frac{1}{2}\right)^{48} \approx 3.6 \times 10^{-15}$. Additionally, assume that there are five regions to verify. Each prover round then requires five Paillier ciphertexts, and each verifier round five non-interactive proofs of zero plus some negligible amount of additional storage for the bijection.
- This results in a proof size of $(10N + 10N) \times 48 = 960N$. For key size $N = 2048$, this is $240kB$. This is a fairly reasonable size for memory and network, but this value may exceed what can be placed within a processor's cache, leading to potential slowdown during verification.
+ This results in a proof size of $(10|n| + 10|n|) \times 48 = 960|n|$. For key size $|n| = 2048$, this is $240$kB. This is a fairly reasonable size for memory and network, but this value may exceed what can be placed within a processor's cache, leading to potential slowdown during verification.
This could be overcome by reducing the number of rounds, which comes at the cost of increasing the probability of cheating. In a protocol designed to only facilitate a single game session, this may be acceptable to the parties involved. For example, reducing the number of rounds to 24 will increase the chance of cheating to $\left(\frac{1}{2}\right)^{24} \approx 6.0 \times 10^{-8}$, but the size would reduce by approximately half.
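The arithmetic above can be checked directly: $960|n|$ bits with $|n| = 2048$ comes to 240 kB, and 48 rounds at a per-round cheating probability of $\frac{1}{2}$ gives roughly $3.6 \times 10^{-15}$.

```javascript
// Proof size: (10|n| + 10|n|) bits per round over 48 rounds = 960|n| bits.
const modulusBits = 2048n; // |n|
const rounds = 48n;
const totalBits = (10n * modulusBits + 10n * modulusBits) * rounds; // 960|n|
const totalKiB = totalBits / 8n / 1024n; // bits -> bytes -> kB

// Soundness: a cheating prover survives each round with probability 1/2.
const cheatProbability = 0.5 ** 48;
```

Halving the rounds to 24 halves `totalBits` but raises `cheatProbability` to `0.5 ** 24`, matching the trade-off described.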
@ -864,24 +876,23 @@ All measurements were taken on Brave 1.50.114 (Chromium 112.0.5615.49) 64-bit, u
\chapter{Wider application}
- Peer-to-peer software is an area of software that has fallen somewhat out of interest in more recent years, as online service providers can afford to run their own centralised servers (although no doubt interest still exists: some users are preferring federated services over centralised services, such as Mastodon, Matrix, XMPP).
- However, peer-to-peer solutions still have many benefits to end users: mainly being greater user freedom. I believe that the content presented here shows clear ways to expand peer-to-peer systems, and reduce dependence on centralised services.
+ Peer-to-peer software solutions have many benefits to end users: mainly greater user freedom. I believe that the content presented here shows clear ways to extend peer-to-peer infrastructure, and reduce dependence on centralised services.
I propose some ideas which could build off the content here.
- \subsection{Larger scale P2P games}
+ \section{Larger scale P2P games}
- Presented here was a basic implementation of a reduced rule-set version of the board game Risk. However, many other games exist that the same transformation could be applied to. Games of larger scale with a similar structure, such as Unciv, could benefit from peer-to-peer networking implemented in a similar manner.
+ Many other games exist that the ideas presented could be applied to. Games of larger scale with a similar structure, such as Unciv, could benefit from peer-to-peer networking implemented in a similar manner. In particular, \hyperref[protocol2]{Protocol~\ref*{protocol2}} would form an intrinsic part of such games.
- This is not without its downsides: I found that the complexity of P2P networking is far greater than a standard centralised model. This would be a considerable burden on the developers, and could hurt the performance of such a game. The time taken to process and verify proofs also makes this inapplicable to games that are real-time.
+ The downside of this is that the complexity of P2P networking is far greater than that of a standard centralised model. This would be a considerable burden on the developers, and could hurt the performance of such a game. The time taken to process and verify proofs also makes this inapplicable to real-time games.
- \subsection{Decentralised social media}
+ \section{Decentralised social media}
The schemes presented here could be applied to the concept of a decentralised social media platform. Such a platform may use zero-knowledge proofs as a way to allow for "private" profiles: the content of a profile may stay encrypted, but zero-knowledge proofs could be used as a way to allow certain users to view private content in a manner that allows for repudiation, and disallows one user from sharing private content with unauthorised users.
- The obvious issue is P2P data storage. Users could host their own platforms, but this tends to lead to low adoption due to complexity for normal people. IPFS is a P2P data storage protocol that could be considered. This poses an advantage that users can store their own data, if they have a large amount, but other users can mirror data effectively to protect against outages. The amount of storage can grow effectively as more users join the network.
+ To store data, IPFS could be used. IPFS is a P2P data storage protocol. This has the advantage that users can store their own data, if they have a large amount, but other users can mirror data to protect against outages or users going offline. The amount of effective storage would also grow as more users join the network.
- \subsection{Handling of confidential data}
+ \section{Handling of confidential data}
The ability to prove the contents of a dataset to a second party without guaranteeing authenticity to a third party is another potential application of the protocol presented. Handling of confidential data is a critical concern for pharmaceutical companies, where a data leak imposes serious legal and competitive consequences for the company. A second party does however need some guarantee that the data received is correct. Proofs are one way of achieving this, although other techniques such as keyed hashing may be more effective.
@ -889,9 +900,9 @@ Another consideration in this domain is the use of homomorphic encryption scheme
\chapter{Limitations}
- Finally, I present a summary of other limitations that I encountered.
+ Finally, I present a summary of general limitations that I encountered.
- \subsection{JavaScript}
+ \section{JavaScript}
JavaScript was the incorrect choice of language for this project. Whilst the event-based methodology was useful, I believe overall that JavaScript made development much more difficult.
@ -901,15 +912,15 @@ JavaScript's type system makes debugging difficult. It is somewhat obvious that
JavaScript is a re-entrant language: this means that the interpreter does not expose threads or parallelism to the developer, but it may still use threads under the hood and switch contexts to handle new events. This introduces the possibility of race conditions despite no explicit threading being used. The re-entrant nature is however beneficial to a degree, as it means that long-running code won't cause the WebSocket to close or block other communications from being processed.
- \subsection{General programming}
+ \section{General programming}
Peer-to-peer programming requires a lot more care than client-server programming. This makes development far slower and far more bug-prone. As a simple example, consider the action of taking a turn in Risk. In the peer-to-peer implementation presented, each separate peer must keep track of how far into a turn a player is, check if a certain action would end their turn (or if it's invalid), contribute to verifying proofs, and contribute to generating randomness for dice rolls. In a client-server implementation, the server would be able to handle a turn by itself, and could then propagate the results to the other clients in a single predictable request.
- The use of big integers leads to peculiar issues relating to signedness. This is in some ways a JavaScript issue, but would also be true in other languages. Taking modulo $n$ of a negative number tends to return a negative number, rather than a number within the range $[0, n]$. This leads to inconsistencies when calculating the GCD or finding Bezout coefficients. In particular, this became an issue when trying to validate proofs of zero, as the GCD returned $-1$ rather than $1$ in some cases. Resolving this simply required changing the update and encrypt functions to add the modulus until the representation of the ciphertext was signed correctly. Whilst the fix for this was simple, having to fix this in the first place is annoying, and using a non-numerical type (such as a byte stream) may resolve this in general.
+ The use of big integers leads to peculiar issues relating to signedness. Taking modulo $n$ of a negative number tends to return a negative number, rather than a number within the range $[0, n)$. This leads to inconsistencies when calculating the GCD or finding Bezout coefficients. In particular, this became an issue when trying to validate proofs of zero, as the GCD returned $-1$ rather than $1$ in some cases. Resolving this simply required changing the update and encrypt functions to add the modulus until the representation of the ciphertext was signed correctly. Using a non-numerical type (such as a byte array) may resolve this issue in general.
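The signedness issue described can be demonstrated in two lines: JavaScript's `%` operator keeps the sign of the dividend, so reducing a negative BigInt does not produce a representative in $[0, n)$. The `mod` helper below is an illustrative fix, not the project's actual code.

```javascript
// JavaScript's % is a remainder, not a mathematical modulo:
const remainder = -7n % 5n; // -2n, not 3n

// Canonical reduction: add n back once, then reduce again, so the
// result always lies in [0, n).
const mod = (a, n) => ((a % n) + n) % n;
const canonical = mod(-7n, 5n); // 3n
```

Feeding `-2n` instead of `3n` into a GCD or Bezout computation is exactly how the $-1$-instead-of-$1$ GCD results described above arise.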
- \subsection{Resources}
+ \section{Resources}
- The peer-to-peer implementation requires more processing power and more bandwidth on each peer than a client-server implementation would. This is the main limitation of the peer-to-peer implementation. The program ran in a reasonable time, using a reasonable amount of resources on the computers I had access to, but these are not representative of the majority of people. Using greater processing power increases power consumption, which is definitely undesirable. In a client-server implementation, even with an extra computer, I predict that the power consumption should be lower than the peer-to-peer implementation presented. %todo justify
+ The peer-to-peer implementation requires more processing power and more bandwidth on each peer than a client-server implementation would. This is the main limitation of the peer-to-peer implementation. The program ran in a reasonable time, using a reasonable amount of resources, on the computers I had access to, but these are not representative of the majority of people. Using greater processing power increases power consumption, which is undesirable. In a client-server implementation, the power consumption should be lower than in the peer-to-peer implementation presented, as no processing time is spent validating proofs or using the Paillier cryptosystem, which is less efficient than the hybrid cryptosystems used in standard online communication.
\bibliography{Dissertation}