author    Dennis Brentjes <dennis@brentj.es>  2018-08-18 14:14:55 +0200
committer Dennis Brentjes <dennis@brentj.es>  2018-09-02 21:56:20 +0200
commit    1e316c9a7437580f499453cdafbb0c7433a46b88
Processes review comments.
\section{Results}
\label{sec:results}
\newcommand{\ec}{\emph{ec}\xspace}
\newcommand{\mg}{\emph{mg}\xspace}

In this section, \ec and \mg refer to the two implementations that we compare: elliptic-curve and multiplicative-group ElGamal respectively.
+
The raw results, which can be found in the cmix repository (see appendix \ref{app:code}), were obtained by running 3 nodes and 500 clients on the same computer. The clients and nodes operated the way you would normally see in a \cmix setup. All connections, whether node to node or client to node, are TCP connections encrypted using TLS. Each of the 500 clients prepares a message of 248 bytes for \ec or 256 bytes for \mg and sends it to the first node of the network. This is achieved either by using a 2048-bit group for \mg, or by using 31 bytes of an ed25519 group element and doing 8 mixes per run to get up to 248 bytes for \ec. The timings in the table are the average over 100 runs, together with the standard deviation of that average, for each step in the protocol. For example, \emph{prepre} denotes the precomputation step of the precomputation phase, and \emph{realpost} the postcomputation step of the realtime phase.

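As a quick sanity check, the two message sizes follow directly from the underlying group elements:

\[
  \mathit{mg}\colon\quad \frac{2048}{8} = 256 \text{ bytes},
  \qquad
  \mathit{ec}\colon\quad 8 \times 31 = 248 \text{ bytes}.
\]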
Note that these separate runs of the ed25519 protocol can trivially be parallelized, which could make \ec even more attractive by comparison. However, since we are interested in a direct comparison, this implementation does not parallelize the multiple mixes.

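The parallelization mentioned above could be sketched as follows. This is only an illustration, not code from the cmix repository: \emph{mix\_chunk} is a hypothetical stand-in for one 31-byte mix, here replaced by a placeholder transformation.

```cpp
// Sketch only: how the 8 independent ed25519 mixes could run concurrently.
// mix_chunk is a hypothetical stand-in; the real implementation runs the
// mixes sequentially to keep the comparison with mg direct.
#include <array>
#include <cassert>
#include <cstdint>
#include <functional>
#include <thread>
#include <vector>

constexpr std::size_t kChunk = 31;  // bytes of an ed25519 group element used
constexpr std::size_t kMixes = 8;   // 8 * 31 = 248-byte message

// Placeholder for one per-chunk mix (the real work would be ElGamal mixing).
void mix_chunk(std::array<std::uint8_t, kChunk>& chunk) {
    for (auto& b : chunk) b ^= 0x5a;  // stand-in transformation
}

// Run all 8 mixes concurrently; each thread operates on independent data,
// which is what makes this trivially parallelizable.
void mix_parallel(std::array<std::array<std::uint8_t, kChunk>, kMixes>& msg) {
    std::vector<std::thread> workers;
    for (auto& chunk : msg) workers.emplace_back(mix_chunk, std::ref(chunk));
    for (auto& t : workers) t.join();
}
```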
This implementation prefixes the message data with the destination id. This takes up $20$ bytes of each message, as it is the SHA-1 hash of the public key of the receiver. The effective payload is therefore $236$ bytes for \mg and $228$ bytes for \ec.

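Stated as a quick calculation, using the message and prefix sizes given above:

\[
  \mathit{payload}_{\mathit{mg}} = 256 - 20 = 236 \text{ bytes},
  \qquad
  \mathit{payload}_{\mathit{ec}} = 248 - 20 = 228 \text{ bytes}.
\]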
Network latency is negligible because all the participants run on the same computer, but the goal is not to measure network latency. Rather, we want to know whether there is a benefit to using elliptic-curve as opposed to multiplicative-group ElGamal.
The reason behind running 3 nodes is simple: there are subtle distinctions between what nodes do depending on their position in the network. The first node needs to aggregate messages and initiate the mix once enough messages have been received. The last node needs to do additional calculations to prevent the tagging attack mentioned in section \ref{sec:tagging}; additionally, it needs to decrypt the final message and send it to its destination. So the minimal test case contains 3 nodes: one first, one middle, and one last node. I don't expect to see much difference between these nodes, with the exception of the ``RealPost'' step, as the last node needs to decrypt the ciphertext and prepare plaintext buffers to send out to the clients.
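The position-dependent duties can be summarized in a small sketch. The names here (\emph{NodeRole}, \emph{duties\_for}) are hypothetical and do not appear in the cmix code base; the sketch only restates the role split described above.

```cpp
// Sketch only: position-dependent duties of the 3 nodes in the minimal
// test setup. NodeRole and duties_for are hypothetical names.
#include <cassert>
#include <string>

enum class NodeRole { First, Middle, Last };

// Summarize the extra work a node performs based on its network position.
std::string duties_for(NodeRole role) {
    switch (role) {
        case NodeRole::First:
            return "aggregate messages; initiate mix when the batch is full";
        case NodeRole::Middle:
            return "mix and forward";
        case NodeRole::Last:
            return "prevent tagging; decrypt final message; deliver to clients";
    }
    return "";
}
```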
In this benchmark we run 500 clients per mix for two reasons. First, the largest test in the original \cmix paper \cite{cMix} was with 500 clients, and I wanted to mimic that. Second, it is still feasible to run 500 clients on a single PC with 12GB of RAM. We could increase the number of clients by about 100, but running 500 already gives us timings large enough that we don't need to worry about the resolution of the CPU timers used, and not running the extra 100 clients gives us some headroom when other applications need extra RAM.
For the timings I used \emph{boost::timer::cpu\_timer}\cite{BoostCpuTimer}, which has a timer resolution of $10^7$\,ns (one-hundredth of a second) for both the user and system clocks in a Linux environment. This is why all the results are accurate to one-hundredth of a second. The timings used are the so-called ``user'' timings, which eliminates the time spent context switching and gives us slightly more accurate results. The system and wall times are also recorded, but are filtered out of the results table as they are not relevant.
So for gathering results I created a program called statsd; it is included in the cmix repository.
Gathering the results over TCP with a separate daemon enables people to run this same benchmark across separate servers, which allows for some nice test scenarios in which network congestion and packet loss can be controlled.
\subsection{Summary of the results}
The following results were gathered with the PC specifications listed in appendix \ref{app-specs}. The optimization-specific compiler flags that were used are listed in appendix \ref{app-ccopts}.