| author | Dennis Brentjes <d.brentjes@gmail.com> | 2017-06-05 09:45:31 +0200 |
|---|---|---|
| committer | Dennis Brentjes <d.brentjes@gmail.com> | 2017-06-05 09:45:31 +0200 |
| commit | 5482f6b544fa91273ec983892681b6c67e59e825 | |
| tree | d2a1de44153deef445508249eceb807cafa518a0 /content/results.tex | |
| parent | 33483109b741824e163210acfda07dfa96876cc9 | |
Minor fixes for readability.
Diffstat (limited to 'content/results.tex')
| -rw-r--r-- | content/results.tex | 4 |
1 file changed, 2 insertions, 2 deletions
diff --git a/content/results.tex b/content/results.tex
index 36c14b7..12db128 100644
--- a/content/results.tex
+++ b/content/results.tex
@@ -6,11 +6,11 @@ Network latency is off course negligible because all the participants are runnin

 The reason behind running 3 nodes is simple. There are subtle distinctions between what nodes do, depending on their position in the network. The first node needs to aggregate messages and initiate the mix when enough messages have been received. The last node needs to do additional calculations to prevent the tagging attack mentioned in section \ref{sec:tagging}. Additionally, the last node needs to decrypt the final message and send it to its destination. So the minimal test case should contain 3 nodes: 1 first, 1 middle and 1 last node. I don't expect to see much difference between these nodes, with the exception of the ``RealPost'' step, as the last node needs to decrypt the ciphertext and prepare plaintext buffers to send out to the clients.

-The reasoning behind running 500 clients is 2-fold. In the original \cmix paper \cite{TODO} The largest test was with 500 clients. So I wanted to mimic that. The second reason is that it still feasible to do 500 clients using a single pc with 12GB or RAM. We could still increase the number of clients by about 100 but running 500 of them gives us large enough timings that we don't need to worry about the timer resolution of the used CPU timers. And not running the extra 100 clients just gives us some headroom.
+The reasoning behind running 500 clients is 2-fold. In the original \cmix paper \cite{cMix} the largest test was run with 500 clients, so I wanted to mimic that. The second reason is that it is still feasible to run 500 clients on a single PC with 12GB of RAM. We could still increase the number of clients by about 100, but running 500 of them gives us timings large enough that we don't need to worry about the resolution of the CPU timers used. Not running the extra 100 clients just gives us some headroom when other applications need some extra RAM.

 For the timings I used the \emph{boost::timer::cpu\_timer}\cite{BoostCpuTimer}, which has a timer resolution of $10000000ns$ for both the user and system clocks on a Linux environment. This is why all the results are accurate to one-hundredth of a second. The timings used are the so-called ``User'' timings. This eliminates the time spent context switching, which gives us slightly more accurate results. The system and wall times are also recorded, but filtered out of the results table as they are not relevant.

-So for gathering results I created a program called statsd, it is included in the repository. The program receives timer snapshots over TCP. So each node sends a snapshot just before they start working on a phase of the \cmix algorithm. After we are done with computational work but before sending the data to the next node another snapshot of the clock state is send to the statsd. So the results are purely the computation of that \cmix phase. With some additional conversions to the wire format, but not the overhead of sending the message over the socket. This is done just after the \cmix operation complete courtesy of the implicit strand of the boost::asio asynchronous socket operations.
+For gathering results I created a program called statsd; it is included in the repository. The program receives timer snapshots over TCP: each node sends a snapshot just before it starts working on a phase of the \cmix algorithm. After the computational work is done, but before the data is sent to the next node, another snapshot of the clock state is sent to statsd. So the results purely reflect the computation of that \cmix phase, plus some additional conversions to the wire format of the timer snapshots, but not the overhead of sending the message over the socket. This is done just after the \cmix operation completes, courtesy of the implicit ``strand'' of the boost::asio asynchronous socket operations. Gathering the results over TCP with a separate daemon also enables people to run this same benchmark across separate servers, enabling some nice test vectors as you can control network congestion and packet loss.
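For reference, a minimal, self-contained sketch of the kind of timing call the text describes, using \emph{boost::timer::cpu\_timer}; the `run_phase` function here is a hypothetical stand-in for one \cmix phase and is not code from the repository.

```cpp
#include <boost/timer/timer.hpp>
#include <iostream>

// Hypothetical stand-in for one cMix phase; not taken from the thesis code.
void run_phase() {
    volatile unsigned long long acc = 0;
    for (unsigned long long i = 0; i < 50000000ULL; ++i)
        acc += i * i;
}

int main() {
    boost::timer::cpu_timer timer;                 // starts measuring immediately
    run_phase();
    boost::timer::cpu_times const t = timer.elapsed();

    // All three clocks are reported in nanoseconds. On Linux the user and
    // system clocks typically tick every 10000000 ns, which is why the
    // results in this chapter are accurate to one-hundredth of a second.
    std::cout << "wall:   " << t.wall   << " ns\n"
              << "user:   " << t.user   << " ns\n"   // the value used in the results
              << "system: " << t.system << " ns\n";
}
```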
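The snapshot reporting could then look roughly like the sketch below: a node connects to statsd over TCP and pushes one snapshot with a single outstanding `async_write`, which is what makes the implicit strand sufficient. This is written against the current io_context-based Boost.Asio interface, and the host, port, node/phase names and the line-based wire format are assumptions for illustration only; the actual statsd protocol is the one defined in the repository.

```cpp
#include <boost/asio.hpp>
#include <boost/timer/timer.hpp>
#include <memory>
#include <string>

using boost::asio::ip::tcp;

// Assumed line-based wire format; the real snapshot encoding used by statsd
// in the repository may differ.
std::string make_snapshot(const std::string& node, const std::string& phase,
                          const boost::timer::cpu_times& t) {
    return node + ' ' + phase + ' ' + std::to_string(t.wall) + ' ' +
           std::to_string(t.user) + ' ' + std::to_string(t.system) + '\n';
}

int main() {
    boost::asio::io_context io;
    tcp::resolver resolver(io);
    tcp::socket socket(io);

    // Assumed statsd endpoint; adjust to the actual deployment.
    boost::asio::connect(socket, resolver.resolve("127.0.0.1", "9125"));

    boost::timer::cpu_timer timer;
    // ... one cMix phase would run here ...
    auto msg = std::make_shared<std::string>(
        make_snapshot("node-1", "RealPost", timer.elapsed()));

    // Only one asynchronous operation is ever outstanding on this socket, so
    // its completion handlers cannot run concurrently: an implicit strand,
    // with no explicit strand object required.
    boost::asio::async_write(
        socket, boost::asio::buffer(*msg),
        [msg](const boost::system::error_code& ec, std::size_t /*bytes*/) {
            // msg is kept alive by the capture until the write completes;
            // a real node would log or retry on ec here.
        });

    io.run();
}
```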
