\section{Discussion}
\newcommand{\ec}[0]{\emph{ec}\xspace}
\newcommand{\mg}[0]{\emph{mg}\xspace}

In this section \ec and \mg refer to the two implementations that we compare; they stand for elliptic curve and multiplicative group respectively. Let us first discuss what our expectations are. The sizes of the messages being sent differ: the ed25519 implementation can send messages of up to $31$ bytes, while the 2048 bit multiplicative group can send up to $255$ bytes of data. Each message is prefixed with the destination id, the SHA1 hash of the receiver's public key, which takes up $20$ bytes. This means that the ed25519 implementation only sends $11$ bytes of payload versus $235$ bytes of payload for the multiplicative group implementation. There is however a way around this: doing multiple mixes of the ed25519 implementation in one single cMix run, which in turn can be trivially parallelized. With eight mixes of $31$ bytes and a single $20$ byte destination prefix, the effective payload of the ed25519 algorithm becomes $8 \cdot 31 - 20 = 228$ bytes versus $235$ bytes for the multiplicative group. This is why I will consider the payload difference between the multiplicative group and the ed25519 implementation to be a factor of $8$.

\subsection{Precomputation}
Given this factor, we would hope that the ed25519 implementation is at least $8$ times faster than the multiplicative group. Unfortunately this is not the case for the precomputation steps of the algorithm.

\subsubsection{Prepre}
The timings for this step do not seem to differ between nodes: all \ec nodes spend around $3.67s$ and the \mg nodes about $19.19s$, which makes \ec $5.23$ times faster, not the $8$ times we would expect. This can easily be explained by the way one generates random curve points on an elliptic curve. You have to generate a random number and check, in the case of ed25519, whether that number is a valid $y$ coordinate. For ed25519 about half of the possible $y$ values are valid, so the chance that a randomly generated number is valid is $0.5$. Because of this you need to generate and check, on average, twice as many coordinates as the number of random curve points you actually want (Equation~\ref{eq:prepre}).
\begin{equation}
  \label{eq:prepre}
  \sum_{x=0}^{\infty} n \cdot \left( \frac{1}{2}\right)^x = 2n
\end{equation}
Therefore we should only expect a factor of $4$; compared to that, \ec is still slightly faster than the \mg counterpart, but \ec is slower on a fundamental level because of the extra calculations needed to generate random curve points. A sketch of this rejection sampling is given below.
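To make the cost of point generation concrete, the following Python sketch rejection-samples candidate $y$ coordinates against the ed25519 curve equation. It is an illustration only: the function names and the use of Euler's criterion for the square check are my own choices here and do not reflect the actual implementation.
\begin{verbatim}
import secrets

# Ed25519: -x^2 + y^2 = 1 + d*x^2*y^2 over GF(p)
P = 2**255 - 19
D = (-121665 * pow(121666, -1, P)) % P

def x_squared_from_y(y):
    # Solve the curve equation for x^2 given a candidate y.
    return ((y * y - 1) * pow(D * y * y + 1, -1, P)) % P

def is_valid_y(y):
    # y is usable iff x^2 is a square mod P (Euler's criterion).
    x2 = x_squared_from_y(y)
    return x2 == 0 or pow(x2, (P - 1) // 2, P) == 1

def random_curve_point_y():
    # Rejection sampling: each candidate passes with probability
    # roughly 1/2, so on average two candidates are checked per
    # accepted point, which is where the 2n in the equation comes from.
    while True:
        y = secrets.randbelow(P)
        if is_valid_y(y):
            return y
\end{verbatim}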
\subsubsection{Premix}
The values for the premix step are close to our ideal ratio of $8$: \ec is $6.79$ times faster. There is no fundamental reason why it should be slower than expected, but there is one implementation detail that may have had its influence here. When serializing elliptic curve points there are several format choices: sending both affine coordinates, sending only the defining affine coordinate, or sending the projective coordinates. In this implementation I chose to send both affine $x$ and $y$ coordinates, because the raw affine coordinates are easier to inspect and debug. This forces each node to calculate the inverse of the internally used projective $Z$ coordinate, which is relatively slow. Trying to be faster by sending only the defining affine coordinate would still require the inverse of $Z$ to be calculated, so there is not much to gain there. On top of that, the \emph{libgcrypt} elliptic curve interface offers no easy way to find the corresponding $x$ for any given $y$. There is time to be gained by switching from affine to projective coordinates within the network format: it would mean sending more data, but the conversion from the network format to the internal one would be much faster. This would avoid the inversion of $Z$, which would be the big time saver for many of the steps, even the prepre step we already discussed. My expectation is that it would make this step at least $8$ times faster than the \mg implementation, making the two comparable in performance once the factor $8$ payload difference is taken into account.

\subsubsection{Prepost}
In the precomputation-postcomputation phase the elliptic curve algorithm pays off: $1.17s$ versus $20.16s$, a $17.26$ times speed increase. This is probably because inversions in \mg are much more computationally intensive than inversions in \ec.

\subsubsection{Realpre}
Another time saver compared to \mg: \ec is $81.90$ times faster here.

\subsubsection{Realmix and realpost}
These phases behave similarly. The \ec variant is relatively slow, which might be related to the inversions incurred whenever the software serializes the elliptic curve points to the network format; the sketch below shows where that inversion comes from.
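For completeness, this is roughly what the conversion behind those inversions looks like, assuming standard projective coordinates where the affine point is $(X/Z, Y/Z)$; the function name and interface are illustrative and not taken from the actual code, and the internal representation used by \emph{libgcrypt} may differ.
\begin{verbatim}
P = 2**255 - 19  # ed25519 field prime

def projective_to_affine(X, Y, Z, p=P):
    # Converting (X : Y : Z) to affine (x, y) needs one modular
    # inversion of Z per point; this is the cost paid every time a
    # point is written out in the affine network format.
    z_inv = pow(Z, -1, p)
    return (X * z_inv) % p, (Y * z_inv) % p
\end{verbatim}
Sending projective coordinates directly would trade this inversion for a few extra bytes on the wire, which is exactly the trade-off discussed above.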