\section{Discussion}

\newcommand{\ec}[0]{\emph{ec}\xspace}

\newcommand{\mg}[0]{\emph{mg}\xspace}

Let us first establish our expectations. The two implementations differ in the size of the messages they can send. The ed25519 implementation can send messages of up to $31$ bytes, while the $2048$-bit multiplicative group can send up to $255$ bytes. Each message carries a prefix containing the destination id: the SHA1 hash of the receiver's public key, which takes up $20$ bytes. This means that the ed25519 implementation sends only $11$ bytes of payload per message, versus $235$ bytes of payload for the multiplicative group implementation.

However, there is a way around this: performing multiple ed25519 mixes in one single cMix run, which can in turn be trivially parallelized. With eight messages of $31$ bytes and a single $20$-byte prefix, the effective payload of the ed25519 implementation becomes $8 \cdot 31 - 20 = 228$ bytes, versus $235$ bytes for the multiplicative group. This is why I will consider the payload difference between the multiplicative group and the ed25519 implementation to be a factor of $8$.

\subsection{precomputation}

Given this factor, we would hope that the ed25519 implementation is at least $8$ times faster than the multiplicative group. Unfortunately, this is not the case for the precomputation steps of the algorithm.

\subsubsection{prepre}

First of all, this step does not seem to differ between nodes: all \ec nodes spend around $3.45$ seconds and all \mg nodes spend about $17.87$ seconds.

One possible cause is the random number generation that takes place during this step. It is true that the \ec nodes have to generate smaller random numbers, which should give them an advantage. However, there might be a flat overhead in generating random numbers that would be the same for both $256$- and $2048$-bit groups.
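The flat-overhead hypothesis could be tested directly with a microbenchmark. The sketch below is hypothetical and is not part of the original measurements; it simply times the generation of $256$-bit and $2048$-bit random numbers using CPU time, analogous to the ``User'' timings used in the benchmarks:

```python
import secrets
import time

def time_randoms(bits: int, n: int = 1000) -> float:
    """CPU time to generate n random numbers below a bits-sized bound."""
    bound = 1 << bits
    start = time.process_time()  # process CPU time, akin to the "User" timings
    for _ in range(n):
        secrets.randbelow(bound)
    return time.process_time() - start

# If generation had a large flat per-call overhead, these two timings would
# be close; if the cost scaled with the group size, the 2048-bit run would
# take roughly 8x longer than the 256-bit run.
t_256 = time_randoms(256)
t_2048 = time_randoms(2048)
```

If the two timings come out close, the flat-overhead explanation gains weight; if the $2048$-bit run dominates, the difference must come from elsewhere.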
If this flat overhead is large enough, say $1.5$ seconds for $1000$ random numbers, we get much closer to the $8\times$ ratio that we hope to achieve. However, this seems unlikely, as we are using the ``User'' CPU timings, which should not include any non-user-space time spent waiting on, for instance, I/O.

It is more likely that in this step the \ec operations take longer than their \mg counterparts. The \ec operations are largely unoptimized, and I had to introduce a couple of inversions to calculate the affine coordinates of a point, because the API does not expose the $x$ coordinate of a given point in an easily accessible way. This means there is room for optimization to make this step even faster.

\subsubsection{premix}

The values for the premix step are very close to our ideal $8\times$ ratio: \ec is around $7\times$ faster. The most likely cause for falling slightly short of $8\times$ is, again, the unnecessary inversions.

\subsubsection{prepost}

The postprocessing step of the precomputation phase is the first real time saver for \ec. This is most likely because the \mg algorithm now has to calculate inverses of group elements in order to compute the decryption shares. This is a much slower operation for \mg than it is for \ec, and as a result \ec is around $16\times$ faster here.

\subsubsection{realpre}

This step is the real time saver in the \ec versus \mg benchmark.

Where \ec takes only $0.3$ seconds to complete this step, \mg takes on average more than $22$ seconds. Here, again, both algorithms need to calculate inverse group elements, which is much faster in \ec than in \mg.
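The asymmetry in inversion cost that drives the prepost and realpre results can be illustrated with a short sketch. The parameters below are toy values for illustration only, not the benchmarked groups: in a multiplicative group, inverting an element requires a full modular inversion whose cost grows with the modulus size, while on an elliptic curve the inverse of a point $(x, y)$ is simply $(x, -y)$, a single field negation.

```python
# Sketch of why group-element inversion is cheap for ec and expensive for mg.
# P is a small toy prime, not the 2048-bit benchmark modulus; the "point" is
# an arbitrary coordinate pair, not validated against any particular curve.

P = 2**127 - 1  # toy prime modulus (hypothetical, for illustration only)

def mg_inverse(a: int, p: int = P) -> int:
    """Multiplicative-group inverse: a full modular inversion."""
    return pow(a, -1, p)  # extended-Euclid based; cost grows with modulus size

def ec_inverse(point: tuple[int, int], p: int = P) -> tuple[int, int]:
    """Elliptic-curve point inverse: negate the y coordinate, one field op."""
    x, y = point
    return (x, (-y) % p)

# The multiplicative inverse satisfies a * a^-1 = 1 (mod P).
a = 1234567
assert (a * mg_inverse(a)) % P == 1
```

This is the structural reason the inversion-heavy steps favour \ec: the \mg side pays for an extended-Euclid computation per element, while the \ec side pays for one modular negation per point.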
