From 4cf29b0c3beacafd8565ca5461381f53832688ed Mon Sep 17 00:00:00 2001
From: Dennis Brentjes
Date: Tue, 4 Sep 2018 19:28:20 +0200
Subject: Applied Colin's feedback.

---
 content/conclusion.tex | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/content/conclusion.tex b/content/conclusion.tex
index befeb8d..ea73c04 100644
--- a/content/conclusion.tex
+++ b/content/conclusion.tex
@@ -1,14 +1,14 @@
-\section{conclusion}
+\section{Conclusion}
 \label{sec:conclusion}
 
-The big picture shows using elliptic curve for \cmix can be very beneficial. Especially in the realtime phase of the protocol. And I've shown that there is room for optimization of the the protocol implementation itself. These optimizations could even make it perform better in the precomputation phase as well.
+Using elliptic curves for \cmix{} can be very beneficial overall, especially in the realtime phase of the protocol. This benchmark has shown that there is room for optimization of the protocol implementation itself; these optimizations could make it perform better in the precomputation phase as well.
 
-So in general using elliptic curve for \cmix shows promise. However there is more research to be done. Writing optimized backends for both algorithms and rerunning the tests to see if the differences still hold, is one of them. This is one of the fundamental things that need to be checked in further research. The goal of this research was feasibility, and this research shows just that. Now somebody with knowledge of writing fast and constant time cryptography implementations can pickup the topic of writing specialized backends and retest the algorithm.
+So in general, using elliptic curves for \cmix{} is promising, but there is more research to be done. One fundamental follow-up is to write optimized backends for both algorithms and rerun the tests to see whether the differences still hold. The goal of this research was to establish feasibility, and it does just that. Hopefully this work will be picked up to fully exploit the opportunities offered by elliptic curve \elgamal{}.
 
-Another point to be taken into consideration is that this is the happy flow of the protocol. Checks like the tagging attack mitigation talked about in section \ref{sec:tagging} are implemented in a ``null'' fashion. Meaning the functions are in place and are being called but the implementation just returns a ``null'' byte as hash of the things it supposed to hash. Therefore the hash checking code is not in place. This was a deliberate choice, as these checks would not meaningfully affect the timings. The hashing algorithm scales linearly with the input size and would be the same over the 2 protocols. and checking it would also scale linearly with the input. Therefore it would be a constant time addition. In fact it would only pollute the timing results of the other cryptographic operations. However the protocol therefore needs some work to incorporate the hash checking where necessary. Therefore complying to the protocol standard.
+Another point to take into consideration is that this benchmark covers only the happy flow of the protocol. Checks like the tagging attack mitigation discussed in section \ref{sec:tagging} are implemented in a ``null'' fashion: the functions are in place and are being called, but the implementation simply returns a ``null'' byte instead of the hash of its input, and the corresponding hash checking code is not in place. This was a deliberate choice, as these checks would not meaningfully affect the timings. The hashing algorithm scales linearly with the input size and would be identical for the two protocols, and checking the hash would likewise scale linearly with the input, so it would add the same overhead to both protocols. In fact, it would only pollute the timing results of the other cryptographic operations. However, for the \cmix{} library to be used as a reference implementation, these checks still need to be implemented.
 
-Another interesting avenue to take is to simulate real network traffic. There are frameworks\cite{zhang2015survey} to this end but none of the more popular and established ones work on the application layer. And to adapt the current framework to work on a network level or to convert route network traffic over the application layer is too much work to be in scope for this research.
+Another interesting research direction is to simulate real network traffic. There are frameworks\cite{zhang2015survey} to this end, but none of the more popular and established ones work on the application layer. Adapting the current framework to operate at the network level, or routing the network traffic over the application layer, is too much work to be in scope of this study.
 
-And finally, there is still room to run this benchmark on separate machines. Having a server dedicated for each of the nodes and have 500 separate clients run the protocol. The benchmark framework is capable of this type of setup. All the communication is done over $TCP + SSL$ for the node communication and $TCP$ for the communication to the statistics collection daemon. But again writing additional tooling to distribute the executables and automation scripts would just take too long for the current research.
+Finally, there is still room for future research by running this benchmark on separate machines, with a dedicated server for each of the nodes and 500 separate clients running the protocol. The benchmark framework can support such a setup: all communication between nodes and clients is done over TCP sockets with SSL, and the communication with the statistics daemon uses plain TCP. Unfortunately, writing the additional tooling to deploy all clients and nodes on separate machines is out of scope for this paper.
 
 \newpage
--
cgit v1.2.3-70-g09d2
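As an aside, the "null" hashing strategy described in the patch above lends itself to a short sketch. The following C++ is a hypothetical illustration only; the thesis does not show the benchmark framework's actual interface, so all names, types, and the digest width below are invented.

// Hypothetical sketch of the "null" tagging-mitigation stub described in
// the conclusion; names, types, and digest width are invented for exposition.
#include <array>
#include <cstdint>
#include <vector>

using Bytes  = std::vector<std::uint8_t>;
using Digest = std::array<std::uint8_t, 32>;

// The function is in place and is called by the protocol code, but it
// returns "null" bytes instead of actually hashing its input.
Digest null_hash(const Bytes& /*message*/) {
    return Digest{};  // all-zero digest; no real hashing is performed
}

// The matching check trivially accepts, so the happy flow proceeds while
// the (linear-time) hashing cost stays out of the timing measurements.
bool verify_digest(const Bytes& message, const Digest& claimed) {
    return claimed == null_hash(message);  // always true for the stub
}

Replacing null_hash with a real hash (for example SHA-256) and making verify_digest reject on a mismatch is the remaining work the conclusion refers to; because both operations scale linearly in the input and equally for the two protocols, the stub keeps the benchmark focused on the cryptographic group operations being compared.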