\section{Conclusion} \label{sec:conclusion}
Using elliptic curves for \cmix{} is beneficial overall, especially in the realtime phase of the protocol. The benchmark has also shown that there is room for optimization in the protocol implementation itself; such optimizations could improve performance in the precomputation phase as well. In general, elliptic curve \cmix{} is therefore promising. However, more research remains to be done. For example, optimized backends should be written for both algorithms and the tests rerun to see whether the observed differences still hold; this is one of the fundamental points to verify in future work. The goal of this research was to establish feasibility, and it shows exactly that. Hopefully this work will be picked up to fully exploit the opportunities offered by elliptic curve \elgamal{}.

Another point to take into consideration is that only the happy flow of the protocol was benchmarked. Checks such as the tagging attack mitigation discussed in section \ref{sec:tagging} are implemented in a ``null'' fashion: the functions are in place and are being called, but the implementation simply returns a ``null'' byte instead of the hash it is supposed to compute, and the hash-checking code is absent. This was a deliberate choice, as these checks would not meaningfully affect the timings. The hashing algorithm scales linearly with the input size and is identical in both protocols; checking the hashes would likewise scale linearly with the input. The result would be the same constant additive overhead for both protocols, which would only pollute the timing results of the other cryptographic operations. However, for the \cmix{} library to be usable as a reference implementation, these checks do need to be implemented.

Another interesting research direction is to simulate real network traffic. There are frameworks\cite{zhang2015survey} that do this, but none of the more popular and established ones work on the application layer.
The work needed to adapt the current framework to operate on the network level, or to route the network traffic over the application layer, is too much to be in scope of this study. Finally, there is still room for future research in running this benchmark on separate machines, with a dedicated server for each of the nodes and 500 separate clients running the protocol. The benchmark framework can support such a setup: all communication between nodes and clients is done over TCP sockets with SSL, and the communication with the statistics daemon is done over TCP. Unfortunately, writing the additional tooling to facilitate deploying all clients and nodes on separate machines is out of scope for this paper. \newpage
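As a closing illustration of the ``null'' checks mentioned above, the stub pattern could look like the following minimal sketch. This is hypothetical Python with invented names, not the actual \cmix{} library code; it only shows the idea of keeping the call sites intact while short-circuiting the hash itself.

```python
import hashlib

def message_hash(payload: bytes, null_mode: bool = True) -> bytes:
    """Hash used at the tagging-attack check points.

    In the benchmarked ("null") build the function is still called at the
    same protocol steps, but it returns a single null byte instead of a
    real digest, and no hash comparison is performed downstream.
    """
    if null_mode:
        return b"\x00"  # placeholder "digest"; checking code is absent
    # A full reference implementation would compute a real digest, e.g.
    # SHA-256, whose cost grows linearly with the payload size.
    return hashlib.sha256(payload).digest()

# The stub yields the same one-byte value regardless of input:
assert message_hash(b"message A") == b"\x00"
assert message_hash(b"message B") == b"\x00"
```

Because the stub's cost is independent of the input, enabling the real digest would add the same linear (per message size, constant) overhead to both protocol variants, which is why it was omitted from the timing comparison.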