# Cryptanalysis: “trapdoored” prime numbers?

## 20 October 2016

A team of researchers, Joshua Fried and Nadia Heninger (University of Pennsylvania), Pierrick Gaudry (CNRS/University of Lorraine) and Emmanuel Thomé (CARAMBA team, Inria/University of Lorraine), has recently demonstrated that certain standards used in cryptography to secure exchanges and communication rely on prime numbers whose origin is unverifiable: at least 37% of servers in HTTPS (among the million most visited websites), and at least 13% on IPsec VPNs. This represents a real risk of spying on both individuals and economic interests.

### The use of prime numbers in international cryptographic standards

International cryptographic standards include prime numbers that are intended for large-scale use. These standards cover some of the operations carried out in cryptography, notably the Diffie-Hellman key exchange (invented in 1976), which is used almost universally, in HTTPS and in VPNs, to secure exchanges and communication.
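To fix ideas, here is a minimal sketch of a Diffie-Hellman exchange in Python. The parameters are toy values chosen purely for illustration (real deployments use standardised primes of 2048 bits or more, not a 127-bit modulus):

```python
# Toy Diffie-Hellman key exchange (illustration only: the modulus is far too
# small for real use, and the base is not taken from any standard).
import secrets

p = 2**127 - 1        # a Mersenne prime, used here purely as a demo modulus
g = 3                 # demo base (assumed adequate for this illustration)

a = secrets.randbelow(p - 2) + 1   # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1   # Bob's secret exponent

A = pow(g, a, p)      # Alice publishes A = g^a mod p
B = pow(g, b, p)      # Bob publishes B = g^b mod p

shared_alice = pow(B, a, p)   # Alice computes (g^b)^a mod p
shared_bob   = pow(A, b, p)   # Bob computes (g^a)^b mod p

# Both sides now hold the same secret.  An eavesdropper sees only p, g, A, B;
# recovering a from A is exactly the discrete logarithm problem.
assert shared_alice == shared_bob
```

Note that the prime p is public and shared by everyone using the same standard: it defines the "context" within which every exchange takes place.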

A prime number does not define “a key” but rather “a context”. And if someone manages to solve a hard mathematical problem linked to this prime number, called the discrete logarithm problem, then all of the cryptography that relies on it is jeopardised. That means, for example, that every communication secured by Diffie-Hellman with this prime number p can be decrypted by whoever has solved the discrete logarithm for p.
This is not a problem as long as the discrete logarithm problem for the prime number p remains out of reach in terms of computation time, which requires choosing the prime number well. For “nothing-up-my-sleeve” prime numbers, without any particular properties, the scale of difficulty by size is as follows:

– 768 bits: very easily attainable;
– 1024 bits: in all likelihood too easy, undoubtedly attainable with significant resources, and unfortunately still too widespread;
– 2048 bits: strong.
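Even the simplest generic attack, baby-step giant-step, already shows why size governs security: it takes on the order of √p steps, so every two extra bits double the work. Below is a toy Python sketch with a deliberately tiny prime; the record computations at 768 or 1024 bits mentioned above use the far more sophisticated number field sieve, not this:

```python
# Baby-step giant-step: a generic discrete-log attack in ~sqrt(p) time and
# memory.  The prime here is tiny on purpose; at 2048 bits the ~2^1024 steps
# required make this approach (and every known generic one) hopeless.
from math import isqrt

def bsgs(g, h, p):
    """Find x with g^x = h (mod p), or None if h is not a power of g."""
    m = isqrt(p - 1) + 1
    baby = {pow(g, j, p): j for j in range(m)}       # baby steps: g^j
    step = pow(g, -m, p)                             # giant step: g^(-m) mod p
    gamma = h % p
    for i in range(m):
        if gamma in baby:
            return i * m + baby[gamma]               # h = g^(i*m + j)
        gamma = gamma * step % p
    return None

p = 1000003                   # a small prime, for illustration only
g = 2
h = pow(g, 768899, p)         # "unknown" exponent to recover
x = bsgs(g, h, p)
assert pow(g, x, p) == h
```

The table and the loop each have about √p entries, which is feasible at 20 bits, costly at 80, and out of the question at 2048.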

### High-risk prime numbers, equipped with a “trapdoor”

A team of researchers, Joshua Fried and Nadia Heninger (University of Pennsylvania), Pierrick Gaudry (CARAMBA team, CNRS/University of Lorraine) and Emmanuel Thomé (CARAMBA team, Inria/University of Lorraine), has recently demonstrated that not all prime numbers are equal. It all depends on how they were chosen.
This design method is not new: it dates back to 1992(!). At the time, the method for creating a “trapdoored” prime number was the same as the one used today, but since prime numbers were on the whole much smaller, cheating could be detected, which limited the risk.
Twenty-five years on, the conclusion is very different: as things stand, cheating cannot be detected for larger prime numbers.

“In fact, there is a way to choose prime numbers equipped with a “trapdoor” (trapdoored primes). A body responsible for writing a standard could therefore propose such a prime number for general use while keeping the trapdoor for itself, giving it an advantage for breaking everything!”

For an attacker who knows the trapdoor, the scale of difficulty would become:

• 768 bits: very easy;
• 1024 bits: easy (our calculation);
• 2048 bits: unattainable for the moment, but not as difficult as we would like to believe for protecting sensitive information.
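The trapdoor studied by the researchers is based on the special number field sieve and is not detectable by inspection. A much older and cruder illustration of the same principle, shown here only as an analogy, is a prime p chosen so that p − 1 has only small factors: the Pohlig-Hellman algorithm then solves discrete logarithms piece by piece, however large p is. A toy Python sketch:

```python
# Toy illustration of a deliberately weakened prime.  (This visible weakness
# is NOT the trapdoor of the paper, which survives such simple checks; it
# merely shows that how p is chosen can make the discrete logarithm easy.)

def dlog_small(g, h, p, order):
    """Brute-force discrete log in a subgroup of small order."""
    cur = 1
    for x in range(order):
        if cur == h:
            return x
        cur = cur * g % p
    raise ValueError("element not in subgroup")

def pohlig_hellman(g, h, p, factors):
    """Solve g^x = h (mod p), given the factorisation of p-1 as {prime: exponent}."""
    n = p - 1
    x, M = 0, 1
    for q, e in factors.items():
        qe = q ** e
        gi = pow(g, n // qe, p)          # project into the subgroup of order q^e
        hi = pow(h, n // qe, p)
        r = dlog_small(gi, hi, p, qe)    # gives x mod q^e
        t = ((r - x) * pow(M, -1, qe)) % qe   # fold in via the CRT
        x, M = x + M * t, M * qe
    return x

p = 211                       # toy prime: p - 1 = 210 = 2 * 3 * 5 * 7 is smooth
g = 3                         # a generator modulo 211
secret = 137
h = pow(g, secret, p)
recovered = pohlig_hellman(g, h, p, {2: 1, 3: 1, 5: 1, 7: 1})
assert recovered == secret
```

With a smooth p − 1 the cost depends only on the largest factor of p − 1, not on the size of p, which is why such primes are rejected in practice, and why a trapdoor that survives these checks is so worrying.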

### A risk of spying on individuals as well as economic interests

In the standards, and in cryptography software, there are prime numbers whose origin is unknown.
Take the example of a VPN. If the 1024-bit prime number believed to be “nothing-up-my-sleeve” is in fact a “trapdoored” one, then all communication can be decrypted by the owner of the trapdoor, who merely has to “listen to” what passes over the network cable.

Today, too many servers agree to communicate on the basis of prime numbers whose origin is unverifiable: at least 37% in HTTPS (among the million most visited websites), and at least 13% on IPsec VPNs. That is a lot. Far too many.

### How can we protect ourselves?

In order to be sure that a prime number is honest, or “nothing-up-my-sleeve”, there are two solutions:

• take the decimals of pi, or of any other number above suspicion, and standardise, as it were, “the next prime number”. This is done in certain standards, and these prime numbers are (fortunately) widely used;
• “draw digits at random in a verifiable manner”, in order to create a prime number from a “seed”. If the prime number and the seed are published together, anyone can check that they match. These “verifiable random” methods are unfortunately only very rarely used; instead we find, within the standards, prime numbers provided without any justification.
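The second method can be sketched in a few lines of Python. This is a simplified illustration, not the exact procedure of any particular standard (FIPS 186, for instance, specifies a more elaborate construction): hash a public seed together with a counter, and keep the first output that is prime.

```python
# Sketch of "verifiable random" prime generation: hash a public seed with a
# counter until the result is prime.  Anyone given (seed, counter) can redo
# the computation, so the designer cannot have searched in secret for a
# prime with hidden structure.  (Simplified illustration only.)
import hashlib
import random

def is_probable_prime(n, rounds=40):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for sp in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % sp == 0:
            return n == sp
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def prime_from_seed(seed: bytes, bits: int = 256):
    """Derive a prime deterministically from a published seed."""
    counter = 0
    while True:
        digest = hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        candidate = int.from_bytes(digest, "big")
        candidate |= (1 << (bits - 1)) | 1    # force full size and oddness
        if is_probable_prime(candidate):
            return candidate, counter
        counter += 1

seed = b"any public, verifiable seed"         # hypothetical example seed
p, counter = prime_from_seed(seed)
# A verifier recomputes the hash for (seed, counter) and checks the match.
```

Because the hash output is unpredictable, the author of the standard cannot steer the search towards a prime with special structure without this being visible in the published seed.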

For both standards and software, we would need to stick rigorously to these two methods for generating prime numbers. This would imply, at the very least, withdrawing one standard in particular, RFC 5114. Furthermore, the minimum requirement for the size of prime numbers would need to be raised to 2048 bits.
Finally, the use of elliptic curves would be a good way to be free from this risk.