The High/Low Entropy Rant for Cryptography

We had another discussion of entropy today. In computing, entropy is the randomness collected by an operating system or application for use in cryptography or other uses that require random data, as a quick search of Wikipedia will tell you. And as you may well know, a lack of entropy can have a negative impact on performance and security. So the aforementioned discussion with the Data Security team led to my usual rant about the limitations of entropy and what it really measures.

Which starts here.

Good sources of randomness are essential in cryptography, and entropy is often used to quantify randomness. But although sources that are very random have high entropy, sources that have high entropy aren’t necessarily random. Here’s why.

The model used for (Shannon) entropy is that we have a source of random symbols. Every symbol from this source is generated according to some probability distribution function (pdf). We may not know the parameters of this pdf, but we can estimate them from lots of samples.

In particular, if we have a random variable X that takes on values x1, …, xn with probabilities p(x1), …, p(xn) respectively, then the entropy of X is

H(X) = −Σ p(xi) log2 p(xi), where the sum runs over i = 1, …, n.

This value is maximized when all of the probabilities are the same. If we have 2^n different symbols, that maximum value will be n bits of entropy per symbol. That’s the theoretical maximum level of entropy that we can get.
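To make that formula concrete, here is a minimal Python sketch (my own illustration, not part of the original discussion) that estimates per-symbol entropy from observed frequencies; the function name shannon_entropy and the sample data are just placeholders:

    from collections import Counter
    from math import log2
    import random

    def shannon_entropy(symbols):
        # Frequency-based estimate of H(X) = -sum p(x) * log2 p(x).
        counts = Counter(symbols)
        total = len(symbols)
        return -sum((c / total) * log2(c / total) for c in counts.values())

    # With 2^8 = 256 roughly equally likely symbols, the estimate approaches
    # the maximum of 8 bits per symbol.
    uniform = [random.randrange(256) for _ in range(1_000_000)]
    print(shannon_entropy(uniform))  # close to 8.0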

Now note that these probabilities are not a function of any previous symbols. To be more precise, a sequence of symbols that we get from this source is independent and identically distributed (IID). And if we violate the IID assumption we can get some unexpected results.

For example, consider a source that generates the following sequence of bits: 010101010101…

If we collect lots of symbols from this source and try to estimate the entropy of the source from them, we get that p(0) and p(1) are both close to 0.5. And when we use those values to estimate the entropy of this source, we find that it’s very close to 1 bit/symbol, which is the absolute theoretical maximum level of entropy that you can get from a single symbol.

But these bits are clearly not random.

Because we’ve violated the IID assumption, we’re getting a meaningless result from this entropy estimate.
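To see the problem in numbers, here is the same kind of frequency-based estimate (again just an illustrative sketch, reusing a helper like the one above) applied to the alternating sequence:

    from collections import Counter
    from math import log2

    def shannon_entropy(symbols):
        counts = Counter(symbols)
        total = len(symbols)
        return -sum((c / total) * log2(c / total) for c in counts.values())

    # The perfectly predictable sequence 010101... still measures as
    # 1 bit/symbol, because p(0) and p(1) are each exactly 0.5.
    alternating = [0, 1] * 500_000
    print(shannon_entropy(alternating))  # exactly 1.0

The estimator only looks at how often each symbol occurs, not at the order in which they occur, which is exactly why the IID assumption matters.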

Similarly, the tool ent is often used to estimate entropy. Ent happens to use single bytes for symbols, so if we create data that’s just a repetition of 0x00, 0x01, 0x02, …, 0xff over and over, ent will estimate the entropy of this source to be very close to the theoretical maximum of 8 bits/symbol. Here’s a screenshot of this done on a file that was just 1 million repetitions of the 0x00 to 0xff sequence.

[Screenshot: ent output for the file of repeated 0x00–0xff bytes, showing an entropy estimate of nearly 8 bits per byte.]

Again, these observations are clearly not random, but this estimate tells us that they actually attain the theoretical maximum of 8 bits of entropy per byte.
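If you want to reproduce something like this yourself, here is a rough sketch of how I would do it, under the assumption that a simple byte-frequency calculation tracks the entropy figure ent reports; the file name counter.bin is arbitrary:

    from collections import Counter
    from math import log2

    # One million repetitions of the byte sequence 0x00, 0x01, ..., 0xff.
    data = bytes(range(256)) * 1_000_000
    with open("counter.bin", "wb") as f:  # arbitrary file name
        f.write(data)

    # Byte-frequency entropy estimate over single-byte symbols.
    counts = Counter(data)
    total = len(data)
    entropy = -sum((c / total) * log2(c / total) for c in counts.values())
    print(f"{entropy:.6f} bits per byte")  # 8.000000, despite zero randomness

    # Running ent on counter.bin should report an entropy estimate very
    # close to 8 bits per byte for the same reason.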

So be careful when using entropy to measure how random a source of randomness is. Low entropy means that your source probably isn’t really random, but getting lots of entropy doesn’t mean that you’re getting lots of randomness. It takes more than high entropy to have a good source of randomness, and that means it takes more than a source of high entropy to make your cryptography secure.

End of rant.

 

About the Author
Luther Martin, Micro Focus Distinguished Technologist, is a frequent contributor to articles and blogs. Recent articles include Relax! Good encryption practices won’t affect app performance in TechBeacon Magazine, The Security of Cryptography and the Wisdom of Crowds, in the ISSA Journal, and Quantum computers breaking encryption? Don’t panic! in the voltage.com blog.
