Notes from the 2011 Key Management Summit – the rest of the first day

After the keynote talks, there were other talks on the first day of the 2011 Key Management Summit from which I learned an interesting thing or two. In particular:

Anthony Stieber, who works for, but does not represent, a large bank, talked about how it's actually cheaper to keep sensitive data around than to destroy it. That's something that I hadn't heard before, and I'd be interested in hearing more details of that claim in the future.

He also talked about how common it is to use the current time as the seed for a pseudo-random number generator. The output of a PRNG is only as good as the seed that's used to initialize it, and the current time has very little entropy: an attacker who can guess roughly when the seed was chosen only has to search a small range of values to reproduce everything the PRNG produces.
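To make the concern concrete, here's a minimal Python sketch (my own illustration, not something from the talk) of the risky pattern and the usual alternative:

```python
import random
import secrets
import time

# Risky: seeding a PRNG with the current time. An attacker who can guess
# roughly when the value was generated only has to search a small range of
# possible seeds to reproduce everything the PRNG outputs.
random.seed(int(time.time()))
weak_key = random.getrandbits(128)

# Better: draw key material from the operating system's CSPRNG, which is
# seeded from high-entropy sources rather than the clock.
strong_key = secrets.token_bytes(16)
```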

Elaine Barker of NIST talked about the move to 112 bits of strength that NIST is now requiring. The people at this meeting were probably the wrong audience for this talk. Everyone there knew about this requirement.

But there are still people out there who don't know about this yet. If you're one of them, read NIST's SP 800-131A (PDF), "Transitions: Recommendation for Transitioning the Use of Cryptographic Algorithms and Key Lengths," as soon as you can.
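As a rough preview of what the transition means in practice, here's a sketch based on the comparable-strength numbers in NIST SP 800-57 Part 1; treat the publications themselves as authoritative, not this snippet:

```python
# Rough comparable security strengths from NIST SP 800-57 Part 1; check the
# actual publications before making decisions based on these numbers.
MIN_STRENGTH_BITS = 112

comparable_strengths = {
    "two-key Triple DES": 80,
    "1024-bit RSA/DSA/DH": 80,
    "three-key Triple DES": 112,
    "2048-bit RSA/DSA/DH": 112,
    "224-bit ECC": 112,
    "AES-128": 128,
}

for algorithm, bits in comparable_strengths.items():
    status = "still acceptable" if bits >= MIN_STRENGTH_BITS else "being phased out"
    print(f"{algorithm}: {bits}-bit strength -> {status}")
```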

Ramon Krikken of the Burton Group talked about how people are still worrying about how to encrypt sensitive data and haven't yet tried to solve the harder problem of managing the keys that they'll need to encrypt that data. He also expects that people will be surprised by how hard key management is when they eventually try to do it.

He also talked about how tokenization is actually a form of encryption, despite the marketing spin from tokenization vendors who might try to convince you otherwise, and about how it's probably possible to model the security of tokenization systems using the existing framework that we have for encryption schemes. A tokenization server certainly looks like a random oracle, doesn't it?
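Here's a toy sketch of why that analogy is so tempting (my own illustration, not something Ramon presented): a tokenization server that hands out a fresh random token for each new value it sees behaves just like a lazily-sampled random oracle.

```python
import secrets

class ToyTokenizer:
    """A toy tokenization server. Each sensitive value gets a fresh random
    token the first time it's seen, and the mapping is remembered so the
    value can be recovered later. Lazily sampling a random output for each
    new input is exactly how a random oracle is usually modeled."""

    def __init__(self):
        self._to_token = {}
        self._to_value = {}

    def tokenize(self, value: str) -> str:
        if value not in self._to_token:
            token = secrets.token_hex(8)  # random; carries no information about the value
            self._to_token[value] = token
            self._to_value[token] = value
        return self._to_token[value]

    def detokenize(self, token: str) -> str:
        return self._to_value[token]

server = ToyTokenizer()
token = server.tokenize("4111 1111 1111 1111")
assert server.detokenize(token) == "4111 1111 1111 1111"
```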

He also mentioned that vendors seem to call tokenization "tokenization" instead of "encryption" to convince their customers that using it lets them avoid the parts of the PCI DSS that require strong key management to support any encryption that's used. It certainly sounded like the PCI SSC people ought to talk to Ramon about this.

Ramon also mentioned that so-called silos of key management may not actually turn out to be a bad idea. If that's true, it may make life much easier for the people working on the KMIP standard. After all, if you don't really need a general key management protocol that works absolutely everywhere, you can focus your attention on the areas where there's a pressing need for an interoperable key management protocol. Like in storage, for example.

Chris Kostick of Ernst & Young talked about how you can use your auditors to help you create a sustainable key management program. He recommended that you don't audit encryption, but audit key management instead. He also mentioned that he's often asked how to tell if data is actually encrypted.

If an encryption scheme is IND-CPA secure, for example, then its ciphertexts are indistinguishable from random bits. So by the very definition of that kind of security, you really can't tell whether data is encrypted, because you can't tell whether a blob of bits is a ciphertext or just random values. You may be able to look at the format of the data, but just because something is formatted as a PKCS#7 blob doesn't mean that it actually contains ciphertext. Apparently Chris spends a lot of time explaining this to his clients.
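A one-time pad makes the point in its purest form. In the sketch below (again my own illustration, not Chris's), there's no way to tell which blob is the ciphertext and which is just random bytes:

```python
import secrets

message = b"some sensitive data worth protecting"

# One-time pad: XOR the message with a random key stream of the same length.
key = secrets.token_bytes(len(message))
ciphertext = bytes(m ^ k for m, k in zip(message, key))

# A blob of the same length drawn straight from the OS random source.
random_blob = secrets.token_bytes(len(message))

# Both are opaque bytes; nothing about the ciphertext itself reveals that a
# key exists somewhere that would decrypt it.
print(ciphertext.hex())
print(random_blob.hex())
```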
