In the world of cryptography, data is only safe as long as the keys used to protect it are kept secure. This means not only that keys must be protected against unauthorized access, but also that keys must be created in a way that makes them difficult for an attacker to guess. To produce cryptographically strong keys, cryptographic modules use random number generators (RNGs), which in turn rely on random data as input. This random input data is called entropy, and it is the foundation of a secure cryptographic module.
I had the opportunity to discuss entropy with the great group over at Computer Sciences Corporation (CSC). The panelists included Lachlan Turner, Jason Cunningham, and Maureen Barry. In this first of a two-part series, our panel answers some questions to offer insight into what you need to know about entropy and how it could affect your Common Criteria or FIPS evaluation.
What does entropy mean?
The term entropy loosely means "the degree of disorder or randomness in a system." Although the term comes from thermodynamics, this is also how we describe entropy in computing. For our purposes, entropy is the random data collected from electronic sources for use in computing applications.
Entropy is a measure of randomness, often expressed and measured in bits. The more entropy feeding into a given value, the more random that value will be.
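To make "measured in bits" concrete, here is a minimal sketch of one way randomness can be quantified: a min-entropy-style estimate, where the fewer bits of min-entropy per sample, the more predictable the source. The byte values below are made up for illustration, and a real assessment would use far larger data sets and a full battery of statistical tests.

```python
import math
from collections import Counter

def min_entropy_per_sample(samples):
    """Estimate min-entropy (bits per sample) from observed frequencies.

    This is a 'most common value' style estimate: H_min = -log2(p_max),
    where p_max is the observed probability of the most likely output.
    """
    counts = Counter(samples)
    p_max = max(counts.values()) / len(samples)
    return -math.log2(p_max)

# Hypothetical raw output from a noise source, one byte per sample.
raw = [0x3a, 0x3a, 0x91, 0x07, 0x3a, 0xc2, 0x55, 0x3a]
print(f"~{min_entropy_per_sample(raw):.2f} bits of min-entropy per byte")
```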
Why is entropy receiving so much attention when programs are already testing cryptographic algorithms?
The self-tests implemented for cryptographic algorithms are a health check: they ensure that the algorithms operate, mathematically and procedurally, as intended. Entropy testing is a different concept.
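To illustrate the difference, algorithm self-tests typically include known-answer tests (KATs): run the algorithm on a fixed input and compare the result against a precomputed answer. The minimal sketch below shows the idea for SHA-256. Note that passing such a test says nothing about whether the module's random inputs are unpredictable.

```python
import hashlib

def sha256_known_answer_test() -> bool:
    """A known-answer test of the kind modules run at start-up:
    hash a fixed input and compare against a precomputed digest.
    This checks the algorithm's correctness, not the randomness
    of any inputs the module will later consume."""
    expected = ("ba7816bf8f01cfea414140de5dae2223"
                "b00361a396177a9cb410ff61f20015ad")
    return hashlib.sha256(b"abc").hexdigest() == expected

print(sha256_known_answer_test())  # True if the implementation is healthy
```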
Most modern cryptographic implementations rely on sufficiently random data to ensure a high degree of secrecy when establishing shared secrets or creating the data required to generate cryptographic keys. The random number generators behind these operations can only produce sufficiently random output if their input is itself highly random. Entropy supplies that randomness.
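The sketch below illustrates that dependency under deliberately simplified assumptions. The ToyHashDrbg class is hypothetical and omits nearly everything a real deterministic random bit generator requires (reseed counters, personalization strings, health checks); it exists only to show that a generator's output can never be less predictable than the entropy that seeded it.

```python
import hashlib
import os

class ToyHashDrbg:
    """A deliberately simplified hash-based DRBG sketch.

    The seed's unpredictability is bounded by the entropy input:
    a weak entropy source means guessable output, no matter how
    strong the hash. Not for production use.
    """
    def __init__(self, entropy_input: bytes):
        self._state = hashlib.sha256(entropy_input).digest()

    def generate(self, n: int) -> bytes:
        out = b""
        while len(out) < n:
            self._state = hashlib.sha256(self._state).digest()
            out += self._state
        return out[:n]

# os.urandom() stands in here for a dedicated entropy source.
drbg = ToyHashDrbg(os.urandom(32))
print(drbg.generate(16).hex())
```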
When it comes to entropy, an old saying applies: "You will get out of it what you put into it." Since the quality and quantity of entropy are the foundation of cryptography, it's vitally important that entropy be considered as part of the testing process.
What challenges do vendors face when trying to measure their product’s entropy?
The information coming from NIAP, CMVP, and the other validation program bodies is that vendors have to understand what sources contribute to their product's overall entropy and how many bits of entropy are contributed by each source. That can be quite difficult. Often the crypto modules used in products are created by third parties, and vendors don't really know what happens "under the hood."
Another challenge comes from the need to measure the entropy at the appropriate point in the overall process. Many systems take a value produced from entropy sources and "condition" it before using it as input to the random number generator. Testers, however, want to see entropy measurements performed on the raw, pre-conditioning values, and those values cannot always be captured, as the sketch below illustrates.
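Here is a minimal sketch, assuming SHA-256 as the conditioning function (a common but not universal choice), of why post-conditioning measurements can mislead: a hash output looks statistically random even when its input carried almost no entropy.

```python
import hashlib

def condition(raw_samples: bytes) -> bytes:
    """Illustrative conditioning step: compress raw noise with SHA-256.

    Conditioning cannot create entropy: if raw_samples carries only
    k bits of entropy, the 256-bit output still carries at most k bits.
    """
    return hashlib.sha256(raw_samples).digest()

# Even a nearly constant raw value "looks" random after conditioning,
# which is why testers want measurements taken before this step.
biased_raw = b"\x00" * 31 + b"\x01"  # almost no entropy
print(condition(biased_raw).hex())   # indistinguishable from random at a glance
```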
What are the requirements for entropy in Common Criteria and FIPS evaluations?
Thus far, the entropy requirements for CC and FIPS have only been loosely defined through draft publications. That is not to say, however, that there isn’t a framework in place. The Computer Security Division at NIST has completed a publication that encompasses the testing of entropy. It is anticipated that the concepts in the publication will soon form the basis of all future entropy testing for FIPS 140-2 (and possibly Common Criteria).
From a Common Criteria perspective, there is an NIAP-approved Protection Profile (PP), and within that PP is an annex with an entropy profile. From a practical standpoint, a vendor has to describe the entropy; that is, the vendor needs to document what entropy source is actually producing random data. Examples could be ring oscillators, keyboard key presses, noisy diodes, mouse movements, or disk input/output operations. The requirements are to describe what the sources are, describe what is done with those random event values (i.e., how they are conditioned), and describe how the entropy source interacts with the crypto module. There are also requirements around health testing, along the lines of the sketch below.
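As one illustrative example of what a health test can look like, the sketch below implements a check in the spirit of a repetition count test: it flags a noise source that gets "stuck" on a single value. The cutoff used here is a placeholder, not a value taken from any particular standard; a real deployment would derive it from the source's assessed entropy and a target false-alarm rate.

```python
def repetition_count_test(samples, cutoff: int = 31) -> bool:
    """Continuous health test sketch: fail if any value repeats
    back-to-back 'cutoff' or more times, which would signal a
    stuck or failed noise source."""
    run_value, run_length = None, 0
    for s in samples:
        if s == run_value:
            run_length += 1
            if run_length >= cutoff:
                return False  # health test failure
        else:
            run_value, run_length = s, 1
    return True

print(repetition_count_test([7, 7, 3, 9, 9, 9, 1]))  # True: looks healthy
print(repetition_count_test([5] * 40))               # False: stuck source
```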
In the end, vendors are required to provide a justification (supported either by test data or mathematical models) that demonstrates how many bits of entropy are being generated, along with a good argument for why that amount is sufficient. This justification area is still evolving and remains a bit of a grey area.
For FIPS, things are very similar to Common Criteria. The CMVP has released guidance saying that any type of analysis providing information on the sufficiency of a crypto module's entropy will be considered; they understand that there is no perfect way to quantify it. Statistical analyses can be conducted, or source code can be analyzed, to mathematically support a vendor's claim that their entropy is sufficient for generating random numbers. NIST doesn't really come right out and call it entropy; this process is part and parcel of the strength of the key generation method. They want to know everything that happens before the data goes to an approved RNG.
There is quite a bit of confusion right now about entropy — hopefully, we can clear a bit of it up. In our next post, we’ll dive a bit further into entropy testing, touching on what vendors need to do to meet the entropy requirements, what entropy testing tools are available, and how much time entropy testing is adding to evaluations.
Panel members from Computer Sciences Corporation (CSC) are:
Lachlan Turner is the Technical Director of CSC’s Security Testing and Certification Labs with over 10 years of experience in cyber security specializing in Common Criteria. Lachlan served as a member of the Common Criteria Interpretations Management Board (CCIMB) and has held roles as certifier, evaluator and consultant across multiple schemes – Australia/New Zealand, Canada, USA, Malaysia and Italy. Lachlan provides technical leadership to CSC’s four accredited CC labs and is passionate about helping vendors through the evaluation process to achieve their business goals and gain maximum value from their security assurance investments.
Jason Cunningham leads the FIPS 140-2 program at CSC and has over 10 years of experience in IT security. Throughout his career, Jason has been involved in numerous security related projects covering a wide range of technologies.
Maureen Barry is the Deputy Director for CSC’s Security Testing and Certification Labs (STCL) and primarily manages the Canadian laboratory. She is also a Global Product Manager responsible for developing, managing, and executing the Cybersecurity Offering program for STCL across four countries: Canada, USA, Australia and Germany. She has almost 10 years of experience in Common Criteria in addition to over 10 years of experience in IT.
Corsec Lead Engineer Darryl Johnson was also a member of the panel discussing entropy testing and contributed to the writing of this post.
For help with your FIPS 140-2 or Common Criteria evaluation or for additional questions about entropy testing and how it might affect your next certification, contact us.