In the second post of our two-part series, we continue our discussion with panelists from Computer Sciences Corporation: Lachlan Turner, Jason Cunningham, and Maureen Barry. Picking up where we left off last week, we’ll dive deeper into entropy and answer some of the many questions now arising about the new requirements, entropy testing and tools, and how all of this might affect your upcoming FIPS or Common Criteria evaluations.
What do vendors have to do to meet the entropy testing requirements?
Vendors must prove to the testing laboratories that their entropy source conforms to the requirements in NIST’s Special Publication 800-90B. In general terms, this requires:
- Identification and specification of the entropy source
- Identification of whether the entropy is independent and identically distributed (IID) or non-IID
- Justification as to the randomness of the entropy
- Subjecting a sample of the entropy source’s output to statistical testing (a minimal sketch follows this list)
- Proof that adequate health tests are implemented
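To make the statistical testing bullet concrete, here is a minimal Python sketch of one of the simplest estimators described in NIST SP 800-90B, the most common value (MCV) min-entropy estimate. The file name noise_samples.bin is a placeholder for a captured sample set, and a real assessment applies the publication’s full battery of estimators and sanity checks, not this single calculation.

```python
import math
from collections import Counter

def mcv_min_entropy(samples):
    """Most Common Value estimate (after NIST SP 800-90B): bound the
    probability of the most frequent sample value, then convert that
    bound into a min-entropy estimate in bits per sample."""
    n = len(samples)
    p_hat = Counter(samples).most_common(1)[0][1] / n
    # Upper end of a 99% confidence interval on p_hat.
    p_u = min(1.0, p_hat + 2.576 * math.sqrt(p_hat * (1 - p_hat) / (n - 1)))
    return -math.log2(p_u)

# Illustrative usage: estimate min-entropy per byte of a captured sample file.
with open("noise_samples.bin", "rb") as f:
    data = f.read()
print(f"Estimated min-entropy: {mcv_min_entropy(data):.3f} bits/byte")
```

For a perfectly uniform byte source the estimate approaches 8 bits per byte; a biased or stuck source scores far lower, which is exactly the kind of evidence the laboratories are looking for.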
If vendors are using third-party modules in their products, it’s best to choose modules for which information about the entropy sources is available. If a module has already been chosen, start a discussion with the third-party provider about how best to approach the entropy testing process. The key point is that during design, vendors should look for solutions that provide entropy that is as close to truly random as possible.
While nothing is guaranteed at this stage, our experience has shown that, when accurately demonstrated, NIST generally accepts justifications of sufficient entropy for the character device /dev/random provided by the Linux kernel’s pseudo-random number generator (PRNG). Its counterpart, /dev/urandom, is not currently allowed by NIST because of its non-blocking characteristic: it keeps producing output even when the kernel estimates that its entropy pool is depleted.
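Purely for illustration (this is not part of any validated design), the sketch below shows what seeding from the blocking device looks like in Python, assuming a Linux system. Reads from /dev/random can stall, or return fewer bytes than requested, while the kernel’s entropy pool refills, which is the behavior NIST’s position hinges on.

```python
import os

def read_blocking_seed(nbytes, device="/dev/random"):
    """Collect seed material from the blocking random device.
    Reads may stall (and may return short) while the kernel's
    entropy pool refills, so loop until enough bytes arrive."""
    buf = b""
    fd = os.open(device, os.O_RDONLY)
    try:
        while len(buf) < nbytes:
            buf += os.read(fd, nbytes - len(buf))
    finally:
        os.close(fd)
    return buf

seed = read_blocking_seed(32)  # may pause on an entropy-starved system
print(f"Collected {len(seed)} bytes of seed material")
```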
Is there a way for product vendors to perform entropy testing on their sources before they enter into evaluation?
Unfortunately, there is no sure way to know whether a vendor will pass entropy testing at this stage. The most that can be done is to present arguments to a certification body, but because this is such a new area, there is no way to know for certain that a given argument will stand up and the source will pass.
There are test tools available that are helpful, and running a sample of entropy output through one of these tools can certainly give an indication of the sufficiency of the entropy source. A vendor can independently perform its own entropy testing against NIST SP 800-90B, which is the gold standard right now; the concepts needed to gauge whether an entropy source is sufficient are all outlined in that publication. Vendors may, however, need consulting help from a Cryptographic Security Testing laboratory for this. Engaging a consultant early on can also help identify red flags that could hold up the process (for example, the use of /dev/urandom).
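A natural first step is simply to capture a large file of raw samples and feed it to whichever test tool is in use. The sketch below is hypothetical: /dev/my_noise_source stands in for the vendor’s actual raw noise source, and the sample count reflects the publication’s expectation of datasets on the order of a million consecutive samples. Note that it is the raw, unconditioned noise source output that should be captured, not post-processed data (a point we return to below).

```python
# Hypothetical collection script: capture raw noise-source output into a
# file that an entropy test tool can analyze. The device path is a
# placeholder for the actual raw (unconditioned) noise source under test.
SAMPLE_COUNT = 1_000_000  # SP 800-90B expects datasets of this order

with open("/dev/my_noise_source", "rb") as src, \
     open("raw_samples.bin", "wb") as out:
    remaining = SAMPLE_COUNT
    while remaining > 0:
        chunk = src.read(min(65536, remaining))
        if not chunk:
            raise IOError("noise source stopped producing output")
        out.write(chunk)
        remaining -= len(chunk)
```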
What tools can be used for entropy testing specifically?
No officially sanctioned tool exists for entropy testing. As it stands, the Cryptographic Security Testing laboratories are responsible for measuring entropy samples using their own methods and tools. Several third parties have created their own entropy testing tools; some are available in the public domain and incorporate some or all of the NIST SP 800-90B requirements.
For instance, a Python testing tool is available to labs and vendors for entropy testing, upon request from the CAVP (Cryptographic Algorithm Validation Program). It is a fairly primitive program but can be useful, and a graphical interface is planned for it at some point.
At this point in time, the Common Criteria schemes do not rely on the entropy testing tools, and including tool output in the Entropy Assessment Report is entirely optional. Our experience indicates that CSEC (Canada) and NIAP (U.S.) are more interested in an explanation that the input to the entropy source (i.e., the noise source) itself contains sufficient entropy to justify the encryption strength of the keys that the Target of Evaluation (TOE) will generate. Any use of the tools would have to focus on the raw noise source data, which is problematic; measuring the output of the entropy source after post-processing has occurred does not appear to be acceptable.
How do you deal with third-party entropy sources if the vendor does not have access to all internal technical details?
It’s possible that a vendor may not have the source code or design information for the entropy source. Typically, if the entropy source is a True Random Number Generator (TRNG), such as one might find on certain processors, the manufacturer’s specifications may be detailed enough that the requirements of NIST SP 800-90B can still be addressed.
Are vendors required to use a hardware noise source for entropy generation to be FIPS 140-2 validated or CC validated against a NIAP PP?
The use of a hardware noise source isn’t a requirement, but it is highly recommended. The entropy source identified by the vendor will be tested against the requirements of NIST SP 800-90B (as well as any supplemental FIPS or CC programmatic guidance), and an entropy testing verdict will be rendered.
Entropy sources that fail the statistical testing outlined in NIST SP 800-90B, or that implement inadequate health tests, will be considered insufficient by the laboratory.
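To give a sense of what adequate health testing looks like, here is a minimal Python sketch of the Repetition Count Test, one of the continuous health tests described in SP 800-90B. The assessed min-entropy figure and false-alarm exponent are illustrative, and a conforming design would run this (alongside an adaptive proportion test) inside the entropy source itself rather than in application code.

```python
import math

class RepetitionCountTest:
    """Sketch of SP 800-90B's Repetition Count Test: alarm if one
    sample value repeats so many times in a row that it is
    implausible given the source's assessed min-entropy."""

    def __init__(self, min_entropy_per_sample, alpha_exp=20):
        # Cutoff C = 1 + ceil(alpha_exp / H) gives a false-alarm
        # probability of roughly 2**-alpha_exp.
        self.cutoff = 1 + math.ceil(alpha_exp / min_entropy_per_sample)
        self.last, self.count = None, 0

    def feed(self, sample):
        if sample == self.last:
            self.count += 1
        else:
            self.last, self.count = sample, 1
        if self.count >= self.cutoff:
            raise RuntimeError("RCT alarm: noise source may be stuck")

# Illustrative use: a source assessed at 7.5 bits of min-entropy per byte.
rct = RepetitionCountTest(min_entropy_per_sample=7.5)
for b in b"\x10\x10\x37\x42":  # two repeats is fine; the cutoff here is 4
    rct.feed(b)
```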
While not required, there are some benefits to using hardware noise sources. Hardware-based entropy sources are built into some commonly available CPUs (for example, Intel’s Ivy Bridge processors). These hardware-based solutions have been found to produce quality entropy very quickly, making them ideal for systems where the entropy pool can become depleted quickly.
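On Linux systems, many hardware noise sources are exposed through the /dev/hwrng character device (Intel’s on-chip generator is instead reached via the RDRAND instruction, which the kernel and libraries invoke directly). As a hypothetical spot check, raw bytes can be pulled straight from such a device:

```python
# Hypothetical spot check: pull a few raw bytes from a hardware RNG
# exposed via the Linux hwrng framework (requires a suitable driver).
with open("/dev/hwrng", "rb") as hwrng:
    raw = hwrng.read(16)
print(f"16 raw hardware RNG bytes: {raw.hex()}")
```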
How much time does evaluating entropy add to an evaluation?
For FIPS, the CMVP requires a report containing justifications, so entropy work can add about a week of lab time to the process, covering all the components involved: source code review and writing the entropy justification. On the vendor end, there is then additional time. Because this is fairly new guidance, we can’t always estimate what the CMVP will require; labs are providing the information we believe is being asked for, and we’ll have a better feel for what is truly required in the future.
For Common Criteria in the U.S., a starting gate has been implemented requiring that the entropy source be evaluated and approved before a vendor can actually start a CC evaluation. Turnaround times will likely improve, but for now the impact is potentially quite large: one should assume a two- to three-month delay waiting for the entropy review. In Canada, because this is all so new, we have only just had our first submission approved; there, fortunately, the entropy review occurs in parallel with the rest of the evaluation and therefore has less of an impact.
Panel members from Computer Sciences Corporation (CSC) are:
Lachlan Turner is the Technical Director of CSC’s Security Testing and Certification Labs with over 10 years of experience in cyber security specializing in Common Criteria. Lachlan served as a member of the Common Criteria Interpretations Management Board (CCIMB) and has held roles as certifier, evaluator and consultant across multiple schemes – Australia/New Zealand, Canada, USA, Malaysia and Italy. Lachlan provides technical leadership to CSC’s four accredited CC labs and is passionate about helping vendors through the evaluation process to achieve their business goals and gain maximum value from their security assurance investments.
Jason Cunningham leads the FIPS 140-2 program at CSC and has over 10 years of experience in IT security. Throughout his career, Jason has been involved in numerous security related projects covering a wide range of technologies.
Maureen Barry is the Deputy Director for CSC’s Security Testing and Certification Labs (STCL) and primarily manages the Canadian laboratory. She is also a Global Product Manager responsible for developing, managing, and executing the Cybersecurity Offering program for STCL across four countries: Canada, USA, Australia and Germany. She has almost 10 years of experience in Common Criteria in addition to over 10 years of experience in IT.
Corsec Lead Engineer Darryl Johnson was also a member of the panel discussing entropy testing and contributed to the writing of this post.
For help with your FIPS 140-2 or Common Criteria evaluation, or if you have questions about entropy testing and how it might affect your next evaluation, contact us.