Wednesday, September 25, 2013

ICMC: Panel: How Can the Validation Queue for the CMVP Be Improved?

Fiona Pattinson, atsec information security, moderated a panel on CMVP queue lengths. Panelists: Michael Cooper, NIST; Steve Weymann, InfoGard; James McLaughlin, Gemalto; and Nithya Rahamadugu, Cygnacom.

Mr. McLaughlin said the major issue for vendors with these queue lengths is the impact on time to market. If you don't get your validation until after your product is obsolete, that can be incredibly frustrating. It's important to take schedule and time to market into consideration, which is impossible when you cannot anticipate how long you'll be sitting in the queue.

Mr. Weymann expressed how frustrating it is for a lab to communicate expectations to their customers, and to know what to do when the implementation guidance is changed (or, as CMVP would say, clarified) while you're in the queue - do you have to start over?

Mr. Cooper, CMVP, explained that there is a resource limitation (this gets back to the 8 reviewers). But is simply adding resources enough?  They have grown in this area - they used to have only 1 or 2 reviewers.  But does adding people resolve all of the issues?  Finding people with the correct training is difficult as well.  Hiring in the US Government can take 6-8 months, even when they do find the right person with the right background and right interest.

Mr. Weymann posited that there may be ways to make better use of existing resources.  Labs could help out a lot more here, making sure things are in such good shape at submission that the work CMVP has to do is easier.

Ms. Rahamadugu and Mr. Weymann suggested adding more automation to the CMVP process. Mr. Cooper noted that he has just now gotten the budget to hire someone for automation tasks, so hopefully that will help pick up the pace.

An audience member from a lab suggested automating the generation of the certificate once the vendor has passed validation.  Apparently this step can be error prone, resulting in 1-2 more cycles of review.  Automation could help here.

Mr. Easter said they like to complete things in 1-2 rounds of questions to the vendor, and there are penalties (not specified) for more than 2 rounds.  Sometimes the answer to a seemingly benign question can actually bring up more questions, which will result in further rounds.  That said, CMVP is quick to set up a phone call to resolve "question loops".

Mr. Easter noted that he tries to give related reviews or technologies to one person. This often helps to speed up reviews by reducing the number of questions. On the other hand, that puts the area expertise in the hands of one person, so when they go on vacation - there can be delays.

Mr. Weymann noted that each lab seems to gather different information and present it in different ways - for example, different ways of presenting which algorithms are included, key methods, etc.

There was concern from the audience about loss of expertise if anyone moves on - could validators shadow each other to learn each other's areas of expertise?  Or would that effectively limit the number of reviewers?
Vendors feel like they have to continually "train" the validators, and retrain them every time - frustrating for vendors to do this seemingly over and over.

A suggestion from the audience: could labs review another lab's submission before it goes to CMVP?  This is difficult, due to all of the NDAs involved.  Also, a vendor may not want a lab that works with a competitor to review their software.

Complicated indeed!

