RE: [TECH] Diligence Levels for Candidate Assignment
Andre Frech asked:
>- What are suggested procedures for reviewing the existing candidate
>and CVE lists to determine if a similar issue already exists? Perhaps
>a checklist would be helpful.
I am planning to write one up as part of an introductory document for
those who request candidate numbers.
For the moment, a search on the CVE web site should be sufficient with
respect to seeing if there's already an existing candidate/entry.
CVE's keyword search returns both entries and candidates, and it does
some reasonably intelligent matching that can make it more useful than
general-purpose search engines in some ways. (This is handled by the
thesaurus in CMEX.) For example, all these terms are equivalent in
the CVE search engine: Wuftp, Wuftpd, Wu-ftp, and Wu-ftpd; pfdispaly
and pfdisplay; buffer overflow and buffer overrun; SMTP and SMTPd; IE,
Internet Explorer, IE4, and IE5; etc.
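To illustrate the kind of synonym folding described above, here is a
minimal sketch of thesaurus-backed keyword matching. The actual
thesaurus lives inside CMEX at MITRE and is not public, so the synonym
groups, function names, and substring-matching approach below are
assumptions for illustration only:

```python
# Hypothetical sketch of thesaurus-backed keyword matching. The synonym
# groups are taken from the examples in this message; everything else
# (data layout, matching strategy) is illustrative, not CMEX's design.

THESAURUS = [
    {"wuftp", "wuftpd", "wu-ftp", "wu-ftpd"},
    {"pfdispaly", "pfdisplay"},              # misspelling folded in on purpose
    {"buffer overflow", "buffer overrun"},
    {"smtp", "smtpd"},
    {"ie", "internet explorer", "ie4", "ie5"},
]

def expand(term):
    """Return the term plus all of its thesaurus equivalents."""
    term = term.lower()
    for group in THESAURUS:
        if term in group:
            return group
    return {term}

def matches(query, description):
    """True if any equivalent of the query term appears in the description."""
    desc = description.lower()
    return any(synonym in desc for synonym in expand(query))
```

So a search for "Wuftp" would still find a candidate whose description
only mentions "wu-ftpd", which is the behavior that makes this search
more forgiving than a literal keyword match.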
>- Which candidates or entries are prone to encompass larger issues?
This will also be documented at a high level. Basically, anything
that's not a software bug has level of abstraction issues, and can be
difficult to search on for the reasons you described. Some high-level
candidates do include "classes" in the descriptions; for example, a
search for "ddos" will get you CAN-2000-0138, or "Trojan horse" will
get you CAN-1999-0660 and -0661, etc.
>Perhaps a short list of the virus, DDoS, backdoor, etc. entries that
>may not come up during a search would be appropriate.
I agree, and this is planned as part of the introductory document.
>- When in doubt, is there an impartial and trusted source available
>for verifying decisions or resolving questions?
One of the major concerns with opening up this process to arbitrary
people was that it would make MITRE aware of non-public issues on a
much larger scale, in some cases even before the vendor is aware of
the problem.
In cases where the discoverer is already working with the vendor on a
solution and coordinated announcement, I see this as less of a
problem, so more detailed technical exchange (like the ones we've had
in the past) may still be feasible for "trusted" discoverers.
However, in general, an email discussion or quick phone conversation
could resolve the questions without ever going into specifics. For
example, I could see if the problem is affected by the "Same Codebase"
content decision by asking "does this bug appear in different software
packages by different vendors?" etc.  On reflection, for all the
advisories that have been published with CAN numbers so far, I believe
I could have provided sufficient guidance without ever knowing the
specific product that was vulnerable.
The introductory document could ask some of these questions, and
provide guidance, without mentioning content decisions. And while it
has been complicated to create, I've been working on a "decision tree"
that people could easily navigate in order to determine the
appropriate level of abstraction to use, whether or not the item
belongs in CVE, etc. And no, Spaf, it doesn't look like something out
of Krsul's thesis ;-)
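As a rough sketch of how such a decision tree might be navigated: the
tree under development is not public, so the questions, wording, and
outcomes below are assumptions for illustration only (the "Same
Codebase" question is the one mentioned above):

```python
# Illustrative decision tree for choosing a level of abstraction.
# The questions and recommendations are hypothetical stand-ins, not
# the actual tree being developed.

TREE = {
    "question": "Is the issue a flaw in specific software?",
    "yes": {
        "question": "Does the same bug appear in packages by different vendors?",
        "yes": "May warrant one candidate per codebase (Same Codebase decision).",
        "no": "One candidate for the single affected product.",
    },
    "no": "Higher level of abstraction; check existing class-level candidates first.",
}

def walk(tree, answers):
    """Follow a sequence of 'yes'/'no' answers down to a recommendation,
    or return the next question if more answers are needed."""
    node = tree
    for ans in answers:
        node = node[ans]
        if isinstance(node, str):
            return node
    return node["question"]
```

The point is only that a requester could answer a short series of
yes/no questions, never revealing product specifics, and still arrive
at the appropriate level of abstraction.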
Until such documentation is fully available and deemed effective, some
candidates will be assigned incorrectly through no fault of the
requester.  Those errors shouldn't count against the requester's
diligence level just because we're still trying to figure out exactly
what we should do :-)
Proposal and discussion of CD's will begin next week.  Thus they will
be documented, and they will, in turn, help to provide guidance to
candidate requesters.
I hope this answers more questions than it raises.  I would be
interested in hearing from vulnerability database maintainers if they
have any formalized, documented rules with respect to their own
content decisions.