Re: [CVEPRI] Increasing numbers and timeliness of candidates
>1) Application of content decisions - having a 1-2 month "perspective"
> on multiple, closely related vulnerabilities helps us to use the
> right level of abstraction for candidates. With less time for the
> community to find all closely related vulnerabilities for an issue,
> or for details to be "leaked out" after an initial delay, we are
> more likely to make abstraction errors. In the past year and a
> half, the content decisions have been modified so that they are
> less "brittle" with respect to the amount of information that is
> available at any one time, but they cannot be expected to handle
> imperfect or incomplete information, especially without clear
> vendor acknowledgement of the issue.
> The result may include: (a) an increase in the number of RECAST
> operations that would be required to SPLIT or MERGE items, which
> may increase database maintenance for CVE compatible vendors, as
> well as confusion for CVE end users; or (b) merging newly
> discovered issues with existing CANs if the CDs already dictate it,
> sort of a "soft recast" to use parlance from an old Board meeting.
> Some Board members may argue that content decisions related to
> abstraction are not that essential, and that we should tolerate
> some error in abstraction. However, I believe that as product
> liability and security becomes more important and quantifiable,
> good metrics will become more important. Content decisions ensure
> that CVE-based metrics are as reliable as possible. This will help
> ensure that comparisons between products - whether software
> products or security products - remain as fair as possible.
I expect that abstraction errors will most often consist of putting
candidates at too low a level of abstraction. As I described in an
email I sent you directly, and not to the Board, we have been
thinking in the wrong direction. With the dot notation and so on, you
can only build down into lower levels of abstraction. That requires
getting the level of abstraction correct, and high enough, the first
time, which can be done only with knowledge of all related issues.
This is unrealistic and at odds with the requirement of timeliness.
We need a way to build *up* to higher levels of abstraction;
thankfully this is not difficult. This could simply be a field
pointing to a higher-level CVE entry -- a NULL field would convey
that the entry is at the highest level of abstraction. Both lower-
and higher-level entries would contain true
information and coexist. So, any mapping would remain valid, and
RECAST operations would not be necessary. The only problem would be
an ambiguity in counting how many vulnerabilities a product detects,
but I trust the vendors to use the method producing the highest
count :-). Since the method producing the highest count is the early
mapping, produced at the lower level of abstraction, the vendor
mappings won't need to change whenever higher-level candidates are
introduced (assuming that the initial level of abstraction is
entirely populated).
Does that sound workable?
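To make the idea concrete, here is a minimal sketch of the
parent-pointer scheme described above. The entry names, field names,
and the merge/lookup helpers are all illustrative assumptions on my
part, not actual CVE database structures:

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class Entry:
    name: str                     # placeholder identifier, illustrative only
    parent: Optional[str] = None  # higher-abstraction entry; None = highest level

db: Dict[str, Entry] = {
    "CAN-0000-0001": Entry("CAN-0000-0001"),
    "CAN-0000-0002": Entry("CAN-0000-0002"),
}

def merge_up(db: Dict[str, Entry], low: List[str], high: str) -> None:
    """Introduce a higher-level entry; the low-level entries point up to it.
    Existing mappings to the low-level entries remain valid -- no RECAST."""
    db[high] = Entry(high)
    for name in low:
        db[name].parent = high

def top_level(db: Dict[str, Entry], name: str) -> str:
    """Follow parent pointers up to the highest level of abstraction."""
    while db[name].parent is not None:
        name = db[name].parent
    return name

merge_up(db, ["CAN-0000-0001", "CAN-0000-0002"], "CAN-0000-0100")

# An old vendor mapping to a low-level entry still resolves upward:
print(top_level(db, "CAN-0000-0001"))   # CAN-0000-0100

# The counting ambiguity: lowest-level vs. highest-level entry counts differ.
parents = {e.parent for e in db.values() if e.parent is not None}
lowest = sum(1 for e in db.values() if e.name not in parents)   # 2
highest = sum(1 for e in db.values() if e.parent is None)       # 1
```

Note that introducing "CAN-0000-0100" required no change to the two
existing entries beyond setting their parent field, which is the point:
both levels coexist and every prior mapping stays valid.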
>2) CANs will be proposed with fewer references, since not every data
> source will have created the references at the time the candidate
> is created. This is especially the case with vendor bulletins.
References are nice, but the main goal of CVE was to give a number to
an issue so that the issue could be discussed. Obviously, references
discussing the issue can't make use of the CAN number until it has
been assigned, so the number must be released before references to it
can exist...
>3) We will begin proposing more CANs before vendors have provided a
> patch or other acknowledgement (typically between 7 and 45 days, if
> the issue was released before a fix was available). This in turn
> will impact the vendor acknowledgement field that we provide to
> voters. Subsequently, this impacts the level of confidence that a
> voter may have in an issue; without vendor acknowledgement, a voter
> may be more likely to NOOP an issue.
> This will result in more "permanent" candidates and/or delays in
> moving candidates to the entry stage, because there will likely be
> fewer supporting votes, overall.
> There will be several ways to combat this:
> (a) more frequent voting by more Editorial Board members
> (b) notifying voters when the vendor acknowledgement status has
> changed, in case some want to change their NOOP.
> (c) figuring out a method that allows voters to do a "conditional
> acceptance" if one or more Board members replicate the issue.
You could cluster the candidates and propose them to the Board later,
whenever you feel that all the information and references you are
likely to get are there. What's important is to get the number out
so people can use it. Timeliness is *not* at odds with voting.
You said a few months ago (November) that you would be making
non-reserved candidates available on the CVE web site before they had
been proposed to the Board. Has the opposite happened just this once,
or are you going back to putting candidates on the web site only
after clustering and proposing them?
I just wish you'd release what you have when you have it. Let's put
aside the fact that NIST would need to process 48 vulnerabilities/week
(2460/52) to keep up with your projected rate, instead of their
committed 40. At 40 vulnerabilities/week, it will take more than 7
weeks for ICAT to get through this batch, with a corresponding delay
for Cassandra notifications (more than 6 weeks even at 48
vulnerabilities/week). In the end, the delays introduced by
large-batch processing are multiplied down the road. You're damming
the river and then letting it all go, and we get flooded.
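The arithmetic above works out as follows; note that the batch size
of 300 below is purely an illustrative assumption (the batch size is
not stated, only that it takes more than 7 weeks at 40/week):

```python
import math

projected_per_year = 2460
needed_per_week = projected_per_year / 52   # ~47.3, i.e. 48/week rounded up
batch = 300                                 # assumed batch size, for illustration

weeks_at_40 = batch / 40                    # at NIST's committed rate
weeks_at_48 = batch / 48                    # at the rate the projection demands

print(math.ceil(needed_per_week), weeks_at_40, weeks_at_48)  # 48 7.5 6.25
```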
Pascal Meunier, Ph.D., M.Sc.
Assistant Research Scientist,