RE: [CD] CD Proposal: SF-LOC (Software flaws in different lines of code)
> -----Original Message-----
> From: Bill Fithen [mailto:email@example.com]
> > The recent problem in the Linux kernel illustrates the
> issue. So should we
> > issue several dozen CVE entries about apps that su because
> of one flaw in
> > certain Linux kernels? Or should we issue one entry that
> says this one flaw
> > causes a lot of problems? I'll go for one entry.
> The problem here is one of perspective.
> At one extreme is the software engineering security analyst who is
> At the other extreme is the system administrator whose job it is to
Actually, that admin ought to also understand what it means if it is not the
> From the beginning CVE has been oscillating between these extremes
> because we have on the board members that are representative of these
> two extremes and everything in between.
My perspective comes from two experiences. One is maintaining a security
auditing tool and watching the set of checks in it and in competing tools
become incomprehensible to even advanced users for lack of a common
nomenclature. That's what we're trying to avoid. So now we have this
wonderful CVE thing, and we're pouring vulns into it left and right. If
we're not careful, we'll end up with a big list full of garbage. The
customers of the auditing-tool vendors will then see that the CVE list has
a bazillion entries while the tools only check a fifth of them, and will
make the vendors' lives miserable on that basis, even though a vendor
can't possibly write checks for something that is either poorly documented
or junk.
As a _user_ of the vendors' products, I know that they love to claim that
their number of checks is >> the number of competitors' checks. Every vendor
I'm aware of is guilty of this to some extent. As a security admin, the last
thing I want to see is ONE problem sending the count of security issues
through the roof - it screws up my ops people, and it screws up my users. I
don't have the luxury of a purely academic interest in these bugs - I've got
a real network to secure here.
My agenda (other than representing the interests of my employer) is to
minimize the amount of junk that shows up in this list. If we end up with
a huge list full of garbage, we'll have failed, because the list won't be
useful to anyone for anything.
> My hope is that the more mainstream the product and the more
> significant the vulnerability, the easier we will find collecting the
> necessary information.
I think that this will be true, and that people will be more interested in
analyzing the problem.
> I agree. I use "rule" above in the natural language sense, not in the
> formal logic sense. None of these CD's can have logically consistent,
> universally applicable rules.
The problem space is too large - we're going to have to be flexible. In
light of the above, we have a significant incentive to avoid duplicates,
but data quality really has to come first.
> > I'd agree with this - but a merge doesn't entail
> irrevocable information
> > loss - we still have the original source reports, and we
> can still split
> > something later if we really need to. We will probably err
> in both ways.
> I also agree with this. But, I wasn't thinking so much in terms of
> irrevocable loss of information than I was thinking how an uninformed
> user of CVE might interpret the absence of the supporting information
> that we squirreled away against the day when we might need to
> reconsider the merge. From our perspective, no matter how we represent
> the resulting CVE entries, barring some catastrophe at MITRE, we will
> always have the complete set of information we had originally to
> reconsider their representation. But CVE users will not have the
> benefit of that hidden information. For that reason, I favor a
> mistaken split over a mistaken merge.
This is why I think what we really have here (whether we admit it or not) is
a database, and not a list. Part of the database ought to include original