Re: INTERIM DECISION: ACCEPT 5 SA category candidates (Final 9/28)
>I would argue that [finger and rusers] aren't even exposures given the
>description. To be a problem the following needs to be true:
>1) The service needs to be accessible to a malfeasor (externally or internally).
Obviously, whether this is true depends on the specific environment;
network and system configuration dictate it.
There's also a question of who/what should be regarded as a malfeasor.
The most sensitive, restrictive environment may assume a malfeasor
with infinite resources and capabilities, whereas a completely open
environment may not (or, be willing to accept the cost/benefit ratio).
>2) The service needs to respond to requests from the malfeasor with
>correct, useful information.
This is the expected behavior of finger, assuming the malfeasor has
access as specified in (1).
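To make concrete just how little finger demands of a requester: the
protocol (RFC 1288) is a one-line query over TCP port 79, terminated
by CRLF, with an optional "/W" token requesting verbose output. A
minimal sketch of building such a query follows; the host and user
names are illustrative only, and the actual network call is left
commented out.

```python
import socket  # used only by the commented-out network example below

FINGER_PORT = 79  # well-known finger port

def finger_request(user: str, verbose: bool = False) -> bytes:
    """Format a finger query per RFC 1288: {/W} username CRLF."""
    prefix = b"/W " if verbose else b""  # "/W" asks for long-form output
    return prefix + user.encode("ascii") + b"\r\n"

# To actually query a host (requires network access to a finger server):
# with socket.create_connection(("host.example.com", FINGER_PORT)) as s:
#     s.sendall(finger_request("spaf"))
#     reply = s.recv(4096)

print(finger_request("spaf"))         # b'spaf\r\n'
print(finger_request("spaf", True))   # b'/W spaf\r\n'
```

Any malfeasor who can reach port 79 can issue this query; the service
itself performs no authentication.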
>3) The system the service is running on must have some other
>vulnerability that can be exploited.
In the case of finger or rusers, there are a few significant
*potential* problems that could be exploited which could lead to
compromise of a system. For example, a user's password might be
easily guessed from finger information if the password is based on
information available from finger. (Tools at least as far back as
SATAN have done a fairly good job of this.) A related *potential*
problem is a default or null password.
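The guessing technique above can be sketched in a few lines: take the
fields a finger reply typically exposes (login name, real name) and
derive a small candidate wordlist from them, including null and
trivial variants. This is a hypothetical illustration of the general
approach, not the behavior of SATAN or any specific tool; the example
login and name are made up.

```python
def candidate_passwords(login: str, real_name: str) -> list[str]:
    """Build a small password wordlist from finger-visible information."""
    parts = real_name.lower().split()
    guesses = [login, login[::-1], login + login]  # login-derived variants
    guesses += parts                                # name components
    guesses += [p.capitalize() for p in parts]
    guesses += ["", login + "123"]                  # null and trivial passwords
    # De-duplicate while preserving order.
    seen: set[str] = set()
    return [g for g in guesses if not (g in seen or seen.add(g))]

print(candidate_passwords("spaf", "Gene Spafford"))
```

A wordlist like this, fed to a login or crypt-comparison loop, is all
it takes to turn "finger is running" into "the weak password fell."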
In these situations, finger information could be regarded as an
exposure since it releases information that could be used as a
stepping stone to a compromise. In a specific environment at a
specific time, such a problem may truly exist, whether or not it is
known or observable by a human.
>4) The system needs to be accessible so that vulnerability can be exploited.
I agree, but as in (1), I think this is environment-specific.
>I run a version of finger on my machine. It returns information that
>may or may not be accurate. It may not respond to requests from some
>hosts and domains. My machine is otherwise pretty tightly configured,
>so people knowing that there is a user 'spaf' on my machine isn't a
>problem (as if they couldn't guess that otherwise). I am basically
>the only user on my machine. So, is "finger" still an exposure
>because it is running?
In this case, I wouldn't think of it as such. Finger on a
well-configured machine that keeps up-to-date with all patches
shouldn't be a problem. But if finger says a user "spaf" is on the
system, and spaf's password is "spaf," then I'd say it's a problem.
There are situations - and other policies - that would treat this as a
significant concern, so CVE needs to recognize that. The CVE entry
for finger wouldn't apply under your policy, but it could under
someone else's.
>And I won't even mention the policy problem again. :-)
Interpreting the vulnerability or exposure in light of "Policy" is
definitely a problem until we can collectively find a good way to
effectively specify unambiguous policies. But I think it's more than
policy. The interpretation of the particular security problem needs
to be done in light of the specific state of the environment in which
the bug/configuration is being observed, regardless of what the policy
is - at least from an enterprise security perspective.
An administrator might not think that the nastiest root-access buffer
overflow is a problem, if the box only operates in single-user mode in
an area that requires physical access by a small number of highly
trusted individuals who authenticate through biometrics. Obviously
this is an extreme example from an operational perspective, but the
punchline is that as long as some vulnerability/exposure is considered
such within the context of *some* reasonable security policy, then it
should be included in CVE, so that CVE can be useful to a broad
variety of policies and environments. There are some security
policies that require disabling particular services because they are
regarded as providing too much information, so finger should be
included in CVE. However, some CVE users may never have a need for
that particular entry.