Re: fuzzy logic and probability

S. F. Thomas (sthomas@decan.com)
Wed, 24 Jul 1996 14:44:02 +0200

Michael D. Kersey (mdkersey@hal-pc.org) wrote:

: A membership function for say, the variable "height", could be formulated in the
: following manner:
: 1) Define or select from the language a set of labels, say the set
: { short, medium, tall },
: 2) Using these terms, poll a sample of persons, asking them to
: classify other persons of measured height as being short, medium, or tall.
: From this we could infer the probabilities p(x,h) that a person of a given height
: h be classified as x = {short, medium, or tall} by someone from the general
: population of persons.
: 3) Plot p(tall, h). It may not be a monotonically increasing function of height,
: ( e.g., perhaps wider persons are perceived as being less "tall", and we got
: a sample including a significant number of wide persons in one height range ).

: Now p(x,h) are probability functions, but can also be used as membership functions.

Not so. For given x, and h varying over the space of heights, p(x,h)
would trace out a function which varies between 0 and 1, but which need
not integrate to 1. It satisfies the minimal definition of a membership
function, over h-space, but not that of a probability distribution. With
respect to the x-space, for given h, and provided additional conditions
are imposed, namely that the labels in the label set be collectively
exhaustive and mutually exclusive, yes the p(x;h) would constitute a
probability distribution.
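A small numeric sketch may make the point concrete. The logistic
curves below are my own illustration (real curves would come from
the poll Michael describes), but they show both readings at once:

```python
import math

def p(x, h):
    """Hypothetical classification probabilities p(x; h) for the label
    set {short, medium, tall}. The logistic shapes are illustrative
    only -- stand-ins for curves estimated from poll data."""
    tall = 1.0 / (1.0 + math.exp(-(h - 183.0) / 4.0))
    short = 1.0 / (1.0 + math.exp((h - 162.0) / 4.0))
    medium = 1.0 - tall - short
    return {"short": short, "medium": medium, "tall": tall}[x]

# Over x-space, h given: the labels are collectively exhaustive and
# mutually exclusive, so the values sum to 1 -- a probability
# distribution over the label set.
h0 = 175.0
total = sum(p(x, h0) for x in ("short", "medium", "tall"))
print(total)  # sums to 1 (up to rounding)

# Over h-space, x given: p("tall", h) stays within [0,1] -- a valid
# membership function -- but its integral over heights (here 140 cm
# to 200 cm, Riemann sum) is nowhere near 1, so it is no density.
dh = 0.1
area = sum(p("tall", 140.0 + i * dh) for i in range(600)) * dh
print(area)  # well above 1
```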

This formulation reveals the essential duality which seems to have
escaped so many people, probabilists and fuzzicists alike, but which
is so breathtakingly simple. The function over x (h given) defines a
probability distribution, while the dual function over h (x given)
defines a likelihood function. Thus the membership function is in
essence a (semantic) likelihood function, but seen in a way that
Fisher never conceived of his concept of mathematical likelihood.
With this simple realization, the artificial gulf between fuzzicists
and probabilists may be bridged.

But more importantly, the flow of insights works both ways.
Statisticians have known for years that uncertainty of the
likelihood sort (with respect to unknown parameters) is different
from uncertainty of the probabilistic sort (with respect to
so-called random variables). Thus, in the classic formulation where
X is a random variable, x a generic point in the sample space over
which X ranges, and a probability model f(x;w) is asserted, where w
is however an unknown parameter ranging over a parameter space W, we
have dual representations of the uncertainty involved: for given w,
the probabilistic uncertainty implicit in f(x;w); and for a given
sample observation x0, the likelihood function L(w)=f(x0;w).
Statisticians have thus far failed, however, to develop a likelihood
calculus as easy to manipulate as the probability calculus. In
particular, rules for the likelihood evaluation of composite
hypotheses, and therefore for marginalization and for arbitrary
function transformations of the unknown parameter, have been
problematic. As one example, the simplest expedient of
marginalization by maximization can be shown to be misleading in
examples that are by now well known.

The Bayesian resolution was to treat the unknown parameter as though
it too were a random variable, though no longer of the simple
frequentist sort. In so doing, it treats uncertainty of the
likelihood sort exactly as though it were uncertainty of the
probability sort, and the evaluation of composite hypotheses, and
thence marginalization and transformation of variables, falls to the
probabilistic rule of integration.

The fresh Zadehian semantics associated with fuzzy set theory allows
a new look at likelihood uncertainty. Equally, however, the old
concepts of likelihood help to remove a variety of anomalies present
in the foundations of the Zadehian theory, not least being the
logical clarification of the very notion of the grade of membership
itself. Also, the maximization rule of marginalization and extension
(transformation of variable) so beloved by fuzzicists can be shown
to be as problematic in fuzzy set (and possibility) theory as it has
been known to be in the statistical domain. It isn't always
wrong--just sometimes--and it is important to know when.
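A minimal numeric sketch of the duality, and of how maximization can
mislead. The normal model and the single observation are my own
illustration, chosen because the pathology is well known there:

```python
import math

def f(x, mu, sigma):
    """The probability model f(x; mu, sigma): a normal density in x,
    with (mu, sigma) playing the role of the unknown parameter w."""
    return (math.exp(-0.5 * ((x - mu) / sigma) ** 2)
            / (sigma * math.sqrt(2.0 * math.pi)))

# Reading 1 (probability): for fixed parameters, f integrates to 1
# over x-space (Riemann sum over [-8, 8] as a numerical check).
dx = 0.001
mass = sum(f(-8.0 + i * dx, 0.0, 1.0) for i in range(16000)) * dx
print(mass)  # approximately 1

# Reading 2 (likelihood): for a fixed observation x0, the dual
# function L(mu, sigma) = f(x0; mu, sigma) lives on parameter space,
# and nothing obliges it to integrate to 1 there.
x0 = 1.3
L = lambda mu, sigma: f(x0, mu, sigma)

# Marginalizing sigma out by maximization (the "profile" expedient):
# at mu = x0 the supremum over sigma is unbounded, since sigma -> 0
# drives the density at x0 to infinity. A single observation would
# then "infinitely" favour mu = x0 -- plainly misleading.
for s in (1.0, 0.1, 0.01):
    print(s, L(x0, s))  # grows without bound as s shrinks
```

The probabilistic rule of integration does not suffer this blow-up
in the same way: for every fixed sigma, integrating f(x0; mu, sigma)
over mu gives exactly 1, since the density is symmetric in x and mu.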

(( cuts ))

: Of course, there's reason to worry about intuitive appeal. It's not formal. But the
: history of mathematics is littered with objections to useful tools that only later found
: their formal mathematical foundations ( e.g., Dirac delta function, the calculus'
: infinitesimal, etc. (and who was the British engineer who championed use of transforms
: for electrical engineering problems before the theoretical support was there? )). And it
: seems fuzzy is more than holding its own ground on both practical and theoretical
: grounds.

As should be clear from my earlier remarks, I am not one of those
who dismiss an entire theory because it is wrong in a few particulars.
Newton's theory is still useful even if it has been superseded by
Einstein's. Nor do I think it was necessary or useful for Einstein
to heap scorn on Newton along the way to proposing an improved theory...
he stood on Newton's shoulders after all. It also would have been quite
unproductive for Defenders of Newton to heap scorn on Einstein for
daring to be apostate. Similarly, what this debate needs is fewer people taking
up cudgels... on either side... and more of what Wittgenstein called
"the logical clarification of thoughts", fearlessly letting the chips
fall where they may.

: Michael D. Kersey

Regards,
S. F. Thomas