What is "fuzzy" for? (Re: Kosko Profile in IEEE Spectrum)

Jive Dadson (jdadson@ix.netcom.com)
Tue, 5 Mar 1996 18:14:44 +0100


In <4h1iesINN239@bhars12c.bnr.co.uk> pgh@bnr.co.uk (Peter Hamer)
writes:

> ...
> For example the membership function "is tall" captures the belief of
> its creator in the "applicability" of the term to a person of various
> heights.
>
> --- Checkpoint. How wrong am I so far? ---

Not wrong, but there is a lot of belief lurking about in that word
"applicability". The membership function will no doubt be used to
associate "is tall" with some probabilistic consequence of being tall,
perhaps "plays basketball", "jumps high", or "is a horserace jockey".
The use for which the membership function is intended constrains its
shape. David Hume said something to the effect that all our reasoning
under uncertainty derives from the belief that similar causes lead to
similar effects. He was one smart cookie. Our experience observing the
jumping ability of a person six foot one influences our belief about
the jumping ability of a person six foot tall more than it does our
belief about the jumping ability of a person five foot two.
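A minimal sketch of that point (the sigmoid shape, the 70-inch center,
and the spread are my own illustrative choices, nothing from the
thread): a membership function for "is tall" whose steepness encodes
how abruptly we believe the term stops applying.

```python
import math

def is_tall(height_in, center=70.0, spread=3.0):
    """Degree of membership in 'is tall' for a height in inches.

    center: the height we judge 'half tall' (hypothetical choice);
    spread: how gradually membership rises -- a belief about how far
    'similar height' extends, not a fact about the world.
    """
    return 1.0 / (1.0 + math.exp(-(height_in - center) / spread))

# Five foot two gets low membership, six foot one gets high membership;
# the intended use (say, predicting jumping ability) constrains where
# center and spread should sit.
print(is_tall(62), is_tall(70), is_tall(73))
```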

"Fuzzy" and "smoothing" (I see no difference) are second-order
measures concerned with our beliefs about that similarity map -- about
the effective range of "similar": How abruptly do we believe
consequences will diverge as causes diverge? In other words, fuzzy and
smoothing quantify our beliefs about the differential properties we
expect in the probabilistic function we are attempting to estimate --
the function that maps observed causes either to probabilities of
observed consequences or to maximum-likelihood ("crisp") consequences.
Our belief is based on
experience with systems we have seen previously which we believe may be
similar, and on the actual data collected for the system under
construction: We prefer simpler, smoother answers when data is sparse
in part because there are fewer of them than there are wild
"over-fitted" solutions, and particularly because they more often prove
out when more knowledge is gained. Occam's razor is Bayesian prior
belief. The more actual observations we have made, the more willing we
are to consider a "bumpy" similarity map to be reasonably correct. We
update our beliefs, allowing us to refine "is tall" perhaps into
"is slightly tall", "is moderately tall", and "is waaay tall".
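The smoothing half of that claim can be sketched too. Everything here
is invented for illustration (the heights, the jump figures, the
bandwidth values): Nadaraya-Watson-style kernel smoothing, where the
bandwidth h plays exactly the role described above -- it quantifies
the effective range of "similar".

```python
import math

def smooth(x, xs, ys, h):
    """Estimate the consequence at cause x by weighting observed
    (cause, consequence) pairs with a Gaussian kernel of width h.

    Small h = a bumpy similarity map we should only trust with lots
    of data; large h = the smoother answer we prefer when data is
    sparse.
    """
    weights = [math.exp(-((x - xi) / h) ** 2) for xi in xs]
    return sum(w * y for w, y in zip(weights, ys)) / sum(weights)

heights = [62, 66, 70, 74]   # inches (made-up observations)
jumps   = [12, 16, 20, 26]   # vertical jump, inches (made up)

# A narrow bandwidth lets the nearby 74-inch observation dominate the
# estimate at 73 inches; a wide one pulls it toward the overall mean.
print(smooth(73, heights, jumps, h=2))
print(smooth(73, heights, jumps, h=20))
```

With more observations in hand, shrinking h -- trusting a bumpier
map -- is the same move as splitting "is tall" into finer grades.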

--- Checkpoint. How wrong am I so far? ---

Jive