> I do not agree with the view that fuzzy contains
> probability. When the basic notions of semantics,
> measurement, phenomena, and models are sorted out,
> it will be found that fuzzy forms the general
> paradigm for measurement/description, from which the
> idealization of point measurement may fall out as
> a special, limiting case.
I agree completely that "fuzzy" as it is typically presented
is about measurement and description. My question is, in what
way does fuzzy differ from the kind of evidential generalization
statisticians call "smoothing"? In particular, isn't a fuzzy support
set precisely the same thing as a kernel or radial basis function?
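To make the question concrete, here is a minimal sketch (all names and parameters are mine, purely illustrative) of the parallel I have in mind: a triangular fuzzy membership function and a Gaussian radial basis function are both unimodal bumps that assign graded degrees to points near a center.

```python
import math

def triangular_membership(x, center, half_width):
    """Degree of membership in a fuzzy set centered at `center`.

    Peaks at 1.0 over the center, falls linearly to 0 at distance
    `half_width` -- the classic triangular fuzzy number.
    """
    d = abs(x - center)
    return max(0.0, 1.0 - d / half_width)

def gaussian_rbf(x, center, bandwidth):
    """Unnormalized Gaussian kernel / radial basis function.

    Also peaks at 1.0 over the center, decaying smoothly with
    distance according to `bandwidth`.
    """
    return math.exp(-((x - center) ** 2) / (2.0 * bandwidth ** 2))
```

Formally both are maps from distance-to-center into [0, 1]; the question is whether anything in fuzzy practice distinguishes the first from the second beyond the choice of bump shape.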
Are you familiar with the literature that develops smoothing,
or rather constraints on crispness, as a consequence of
Bayesian prior beliefs concerning the kinds of functions that
might model the unknown process being studied? Occam's razor is
an anthropomorphic blade. I am just beginning a study of that
subject, so I am not a prepared apologist for it, but on the
surface it looks very fascinating. It is not just philosophy.
People are writing actual computer programs based on the
analysis.
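By "actual computer programs" I mean things as simple as this (a sketch of my own, with invented data and an invented bandwidth, not drawn from any of the published work): a Nadaraya-Watson kernel smoother, which estimates the unknown function at a point as a kernel-weighted average of the observations.

```python
import math

def kernel_smooth(xs, ys, x, bandwidth=1.0):
    """Nadaraya-Watson estimate of y at x.

    Each observation (xi, yi) contributes to the estimate with a
    weight that decays with |x - xi|, here via a Gaussian kernel.
    """
    weights = [math.exp(-((x - xi) ** 2) / (2.0 * bandwidth ** 2))
               for xi in xs]
    total = sum(weights)
    return sum(w * yi for w, yi in zip(weights, ys)) / total
```

The kernel here plays exactly the role a fuzzy support set seems to play: it spreads the evidential weight of each observation over a graded neighborhood instead of a crisp point.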
Finally, if fuzzy is substantively different from smoothing,
could you give an example of an actual construct -- a computer
program preferably -- which uses fuzzy to some purpose other
than smoothing? I have made this request repeatedly, with no
takers. I am genuinely interested, and would be pleased with
a concrete example.
Best regards,
Jive