: I am interested in a certain question concerning fuzzy logic
: and probability. I am trying to figure out whether there are
: theoretical reasons to prefer one to the other in an application
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
: concerning the computation of degrees of certainty. (I am using
: the term `certainty' informally; I don't want to interpret it
: according to any particular theory.)
In general, I would think the answer is yes. When each
(fuzzy and probability) is developed from fundamentals, the two
may be seen to be complementary devices for being precise
about uncertainty. The fundamentals that I speak of must
include a philosophy of semantics and of measurement. In
addition one must elaborate what one means by the notion
of the phenomenon, and of model (of a phenomenon), since
what both probability and fuzzy models purport to do is
characterize phenomena, or instances of phenomena, in some sense.
As long as these fundamentals remain unaddressed, there is really
no hope, in my opinion, of resolving the disagreements that persist,
both in the foundations of statistical inference, and as
well in the foundational aspects of fuzzy set theory. When
addressed, however, the complementarity that exists between
fuzzy and probability becomes crystal clear, and the line
of demarcation between the two may be very clearly drawn.
: Here is a little background. I am working on a system to do
: failure diagnosis for heating, ventilating, and air conditioning
: equipment. I have chosen to use the `Bayesian' interpretation
: of probability (apparently due to de Finetti) for modeling
: uncertainty in the system. However, there are people for whom
: I am working (indirectly) who might ask, ``Well, why didn't you
: use fuzzy logic?''
Failure diagnosis sounds very frequentist in essence. But
there is uncertainty in your (frequentist) probability models,
I assume. You choose to model this (second-order) uncertainty
using Bayesian methods. Here is where I would part company.
As captured by the data (failure rates presumably), the uncertainty in
your (frequency) model parameters will be represented by a
likelihood function, while the subjective Bayesian approach
forces you to adopt a prior (subjective) probability distribution
(over parameter space) to represent your starting
uncertainty in the (frequency) model parameters... even if
you actively wish to leave your subjective opinion entirely
out of it. And if there is a group making the decision,
each member of which may hold different prior opinions,
the consistency axioms are of no use: the betting paradigm
seems far less compelling a priori when the relevant decision
makers differ in their willingness to take and place bets. In its
place, a possibility theory deriving from fuzzy set theory
could well be used, not however in place
of the obviously frequentist failure models that must be
used, but in place of the Bayesian prior and posterior models
representing (second-order) uncertainty in the frequency
model parameters. Then you will find that the *possibilistic*
prior, the likelihood function, and the posterior possibility
distribution are all qualitatively the same kind of object.
And there is no a priori conceptual difficulty in combining
the possibilistic priors of diverse members of the group
that will (almost inevitably... that's the way industry works)
be guiding your project. Furthermore, no appeal need be
made to the betting paradigm of the Bayesians, yet betting
behavior may be completely explicated, even in the case
where a wise decision-maker may adamantly refuse either
to take or place the Bayesian bets!
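The contrast between the two routes can be sketched numerically. In this minimal Python sketch (the failure counts and the flat prior are hypothetical, chosen only for illustration), the Bayesian route must commit to a prior before producing a posterior probability distribution over the failure rate, while the possibilistic route simply rescales the likelihood itself, no prior required:

```python
def binom_likelihood(p, k, n):
    """Unnormalized likelihood of failure rate p given k failures in n trials."""
    return (p ** k) * ((1 - p) ** (n - k))

k, n = 3, 20                        # hypothetical: 3 failures in 20 trials
grid = [i / 100 for i in range(1, 100)]

# Bayesian route: a prior must be chosen, then combined and normalized.
prior = [1.0 for _ in grid]         # flat prior -- still a subjective choice
post = [binom_likelihood(p, k, n) * w for p, w in zip(grid, prior)]
z = sum(post)
post = [v / z for v in post]        # posterior probability distribution

# Possibilistic route: rescale the likelihood so its maximum is 1.
lik = [binom_likelihood(p, k, n) for p in grid]
m = max(lik)
poss = [v / m for v in lik]         # possibility distribution, no prior needed

best = grid[poss.index(1.0)]
print(f"maximum-possibility failure rate: {best:.2f}")  # prints 0.15
```

With a flat prior the two curves have the same shape, but only the Bayesian route was obliged to assert a prior at all; the possibility distribution is just the likelihood, renormalized to a supremum of 1.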
: I am aware that proponents of this Bayesian probability theory
: claim that it is the only consistent generalization of binary
: logic (according to Cox's theorems). Where does this leave
: fuzzy logic?
Alone, as Fred Watkins has already remarked. The operative
word here is "consistent", which finds operational definition
within the Bayesian framework by appeal to the betting
paradigm. This paradigm would seem less than compelling if
betting behavior could be explicated without resort
either to subjective probability or subjective utility
notions.
: It would appear that if the fuzzy logic concept of `degree of
: truth' is the same as the Bayesian `degree of belief,' then
: either fuzzy logic is the same as probability or else less
: powerful (as it would have to be inconsistent). One could salvage
: fuzzy logic by interpreting `degree of truth' differently from
: `degree of belief,' but does `degree of truth' then remain useful?
All that is required to salvage fuzzy logic, at least as
applied to explicating the meaning of fuzzy descriptors in
use in the language, is to recognise that the "degree
of truth" is in fact a frequency probability -- the
probability that a competent speaker of the language would
use the fuzzy descriptor in question to describe a candidate
element of the fuzzy set that the descriptor in some sense
induces. Seen in this way,
the membership function is akin to a likelihood function --
a semantic likelihood function in this case -- induced by
a (measurable, frequentist) uncertainty in the use of fuzzy
terms. In the same way that likelihood -- which varies over
parameter space, as distinct from the sample space (from
which come the data) to which it is related -- is distinct
from, though related to, probability, the membership function over a
universe of discourse is not a probability distribution,
but it is related to the sample space of yes/no responses
that would be obtained when asking any speaker whether
he/she would use a fuzzy descriptor (eg. "tall") to describe
any particular candidate element (eg. height value) for
a fuzzy subset consistent with the descriptor in question.
It is the collection of such response probabilities,
and related uncertainty of description (also measurement
in general) that determines a semantic likelihood function
(membership function) over the space of hypotheses
-- now identified with the universe of discourse over which
the fuzzy subset is defined -- when a fuzzy descriptor
is put to actual use. Now, the fact that I identify the grade of
membership with a probability (Bernoulli parameter
actually) certainly brings fuzzy membership values
within the ambit of probability theory, but with the
fresh semantics that in fact allows a reworking and extension of the
likelihood (or possibility) calculus, which is
what competes (successfully IMO) with the Bayesian
approach for the representation of second-order
uncertainty regarding probability models.
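The semantic-likelihood reading can be made concrete. In this sketch the heights and poll counts are invented for illustration: the grade of membership of each height in "tall" is estimated as the relative frequency of yes answers to "would you call this tall?", i.e. as a Bernoulli parameter:

```python
# Hypothetical poll: for each height (cm), how many of 20 competent
# speakers would use the descriptor "tall" for that height?
yes_counts = {160: 0, 170: 2, 175: 8, 180: 15, 190: 20}
n_speakers = 20

# Grade of membership = estimated Bernoulli parameter = yes-frequency.
membership = {h: yes / n_speakers for h, yes in yes_counts.items()}

for h, mu in membership.items():
    print(f"mu_tall({h} cm) = {mu:.2f}")

# A membership function is a (semantic) likelihood over the universe
# of discourse, not a probability distribution on it -- so its values
# need not sum to 1 across heights.
print(sum(membership.values()))   # 2.25 here, not 1.0
```

Each value individually is a probability (of a yes response at that height), yet the function as a whole behaves like a likelihood over the universe of discourse, exactly the distinction drawn above.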
I do not agree with the view that fuzzy contains
probability. When the basic notions of semantics,
measurement, phenomena, and models are sorted out,
it will be found that fuzzy forms the general
paradigm for measurement/description, from which the
idealization of point measurement may fall out as
a special, limiting case. What is the descriptor
"tall" other than a term of measurement, gross though
it is? And however microscopic a measurement may
be, isn't it, at the last decimal place representing
the limit of discernment of your measurement device,
ultimately no different in principle from the
fuzzy descriptor "tall"? Measurement/description
contains uncertainty at the level of instances
of a phenomenon. Probability, on the other hand,
deals with occurrence uncertainty over an entire
population, or subset thereof. Exact probability
models are only possible when the population at
issue is finite, and all instances have been
observed, measured, and classified. Otherwise,
there will remain residual uncertainty in the
probability model no matter how large the sample.
(Think of repeated tossing of a thumb tack, and
estimating the probability -- the Bernoulli parameter--
of it landing top down.)
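The thumb-tack point can be sketched numerically. Here the landing rate (30%) and the sample sizes are invented for illustration: however large the sample, the likelihood over the Bernoulli parameter retains nonzero width; it only narrows:

```python
def norm_likelihood(grid, k, n):
    """Likelihood of Bernoulli parameter p given k successes in n trials,
    rescaled so its maximum is 1 (the possibilistic convention)."""
    lik = [(p ** k) * ((1 - p) ** (n - k)) for p in grid]
    m = max(lik)
    return [v / m for v in lik]

grid = [i / 200 for i in range(1, 200)]

# Suppose the tack lands top down 30% of the time; observe k = 0.3 * n.
for n in (10, 100, 1000):
    poss = norm_likelihood(grid, k=int(0.3 * n), n=n)
    # region where the parameter value remains "quite possible"
    plausible = [p for p, v in zip(grid, poss) if v > 0.5]
    print(f"n={n:5d}: plausible interval ~ "
          f"[{min(plausible):.3f}, {max(plausible):.3f}]")
```

The interval shrinks roughly as 1/sqrt(n) but never collapses to a point: the residual second-order uncertainty in the probability model persists no matter how large the sample.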
But then the residual uncertainty is akin to
measurement uncertainty of instances of a
phenomenon -- the uncertainty is a matter of
precision. It can be shown that this second-order
uncertainty (the precision in the parameters of
a frequency probability model) exactly mirrors
the fuzzy imprecision in the measurement of
instances of a phenomenon. We thus have fuzzy
and probabilistic uncertainty interacting, but
not competing. Probabilistic uncertainty in
the language-use phenomenon (also measurement
in general) leads to fuzziness of measurement
terms/descriptors describing particular
instances of a phenomenon. Probabilistic uncertainty
of occurrence over an entire population of interest
forming part of a phenomenon, leads to fuzzy
(remember likelihood) uncertainty characterizing the
parameters of a probability model. Fuzzy and probability
are therefore not competing -- rather there is a kind
of duality at work between them -- and it is hard for me to
comprehend any sense in which it might be claimed
that one contains the other.
Likewise, I do not agree with the view that fuzzy
logic contains bivalent
logic, at least not in any useful sense. There *is*
a sense, yes, in which crisp descriptors
form a special subclass of fuzzy descriptors (their
membership functions take on bivalent values 0 and 1).
But that is at the level of the object language.
In a more important sense, at the level of the metalanguage,
the whole point of fuzzy set theory is to render precise that which
is fuzzy, and the development of fuzzy set
theory takes place within a metalanguage (ordinary
mathematics) in which the bivalent logic of
ordinary set theory prevails. That is the more
important sense because it would be a hopelessly
quixotic endeavor to attempt, at one and the same
time, to eschew the ordinary set theory in the
metalanguage... *and* to develop the new rules of the
fuzzy logic we hope to set in place in the object
language. Rather, we recognize that what we are doing
is using a sort of bootstrap procedure by which we reduce that
which is fuzzy with respect to some universe, U say,
to something bivalent in [0,1]^U, where the
membership functions (which populate the metalanguage,
remember) reside.
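The object-language sense in which crisp descriptors are a special case can be shown in a few lines (a sketch; the 180 cm cutoff and the 160-190 cm ramp are arbitrary choices of mine): a crisp descriptor is simply a membership function whose range is confined to the bivalent subset {0, 1} of [0, 1], while the machinery itself is written in ordinary, bivalently-logical mathematics:

```python
def crisp_tall(h):
    """Crisp descriptor: membership is 0 or 1 (arbitrary 180 cm cutoff)."""
    return 1.0 if h >= 180 else 0.0

def fuzzy_tall(h):
    """Fuzzy descriptor: membership ramps from 0 at 160 cm to 1 at 190 cm."""
    return min(1.0, max(0.0, (h - 160) / 30))

# The crisp function is a fuzzy membership function whose range happens
# to be {0, 1}; both live in [0,1]^U, in the (bivalent) metalanguage.
for h in (150, 175, 185, 200):
    print(h, crisp_tall(h), round(fuzzy_tall(h), 2))
```

Note that the Python defining both functions is itself the "metalanguage" here: every statement in it is bivalently true or false, which is the point being made above.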
: In particular, can one bet on a `degree of truth' ??
As interpreted above, the clear answer is yes. But
in the context in which the question is asked, you
are clearly proceeding from within the Bayesian betting
paradigm, which as I have earlier remarked is not
as compelling as it is claimed. Within the possibilistic
framework I espouse, belief may simply be stated (eg.
such and thus a probability is "high", or, "most" voters
are against abortion, etc.) and probability models
(possibly with significant second-order uncertainty)
consistent with such beliefs may be elaborated, which
in turn may explicate betting behavior quite well,
up to and including the situation, as earlier stated,
where the decision-maker in question prefers the
status quo of money in his pocket, rather than either
taking or placing the Bayesian bets.
: I've asked this question before, but got no satisfying answers;
: I think it's interesting enough to try again. Please, no homey
: parables about murky water. :) (As an aside to S.F. Thomas, my
: university's library doesn't have your book, and I would rather not
: buy it just to answer this one question; surely there is a brief
: answer.)
That is as brief as I can make it, which is probably
already too long for this medium... in which case, my
apologies. In any case, I hope this response is helpful.
: Regards,
: Robert Dodier
Regards,
S. F. Thomas
PS. I would like to thank JG. Campbell <jg.campbell@ulst.ac.uk>,
who wrote:
: The points you raise are discussed in great detail -- and clarity, I
: think -- in S.F. Thomas, Fuzziness & Probability, 1995, ACG Press; can be
: ordered by e-mail from <brad.brown@acginc.com> approx. $30 +$10 p&p.
: To my mind this is an important book.
I very much appreciate the kind remarks.
And obviously, I find myself in disagreement with Herman Rubin
(hrubin@b.stat.purdue.edu) who takes an extreme Bayesian view,
which internally is quite consistent as a piece of axiomatic
mathematics, but which accords ill with my intuition when
I go back to the basics of semantics, measurement, and the
modelling of phenomena, uncertain in general, which after all
is what the whole probability/inference enterprise is all
about. Professor Rubin wrote:
: This is where the problems arise. Types of probability can, and should,
: be mixed.
I disagree. The Bayesian practice of treating uncertainty
in universals (models of phenomena) symmetrically with
particulars (instances of the very phenomena sought to be
modelled) is misguided in my view. The two are at
different levels of indentation, as it were, with the
one being used to talk *about* the other, in the same
way that metalanguage and object language are at
different levels of indentation. This can lead to error,
as discussed in Thomas (1995).
: I would never consider probability as relative frequency. It is true
: that relative frequency is a probability measure, but this does not
: make it probability. It is also true that, if one has independent
: events with exactly the same probability, that relative frequency
: approaches that probability with probability one. From the physical
: world, the fundamental probability is for the single event which cannot
: be repeated.
I take the (perhaps simplistic) view that modelling is fundamentally
about counting and classifying within an assumed morphology
(objects and measurable attributes thereof, concerning which discourse
proceeds) for the phenomenon in question. Classification requires
measurement, and, ipso facto, observation. Counting may lead
to a probability hypothesis. But probabilities may never ever
be directly *observed* as a singular event or observation...
measured outcomes, yes (eg. "heads" or
"tails" on the toss of a coin, for a simple nominal-scale example,
but not P(Heads)=0.5). Singular events can never literally be
repeated, our universe being in perpetual motion. But what repeats
is the morphology we mentally construct around phenomena as
we seek to bring order (counting and classifying) to our observations.
It is this notion of morphology that bridges the gap between
frequency and subjective notions of probability. It is the notion of
morphology that provides the link between separate performances
of the same (frequentist) experiment as somehow being connected.
And at the bottom of any attempt at the estimation of the
subjective probability of a singular event, one will find a
morphology, implicit or explicit,
that governs our expectations regarding the singular event
that may be in question (eg. the probability, considered at the
onset of fighting, that the Gulf War would come to an end within
three months -- the Gulf War was unique, but the generals in
charge knew a lot about the general phenomenon of war, allowing,
within an implicit morphology, a reasonable, though subjective,
estimate to be made. Even so, I would guess that the best the
generals could do was accord it a "high" probability.)
: As for mixing types of probability, what do you think the Bayesian
: approach, which has been present for more than two centuries, is?
Indeed, but 20th century classical approaches rest on a *considered* rejection
of the Bayesian approach. It is not reasonable that Bayesian
subjectiveness should *always* intrude. Yet, the essential
truth and attractions of the Bayesian approach are clear:
first it provides a direct characterization of uncertainty
in model parameters of interest which may bear on practical
decisions which must be made; second it allows readily for arbitrary
transformations of such uncertainty, in particular loss functions
which provide a measure of merit in possible real-world
outcomes; and third, it allows subjective belief to be factored
in where relevant. The classical approaches are quite
cumbersome in these regards. These advantages may now, however, be as
readily secured using the extended likelihood or possibility
calculus which emerges from the fresh semantics provided by
fuzzy set theory, and without having to put up with the
objectionable aspects of the Bayesian approach, to which I
have alluded.
: --
: Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907-1399
: hrubin@stat.purdue.edu Phone: (317)494-6054 FAX: (317)494-0558
Again,
S. F. Thomas