Re: What does overlapping m.f.'s really represent?

S. F. Thomas (
Mon, 5 Feb 1996 19:17:50 +0100

Ulf Nordlund ( wrote:
: In article <4ealkn$>, Martin Brown
: <> wrote:

: > If you add (OR) the membership functions you take the density (overlap)
: > into account, whereas when you use the max operator you don't. Both
: > procedures are common in fuzzy logic as well as a host of other operators
: > which lie somewhere in between.
: >

: Ulf (who posted the original question):

: I know the technical difference between the operators "max" and "+". The
: interesting question is: What is the fundamental "conceptual" difference
: between these two procedures? I.e. what do they really mean? (We are not
: talking about two different types of uncertainty here. Or are we?)

Interesting question... also one that needs to be asked and answered.
Uninterpreted fuzzy set theory is ultimately unsatisfying, because
it offers tantalizing glimpses of a way of looking at one aspect of
uncertainty -- where standard probabilistic reasoning is not clearly
applicable -- but without fully laying out all of the correspondences
between the tantalizing theory offered, and the semantic reality
it purports to represent. It is by no means an easy undertaking to
lay out a fully interpreted fuzzy set theory as applied to something
as potentially self-referential as semantics. Nevertheless, without
the effort at interpretation, we are doomed to proliferating ad-hoc-ism
in the infinity of choices we have for something as basic as an OR
operation.

In "Fuzziness and Probability", such an effort is made,
and, among many other things, the conclusion is reached that there
*is* some unifying logic to the infinity of choices. Consider
two fuzzy terms A and B pertaining to a common universe of discourse
U ranged over by a variable u. At one extreme, the two terms may
be bound by considerations of positive semantic consistency (for
example, asserting "u0 is very tall" for some specific height
value u0 forces one out of positive semantic consistency also
to assert "u0 is tall"). At the other extreme, the two terms
may be bound by considerations of negative semantic consistency
(for example, asserting "u0 is short" forces one out of negative
semantic consistency also to assert "u0 is not tall"). Right
in the middle between these extremes, there is a point or a
zone where there is neither positive nor negative semantic
consistency constraining the use of language or the drawing of
semantic inferences, and the notion of semantic independence
applies (for example, where two different universes of
discourse are involved, height and weight say: clearly an assertion
involving the term "tall" need not constrain one semantically
in any assertion involving the term "heavy"). Starting from
postulates which are fully interpreted in terms of everyday
semantic rules with which we are all familiar, it is possible
to show that the max rule

a OR b = max(a,b)

applies when there is a strong positive semantic consistency
relationship binding the terms in question. The bounded-sum

a OR b = min(1, a+b)

applies at the opposite extreme when there is a strong negative
semantic consistency relationship binding the terms in question.
(These two rules are in fact not independent of each other, being
intertwined through the negation postulate ~a = 1-a. One can be
derived from the other within an elaboration of the notion of
semantic consistency without making very strong assumptions. But
that is left to one side.)

And the product-sum rule

a OR b = a + b - a.b

applies in the middle when there is semantic independence.
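To make the three rules concrete, here is a small sketch in Python (my
own illustration, not part of the original post), applying each OR rule
pointwise to a pair of membership grades in [0, 1]:

```python
# Sketch: the three fuzzy OR rules discussed above, applied
# pointwise to membership grades a, b in [0, 1].

def or_max(a, b):
    # strong positive semantic consistency
    return max(a, b)

def or_bounded_sum(a, b):
    # strong negative semantic consistency
    return min(1.0, a + b)

def or_product_sum(a, b):
    # semantic independence
    return a + b - a * b

a, b = 0.5, 0.25
print(or_max(a, b))          # 0.5
print(or_bounded_sum(a, b))  # 0.75
print(or_product_sum(a, b))  # 0.625
```

Note that the three results are ordered: max <= product-sum <= bounded
sum, which is what lets the generalized rule below interpolate between
them.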

One can define, and easily interpret, the notion of
the semantic consistency coefficient between terms, ranging on
[-1,1], and dependent only upon the membership functions characterizing
the terms in question. If this semantic consistency coefficient
is denoted by t, then the generalized rule is

         { (1-t).[a+b-a.b] + t.max(a,b),     if 0 <= t <= 1
a OR b = {
         { (1+t).[a+b-a.b] - t.min(1, a+b),  if -1 <= t < 0
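The generalized rule can likewise be sketched directly (again my own
illustration; the function name is mine), with a check that the
endpoints t = 1, 0, -1 recover the max, product-sum, and bounded-sum
rules respectively:

```python
# Sketch of the generalized OR rule parameterized by the
# semantic consistency coefficient t in [-1, 1].

def fuzzy_or(a, b, t):
    """OR of membership grades a, b under consistency coefficient t."""
    product_sum = a + b - a * b
    if t >= 0:
        # interpolate between product-sum (t=0) and max (t=1)
        return (1 - t) * product_sum + t * max(a, b)
    else:
        # interpolate between product-sum (t=0) and bounded sum (t=-1)
        return (1 + t) * product_sum - t * min(1.0, a + b)

# The endpoints recover the three named rules:
a, b = 0.5, 0.25
assert fuzzy_or(a, b, 1.0) == max(a, b)              # max rule
assert fuzzy_or(a, b, 0.0) == a + b - a * b          # product-sum rule
assert abs(fuzzy_or(a, b, -1.0) - min(1.0, a + b)) < 1e-12  # bounded sum
```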

Note that when t = -1, which would be the case for any term
and its negation (strong negative semantic consistency), the
appropriate rule that drops out is

a OR ~a = min(1, a+(1-a) ) = 1

and the result is everywhere 1, quite unlike the max rule which
gives us

a OR ~a = max(a, 1-a)

which for any membership function that ranges "gradually" from
0 to 1 will yield a result which dips to 0.5 "in the middle"
where a = 0.5 = 1 - a, and whose derivative is moreover discontinuous
at that point, violating at least *my* semantic intuition in the process.
(I was never able to accept that a membership function for
"tall or not tall" should not simply be unity everywhere, as
opposed to the weird shape -- according to me -- that results
from application of the max rule.)

When t=0, we of course have

a OR ~a = a + (1-a) -a.(1-a) = 1 - a + a^2

which, for a = 0.5, yields a value of 0.75, in-between the previous
two results.
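These three values of a OR ~a at a = 0.5 are easy to verify
numerically (a small sketch of my own, using the standard negation
~a = 1 - a):

```python
# Sketch: a OR (not a) at a = 0.5 under the three rules discussed.
a = 0.5
not_a = 1.0 - a

print(max(a, not_a))          # max rule (t = 1): 0.5
print(min(1.0, a + not_a))    # bounded sum (t = -1): 1.0
print(a + not_a - a * not_a)  # product-sum (t = 0): 0.75
```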

Applying the foregoing to the question originally posed, using the
max rule in the "overlap" area implies a strong degree of
semantic consistency between the results involved, which may
well be belied by differing "central tendency areas", and
further by bimodality or multi-modality in the OR'ed results.
Using the bounded-sum rule allows for conflicting results,
but then embraces the entire area of ambiguity, leading to
lack of discriminability in the overall result. The product-sum
rule lies in-between. Of course, in a control application,
the designer typically defines the meaning of the terms used,
and only he/she knows what consistency assumptions are appropriate.

Hope this helps...

: Ulf

S. F. Thomas