Re: Thomas' Fuzziness and Probability

From: S. F. Thomas (
Date: Thu Aug 23 2001 - 00:52:39 MET DST

  • Next message: "Re: Can a fuzzy model be made from an incomplete set of variables?"

    (Herman Rubin) wrote in message news:<9ljf27$>...
    > In article <>,
    > S. F. Thomas <> wrote:
    > >I see your difficulty. You think that if A is a fuzzy term, and its
    > >membership function is denoted simply by a, let's say, then the
    > >one-minus rule of negation gives the membership function of NOT A as
    > >1-a. Hence the "middle" is included, so to speak, and LEM and LC
    > >should fail, as indeed they obviously do if the min-max rules are then
    > >applied. For we have A AND NOT A being modeled in the meta-language as
    > >min(a,1-a), which gives us the well-known middle with a peak at 0.5
    > >(assuming of course that a has its max at 1, its min at 0, and there
    > >is gradation in-between).
    > >Now let's try another rule of conjunction, in particular the
    > >Lukasiewicz bounded-sum rule, for which we have for two membership
    > >functions a and b, and their corresponding terms A and B,
    > > mu[A AND B] = a AND b = max(0, a+b-1).
    > >In the particular case where B is NOT A, and b=1-a, we have under this
    > >rule
    > > a AND b = max(0,a+1-a-1) = 0 everywhere
    > >and in accordance with the law of contradiction, the term A AND NOT A
    > >is rendered as the constant absurdity whose membership value is
    > >everywhere 0. LC is upheld.
    > And what if B is A? Do we not want A AND A to be A?
    > The rule fails! ANY rule for computing truth values
    > for compound propositions from those simple ones is
    > not going to do all that is desired if the truth value
    > system is not a Boolean algebra.
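The behaviors under discussion are easy to check numerically. A minimal Python sketch follows; the sampled grid and the triangular membership function are illustrative choices, not taken from the post:

```python
# Minimal sketch of the behaviors discussed above; the sampled
# membership function and grid are illustrative.

def luka_and(a, b):
    """Lukasiewicz bounded-sum conjunction: max(0, a + b - 1)."""
    return max(0.0, a + b - 1.0)

# Triangular membership function a(x), peaking at 1 when x = 0.5.
grid = [i / 20 for i in range(21)]
a = [1.0 - 2.0 * abs(x - 0.5) for x in grid]

# Under min-max, A AND NOT A has the well-known "middle" peaking at 0.5:
middle = [min(u, 1.0 - u) for u in a]
print(max(middle))                            # 0.5, so LC fails

# Under Lukasiewicz, A AND NOT A vanishes everywhere, so LC is upheld:
print(max(luka_and(u, 1.0 - u) for u in a))

# Rubin's objection: under Lukasiewicz, A AND A is not A in general;
# e.g. a = 0.4 gives max(0, 0.8 - 1) = 0 rather than 0.4.
print(luka_and(0.4, 0.4))
```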

    Perhaps so. In any case, I was not proposing Lukasiewicz as a general
    replacement for min-max. It depends precisely on the relationship
    between the two halves of the compound proposition. The rule that I
    propose is a linear combination as follows, in which t is a semantic
    consistency coefficient that depends on a and b, specifically on the
    correlation coefficient between the two:

                      { t*min(a,b) + (1-t)*a*b,     if 1 >= t > 0
            a AND b = { a*b,                        if t = 0
                      { (1+t)*a*b - t*max(0,a+b-1), if -1 <= t < 0.

    Thus when t = 1, there is positive semantic consistency, and the min
    rule applies. When t = 0, there is semantic independence and the
    product rule applies. And when t = -1, there is negative semantic
    consistency, and the Lukasiewicz rule applies. The function relating
    semantic consistency and the degree of correlation between the two
    membership functions a and b is such that when the correlation, c say,
    is 1, then t also is 1. When c=0 then t=0, and when c = -1, t = -1.
    However there is no suggestion that there is a simple linear relation
    between t and c. In fact, I would expect that as an empirical matter,
    there is some threshold value, h say, for c which would give us
    effectively t=1 when c > h, t = -1 when c < -h, and t = 0 when -h <= c
    <= h. Now, in the particular case where b=a, it is clear that c=1 and
    therefore t=1 also, and the rule that then effectively applies under
    the generalized formula above is the min rule, and indeed, we obtain a
    AND a = a, as empirical semantics would seem to require.
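The proposed rule and the thresholded map from correlation c to consistency t can be sketched in Python as follows. The step-function shape of the map is the one hypothesized above, but the particular threshold h = 0.5 is purely illustrative:

```python
def fuzzy_and(a, b, t):
    """Proposed conjunction rule, weighted by the semantic
    consistency coefficient t in [-1, 1]."""
    if t > 0:
        return t * min(a, b) + (1.0 - t) * a * b      # toward min rule
    if t < 0:
        return (1.0 + t) * a * b - t * max(0.0, a + b - 1.0)  # toward Lukasiewicz
    return a * b                                      # t = 0: independence

def consistency(c, h=0.5):
    """Threshold map from correlation c to consistency t.
    The value h = 0.5 is an illustrative assumption."""
    if c > h:
        return 1.0
    if c < -h:
        return -1.0
    return 0.0

# b = a: correlation is 1, so t = 1 and the min rule gives A AND A = A.
print(fuzzy_and(0.7, 0.7, consistency(1.0)))      # 0.7

# b = 1 - a: correlation is -1, so t = -1 and the Lukasiewicz rule
# gives A AND NOT A = 0.
print(fuzzy_and(0.7, 0.3, consistency(-1.0)))     # 0.0
```

Note that idempotence (a AND a = a) is recovered not by changing the conjunction operator globally, but by letting the correlation between the operands select which operator effectively applies.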

    > BTW, rather large
    > Boolean algebras as truth values are ways of obtaining
    > some of the highly counterintuitive results of set theory.
    > Probability measures exist on Boolean algebras, but the
    > sloppy current language of considering probabilities as
    > measures of truth (which they are) does not make them
    > truth values.
    > Any consistent scheme for allocating odds for some bets
    > and conditional bets can be extended to a probability
    > measure. If fuzziness is consistent, it can be as well.

    Only if one insists on the axiomatization that would be required.
    That is effectively how likelihood becomes probability. Clearly, you
    can take a membership function and rescale it so that it integrates
    to unity; then you can say, well, it now represents "degree of
    belief" and the probability calculus can be brought to bear. One can
    do this as a
    matter of formal mathematics, but I don't think such an axiomatization
    is justified when reflected back onto the underlying semantics. I need
    hardly mention that indeed this has been the central and continuing
    problem of Bayesian inference -- the lack of justification.
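The rescaling in question is mechanically trivial, which is part of the point: it changes the scale, not the semantics. A minimal sketch, with an illustrative grid and membership function:

```python
# Sketch of the rescaling described above: a membership function is
# renormalized so that it integrates to unity, after which the
# probability calculus can formally be applied to it. The grid and the
# triangular membership function are illustrative.

step = 0.01
grid = [i * step for i in range(101)]                 # x in [0, 1]
mu = [max(0.0, 1.0 - 4.0 * abs(x - 0.4)) for x in grid]

area = sum(m * step for m in mu)                      # crude Riemann sum
density = [m / area for m in mu]                      # now integrates to ~1

# Only the scale has changed; the shape (and its mode) is untouched.
print(abs(sum(d * step for d in density) - 1.0) < 1e-9)
```

Whether the rescaled function deserves the name "degree of belief" is, of course, the semantic question the formal step leaves unanswered.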

    S. F. Thomas

    This message was posted through the fuzzy mailing list.

    This archive was generated by hypermail 2b30 : Thu Aug 23 2001 - 00:56:37 MET DST