From: Ellen Hisdal
Date: Tue Aug 29 2000 - 16:52:23 MET DST

This is in response to Lotfi Zadeh's pessimistic view (see copy at the
end of this letter) concerning the possibility of precise
definitions of, e.g., causality or statistical independence.
The view expressed here is slightly less pessimistic.
Statistical dependence and independence CAN be precisely defined.

The case of causality is more complicated, but it is often possible to
say with certainty that an event E1 is NOT the cause of an event E2.


Let E1, E2 be two events occurring at times t1, t2 respectively.
And let d be the distance between the points at which E1 and E2 occur.

To avoid relativistic difficulties, we assume that all coordinates and times
are measured in the same coordinate system.

We can then say the following:

If t1 > t2 (i.e. if E1 occurred later than E2), then E1 is not the cause
of E2.

Furthermore, even when t1 < t2, E1 cannot be the cause of E2 if
d/(t2-t1) > c, where c is the velocity of light (because no signal can
be sent with a velocity exceeding c).

So although we have not given a general operational definition of causality,
we HAVE given a criterion which, in the case in which t1 > t2 or
d/(t2-t1) > c, allows us to say that E1 has NOT caused E2.
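The exclusion criterion above is easy to sketch in code. The following is a minimal illustration, not part of the original letter; the coordinates, units (metres, seconds) and the function name are my own assumptions:

```python
# Given two events with positions (same coordinate frame, metres) and
# times (seconds), decide whether E1 could POSSIBLY have caused E2.
# False means causation is definitely excluded; True means it is
# merely not ruled out by this criterion.
import math

C = 299_792_458.0  # velocity of light, m/s

def could_have_caused(x1, t1, x2, t2):
    if t1 >= t2:                # E1 did not precede E2
        return False
    d = math.dist(x1, x2)       # spatial separation between the events
    return d / (t2 - t1) <= C   # a signal could have covered d in time

# An event one light-year away (~9.46e15 m) occurring one second later
# cannot have been caused by E1:
print(could_have_caused((0, 0, 0), 0.0, (9.46e15, 0, 0), 1.0))  # False
```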


Statistical independence was used in Lotfi's first edition of his email
letter as an example of a concept which is difficult to define.
Statistical independence CAN be defined precisely as we show below.

Statistical dependence is interesting in the present context
because it is often confused with causality. Such an inference is not valid:
two events E1, E2 may be statistically dependent because both
of them have a common cause E0, but this does not mean that E1 is the
cause of E2 or vice versa.

For example, lack of rain (E0) may cause my rose bush to dry out (E1),
as well as that of my neighbor (E2).
This does not mean that the drying out of MY rose
has caused the drying out of my neighbor's rose. However, the two
events E1, E2 ARE statistically dependent.

The general, precise definition of statistical dependence is the following:

Let A, B be two random variables which can take on values in the domains
{a1,...,ai,...aI} and {b1,...,bj,...,bJ} respectively.

Then ai is said to be statistically independent of B if and only if
prob(ai|bj) has the same value (namely prob(ai)) for all the J bj's,
bj=b1,...,bJ:

           prob(ai|bj)=prob(ai) for all bj's. (1)

If the above holds for all the I ai's, ai=a1,...,aI,
then the random variable A is said to be statistically independent of B.

The well-known formula

           prob(ai,bj)= prob(ai) prob(bj) (2)

for the joint probability of occurrence of ai AND bj when A and B
are independent random variables then follows from the law
(mathematical definition) of compound probabilities,

          prob(ai,bj)= prob(ai) prob(bj|ai) (3)

and equation (1).
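Definition (2) can be checked numerically from a joint probability table. The following sketch, with an illustrative 2x2 table that is my own assumption and not data from the letter, tests whether prob(ai,bj) = prob(ai) prob(bj) for all pairs:

```python
# joint[i][j] = prob(ai, bj).  A and B are independent iff
# prob(ai, bj) == prob(ai) * prob(bj) for all i, j (equation (2)),
# which by the compound-probability law (3) is equivalent to (1).

def is_independent(joint, tol=1e-12):
    p_a = [sum(row) for row in joint]        # marginals prob(ai)
    p_b = [sum(col) for col in zip(*joint)]  # marginals prob(bj)
    return all(abs(joint[i][j] - p_a[i] * p_b[j]) <= tol
               for i in range(len(p_a)) for j in range(len(p_b)))

independent = [[0.12, 0.28],   # rows: a1, a2; columns: b1, b2
               [0.18, 0.42]]   # e.g. 0.12 = 0.4 * 0.3
dependent   = [[0.30, 0.10],
               [0.10, 0.50]]

print(is_independent(independent))  # True
print(is_independent(dependent))    # False
```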


The above deliberations accentuate the difference between causality and
statistical dependence. The definition of the physical concept of causality
always involves the time difference between the two events as well as
the velocity of propagation of a signal between them. In contrast,
time difference and velocity of propagation are not needed to define the
purely mathematical concept of statistical dependence.

A causality relationship between two events E1, E2 will always give rise to
a certain degree of statistical dependence between them. (E1 corresponds to
an outcome ai of A, E2 to an outcome bj of B).

The converse is not true. A statistical dependence between two events may,
but need not, indicate a causality relationship between them.

An example of statistical dependence AND causality is
E1 = lack of rain,
E2 = drying out of my rosebush.

An example of statistical dependence WITHOUT causality is
E1 = drying out of my rosebush,
E2 = drying out of my neighbor's rosebush.
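The rose-bush example can also be simulated: a common cause E0 drives both E1 and E2, with no signal passing between the two bushes, yet the events come out statistically dependent. The probabilities below are illustrative assumptions of mine:

```python
# Common cause E0 (lack of rain) makes E1 (my bush dries out) and
# E2 (my neighbor's bush dries out) dependent without either causing
# the other.
import random

random.seed(0)

def season():
    dry = random.random() < 0.5   # E0: lack of rain
    p = 0.9 if dry else 0.1       # each bush dries out with a
    e1 = random.random() < p      # probability depending only on E0
    e2 = random.random() < p
    return e1, e2

N = 100_000
samples = [season() for _ in range(N)]
p_e1 = sum(e1 for e1, _ in samples) / N
p_e2 = sum(e2 for _, e2 in samples) / N
p_e1_and_e2 = sum(e1 and e2 for e1, e2 in samples) / N

# Dependence shows up as prob(E1,E2) differing from prob(E1)*prob(E2):
print(round(p_e1_and_e2, 3), round(p_e1 * p_e2, 3))
```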

                 - - - - - - - - - - - -

A copy of Lotfi Zadeh's letter follows:

> Date: Wed, 23 Aug 2000 12:03:48 -0700 (PDT)
> From: "Michelle T. Lin" <michlin@EECS.Berkeley.EDU>
> *********************************************************************
> Berkeley Initiative in Soft Computing (BISC)
> *********************************************************************
> To: BISC Group
> From: L. A. Zadeh <>
> The following is a corrected version of the previous sent abstract.
> ----
> Toward the Concept of Generalized Definability
> Lotfi A. Zadeh*
> (Abstract of a lecture presented at the Rolf Nevanlinna Colloquium,
> University of Helsinki, Helsinki, Finland, August 8, 2000)
> Attempts to formulate mathematically precise definitions of
> basic concepts such as causality, randomness and probability have a
> long history. The concept of generalized definability which is
> outlined in this lecture suggests that such definitions may not exist.
> Furthermore, it suggests that existing definitions of many basic
> concepts, among them those of stability, statistical independence and
> Pareto-optimality, may be in need of redefinition.
> In essence, definability is concerned with whether and how a
> concept, X, can be defined in a way that lends itself to mathematical
> analysis and computation. In mathematics, definability of mathematical
> concepts is taken for granted. But as we move farther into the age of
> machine intelligence and automated reasoning, the issue of
> definability is certain to grow in importance and visibility, raising
> basic questions which are not easy to resolve.
> To be more specific, let X be the concept of, say, a summary,
> and assume that I am instructing a machine to generate a summary of a
> given article or a book. To execute my instruction, the machine must
> be provided with a definition of what is meant by a summary. It is
> somewhat paradoxical that we have summarization programs which can
> summarize, albeit in a narrowly prescribed sense, without being able
> to formulate a general definition of summarization. The same applies
> to the concepts of causality, randomness and probability. Indeed, it
> may be argued that these and many other basic concepts cannot be
> defined within the conceptual framework of classical logic and set
> theory.
> The point of departure in our approach to definability is the
> assumption that definability has a hierarchical structure.
> Furthermore, it is understood that a definition must be unambiguous,
> precise, operational, general and co-extensive with the concept which
> it defines.
> The hierarchy involves five different types of definability.
> The lowest level is that of c-definability, with c standing for crisp.
> Thus, informally, a concept, X, is c-definable if it is a crisp
> concept, e.g., a prime number, a linear system or a Gaussian
> distribution. The domain of X is the space of instances to which X
> applies.
> The next level is that of f-definability, with f standing for
> fuzzy. Thus, X is a fuzzy concept if its denotation, F, is a fuzzy set
> in its universe of discourse. A fuzzy concept is associated with a
> membership function which assigns to each point, u, in the universe of
> discourse of X, the degree to which u is a member of F. Alternatively,
> it may be defined algorithmically in terms of other fuzzy concepts.
> Examples of fuzzy concepts are small number, strong evidence and
> similarity. It should be noted that many concepts associated with
> fuzzy sets are crisp concepts. An example is the concept of a convex
> fuzzy set. Most fuzzy concepts are context-dependent.
> The next level is that of f.g-definability, with g standing
> for granular, and f.g denoting the conjunction of fuzzy and granular.
> Informally, in the case of a concept which is f.g-granular, the
> values of attributes are granulated, with a granule being a clump of
> values which are drawn together by indistinguishability, similarity,
> proximity or functionality. f.g-granularity reflects the bounded
> ability of the human mind to resolve detail and store information. An
> example of an f.g-granular concept which is traditionally defined as a
> crisp concept, is that of statistical independence. This is a case of
> misdefinition -- a definition which is applied to instances for which
> the concept is not defined, e.g., fuzzy events. In particular, a
> common misdefinition is to treat a concept as if it were c-definable
> whereas in fact it is not.
> The next level is that of PNL-definability, where PNL stands
> for Precisiated Natural Language. Basically, PNL consists of
> propositions drawn from a natural language which can be precisiated
> through translation into what is called precisiation language. An
> example of a proposition in PNL is: It is very unlikely that there
> will be a significant increase in the price of oil in the near
> future.
> In the case of PNL, the precisiation language is the
> Generalized Constraint Language (GCL). A generic generalized
> constraint is represented by Z isr R, where Z is the constrained
> variable, R is the constraining relation and r is a discrete-valued
> indexing variable whose values define the ways in which R constrains
> Z. The principal types of constraints are: possibilistic (r=blank);
> veristic (r=v); probabilistic (r=p); random set (r=rs); usuality
> (r=u); fuzzy graph (r=fg); and Pawlak set (r=ps). The rationale for
> constructing a large variety of constraints is that conventional crisp
> constraints are incapable of representing the meaning of propositions
> expressed in a natural language -- most of which are intrinsically
> imprecise -- in a form that lends itself to computation.
> The elements of GCL are composite generalized constraints
> which are formed from generic generalized constraints by combination,
> modification and qualification. An example of a generalized constraint
> in GCL is ((Z isp R) and (Z,Y) is S) is unlikely.
> By construction, the Generalized Constraint Language is
> maximally expressive. What this implies is that PNL is the largest
> subset of a natural language which admits precisiation. Informally,
> this implication serves as a basis for the conclusion that if a
> concept, X, cannot be defined in terms of PNL, then, in effect, it is
> undefinable or, synonymously, amorphic.
> In this perspective, the highest level of definability
> hierarchy, which is the level above PNL-definability, is that of
> undefinability or amorphicity. A canonical example of an amorphic
> concept is that of causality. More specifically, it is not possible to
> construct a general definition of causality such that given any two
> events A and B and the question, "Did A cause B?" the question
> could be answered based on the definition. Equivalently, given any
> definition of causality, it will always be possible to construct
> examples to which the definition would not apply or yield
> counterintuitive results.
> In dealing with an amorphic concept, X, what is possible --
> and what we generally do -- is to restrict the domain of applicability
> of X to instances for which X is definable. For example, in the case
> of the concept of a summary, which is an amorphic concept, we could
> restrict the length, type and other attributes of what we want to
> summarize. In this sense, an amorphic concept may be partially
> definable or, for short, p-definable. The concept of p-definability
> applies to all levels of the definability hierarchy.

                          - - - - - - -

Address, etc.:
          Ellen Hisdal                  | Email:
          (Professor Emeritus)          |
Mail:     Department of Informatics     | Fax: +47 22 85 24 01
          University of Oslo            | Tel (office): +47 22 85 24 39
          Box 1080 Blindern             |
          0316 Oslo, Norway             | Tel (secr.): +47 22 85 24 10
Location: Gaustadalleen 23,             |
          Oslo                          | Tel (home): +47 22 49 56 53

This message was posted through the fuzzy mailing list.

This archive was generated by hypermail 2b25 : Tue Aug 29 2000 - 17:35:38 MET DST