: S. F. Thomas (sthomas@decan.com) wrote:
: [ An extremely long posting about the latest attempt of the computer ]
: [ science and AI communities to avoid having to understand the ]
: [ foundations of statistical inference. Much of this posting is, ]
: [ IMVHO, ill-conceived, but, unlike Thomas, I do not have time to ]
: [ address all points in detail. However, I will focus on one aspect ]
: [ which illustrates the AI community's lack of basic statistical ]
: [ understanding. ]
The unkind, ad hominem cuts are of course unworthy, and I ignore them.
However, let's see if there is some substance to the bluster...
: : I take the (perhaps simplistic) view that modelling is fundamentally
: : about counting and classifying within an assumed morphology
: : (objects and measurable attributes thereof, concerning which discourse
: : proceeds) for the phenomenon in question. Classification requires
: : measurement, and, ipso facto, observation. Counting may lead
: : to a probability hypothesis. But probabilities may never ever
: : be directly *observed* as a singular event or observation...
: : measured outcomes, yes (e.g. "heads" or
: : "tails" on the toss of a coin, for a simple nominal-scale example,
: : but not P(Heads)=0.5). Singular events can never literally be
: : repeated, our universe being in perpetual motion. But what repeats
: : is the morphology we mentally construct around phenomena as
: : we seek to bring order (counting and classifying) to our observations.
: : It is this notion of morphology that bridges the gap between
: : frequency and subjective notions of probability. It is the notion of
: : morphology that provides the link between separate performances
: : of the same (frequentist) experiment as somehow being connected.
: Only a frequentist thinks of "repeated observations".
I would have thought that Bayesians, too, were (in part) frequentists,
since the likelihood function, which derives from frequentist
probability models, is central to the updating of Bayesian prior
belief.
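To make the point concrete, here is a small sketch of my own (in Python, assuming the standard Beta-Binomial conjugate model, which is not in the original exchange): the prior belief about P(Heads) is updated precisely through the binomial likelihood, i.e. through a frequentist probability model.

```python
# Sketch (mine, not from the thread): a Beta(a, b) prior on P(Heads)
# updated by the binomial likelihood of observed coin tosses. The
# likelihood -- a frequentist probability model -- does the updating.

def update_beta(a, b, heads, tails):
    """Conjugate update: Beta(a, b) prior + binomial likelihood
    of (heads, tails) -> Beta(a + heads, b + tails) posterior."""
    return a + heads, b + tails

def beta_mean(a, b):
    """Mean of a Beta(a, b) distribution."""
    return a / (a + b)

# Uniform prior Beta(1, 1); observe 7 heads in 10 tosses.
a, b = update_beta(1, 1, 7, 3)
print(beta_mean(a, b))  # posterior mean a/(a+b) = 8/12
```

Whatever the prior, it is the data, entering through the likelihood, that move the posterior.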
: For a Bayesian, the
: link is provided by consideration of symmetry and invariance, the
: strongest form of which is given a name: "Exchangeability". This is the
: notion that beliefs over a collection remain invariant under an
: arbitrary permutation of elements of the collection. Such a notion leads
: to a representation for the collection which separates belief into
: belief about the "underlying true value", and "individual variation".
: Weaker forms of symmetry often lead to similar representations. It is
: these representations which allow understanding of underlying processes,
: and allow future prediction.
This reminds me of the software programmer who attempts to pass off
a bug as a feature. My objection to the Bayesian schema is logically
prior to the Bayesian paradigm itself. It is no defense of the
Bayesian paradigm to argue from within the paradigm. It is
an impregnable position, yes, but in the same way that defenders
of solipsism are immune from attack. Subjectivism is a mighty
fortress indeed, and remains the essential stratagem of de Finetti,
Savage and the other neo-Bayesian authors of the 20th century. I
certainly would not attempt to defeat Bayesianism on such
ground, and I readily concede that as a work of axiomatic mathematics,
the neo-Bayesian edifice is both impressive, and internally
consistent. Where I have difficulty is making that edifice
correspond to reality as I apprehend it. Subjective prior belief,
no matter how sophisticated the mathematics, or how clever the
trotting out of such concepts as "exchangeability", remains an
artful dodge where the central problem of inference is concerned,
which is, how to characterize what *the data* say about some unknown
probability distribution of interest. Throwing in a statistician's --
or a "user's" -- prior belief into the mix, is still, in my opinion,
sidestepping the real question. And arguing for the *necessity*
of so doing, not to mention the *goodness* of so doing, is the
Bayesian equivalent of turning a bug into a feature.
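For the record, the exchangeability quoted above is easily illustrated in its simplest i.i.d. Bernoulli form (my sketch, in Python, not from the thread): the likelihood of a 0/1 sequence is invariant under any permutation of the sequence, depending only on the counts.

```python
# Sketch (mine): exchangeability in the i.i.d. Bernoulli case.
# The likelihood of a 0/1 sequence depends only on the counts of
# 0s and 1s, so every permutation of the sequence gets the same value.
from itertools import permutations

def likelihood(seq, p):
    """Bernoulli likelihood of a 0/1 sequence under success probability p."""
    out = 1.0
    for x in seq:
        out *= p if x == 1 else (1 - p)
    return out

seq = (1, 0, 1)
vals = {round(likelihood(perm, 0.3), 12) for perm in set(permutations(seq))}
print(vals)  # a single value: permuting the sequence changes nothing
```

Which is all well and good as mathematics; my objection, again, is to what the apparatus is made to carry.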
: There are two fundamental misconceptions at the heart of fuzzy theory.
: The first is that uncertainty can be adequately understood without the
: notion of probability.
Maybe, or maybe not. I would leave it to the proponent to show.
Reality stays the same. Paradigms come and go.
: Probability theory exists precisely as a language
: for the understanding of uncertainty.
Uncertainty of a certain kind, however.
I happen to agree that the attempt to assert fuzziness as something
totally unrelated to probability is ultimately misguided. But
that is not the same thing as saying that fuzzy *is* probability.
As I have argued, here and elsewhere, there is rather a sort of
duality linking fuzzy and probability, in exactly the same
way that likelihood and probability are distinct, but related
concepts. In any case, fuzzy and probability do not necessarily
compete, except when both retreat into a sort of solipsistic
subjectivism, where everything becomes a matter of personal taste.
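The distinction between likelihood and probability to which I appeal is easily shown (my sketch, in Python, using the binomial model; none of this is in the original exchange): the very same function C(n,k) p^k (1-p)^(n-k), read as a function of the outcome k for fixed p, is a probability distribution summing to one; read as a function of the parameter p for fixed k, it is a likelihood, and does not integrate to one.

```python
# Sketch (mine): one function, two readings.
from math import comb

def binom_pmf(k, n, p):
    """Binomial probability of k successes in n trials at success prob p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 10, 0.5
# Read as a probability in k (p fixed): sums to one.
print(sum(binom_pmf(k, n, p) for k in range(n + 1)))  # 1.0

# Read as a likelihood in p (k fixed): a crude Riemann sum over
# p in [0, 1] gives 1/(n+1), not 1 -- it is no probability density.
k = 7
grid = [i / 1000 for i in range(1001)]
approx = sum(binom_pmf(k, n, q) for q in grid) / len(grid)
print(approx)  # ≈ 1/(n+1) ≈ 0.091, not 1
```

Distinct, but related: that is the sort of duality I have in mind between fuzzy and probability.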
: The other is that such
: understandings do not have to be fundamentally subjective. Frequentist
: statisticians have been trying to be "objective" for decades, and the
: literature is littered with examples of its abject failure. I will end
: this post with a quote by a man who understood uncertainty better than
: anyone had ever done before....
You make assertions, or rather dogmatic statements of Bayesian
doctrine. But you do not make rational arguments to which one could
respond. Surely, not even a Bayesian would deny that observing
heads or tails on the toss of a coin is essentially an "objective"
procedure, whatever one's prior belief might be as to which
might turn up?
: ... There is no way, however, in which the individual can avoid the burden
: of responsibility for his own evaluations. The key cannot be found that
: will unlock the enchanted garden wherein, among the fairy-rings and the
: shrubs of magic wands, beneath the trees laden with monads and noumena,
: blossom forth the flowers of PROBABILITAS REALIS. With these fabulous
: blooms safely in our button-holes we would be spared the necessity of
: forming opinions, and the heavy loads we bear upon our necks would be
: rendered superfluous once and for all.
: Bruno de Finetti
: Theory of Probability, Vol 2
The essential stratagem revealed -- making of a bug, a feature.
I do not deny the importance of subjective belief in certain
situations. But to take this ounce of truth, and to make of it
the whole inferential meal, is, again, sidestepping the essential
problem of inference, which is to characterize what *the data*
say.
: --
: Darren Wilkinson - E-mail: d.j.wilkinson@durham.ac.uk
: <a href=http://fourier.dur.ac.uk:8000/djw.html>WWW page</a>
Regards,
S. F. Thomas