: > [ Regarding subjectivity in Bayesian inference ]
: > The essential stratagem revealed -- making of a bug, a feature.
: > I do not deny the importance of subjective belief in certain
: > situations. But to take this ounce of truth, and to make of it
: > the whole inferential meal, is, again, sidestepping the essential
: > problem of inference, which is to characterize what *the data* say.
: But this problem also has a direct solution in the Bayesian framework.
: What the *data* say is characterized by the likelihood function. If
: that's all you want to know, just plot the likelihood and call it a
: day. This is a perfectly serious suggestion, by the way - a plot of
: the likelihood function may well be the best way of presenting the
: results.
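The suggestion above (just plot the likelihood and call it a day) can be sketched concretely. A minimal illustration in Python, with hypothetical binomial data (7 successes in 20 trials) chosen purely for the example:

```python
import numpy as np

# Hypothetical data: 7 successes in n = 20 Bernoulli trials.
successes, n = 7, 20

# Evaluate the binomial likelihood L(p) on a grid of parameter values.
p = np.linspace(0.001, 0.999, 999)
likelihood = p**successes * (1 - p)**(n - successes)

# Rescale so the maximum is 1 -- the likelihood is defined only up to
# a multiplicative constant, so relative values are what matter.
likelihood /= likelihood.max()

p_mle = p[np.argmax(likelihood)]
print(f"MLE: {p_mle:.3f}")   # the curve peaks at p = 7/20 = 0.35
# To present the result, plot `likelihood` against `p`, e.g. with
# matplotlib: plt.plot(p, likelihood)
```

The rescaled curve is the whole inferential summary Neal describes: every reader can see at a glance which parameter values the data support and how sharply.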
The trouble, though, has always been how to manipulate the likelihood
function. How does one eliminate nuisance parameters while operating
entirely within the (multi-parameter) likelihood calculus? How does one
obtain the likelihood function characterizing some *function
transformation* of the parameter(s) whose likelihood has been mapped?
I submit that it is these deficiencies of the likelihood calculus
that served as the spur to the Bayesian enterprise in the first
place. What the Bayesian schema achieves, after all, is nothing but
a rationalization for treating the *likelihood* function over
parameter space exactly as though it were a *probability* distribution
over parameter space. The necessary injection of the prior -- nowadays
subjective, once based on the "principle of insufficient reason",
among other vexatious approaches tried at one time or another over
the space of 200 years -- is essentially a nuisance, elevated in
the neo-Bayesian revival of the 20th century, by too-clever minds,
to the status of a virtue.
Then you marginalize with impunity to eliminate
nuisance parameters, using integration as your rule. And you
perform arbitrary changes of variable relating states of nature
to loss functions of interest. My contention, precisely, is that
an extended likelihood calculus that permits such operations
would have *all* the advantages of the Bayesian schema, but
without the *necessity* of bringing prior belief into
the picture. That option, though, would remain whenever it is
warranted.
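To make the contrast concrete: eliminating a nuisance parameter by maximization (the profile-likelihood route, needing no prior) versus by integration (the Bayesian route, here with a flat prior purely for illustration) can be sketched as follows. The sample and the grids are hypothetical assumptions, not anything from the thread:

```python
import numpy as np

# Hypothetical sample from a normal model; mu is of interest,
# sigma is the nuisance parameter.
x = np.array([4.9, 5.3, 4.6, 5.8, 5.1, 4.7, 5.4, 5.0])
n = len(x)

mu_grid = np.linspace(4.0, 6.0, 201)
sigma_grid = np.linspace(0.1, 2.0, 200)

# Log-likelihood on the (mu, sigma) grid.
M, S = np.meshgrid(mu_grid, sigma_grid, indexing="ij")
loglik = (-n * np.log(S)
          - ((x[None, None, :] - M[..., None])**2).sum(-1) / (2 * S**2))

# Likelihood route: profile out sigma by MAXIMIZING over it.
profile = loglik.max(axis=1)

# Bayesian route: eliminate sigma by INTEGRATING it out (flat prior,
# uniform sigma grid, so a plain Riemann sum suffices).
lik = np.exp(loglik - loglik.max())
marginal = lik.sum(axis=1)

# Both curves peak at the sample mean (5.1), though in general the
# two routes need not agree in shape.
print(mu_grid[np.argmax(profile)], mu_grid[np.argmax(marginal)])
```

The point of the sketch: the integration step is exactly what the probability calculus licenses and the traditional likelihood calculus does not; an extended likelihood calculus would have to justify operations of this kind on likelihoods directly.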
: Bayesians with different priors then multiply the likelihood
: by their prior, normalize, and have what they want. Non-Bayesians who
: accept the Likelihood Principle are happy as well.
: Of course, some "minor" problems arise when the parameter is, say,
: twenty-dimensional, but that's life.
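The quoted recipe (multiply the likelihood by your prior, then normalize) is easy to sketch on a grid. The data and the two Beta priors below are hypothetical, chosen only to show that different priors yield different posteriors from the same likelihood:

```python
import numpy as np

# Hypothetical binomial data: 7 successes in 20 trials.
successes, n = 7, 20
p = np.linspace(0.001, 0.999, 999)
likelihood = p**successes * (1 - p)**(n - successes)

# Two Bayesians with different Beta priors (shape parameters
# chosen arbitrarily for illustration).
prior_a = p**(2 - 1) * (1 - p)**(2 - 1)   # Beta(2, 2)
prior_b = p**(5 - 1) * (1 - p)**(1 - 1)   # Beta(5, 1)

def posterior(prior):
    post = prior * likelihood
    return post / post.sum()              # normalize on the grid

post_a, post_b = posterior(prior_a), posterior(prior_b)

# Same likelihood, different priors, different posterior modes:
print(p[np.argmax(post_a)], p[np.argmax(post_b)])
```

With conjugate Beta priors the grid answers match the exact ones (the posterior modes are 8/22 and 11/24 respectively); the grid version is shown only because it works for any prior, conjugate or not.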
The problems are by no means minor, as you well know, at least
not within the limits of the "traditional" likelihood calculus.
Within an extended likelihood calculus that can be developed using
some of the fresh semantics of fuzzy set and possibility theory,
however, the problems may at last be successfully tackled. Fuzzy set
theory itself must be reformulated in certain areas to achieve the
essential unification.
: I wouldn't agree that characterizing what the data say is the
: essential problem of inference, however. As a Bayesian, I'd say that
: the crucial step is formalizing your beliefs in the form of a prior
: distribution. I think that figuring out how to do this for complex,
: high-dimensional models is the most exciting aspect of statistics at
: present.
If you step outside the Bayesian paradigm for a minute, and contemplate
the possibilities opened up by an extended likelihood calculus that is
just as powerful as the probability calculus in the manipulations it
allows, I think the (major, IMO) challenge of formalizing prior beliefs
in a manner consistent with, e.g., the total ordering axiom would be
seen as the unnecessary trap that it is, rather than the virtue that
is so often claimed. OTOH, it seems to keep many clever and
inventive minds very well occupied.
: Radford Neal
Regards,
S. F. Thomas