I tend very much to agree with this view, but I would like to add a
wrinkle which gets at the question of paradigm which lies at the
bottom of the whole debate. Yes, in some sense Prof. Rubin and Prof.
Jefferys are correct when they say that the null hypothesis is
always false, but the correctness of the statement rests upon the
tacit premise of the impossibility of ever achieving the measurement
literally of an infinitesimal point correct to an infinite number of
decimal places. Hence if we proceed from a paradigm literally of point
measurement, as current theories of statistical inference do, we are
led to the kind of paradoxical statement that has been the subject
of this thread. But the paradox is less one of semantic necessity than
one of empirical reality, to be understood more as poetic truth than
as mathematical theorem. And even in the empirical domain, the
paradox disappears if we simply accept, as Michael Cohen
has done, that the paradigm of point measurement is at best a
convenient idealization. He articulates the first approximation in
relaxing the idealization, namely adopting a quantization (0.25 inches
in his example) over the continuous attribute (height) being measured.
The paradigmatic change that I would offer extends that basic
insight to the further acknowledgment that the quantization attempted
also cannot be exact, and the sharp boundaries implicit in the formulation
> H0: 68.75 inches < mu < 69.25 inches versus
> H1: mu < 68.75 inches or 69.25 inches < mu
are also an idealized fiction. The paradigmatic change that is necessary--
the second approximation which in a sense is merely a variant of
Cohen's discretization stratagem (the first approximation)--is
to view measurement as being *fuzzy* in general. It is in this way
that it is possible to mediate between the underlying attribute,
such as height, which may remain strictly continuous, and the
practical measurement of it, which must be fundamentally discrete
and which moreover cannot (practically) achieve the crisp quantization
suggested by Cohen. Every discrete measurement report of which a
measuring device may be capable--e.g., the ruler on my desk has 301
discrete markings corresponding to lengths from 0 to 30 cm, with a
quantization interval of 1 mm between the markings--then stands for a
cluster or fuzzy subset of points, fuzzily rather than crisply
specified, since imperfections in the instrument and in its use
will inevitably conspire to fuzzify the quantization boundaries
that ideally we should want.
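To make the contrast concrete, here is a minimal sketch (my
illustration, not anything from the thread): a crisp membership
function for Cohen's 68.75-69.25 inch quantization interval, next to a
fuzzified version of the same interval whose boundaries ramp rather
than jump. The 0.1-inch fuzz half-width is an assumption chosen purely
for illustration.

```python
# Crisp vs. fuzzified membership over Cohen's quantization interval.
# The interval endpoints (68.75, 69.25 inches) are from the thread;
# the fuzz half-width (0.1 inch) is an illustrative assumption.

def crisp_membership(x, lo=68.75, hi=69.25):
    """Crisp quantization: a point either is or is not in the interval."""
    return 1.0 if lo <= x <= hi else 0.0

def fuzzy_membership(x, lo=68.75, hi=69.25, fuzz=0.1):
    """Trapezoidal membership: full inside [lo+fuzz, hi-fuzz],
    ramping linearly to 0 across the fuzzified boundaries."""
    if x <= lo - fuzz or x >= hi + fuzz:
        return 0.0
    if lo + fuzz <= x <= hi - fuzz:
        return 1.0
    if x < lo + fuzz:                       # rising edge
        return (x - (lo - fuzz)) / (2 * fuzz)
    return ((hi + fuzz) - x) / (2 * fuzz)   # falling edge

for x in (68.60, 68.75, 69.00, 69.25, 69.40):
    print(x, crisp_membership(x), round(fuzzy_membership(x), 3))
```

Note that the nominal endpoints themselves get membership 0.5 in the
fuzzy version: the boundary is precisely where membership is most
ambiguous, which is the point of the fuzzification.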
I like to think of the reals as a labelling scheme,
one that in principle allows an infinitesimal point in a continuum
to be uniquely labelled, distinguishing it from every other point...
but that this labelling scheme is more an accomplishment of the
imagination than an accomplishment that may be realized practically.
It cannot be realized practically, because to do so would require
an unending sequence of words (digits) to characterize at least
the irrational points in the continuum. In a finite
life, we do not have the time. Therefore, inevitably, we approximate
reals--infinitesimal points--by pairs of rational numbers, one
constituting a lower bound, the other an upper bound, and we may
choose the level of precision that we want, or that our measurement
method permits. This is an important notion for fuzzy set theory,
because, seen in its most abstract generality, fuzzy set theory
allows us to mediate between the point idealization of empirical
data as belonging to the reals, and the fuzzy reality that empirical
data are in general *clusters* of reals, moreover clusters for which
the boundaries are not as crisp as the bracketing implied by
use of rational lower and upper bounds. Thus, as applied to
measurement in general, and thence to issues of statistical and
other inference, fuzzy set theory may be seen as bringing the reals back from
the level of mere idealization, important though that is, to the
empirically realizable (no pun intended). The membership function
which characterizes a fuzzy subset is what mediates between the
discretely many labels in a labelling-set--measurement being a
particular case, as with the ruler on my desk--and the continuously
many points in the underlying continuum of the abstract attribute
set sought to be practically measured.
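The bracketing of a real by rational lower and upper bounds at a
chosen precision can be sketched directly. The following is my
illustration (not from the thread): bisection in exact rational
arithmetic brackets sqrt(2), a point that no finite sequence of digits
can label exactly, between two rationals whose gap we choose.

```python
# Bracketing an irrational point between rational lower and upper
# bounds at a chosen precision, by bisection in exact arithmetic.
from fractions import Fraction

def rational_bracket(square, precision):
    """Return rationals (lo, hi) with lo**2 <= square <= hi**2
    and hi - lo <= precision."""
    lo, hi = Fraction(0), Fraction(max(square, 1))
    while hi - lo > precision:
        mid = (lo + hi) / 2
        if mid * mid <= square:
            lo = mid            # mid is still at or below the point
        else:
            hi = mid            # mid is above the point
    return lo, hi

# Bracket sqrt(2) to within 1/1000.
lo, hi = rational_bracket(2, Fraction(1, 1000))
print(lo, hi)                   # the bracketing rationals
print(float(lo), float(hi))     # their decimal approximations
```

Each tightening of `precision` is a longer "measurement" of the same
point; the point itself is never reached, only bracketed, which is
exactly the sense in which the reals are a labelling accomplishment of
the imagination rather than of practice.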
When the paradigm is changed from one of point measurement,
to one of fuzzy measurement in general, not only does the "paradox"
become moot--the null *point* hypothesis would still in general,
but only as a practical empirical matter, always be false, while
the null *interval* or null *fuzzy* hypothesis would *not* in
general be false, at least not a priori--but it also becomes
clear that null hypotheses may in general be dispensed with as
a tool of analysis. Within a reformulated theory, proceeding from
within a different paradigm where fuzzy measurement replaces point
measurement as the general case, it is possible also to characterize
*directly* the uncertainty in any unknown parameter or parameters
of interest, as well as arbitrary functional transformations of parameters
(Thomas, 1995). Statistical sampling itself becomes in effect
a process of "measurement" (over parameter space), the precision of
which varies in an obvious manner with sample size.
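The sampling-as-measurement picture can be sketched in a few lines (my
illustration; the true mu of 69.0 and sigma of 3.0 are assumptions for
the simulation, and the interval is Cohen's): the half-width of a ~95%
bracket around the sample mean is the precision of the "measurement"
of mu, and it shrinks as the square root of the sample size.

```python
# Sampling as "measurement" over parameter space: the bracket around
# the sample mean narrows with n, just as a finer ruler narrows the
# quantization interval. True mu and sigma are assumed for simulation.
import random
import statistics

random.seed(42)
MU, SIGMA = 69.0, 3.0           # assumed population values

for n in (25, 100, 400, 1600):
    sample = [random.gauss(MU, SIGMA) for _ in range(n)]
    mean = statistics.fmean(sample)
    half_width = 1.96 * SIGMA / n ** 0.5    # precision of the "measurement"
    print(f"n={n:5d}  mu in ({mean - half_width:.3f}, "
          f"{mean + half_width:.3f})  half-width={half_width:.3f}")
```

At small n the bracket easily contains the whole interval null
68.75-69.25; quadrupling n halves the bracket, so the "measurement" of
mu sharpens in exactly the manner the text describes.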
Regards,
S. F. Thomas