BISC: What is the right way to do reasoning?

From: Masoud Nikravesh (
Date: Thu Jan 18 2001 - 07:42:09 MET


    Berkeley Initiative in Soft Computing (BISC)

    To: "Paul Prueitt" <>, <>
    From: Paul Werbos <>
    Subject: What is the right way to do reasoning?

    In some of these lists and elsewhere, I have seen some very heated debates
    about the question: "What is the right way to do reasoning?" Some people
    have argued that "John Stuart Mill" reasoning as promulgated by Finn's
    school in Russia
    is the right way. Some people have argued that fuzzy logic, or various forms
    of fuzzy logic, are the right way.
    And much of the hard-core AI mainstream now seems to assume that Bayesian
    networks are the right way.

    But: I haven't seen any discussion about whether the question itself is
    well posed: **DOES THERE EXIST** a right way to do reasoning? Maybe we
    really need to think about the question itself a bit more, before we can
    hope to achieve more agreement on its answer.

    In the area of subsymbolic intelligence, we have some reason to believe that
    a "right answer" exists, to some extent.
    The higher-order capabilities of the mammal brain clearly have some kind of
    universality to them.
    Yes, the human and even the monkey may have some important capabilities that
    other mammals do not...
    but these capabilities are based on a fundamental brain structure -- the
    six-layer cortex, the basal ganglia
    loops, the limbic system -- which are universal to all mammals. Mammals in
    turn have a pretty clear dominance
    across all the task environments found on earth -- except for niches where
    nature cannot afford to feed such a big brain,
    or where mammals are limited by stuff like lungs rather than brains.

    So: there may be a kind of right answer to the problem of "subsymbolic
    reasoning," if you want to call that reasoning.
    It's a good area for research. But what about real "symbolic" or semiotic
    reasoning, where we manipulate
    abstract symbols in order to derive conclusions?

    There is absolutely no reason to be found in nature to justify the belief
    that there EXISTS a right way to do symbolic reasoning. OK, there should be
    A way to do symbolic reasoning which can reproduce the analytic and logical
    powers of a small mouse. But that isn't what people usually claim to be
    doing when they work with symbolic reasoning.

    But what evidence do we have that effective symbolic reasoning is even
    possible, let alone that there is a best way to
    do it?

    Our evidence really starts at the time of Aristotle and Plato and such...
    the first time that people really started to do
    the formal, explicit rule-based kinds of manipulations we call "syllogisms."
    Languages from before that time
    mostly do not have a "propositional" structure at all... they mainly convey
    word-movies. People engaged in a kind
    of reasoning before Plato, of course... but it would mainly involve...
    hearing a few sentences... coming up with images
    or impressions of what MIGHT be going on in the world... and then maybe
    ARTICULATING the images
    in one's mind. The "reasoning process" was more a subsymbolic process than
    anything else. This kind of reasoning
    is still absolutely critical to the most advanced work in science... it is
    not the whole thing, but it is a major part.

    But: we know, of course, that our knowledge of the world cannot be easily
    reduced to just a few yes-no bits.
    The logic developed by Aristotle and Plato may or may not be "universally
    valid" in some sense -- but we
    certainly know that it is not a complete recipe for symbolic reasoning,
    because reality involves a lot
    of continuous variables, not just binary variables. It is not "THE" right
    way to do reasoning, because it
    is not powerful enough to express many concepts which we can express in
    natural language -- concepts which add a lot more power to symbolic reasoning.


    Now: we know that human brains COULD NOT possibly have built into them "THE right way" to do symbolic reasoning, **IF** there is a right way to do symbolic reasoning. The reason we know this is very straightforward. Humans have only been DOING true symbolic reasoning for a few thousand years -- not enough time for the very profound changes in brain structure that would be required to do "right reasoning" in a hard-wired way.

    But: does this mean that humans are the missing link between the mouse level of intelligence and the kinds of brains that have **right** symbolic reasoning built in? Or is it possible that right symbolic reasoning simply does not and cannot exist? Looking only at humans and their accomplishments, we cannot tell. It could be that reasoning systems are like the shapes of hands -- that all of them work sometimes, and not at other times, and that there are no universal principles... just a motley collection of ad hoc possibilities, without even an orderly "metaprinciple" to say what works when and why. So just as lots of mammals have different kinds of hands, maybe different kinds of reasoning systems could proliferate aimlessly... and uselessly... except as constrained by society's need to limit expenditures in this area. Some folks in the reasoning business might consider this a reasonable picture of what is actually going on...


    But... maybe, maybe not. The previous paragraph sounds a bit like the extreme "no free lunch" pessimism, which is fundamentally wrong as we have discovered in the neural network field.

    Could there be some basis in mathematics for believing that a "right way to reason" exists?

    The hard-core Bayesians would say that much of our symbolic knowledge really reduces to a set of logical yes-no propositions. (Certainly a lot of AI essentially carries no more power than that.) And so, for parsimony and manageability, there is really no alternative but trying to find sets of propositions which obey the usual assumptions of conditional independence. In this case, the problem of reasoning reduces to the problem of how to START from a set of conditional probabilities and an independence assumption, and then DEDUCE something from those assumptions, using the laws of probability.
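    This recipe can be made concrete in miniature. The following sketch is entirely my own invention -- the variable names and every probability in it are illustrative assumptions, not anything from the original discussion. It starts from a set of conditional probabilities plus one conditional-independence assumption (a three-node chain), and then deduces a posterior using only the laws of probability:

```python
# A minimal sketch (all numbers are illustrative assumptions) of the
# "hard-core Bayesian" recipe: start from conditional probabilities plus
# a conditional-independence assumption, then deduce a conclusion using
# only the laws of probability.
#
# Chain: Rain -> WetGround -> SlipperyRoad, with SlipperyRoad assumed
# conditionally independent of Rain given WetGround.

p_rain = 0.3
p_wet_given_rain = {True: 0.9, False: 0.2}   # P(WetGround | Rain)
p_slip_given_wet = {True: 0.7, False: 0.05}  # P(Slippery | WetGround)

def joint(rain, wet, slip):
    """P(rain, wet, slip), factored via the independence assumption."""
    p = p_rain if rain else 1 - p_rain
    p *= p_wet_given_rain[rain] if wet else 1 - p_wet_given_rain[rain]
    p *= p_slip_given_wet[wet] if slip else 1 - p_slip_given_wet[wet]
    return p

# Deduce P(Rain | Slippery) by summing out the hidden variable.
num = sum(joint(True, wet, True) for wet in (True, False))
den = sum(joint(r, w, True) for r in (True, False) for w in (True, False))
print(round(num / den, 3))  # → 0.602
```

    The point of the factorization is exactly the parsimony mentioned above: three small tables replace the full joint distribution over all eight assignments.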

    This hard-core approach is not so unreasonable. They tell me that Michael Jordan's book on the reasoning process contains a lot of very important new results and emerging research. In this view, all the hard reasoning problems end up being NP-hard sorts of problems; practical "reasoning" methods are seen to be APPROXIMATION methods... methods for approximating the implicit exact results of probability theory. These results, in turn, are based on a notion of propositions forming a kind of relational network or graph, in which "truth" or "falsehood" is actually a field of binary variables defined over these propositions, forming a Markov random field (MRF). That sounds a bit formalistic -- but it is very important in the real world, because there does exist a nice universal mathematics of MRFs.
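    The MRF picture can also be shown in miniature. In the toy sketch below -- whose structure, potentials, and numbers are all my own assumptions -- "truth" is a field of three binary variables over three propositions in a chain, with a pairwise potential that rewards neighbors agreeing. Exact marginals are computed by brute-force enumeration, whose exponential cost in the general case is precisely why practical reasoning methods must approximate:

```python
# A toy Markov random field over three binary "truth" variables.
# Structure and numbers are invented for illustration only.
import itertools, math

coupling = 1.0            # pairwise potential rewarding agreement
bias = [0.5, 0.0, -0.5]   # per-proposition leaning toward "true"

def score(x):
    """Unnormalized log-probability of an assignment x in {0,1}^3."""
    s = sum(b if xi else -b for b, xi in zip(bias, x))
    s += sum(coupling if x[i] == x[i + 1] else -coupling for i in range(2))
    return s

# Exact inference by enumeration -- exponential in the number of
# propositions, hence the need for approximation schemes in practice.
states = list(itertools.product((0, 1), repeat=3))
Z = sum(math.exp(score(x)) for x in states)
p_true = [sum(math.exp(score(x)) for x in states if x[i]) / Z
          for i in range(3)]
print([round(p, 3) for p in p_true])  # → [0.611, 0.5, 0.389]
```

    Note how the coupling lets the positive leaning of the first proposition and the negative leaning of the third pull on the undecided middle one from both sides.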

    But: even in this view...

    alternative reasoning systems may be thought of as alternative APPROXIMATION schemes. Perhaps there is a 'best' approximation scheme -- but perhaps there would be more progress if the advocates of new reasoning systems could explain their systems as approximation schemes, and, in the process, give some tangible basis for helping us understand which approximations (which reasoning schemes) work better under what conditions. It could get to be like atomic physics, where there still exists a menagerie of approximation and numerical techniques, but people don't quite make religions out of them. The Bayesian approach may not give us the "right" way to do reasoning, but it may give us a framework for "secularizing" this religious debate.

    As an example... I wonder how many "plausible reasoning" and even Dempster-Shafer kinds of schemes MIGHT be redefined as "robust" analysis techniques for Bayesian networks and MRFs. By "robust," I mean "robust" as in John Doyle of Caltech, the school of thought which uses a more rigorous form of interval arithmetic, in effect, to help us deduce key properties of systems we cannot solve for exactly. The fuzzy logic people and the hard-core H-infinity people don't often talk to each other, but maybe they would discover they could learn something if they did. Could one imagine a John Doyle defending, say, George Klir or Victor Finn against criticisms from mainstream AI people, as a result of such an approach to reasoning? Maybe. And it could be done WITHIN the Bayesian framework...
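    The interval-arithmetic device at the core of that "robust" school can be sketched in a few lines. The quantities and the combining formula below are hypothetical illustrations of the general idea -- not anyone's published scheme: carry an interval instead of a point value for each uncertain quantity, and deduce guaranteed bounds on a derived quantity without ever solving for it exactly.

```python
# Interval arithmetic sketch: deduce guaranteed bounds on a derived
# quantity from interval-valued inputs. All quantities are hypothetical.

def i_add(a, b):
    """Sum of two intervals (lo, hi)."""
    return (a[0] + b[0], a[1] + b[1])

def i_mul(a, b):
    """Product of two intervals: bound by the four corner products."""
    corners = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
    return (min(corners), max(corners))

# Two "degrees of support" known only to lie in intervals.
support_x = (0.6, 0.8)
support_y = (0.5, 0.9)

# Bound the combined score x*y + x.
combined = i_add(i_mul(support_x, support_y), support_x)
print(tuple(round(v, 3) for v in combined))  # → (0.9, 1.52)
```

    The bounds are guaranteed but generally not tight; tightening them is where the rigor of the Doyle-style school comes in.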


    As with Aristotle... the Bayesian framework describes an important and broad class of reasoning problems, very well and very exactly... but it is incomplete. And, for the same reason, it cannot claim to be "THE" approach to symbolic reasoning.

    There are two most important sources of incompleteness:

    (1) the lack of central representation of continuous variables;

    (2) the lack of consideration of what STATISTICIANS call "robustness."

    Thus, to develop a better understanding and synthesis of different reasoning systems, we must separately consider how they could be applied (or rationalized) for the Bayesian network class of problem, and how they fit into various explicit efforts to address these two extensions.

    For continuous variables, it is obvious that fuzzy logic provides a useful extension beyond conventional AI. If we HAVE to use words like "hot" to talk about temperature, it is obvious that a continuous membership function gets us closer to reality (and to a parsimonious discussion of what we see in nature) than a membership which only allows the values "yes" and "no" -- the special case which conventional AI restricts itself to. Differential equations may be better still, in some ways, but the issue is how to do the best one can using a vocabulary of words, before one is really ready to think entirely in differential equations.
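    The contrast can be put in a few lines of code. The breakpoints below (25 and 35 degrees C, with a crisp threshold at 30) are illustrative assumptions, not any standard; the point is only that the fuzzy version keeps information about the continuous variable that the yes/no version throws away:

```python
# Crisp vs fuzzy membership for "hot". Breakpoints are illustrative.

def hot_crisp(temp_c):
    """Conventional-AI style: membership is only yes (1.0) or no (0.0)."""
    return 1.0 if temp_c >= 30 else 0.0

def hot_fuzzy(temp_c):
    """Piecewise-linear membership: 0 below 25 C, 1 above 35 C."""
    if temp_c <= 25:
        return 0.0
    if temp_c >= 35:
        return 1.0
    return (temp_c - 25) / 10.0

# 29 C and 31 C are nearly the same temperature, but the crisp
# predicate calls one "not hot" and the other "hot" outright.
for t in (24, 29, 31, 36):
    print(t, hot_crisp(t), hot_fuzzy(t))
```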

    But: it is not obvious that there exists a "best" way to actually DO fuzzy reasoning! Yes, there are many ways which have been proposed... but is there any basis for imagining any one of them (discovered or undiscovered) could be the "best" in a universal sort of way? I don't know of such a basis. But does someone else on this list know of such a basis?

    As for robustness in the sense of statistics....

    I find it somewhat amusing and somewhat sad that many people working in neural networks are just now discovering maximum likelihood and Bayes' Law as ways of reasoning or doing statistics. Way back in 1973, in the very first actual implementation of backpropagation... I used backpropagation, at first, to do maximum likelihood estimation/identification of a class of time-series models which the adaptive control people still get confused about. (The vector mixed autoregressive moving average models... which are nothing like the simplified "ARMA" models that adaptive control people usually talk about...) What caused me great pain... as a true-believing student of Howard Raiffa... was the discovery that that kind of full-fledged Bayesian approach really did not work as expected in real-world data analysis problems.

    It can be faked... there is a big market out there on Wall Street for fake models that do nothing useful but provide good window-dressing for IPO frauds... but for real forecasting accuracy, it does not work as advertised.

    Philosophically, the Bayesian story is very compelling. If you have competing theories, theory1... theoryN, then Pr(theoryK|data) is given by Bayes' Law. What could do better than the theory which has the maximum probability of being actually true?
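    That story, in miniature (the theory names, priors, and likelihoods below are invented purely for illustration):

```python
# Bayes' Law over competing theories: Pr(theory_k | data) is the
# prior times the likelihood of the data, renormalized. All numbers
# here are invented for illustration.

priors = {"theory1": 0.5, "theory2": 0.3, "theory3": 0.2}
likelihoods = {"theory1": 0.02, "theory2": 0.10, "theory3": 0.05}  # P(data | theory)

evidence = sum(priors[k] * likelihoods[k] for k in priors)
posterior = {k: priors[k] * likelihoods[k] / evidence for k in priors}

best = max(posterior, key=posterior.get)
print(best, round(posterior[best], 3))  # → theory2 0.6
```

    Note that the data can overturn the prior: theory2 starts out less probable than theory1, but fits the data five times better.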

    But in the real world, simple 10-equation or 1000-equation econometric-style statistical models cannot possibly be TRUE in the complete, objective sense assumed in physics. The issue is not "truth," but the ability of a model to somehow REPRESENT a more complex hidden reality without going too far afield. This is the concept of robustness formulated by famous statisticians like Mosteller and Tukey long ago (even though they only pursued a few small aspects of the issue): the issue of a model's ability to yield USEFUL results, EVEN IF NOT TRUE. Some Bayesians would say "oh, it's just a matter of minimizing a loss function instead of an error or entropy function," but it's not nearly that simple. In fact, a THEORETICALLY correct formulation... might be said to involve an INTEGRATION of expected loss functions over the infinity of all possible huge complex theories, using Bayes' Law to assess THEIR probabilities... I have seen a few exercises by Andrew Barron of Yale which have some of that flavor... but AGAIN, it ends up being an NP-hard problem!
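    A toy rendering of that "integration" idea, with three models instead of an infinity and every number invented for illustration: instead of trusting the single most probable model, weight each model's predicted loss by its posterior probability and act on the expectation.

```python
# Integrating expected loss over a posterior on models, instead of
# acting on the single most probable model. All numbers hypothetical.

posterior = {"modelA": 0.5, "modelB": 0.3, "modelC": 0.2}

# Loss each model predicts for two candidate actions.
loss = {
    "modelA": {"act1": 1.0, "act2": 4.0},
    "modelB": {"act1": 6.0, "act2": 2.0},
    "modelC": {"act1": 5.0, "act2": 2.0},
}

def expected_loss(action):
    """Posterior-weighted loss of an action, summed over models."""
    return sum(posterior[m] * loss[m][action] for m in posterior)

# The most probable model (modelA) prefers act1, but integrating
# over the whole posterior reverses the choice.
print({a: round(expected_loss(a), 2) for a in ("act1", "act2")})
# → {'act1': 3.3, 'act2': 3.0}
```

    With a continuum of "huge complex theories" in place of three toy models, this sum becomes the intractable integral described above.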

    So: if one demands ROBUSTNESS... one ends up with a kind of NP-hard situation, much worse than "simple" Bayes network reasoning... and that ends up telling us, again, that we need to think in terms of competing APPROXIMATION schemes as well as competing priors.

    So... I guess that's long enough. Let me note that my PhD thesis on backpropagation also described some more robust alternatives to ARMA estimation, which work far better in real-world time-series estimation than maximum likelihood. The thesis was reprinted in full in The Roots of Backpropagation, Wiley, 1994, by myself. (Note the acknowledgement to Mosteller, who was one of the key people in helping me grapple with the unexpected lessons of this experience.) "Post-Bayesian" foundations and time-series estimation are discussed in much greater detail in chapter 10 of the Handbook on Intelligent Control, White and Sofge eds, 1992 -- which starts out with examples from chemical engineering where it is critical to get beyond maximum likelihood. Nemanman (sp?) of NEC research may have some new ideas about prior probabilities which add at least something new to where we were back in 1992. (Same web page as Lee Giles, NEC Research of NJ).

    Some of Lotfi's ideas about fuzzy reasoning in image processing could be interpreted, in part, in Bayesian terms as statements about prior probabilities which could be evaluated more analytically, if there were better collaborations between fuzzy logic, Bayesian analysis and image processing.


    Best of luck, and thanks for your patience...


    P.S. There is also a cynical but plausible view that "reasoning" is like mathematics ... a subject that will always be inherently diverse and complex, and not amenable to direct hard-wiring at all... that real intelligent systems must be something like self-conscious, subsymbolic empathic learning systems, in which the rules of true symbolic reasoning are always learned, tentative and incomplete, forever. The mammal brain might not be all THAT far from the end of the line, in terms of a universal foundation...


    --
    Dr. Masoud NikRavesh
    Research Engineer - BT Senior Research Fellow
    Chair: BISC Special Interest Group on Fuzzy Logic and Internet

    Berkeley Initiative in Soft Computing (BISC)
    Computer Science Division - Department of EECS
    University of California, Berkeley, CA 94720
    Phone: (510) 643-4522 - Fax: (510) 642-5775


    This archive was generated by hypermail 2b30 : Thu Jan 18 2001 - 11:10:08 MET