Some Thoughts on Lotfi's Position

Paul J. Werbos
Fri, 18 Oct 1996 13:15:43 +0100


I would like to express agreement with part of what Lotfi expressed in that
last Abstract. More important -- I think there is a very critical point he
has made, whose significance may be greater than many people appreciate.
This point has to do with fuzzy chunking.

As you might expect, I view these issues from a neural network perspective.
Thus, even though I agree that the mathematical concepts which Lotfi has
described are of critical importance to real intelligence -- somehow, I
would like to know how neural networks in the brain (or artificial
replicas) could actually implement those concepts. But I recognize that the
concepts as such are valid in their own right.

This issue of chunking which Lotfi has alluded to is of critical importance
to large scale intelligent control or decision-making systems.

At present, in the neural network and AI fields, many people (including me)
are very excited by the power of "reinforcement learning" methods. But the
methods which are most popular today have very little ability to handle the
kinds of difficult large-scale problems involved in real intelligent
decision-making.
It is true that Tesauro has somehow extended them to play a good game of
backgammon -- and someday I need to study the tricks he uses, which are
very important. But in simple engineering applications -- with just a
handful of continuous variables -- these lower-level reinforcement learning
designs learn too slowly to be of practical use; that is why Haykin's
textbook (like many other empirically-based engineering sources) goes too
far and says that reinforcement learning in general is too slow.

A large part of my research and the research I support (see the links
currently on...) has been aimed at developing more powerful reinforcement
learning methods which can handle larger and larger problems. This can
help -- up to a point. For example, Don Wunsch at Texas Tech tells me he
has developed a neural net controller for a physical "fuzzy ball and beam"
system, a problem which Lotfi presented to the soft computing community as
a tough challenge just a year or two ago.
(Also, these designs have all been formulated so that one can use elastic
fuzzy logic systems instead of neural nets in their implementation -- they
are higher-level designs.)

But as we understand these systems better, we are also beginning to
understand their limitations better.

A fundamental limitation of the traditional designs is that they learn to
predict some value measure at time t+1 based on information at time t.
There are lots of ways to do that... but in the end, values are updated
based on a short time increment. In formal terms, if the learning is
perfect... each cycle of learning still advances your effective planning
horizon by only one time tick. (I believe I discussed this horizon problem
in Neural Networks for Control and in a paper in Neural Networks in 1990...
and later.) If, in practice, it would take years of learning in order to
plan hours into the future... one cannot achieve human-like capabilities.
(Again, we have tricks to extend this, but they are only partial
solutions.)
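The horizon problem can be made concrete with a toy sketch (my
illustration, not from the original post): on a chain of states where the
payoff arrives only at the end, each full synchronous sweep of one-step
value updates propagates that payoff exactly one state further back.

```python
# Toy sketch (my illustration): one-step value updates on a chain.
# Reward sits only at the end; each synchronous sweep of the update
# V(t) <- r(t) + gamma * V(t+1) pushes the payoff one state further
# back, so k sweeps buy an effective planning horizon of k ticks.

def sweep(values, reward, gamma=1.0):
    """One synchronous sweep of one-step updates over the chain."""
    new = values[:]
    for s in range(len(values) - 1):
        new[s] = reward[s] + gamma * values[s + 1]
    return new

n_states = 10
reward = [0.0] * n_states
reward[n_states - 2] = 1.0      # payoff only at the end of the chain
V = [0.0] * n_states

for k in range(1, n_states):
    V = sweep(V, reward)
    horizon = sum(1 for v in V if v > 0)
    assert horizon == k         # each sweep extends the horizon one tick
```

With ten states it takes nine full sweeps before the starting state learns
anything about the payoff -- which is the "years of learning to plan hours
ahead" problem in miniature.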

A classic AI response to this problem is to use brute force. For example,
one may build several control systems, each using a different sampling time
-- a second, an hour, a day... all hard-wired. Thus there is a huge
literature in AI (e.g. Barto and Sutton) on multiple resolution time
scales. But it is a massive kludge: not elegant, not brain-like, not smooth
(are we doing "smooth computing"?), and not formulated in a way which
facilitates optimality or learning. There is an analogy here with the old
on-off style perceptrons, for which three-layer structures COULD NOT be
adapted effectively (a la Minsky); a crucial change -- which enabled
backpropagation -- was to accept the idea of using a continuous function
(the sigmoid) instead of an on-off switch.
I also remember how violently Minsky resisted that change, circa 1971, when
I discussed the idea of backpropagation with him ... the Aristotelian
ideology was very strong.
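The sigmoid point can be seen directly in a small sketch (mine, not from
the post): an on-off threshold unit has zero derivative almost everywhere,
so gradient-based credit assignment has nothing to propagate, while the
sigmoid yields a usable slope at every input.

```python
# Sketch (my illustration): why replacing the on-off threshold with a
# smooth sigmoid mattered. The step unit's derivative is zero almost
# everywhere, so backpropagation has nothing to work with; the sigmoid
# returns a nonzero slope at every point.
import math

def step(x):
    return 1.0 if x > 0 else 0.0

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def numeric_derivative(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

# Away from the threshold, the step unit offers no gradient signal.
assert numeric_derivative(step, 0.5) == 0.0
# The sigmoid's slope is nonzero everywhere, e.g. 0.25 at the origin.
assert abs(numeric_derivative(sigmoid, 0.0) - 0.25) < 1e-6
```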

Perhaps the most standard way to address this time-scale problem in AI is
to use Aristotelian task-based planning, where the unit of time is a
"task" (or action schema) -- but where learning is very, very awkward at
best, for the same reasons. As he reads this, Pribram will note that he
wrote one of the classic books (with Miller and Galanter) on task-based
approaches in psychology.

But do we really need explicit "chunking," where we somehow learn to
associate the situation at time t directly with the situation at time t+T?
Again, we have tricks which can reduce the need... but it is still there.
Or, more positively, there is still some advantage to be gained by
chunking. Also, V.B. Brooks, in The Neural Basis of Motor Control, has
demonstrated that the brain does exploit some kind of chunking -- very
different from the AI variety, but still there.
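As a toy illustration of what chunking buys (mine, not Werbos's own
formulation): an update that associates time t directly with time t+T
carries payoff information T ticks backward per sweep, rather than one.

```python
# Sketch (my illustration, hypothetical names): "chunked" T-step value
# updates on a chain. Tying time t directly to time t+T lets a single
# sweep carry payoff information T ticks backward, instead of the one
# tick a one-step update manages.

def chunked_sweep(values, reward, T, gamma=1.0):
    """One synchronous sweep of T-step updates over the chain."""
    n = len(values)
    new = values[:]
    for s in range(n - 1):
        end = min(s + T, n - 1)
        ret = sum(gamma ** k * reward[s + k] for k in range(end - s))
        new[s] = ret + gamma ** (end - s) * values[end]
    return new

n_states = 10
reward = [0.0] * n_states
reward[n_states - 2] = 1.0      # payoff only at the end of the chain
V = chunked_sweep([0.0] * n_states, reward, T=4)

# One T=4 sweep already reaches 4 ticks back; a one-step sweep reaches 1.
assert sum(1 for v in V if v > 0) == 4
```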

As of now, the best model/design I would have for a real intelligent system
would indeed contain a module to perform fuzzy chunking, based on fuzzy
action schemata. It is the only mechanism I can think of to yield both the
benefits of chunking, and the smoothness required for effective incremental
learning. I have even gone on,
in print, to theorize that the basal ganglia are the core physical location
in the brain of these fuzzy action schemata, and that the decisions to
activate these schemata are transmitted from layer V of the cerebral cortex
to the striatum. A critical task of this system is to make discrete choices
where discrete choices are required -- but to do so without relying on
kludgey Aristotelian mechanisms. The basic details are in my new paper,
Learning in the Brain: An Engineering Interpretation, in K.Pribram ed.,
Learning as Self-Organization, Erlbaum, 1996. Pribram, of course, is a very
important neuroscientist, and these views have been heavily influenced by
feedback I have received from him and his friends.
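Since the mathematical details remain in Werbos's notebooks, any concrete
rendering here is speculation. Still, one toy way to make near-discrete
choices without a hard Aristotelian switch (entirely my own sketch, with
hypothetical names) is a low-temperature softmax gate over schema
activations: it commits almost fully to one schema while remaining smooth
and differentiable, which is what incremental learning needs.

```python
# Highly speculative sketch, mine alone: a soft (softmax) gate blends
# candidate action schemata. A low temperature makes the blend nearly
# discrete -- one schema dominates -- yet the gate stays differentiable,
# so gradient-based learning can adjust it incrementally.
import math

def soft_gate(scores, temperature=0.1):
    """Softmax over schema activation scores; lower T -> sharper choice."""
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Three candidate schemata with different activation scores.
weights = soft_gate([0.2, 1.0, 0.3])
assert abs(sum(weights) - 1.0) < 1e-9   # a fuzzy membership over schemata
assert max(weights) == weights[1]       # near-discrete commitment to one
```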

The mathematical details of this new design are unfortunately still in my
notebooks... mainly because of the incessant crises (and opportunities)
eating up my time.... And really, there are a few alternative
implementations possible in any case.

Best of luck,

Paul W.

P.S. NSF will soon be announcing a new initiative in Learning and
Intelligent Systems, with first-year funding just over $20 million. The
plan is to put the announcement up on the web site any week now.

Even now, the scope is not 100% clear, but the core will involve support of
collaborations across major disciplines (e.g. biology, engineering,
computer science, psychology, education), hopefully to develop a more
unified understanding of learning mechanisms/models/designs/issues
applicable to both natural and artificial systems.