FCM discussion

Jim Kennedy (jimk01@PROBLEM_WITH_INEWS_GATEWAY_FILE)
Mon, 15 Apr 1996 15:12:21 +0200


Maurice Clerc (mcft10@calvanet.calvacom.fr) and I have been having an
interesting email discussion that began on this newsgroup. This posting
is a summary of our discussion of Fuzzy Cognitive Maps (FCMs).

On 18 Dec 1995, in comp.ai.fuzzy, I wrote:
>I re-analyzed BK's connection matrix using particle swarm optimization.
>Kosko clamps on the first concept node and estimates thresholded values
>for the other eight nodes, to demonstrate how investment (concept #1)
>affects the other variables in the system.
>
>He reports that the system settles on this vector of concepts:
>
> (1 1 1 1 0 0 0 1 1)
>
>The alternative optimization method, however, resulted in this vector:
>
> (1 0 0 1 1 1 1 0 0)
>...
>I would appreciate it if readers of this newsgroup would check these
>results and, if possible, explain to me why Kosko's results are better.

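(An aside for readers who have not worked through the book: Kosko's inference is a threshold iteration on the connection matrix. You clamp the concept you care about, multiply the state vector by the matrix, threshold each concept to 0 or 1, and repeat until the state stops changing. The Python sketch below shows the idea on a made-up three-concept matrix; it is illustrative only, not the 9x9 investment matrix from the book, and it uses the simplest possible thresholding.)

# Kosko-style FCM inference on a purely hypothetical 3-concept matrix:
# clamp concept 1, multiply the state by the connection matrix,
# threshold to {0,1}, and repeat until a state recurs.

E = [
    [0,  1,  0],    # concept 1 excites concept 2
    [0,  0,  1],    # concept 2 excites concept 3
    [-1, 0,  0],    # concept 3 inhibits concept 1
]

def step(state, clamp=0):
    nxt = []
    for j in range(len(state)):
        total = sum(state[i] * E[i][j] for i in range(len(state)))
        nxt.append(1 if total > 0 else 0)   # simple hard threshold
    nxt[clamp] = 1                          # keep the clamped concept switched on
    return nxt

state = [1, 0, 0]
seen = []
while state not in seen:                    # stop once a state repeats
    seen.append(state)
    state = step(state)
print(state)                                # the vector the system settles on
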
I received no serious replies to that posting, and assumed that nobody reading this newsgroup cared enough about FCMs to work through Kosko's examples.

Then, a couple of weeks later, I received email from Maurice, with some
perceptive comments. First he noted that the graph and the matrix in
Kosko's Neural Networks & Fuzzy Systems did not match. Then he worked out
the matrix in Excel format, only changing the concept node values from
{0,1} to {-1,+1}.
He wrote:
>Let's suppose the matrix contains the right data. We can
>have a good idea of the results by solving the system
>(Excel-like notation):
> C1=1
> C2=IF((C1-C5)>0;MIN(1;C1-C5);MAX(-1;C1-C5))
> C3=IF((C1+C2-C5)>0;MIN(1;C1+C2-C5);MAX(-1;C1+C2-C5))
> C4=IF((C3+C6)>0;MIN(1;C3+C6);MAX(-1;C3+C6))
> C5=IF((C7-C9)>0;MIN(1;C7-C9);MAX(-1;C7-C9))
> C6=IF((-C3+C4+C5)>0;MIN(1;-C3+C4+C5);MAX(-1;-C3+C4+C5))
> C7=IF((C4+C5-C6-C8)>0;MIN(1;C4+C5-C6-C8);MAX(-1;C4+C5-C6-C8))
> C8=IF((C1+C2+C3-C6-C7+C9)>0;MIN(1;C1+C2+C3-C6-C7+C9);MAX(-1;C1+C2+C3-C6-C7+C9))
> C9=IF((C1+C3-C4)>0;MIN(1;C1+C3-C4);MAX(-1;C1+C3-C4))
>
>I think the threshold operation of Kosko is not very good.
>In particular, if a concept becomes negative (-1), he gives
>it the value 0. If we have the following meanings:
> 1 := increasing
> 0 := no effect
> -1 := decreasing
>the equations above are better. (They are in fact a very
>particular case of "propagative logic".)
>
>A solution is then (1 1 1 0 -1 -1 -1 1 1).
>Unfortunately, it is neither yours nor Kosko's ;-)
>
>Nevertheless, another solution,
>(1 0 0 1 1 1 1 -1 0),
>is very near yours.
>
>As you can see, Kosko's model is not very satisfying,
>for we can find several solutions which are very different.
>
>In fact, having only three values is not enough. We need
>"weights", as Kosko himself writes (p. 156). Personally,
>I think the weights should be fuzzy values. And the links
>must also have a "time length". It is not at all the same
>thing if C1 implies C2 before C8, or a long time after.
>

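For concreteness, here is a rough Python transcription of Maurice's system (my own sketch, not something either of us actually ran): each concept is the clipped sum of its inputs, C1 is held at 1, and all concepts are updated together until nothing changes. The starting values and the update order are choices, and, as we get into below, different choices can settle on different solutions.

# Maurice's [-1,+1] system, transcribed from the Excel formulas above.
# clip() does what the IF/MIN/MAX does: bound the sum to [-1, +1].

def clip(x):
    return max(-1.0, min(1.0, x))

def update(c):
    # one synchronous pass; c = [C1, ..., C9], C1 stays clamped at 1
    C1, C2, C3, C4, C5, C6, C7, C8, C9 = c
    return [
        1.0,                                # C1
        clip(C1 - C5),                      # C2
        clip(C1 + C2 - C5),                 # C3
        clip(C3 + C6),                      # C4
        clip(C7 - C9),                      # C5
        clip(-C3 + C4 + C5),                # C6
        clip(C4 + C5 - C6 - C8),            # C7
        clip(C1 + C2 + C3 - C6 - C7 + C9),  # C8
        clip(C1 + C3 - C4),                 # C9
    ]

c = [1.0] + [0.0] * 8                       # start: C1 = 1, all other concepts 0
for _ in range(50):                         # iterate toward a fixed point
    nxt = update(c)
    if nxt == c:
        break
    c = nxt
print(c)

Starting from all zeros and updating synchronously, this iteration appears to settle on the first solution Maurice quotes, (1 1 1 0 -1 -1 -1 1 1); other starting points or update orders can land on the other fixed points.
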
I replied:
>I like your approach to FCM as a system of inequalities,
>but I don't think nodes should be in [-1,+1]. Out of
>curiosity, since you seem to have the Excel program
>written, could you try the system with {0,1} and see what
>happens?

Maurice worked it out in Excel, with this reply:
>It is very simple: -1 becomes 0. So the first solution is
>(1 1 1 0 0 0 0 1 1), which is very near what Kosko finds,
>and exactly (except of course for C1) the opposite of yours.
>And the other one is (1 0 0 1 1 1 1 0 0), which is exactly your solution!

We then got into a discussion of the effect of time in an FCM. I had assumed that two concepts interact simultaneously, as when two people observe one another and are observed at the same moment, while Maurice saw an FCM as a sequential model with feedback. Because of these different views of time, we had used different optimization functions. Maurice wrote:

> So, as I have said, this model is too rough, for it can
>give two solutions which are mathematically acceptable but whose
>meanings are incompatible. These solutions are found just by
>choosing the order in which the activations are computed.
>So, as I have also written, "time" is important.
>

And I said:
>On the other hand, one of the appealing things about FCM is
>the possibility of modeling reciprocal causation or, more
>likely, implication, when one concept implies another, and
>vice versa, simultaneously. I think if you introduced time
>you would just end up with a sequential decision tree,
>wouldn't you?
>
>One difference in my approach is that I don't assume that
>more causes mean the effect is more likely. I optimize so
>that causes summing close to 1.0 fit a 1.0 node better than
>sums greater than 1.0. In other words, I minimize
>abs(node_value - summed_causes).

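To make that concrete, here is a rough sketch of the error function I have in mind (my own reconstruction; the signed links are simply read off Maurice's equations above). A candidate {0,1} concept vector is scored by how far each node's value is from the plain sum of its incoming causes, and the optimizer, particle swarm in my case, hunts for the vector with the smallest total error.

# Score a candidate vector by sum_j |C_j - (summed causes of C_j)| over
# the non-clamped nodes C2..C9.  Links are (source index, sign) pairs,
# 0-based, so index 0 is C1.

LINKS = {
    1: [(0, +1), (4, -1)],                                    # C2: C1 - C5
    2: [(0, +1), (1, +1), (4, -1)],                           # C3: C1 + C2 - C5
    3: [(2, +1), (5, +1)],                                    # C4: C3 + C6
    4: [(6, +1), (8, -1)],                                    # C5: C7 - C9
    5: [(2, -1), (3, +1), (4, +1)],                           # C6: -C3 + C4 + C5
    6: [(3, +1), (4, +1), (5, -1), (7, -1)],                  # C7: C4 + C5 - C6 - C8
    7: [(0, +1), (1, +1), (2, +1), (5, -1), (6, -1), (8, +1)],# C8
    8: [(0, +1), (2, +1), (3, -1)],                           # C9: C1 + C3 - C4
}

def error(c):
    return sum(abs(c[j] - sum(sign * c[i] for i, sign in links))
               for j, links in LINKS.items())

# compare the two vectors from the start of this thread
print(error([1, 1, 1, 1, 0, 0, 0, 1, 1]))   # Kosko's settled vector
print(error([1, 0, 0, 1, 1, 1, 1, 0, 0]))   # the alternative vector

The two vectors from the start of this thread do not score the same under this measure, which is one way of seeing why an optimizer built on it can prefer a different attractor than Kosko's thresholding does.
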
Maurice asked:
> Why not? But is there a "psychological" reason for this
>approach?

I replied:
>No, not a psychological reason, more a dynamical systems
>reason. I think of this example. Imagine you have some
>good bowlers. Each one of them is capable of bowling a
>strike, in other words, connections from each one
>to the concept "strike" are near 1.0. Now imagine that you
>had two bowlers rolling their balls at the same time.
>Do you think a strike would become *more* likely??? I
>don't think so. I think the causes would interact to make
>it less likely. Again, I'm assuming that causes are
>simultaneous. If they took turns, causal influence would
>increase to 1.0...

Maurice wrote:
>
> ...In fact, in FCM, there _is_ already an implicit delay,
>equal to 1 for each link. I just think the values should
>sometimes be different.
> And the result is not a sequential decision tree, if there is at least
> one cycle in the graph.
>
> Example (trivial) in a [0,1] model:
> A =>+ B delay 1
> B =>+ C delay 1
> C =>- A delay x
>
> At the beginning, the vector (A,B,C) is (1,?,?).
> If x=1 the result is:
> step A B C
> 0 1 ? ?
> 1 1 1 ?
> 2 0 1 1
> 3 0 0 1
> 4 0 0 0
> 5 0 0 0
>
> which can be interpreted as "Finally, nothing happens".
>
> but if x = n > 1, say 3, then we have "transitory consequences",
> which could be very important:
> step A B C
> 0 1 ? ?
> 1 1 1 ?
> 2 1 1 1
> 3 1 1 1 transitory consequences
> 4 1 1 1
> 5 0 1 1
> 6 0 0 1
> 7 0 0 0
> 8 0 0 0

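Maurice's little cycle is easy to simulate. The sketch below uses one explicit convention (a link with delay d feeds the target with the source's value from d steps earlier, and the unknown "?" values are treated as 0), so the exact step counts can differ by one from his tables, but the qualitative picture is the same: quick extinction for x=1, a transitory all-on phase for larger x.

# A =>+ B (delay 1),  B =>+ C (delay 1),  C =>- A (delay x), in a [0,1] model.

def simulate(x, steps=9):
    A, B, C = [1], [0], [0]                   # values at step 0 ("?" taken as 0)
    for t in range(1, steps + 1):
        c_past = C[t - x] if t - x >= 0 else 0
        A.append(0 if c_past == 1 else 1)     # C switches A off after delay x
        B.append(A[t - 1])                    # B follows A one step later
        C.append(B[t - 1])                    # C follows B one step later
    print("x =", x)
    for t in range(steps + 1):
        print(t, A[t], B[t], C[t])

simulate(x=1)   # the loop shuts itself down: "finally, nothing happens"
simulate(x=3)   # a transitory phase with A, B and C all active at once
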
I wrote:
>You're right, of course. I guess as a psychologist I'm
>interested in simultaneous reciprocation. When people
>interact (except thru email) they observe one another and
>are observed at the same moment. One thing that seemed
>good about FCM was the ability to handle that kind of
>feedback...

Maurice returned to my pitiful "bowling" example:
>
> Let A be "good bowler 1"
> B "good bowler 2"
> C "strike"
> The rules are something like:
> A => C (0.9)
> B => C (0.9)
>
> and you say
> A and B => C (x), x < 0.9, say 0.7

I said:
>Probably lower than that. Two balls rolling down the lane
>will hit one another and roll into the gutter. If they
>took turns, of course, C would increase.

Maurice replied:
>
> You are probably right. (In fact, I don't know, I am _not_
>a good bowler ;-) )
> But what happens if
> A "testimony 1"
> B "testimony 2"
> C "evidence"
>
> Then the rules are probably something like:
> A => C (0.5)
> B => C (0.5)

I wrote:
>A="I saw him stab her with a kitchen knife"
>B="This bullet came out of her abdomen"
>?????
>
>A and B => 0.1 (and OJ goes free)
>

Maurice nailed me with a reference to Voltaire:
>
> A and B => C (0.9)
> (even if Voltaire has proved in the Calas case how this
>kind of rule can be dangerous)
>
> So the rules in what I call "cumulative logic" unfortunately depend
>on the meaning of the concepts. For example, Ron Sun's
>model is a "propagative and cumulative logic", but it does
>not really take the meanings into account.
>
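
To see how much the combination rule matters, here is a small sketch comparing a few standard ways of collapsing two causal strengths into one (the names are mine). None of these fixed operators fits both cases at once: for the two bowlers the joint cause should come out weaker than either alone, for the two testimonies stronger, which is exactly Maurice's point that the rule depends on what the concepts mean.

# Three common combination rules applied to the pairs from the discussion.

def bounded_sum(a, b):         # cumulative: min(1, a + b)
    return min(1.0, a + b)

def probabilistic_sum(a, b):   # independent evidence: a + b - a*b
    return a + b - a * b

def fuzzy_min(a, b):           # standard fuzzy AND: never stronger than the weaker cause
    return min(a, b)

for a, b in [(0.9, 0.9), (0.5, 0.5)]:
    print((a, b), bounded_sum(a, b), probabilistic_sum(a, b), fuzzy_min(a, b))
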
We have been discussing this topic for quite a while now, and would be interested
to hear what insights others have.

Jim