We will have a special seminar to take advantage of Prof. Bersini's presence
in Berkeley.
Thank you,
Michael Lee
P.S. To unsubscribe, please send a message to bisc-request@diva.eecs.berkeley.edu
and not to the bisc-group directly.
--
_______________________________________________________________________
Michael A. Lee                            Post Doctoral Researcher
Berkeley Initiative in Soft Computing     Tel: +1-510-642-9827
Computer Science Division                 Fax: +1-510-642-5775
University of California                  Email: leem@cs.berkeley.edu
Berkeley, CA 94720-1776 USA               WWW: http://www.cs.berkeley.edu/~leem
_______________________________________________________________________

Topic: NOW COMES THE TIME TO DEFUZZIFY NEURO-FUZZY MODELS
Speaker: Hugues Bersini
IRIDIA cp 194/6
Universite Libre de Bruxelles
50, av. Franklin Roosevelt
1050 Bruxelles - Belgium
Phone: +32.2 650.27.33
Fax: +32.2 650.27.15
Email: bersini@ulb.ac.be
Location: 320 Soda Hall
Date: 2 July 1996, 2:00-3:00pm
Abstract:
Fuzzy models present a singular Janus face: on the one hand, they are knowledge-based software environments constructed from a collection of linguistic IF-THEN rules; on the other hand, they realize nonlinear mappings with interesting mathematical properties such as "low-order interpolation" and "universal function approximation". Neuro-fuzzy methods essentially give fuzzy models the capacity to compensate for missing human knowledge by automatically tuning their structure and parameters from the available data. A first consequence of this hybridization between the architectural and representational aspects of fuzzy models and the learning mechanisms of neural networks has been to progressively increase and fuzzify the contrast between the two Janus faces: readability versus performance. I will first discuss these two visions of fuzzy models and their degree of compatibility.
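As a minimal illustration of the two faces (a hedged sketch in Python; the rule labels, centers, widths, and consequents are invented for the example, not taken from the talk), the same small rule base reads top-down as linguistic IF-THEN knowledge, yet evaluates as a smooth nonlinear interpolating mapping:

import math

# Face 1: a readable, knowledge-based rule base,
# e.g. "IF x is LOW THEN y = 0.1".
RULES = [
    {"label": "LOW",  "center": -1.0, "width": 0.8, "consequent": 0.1},
    {"label": "MID",  "center":  0.0, "width": 0.8, "consequent": 0.9},
    {"label": "HIGH", "center":  1.0, "width": 0.8, "consequent": 0.2},
]

def membership(x, center, width):
    """Gaussian degree of membership of x in a fuzzy set."""
    return math.exp(-((x - center) / width) ** 2)

def evaluate(x):
    """Face 2: the same rule base seen as a nonlinear mapping.

    The defuzzified output is a normalized weighted average of the
    rule consequents, i.e. a smooth, low-order interpolation
    between the rules' outputs.
    """
    weights = [membership(x, r["center"], r["width"]) for r in RULES]
    total = sum(weights)
    return sum(w * r["consequent"] for w, r in zip(weights, RULES)) / total

print(evaluate(0.5))  # interpolates between the MID and HIGH rules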
Then, adopting the second vision of fuzzy models, i.e., as a way of realizing a nonlinear mapping through the smooth cooperation (via a mixture of Gaussians) of several local experts (often linear models), two simultaneous types of automatic tuning are required: one concerns the number of local experts, while the other concerns the fine adjustment of the Gaussian zones and of the local linear models. I will present and experimentally compare three learning strategies: one incremental, gradually adding new experts to improve the current approximation; one decremental, gradually reducing the number of experts without degrading the approximation; and one evolutionary, performing a global combinatorial search in the space of possible structures.
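For concreteness, here is a hedged sketch of such a mapping (again in Python, with invented parameters; this is not the speaker's implementation): local linear experts blended by normalized Gaussian activations, with comments marking where the incremental and decremental strategies would act.

import math

class LocalExpert:
    def __init__(self, center, width, slope, offset):
        self.center, self.width = center, width    # Gaussian zone of influence
        self.slope, self.offset = slope, offset    # local linear model

    def activation(self, x):
        return math.exp(-((x - self.center) / self.width) ** 2)

    def local_output(self, x):
        return self.slope * x + self.offset

def predict(experts, x):
    """Smooth cooperation: a normalized-Gaussian mixture of local linear models."""
    acts = [e.activation(x) for e in experts]
    total = sum(acts)
    return sum(a * e.local_output(x) for a, e in zip(acts, experts)) / total

experts = [
    LocalExpert(center=-1.0, width=0.7, slope=0.5, offset=0.0),
    LocalExpert(center=1.0, width=0.7, slope=-0.3, offset=1.0),
]
# An incremental strategy would append a new LocalExpert where the residual
# error is largest; a decremental one would drop the expert whose removal
# least degrades the fit; an evolutionary one would search over whole
# configurations of experts at once.
print(predict(experts, 0.2))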