Parallel and Distributed Performance of Self-organizing Neural Grove

H. Inoue, M. Tsuda, and H. Narihisa (Japan)

Keywords

self-organization, pruning, parallel computation, improving generalization capability

Abstract

In this paper, we present the accuracy-improving capability and the parallel efficiency of self-organizing neural groves (SONGs) for classification on a MIMD parallel computer. Self-generating neural networks (SGNNs) were originally proposed for classification and clustering by automatically constructing a self-generating neural tree (SGNT) from given training data. A SONG is composed of plural SGNTs, each of which is independently generated by shuffling the order of the given training data, and the output of the SONG is obtained by voting over the outputs of all the SGNTs. We allocate each SGNT to a separate processor of the MIMD parallel computer. Experimental results show that as the number of processors increases, the classification accuracy increases for all problems.
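The ensemble scheme described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the real SGNT grows a neural tree incrementally from the training data (so the presentation order genuinely shapes the classifier), whereas here a simple 1-nearest-neighbor table stands in as a placeholder base learner. The function names (`build_song`, `song_predict`, etc.) are hypothetical.

```python
import random
from collections import Counter

def train_sgnt(data):
    # Placeholder standing in for a self-generating neural tree (SGNT).
    # The actual SGNT is built incrementally, so the shuffled order of
    # `data` changes the resulting tree; here we just keep the examples.
    return list(data)

def predict_one(model, x):
    # Placeholder prediction: label of the nearest training example.
    nearest = min(model,
                  key=lambda ex: sum((a - b) ** 2 for a, b in zip(ex[0], x)))
    return nearest[1]

def build_song(data, n_trees, seed=0):
    # A SONG is plural SGNTs, each generated from an independently
    # shuffled copy of the same training data. On a MIMD machine,
    # each call to train_sgnt could run on its own processor.
    rng = random.Random(seed)
    models = []
    for _ in range(n_trees):
        shuffled = list(data)
        rng.shuffle(shuffled)  # each SGNT sees a different order
        models.append(train_sgnt(shuffled))
    return models

def song_predict(models, x):
    # SONG output: majority vote over the outputs of all SGNTs.
    votes = Counter(predict_one(m, x) for m in models)
    return votes.most_common(1)[0][0]

# Toy 2-class data: (feature vector, label).
data = [((0.0, 0.0), "A"), ((0.1, 0.2), "A"),
        ((1.0, 1.0), "B"), ((0.9, 1.1), "B")]
song = build_song(data, n_trees=5)
print(song_predict(song, (0.05, 0.1)))  # -> A
```

Because each SGNT is trained independently, the ensemble is embarrassingly parallel, which is why one-SGNT-per-processor allocation on a MIMD machine is natural.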
