Investigating Generalization in Parallel Evolutionary Artificial Neural Networks

K. Davoian and W.-M. Lippe (Germany)


Keywords: Artificial Neural Networks, Evolutionary Algorithms, Parallelization, Generalization


In this paper we study how the parallelization of a learning algorithm affects the generalization ability of Evolutionary Artificial Neural Networks (EANNs). A newly proposed evolutionary algorithm (EA), which improves chromosomes according to characteristics of both their genotype and phenotype, was used to evolve the ANNs. The EA was parallelized with two schemes: the migration approach, which periodically exchanges the best individuals among all parallel populations, and the recently developed migration-strangers strategy, which extends the search space during evolution by replacing the worst chromosomes in the parallel populations with randomly generated new ones, called strangers. Experiments were carried out on the Mackey-Glass chaotic time series prediction problem to determine the best and average prediction errors on training and testing data for small and large ANNs evolved by both parallel evolutionary algorithms (PEAs). The results show that the PEAs produce compact ANNs with high prediction accuracy and insignificant differences between training and testing errors.
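The migration-strangers idea described above can be sketched as a single exchange step over several parallel populations. The code below is a minimal illustration, not the authors' implementation: the ring migration topology, the counts `n_migrants` and `n_strangers`, and the gene value range are all assumptions made for the example.

```python
import random

def migration_strangers_step(populations, fitness,
                             n_migrants=1, n_strangers=2, gene_len=4):
    """One hypothetical exchange step of a migration-strangers scheme:
    the best individuals migrate between parallel populations, and some
    of the worst are replaced by freshly random 'strangers' to widen
    the search space (all parameters are illustrative assumptions)."""
    # Sort each population by fitness, best first (higher is better here).
    for pop in populations:
        pop.sort(key=fitness, reverse=True)
    # Migration: copy each population's best into its neighbour's worst
    # slots, using an assumed ring topology.
    for i, pop in enumerate(populations):
        neighbour = populations[(i + 1) % len(populations)]
        neighbour[-n_migrants:] = [ind[:] for ind in pop[:n_migrants]]
    # Strangers: overwrite the next-worst individuals with randomly
    # generated new chromosomes.
    for pop in populations:
        lo = len(pop) - n_migrants - n_strangers
        hi = len(pop) - n_migrants
        for j in range(lo, hi):
            pop[j] = [random.uniform(-1.0, 1.0) for _ in range(gene_len)]
    return populations
```

In an EANN setting each chromosome would encode a network's weights (and possibly its topology), and `fitness` would be the negative prediction error on the training data; here a plain list of floats stands in for the genotype.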
