Information-theoretic Self-organizing Maps with Minkowski Distance

R. Kamimura, S. Aida-Hyugaji, and Y. Maruyama (Japan)

Keywords: mutual information maximization, competitive learning, cooperation, Minkowski distance

Abstract

In this paper, we propose a new computational method for information-theoretic self-organizing maps [1] that extracts salient features and accelerates learning. We have previously proposed an information-theoretic method for self-organizing maps. The method aims to control competitive processes flexibly, that is, to produce different competitive unit activations according to the information content obtained in learning. Competition is realized by maximizing mutual information between input patterns and competitive units. Competitive unit outputs are computed by the inverse of the distance between an input pattern and a competitive unit's connection weights. As the distance becomes smaller, a neuron tends to fire more strongly. Thus, winning neurons faithfully represent input patterns. However, one shortcoming of this method is that learning is very slow, and sometimes information is not sufficiently increased to produce clear maps. To remedy this shortcoming, we propose a new computational method using the Minkowski distance between input patterns and connection weights. By changing the parameter of the Minkowski distance, we can ignore some detailed parts of input patterns, which contributes to accelerating learning and to extracting the main features of input patterns. We applied our method to a simple artificial data problem and a chemical structure-activity relationship analysis. In both cases, experimental results confirmed that the new method produces final solutions in significantly fewer training cycles and extracts more salient features.
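To make the activation rule sketched in the abstract concrete, the following is a minimal illustrative sketch (not the authors' implementation): competitive unit outputs are taken as the inverse of the Minkowski distance between an input pattern and each unit's connection weights, and the normalization into firing probabilities for use in mutual information maximization is an assumption on our part. The function name and parameters are hypothetical.

```python
import numpy as np

def competitive_activations(x, W, p=2.0, eps=1e-8):
    """Inverse-Minkowski-distance competitive unit outputs (illustrative sketch).

    x : (n_inputs,) input pattern
    W : (n_units, n_inputs) connection weights, one row per competitive unit
    p : Minkowski parameter; p = 2 gives the Euclidean distance, and changing p
        alters how strongly detailed per-component differences are weighted
    """
    # Minkowski distance from the input pattern to every unit's weights
    d = np.power(np.sum(np.abs(W - x) ** p, axis=1), 1.0 / p)
    # Smaller distance -> stronger firing (inverse-distance output)
    v = 1.0 / (d + eps)
    # Assumed normalization into firing probabilities p(j|x) for use in
    # mutual information maximization between inputs and competitive units
    return v / v.sum()

# Toy usage: three competitive units, two-dimensional input
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 2))
x = np.array([0.5, -0.2])
print(competitive_activations(x, W, p=1.5))
```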
