Combining Forward and Backward WTA for Partially Activated Neural Networks

R. Kamimura (Japan)

Keywords

partially activated, winner-takes-all, competition, controlled, forward, backward competition

Abstract

In this paper, we introduce a new type of learning procedure for partially activated networks. The partially activated network was introduced to simplify complex hierarchical networks: in partial activation, only some parts of the network are activated and used to produce outputs. In previous studies, we used the conventional winner-takes-all (WTA) algorithm to activate a neuron. However, we have found the conventional WTA to be ineffective for certain problems. We therefore introduce a new type of competition, backward winner-takes-all, in which a winner is chosen according to the error back-propagated from the outputs. We apply this type of partially activated network to a student survey. Experimental results showed that the previous method with conventional WTA could not produce appropriate outputs, whereas the new method solved the problem immediately. These results show that partially activated networks with backward WTA can be highly effective for problems that conventional WTA cannot solve.
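The contrast between the two competition schemes can be sketched as follows. This is a minimal illustration under assumed details, not the paper's actual architecture or selection rule: we assume a single hidden layer, and that backward WTA picks the hidden neuron receiving the largest back-propagated error magnitude, while forward WTA picks the most strongly activated neuron.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer network; all sizes and names are illustrative assumptions.
n_in, n_hid, n_out = 4, 6, 3
W1 = rng.normal(size=(n_hid, n_in))   # input -> hidden weights
W2 = rng.normal(size=(n_out, n_hid))  # hidden -> output weights

x = rng.normal(size=n_in)             # one input pattern
t = rng.normal(size=n_out)            # its target

h = np.tanh(W1 @ x)                   # hidden activations
y = W2 @ h                            # linear outputs
delta_out = y - t                     # output-layer error signal

# Forward WTA: the winner is the hidden neuron with the largest activation.
forward_winner = int(np.argmax(h))

# Backward WTA (assumed form): propagate the output error back through W2
# and select the hidden neuron that receives the largest error magnitude.
delta_hid = (W2.T @ delta_out) * (1.0 - h**2)  # tanh derivative term
backward_winner = int(np.argmax(np.abs(delta_hid)))

print("forward winner:", forward_winner)
print("backward winner:", backward_winner)
```

In a partially activated network, only the winning neuron (and its connections) would then be used to produce the output for this pattern; the two rules can pick different winners, which is the point of the comparison in the paper.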
