Simplifying Potential Learning by Supposing Maximum and Minimum Information for Improved Generalization and Interpretation

Ryozo Kitajima and Ryotaro Kamimura


Keywords: neural networks, information-theoretic approach, potential learning, maximum information learning


In this paper, we propose a new computational method for potential learning. Potential learning was previously developed, based on information-theoretic methods, to improve the generalization and interpretation performance of neural networks. Information-theoretic methods have been used extensively in neural networks but present two main problems: difficulty in specifying which neurons should fire, and computational complexity. Fundamentally, potential learning aims to identify which neurons should fire when information-theoretic methods are used. Although potential learning has so far produced better generalization performance, its parameter must be tuned extensively for practical use. To solve this problem, we propose a computational method that supposes maximum and minimum information before learning, allowing potential learning to be used without extensive parameter tuning. Experimental results on two datasets showed that generalization performance improved compared with conventional methods; in particular, the supposed maximum information method showed superior generalization performance. In addition, the final connection weights could be easily interpreted when supposed information maximization was applied, because it produces explicit connection weights for a small number of important neurons.
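As a rough illustration of the idea described above, the following sketch assumes the common potential-learning formulation in which each hidden neuron's "potentiality" is its output variance normalized by the maximum variance and raised to a parameter r. The function and variable names here are hypothetical, and the two limit functions only approximate the paper's "supposed maximum/minimum information" simplification: letting r grow without bound concentrates all potential on one neuron (maximum information), while r = 0 spreads it uniformly (minimum information), removing the need to tune r.

```python
import numpy as np

def neuron_potentials(hidden_outputs, r):
    """Potentiality of each hidden neuron: output variance normalized by the
    maximum variance, raised to the parameter r.
    hidden_outputs: array of shape (n_samples, n_neurons)."""
    v = hidden_outputs.var(axis=0)
    return (v / v.max()) ** r

def supposed_max_information(hidden_outputs):
    """Limit r -> infinity: only the highest-variance neuron keeps
    potential 1; all others drop to 0 (one important neuron fires)."""
    v = hidden_outputs.var(axis=0)
    phi = np.zeros_like(v)
    phi[np.argmax(v)] = 1.0
    return phi

def supposed_min_information(hidden_outputs):
    """Limit r = 0: every neuron receives equal potential 1
    (no neuron is preferred)."""
    return np.ones(hidden_outputs.shape[1])
```

In practice the potentials would be used to rescale the connection weights of the hidden neurons before subsequent training; with the maximum-information limit, only the weights of a single high-variance neuron survive, which is what makes the final weights easy to interpret.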
