J.P. Thivierge and T.R. Shultz (Canada)
neural networks, information theory, ensemble learning, mixture-of-experts, AdaBoost.
Information networks learn by adjusting the amount of information conveyed by their hidden units. This technique can be extended to manipulate the amount of information fed to the modular experts in a network's architecture. After generating experts that vary in relevance, we show that competition among them can be obtained by information maximization. After generating equally relevant but diverging experts with AdaBoost, collaboration among them can be obtained by constrained information maximization. By controlling the amount of information fed to the experts, this approach outperforms a number of other mixture models on real-world data.
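As a rough illustration of the quantity being maximized, the sketch below computes a Kamimura-style mutual information between inputs and expert activations, the kind of measure whose maximization induces competition among units. The function name, the normalization scheme, and the toy data are assumptions for illustration only; the paper's exact formulation may differ.

```python
# Minimal sketch (assumed formulation, not the paper's exact method):
# mutual information between inputs and expert activations,
# I = H(mean gating) - mean H(gating | input).
import numpy as np

def expert_information(activations):
    """Information carried by expert activations.

    activations: (n_samples, n_experts) array of non-negative scores.
    Maximizing the returned value drives competition: each input
    concentrates its gating on one expert, while different inputs
    select different experts.
    """
    p = activations / activations.sum(axis=1, keepdims=True)  # p(expert | input)
    p_bar = p.mean(axis=0)                                    # p(expert)
    eps = 1e-12                                               # avoid log(0)
    h_marginal = -np.sum(p_bar * np.log(p_bar + eps))
    h_conditional = -np.mean(np.sum(p * np.log(p + eps), axis=1))
    return h_marginal - h_conditional

# Toy check with three experts and three inputs: uniform gating carries
# no information, one-hot gating carries the maximum, log(3).
soft = np.full((3, 3), 1.0 / 3.0)
hard = np.eye(3)
print(expert_information(soft))  # ~0: no competition among experts
print(expert_information(hard))  # ~log(3): full competition
```

Constrained information maximization, as used for collaboration in the abstract, would add a penalty or bound keeping the gating from collapsing onto a single expert, so that several equally relevant experts stay active.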