Mixture of experts (ME) models consist of a set of experts, each of which models a conditional probabilistic process, and a gate that combines the probabilities of the experts. The probabilistic basis of the mixture of experts is a mixture model in which the experts form the input-conditional mixture components and the gate outputs form the input-conditional mixture weights. A straightforward generalisation of ME models is the hierarchical mixture of experts (HME) class of models, in which each expert is itself a mixture of experts, applied recursively.
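To make this mixture-model formulation concrete, the sketch below evaluates the ME predictive density p(y|x) = Σ_k g_k(x) p_k(y|x), with the gate outputs as input-conditional mixture weights and the experts as input-conditional mixture components. It is a minimal sketch, assuming univariate linear-Gaussian experts and a softmax gate; these functional forms, and all names in the code, are illustrative choices rather than specifications from the reviewed paper.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def gaussian_pdf(y, mean, sigma):
    """Univariate Gaussian density."""
    return np.exp(-0.5 * ((y - mean) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def moe_predictive_density(x, y, gate_W, expert_w, expert_sigma):
    """p(y|x) = sum_k g_k(x) * p_k(y|x).

    gate_W:       (K, D) gating weights; g(x) = softmax(gate_W @ x)
    expert_w:     (K, D) per-expert linear regression weights
    expert_sigma: (K,)   per-expert noise standard deviations
    """
    gates = softmax(gate_W @ x)                   # input-conditional mixture weights
    means = expert_w @ x                          # each expert's conditional mean
    comps = gaussian_pdf(y, means, expert_sigma)  # conditional mixture components
    return float(gates @ comps)

# Toy usage: two experts on a 2-D input (with a bias feature).
rng = np.random.default_rng(0)
x = np.array([1.0, 0.5])                          # [bias, feature]
gate_W = rng.normal(size=(2, 2))
expert_w = np.array([[0.0, 2.0], [1.0, -1.0]])
expert_sigma = np.array([0.5, 0.3])
print(moe_predictive_density(x, y=1.2, gate_W=gate_W,
                             expert_w=expert_w, expert_sigma=expert_sigma))
```

An HME arises from the same construction applied recursively: each component p_k(y|x) is itself such a gated mixture rather than a single expert.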
Mixtures of experts are grounded in the divide-and-conquer principle, which states that complex problems can be better solved by decomposing them into smaller tasks. In mixtures of experts, the assumption is that the underlying process generating the data consists of several separate sub-processes. The experts model these sub-processes, while the gate models the decision of which sub-process to use for a given input.
Mixtures of experts have many connections with other algorithms, such as tree-based methods, mixture models, and switching regression. In this review, I examine the paper by Rasmussen and Ghahramani to see how closely the mixture of experts model resembles these other algorithms, and what is novel about it. The aim of this review is to apply the method used in that article to local precipitation data.