Multiexpert combination
Multiexpert combination methods have base learners that work in parallel. These methods can in turn be divided into two:
In the global approach, also called learner fusion, given an input, all base learners generate an output and all these outputs are used.
Examples are voting and stacking.
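As a minimal sketch of the global approach, the following illustrates soft voting: every base learner produces a class-probability vector for the input, and the combined output averages them. The learners' outputs here are hypothetical numbers chosen only for illustration.

```python
import numpy as np

def soft_vote(prob_outputs):
    """Fuse all base learners' outputs by averaging their probability vectors."""
    return np.mean(prob_outputs, axis=0)

# Hypothetical outputs of three base learners for one input (three classes)
outputs = np.array([
    [0.6, 0.3, 0.1],
    [0.5, 0.4, 0.1],
    [0.2, 0.7, 0.1],
])

combined = soft_vote(outputs)          # averaged probabilities
predicted = int(np.argmax(combined))   # class with highest fused probability
```

Note that every learner contributes to the final decision, which is what distinguishes fusion from selection.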
In the local approach, also called learner selection, as in mixture of experts, there is a gating model that looks at the input and chooses one (or very few) of the learners as responsible for generating the output.
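The local approach can be sketched as follows: a gating model scores the experts for the given input and only the winning expert produces the output. The gate's weights and the experts themselves are hypothetical hand-set functions, kept deliberately simple for illustration.

```python
import numpy as np

# Hypothetical gating weights: one row of linear scores per expert
GATE_W = np.array([
    [1.0, -1.0],   # expert 0 is favored when x[0] > x[1]
    [-1.0, 1.0],   # expert 1 is favored when x[1] > x[0]
])

def gate(x):
    """Choose the single responsible expert for input x."""
    return int(np.argmax(GATE_W @ x))

# Two hypothetical experts (stand-ins for trained base learners)
experts = [lambda x: float(x.sum()), lambda x: float(-x.sum())]

x = np.array([2.0, 0.5])
chosen = gate(x)          # the gate routes this input to one expert
y = experts[chosen](x)    # only the chosen expert is evaluated
```

In a real mixture of experts the gate outputs a softmax over experts and both gate and experts are trained jointly; the hard argmax here just illustrates the selection behavior.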
Multistage combination
Multistage combination methods use a serial approach where the next base learner is trained with or tested on only the instances where the previous base learners are not accurate enough.
The base learners are sorted in increasing complexity so that a complex base learner is not used unless the preceding simpler base learners are not confident.
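The cascade described above can be sketched as follows: each learner returns a label together with a confidence, and an instance falls through to the next, more complex learner only when the current one is not confident enough. The learners and the confidence threshold here are hypothetical placeholders.

```python
def cascade_predict(x, learners, threshold=0.9):
    """Return the prediction of the first sufficiently confident learner."""
    for predict in learners[:-1]:
        label, confidence = predict(x)
        if confidence >= threshold:
            return label          # simpler learner is confident; stop here
    # Fall back to the last (most complex) learner unconditionally
    label, _ = learners[-1](x)
    return label

# Hypothetical learners of increasing complexity:
# the simple one is confident only for inputs below 0.5
simple = lambda x: (0, 0.95 if x < 0.5 else 0.6)
complex_model = lambda x: (1, 1.0)

easy = cascade_predict(0.2, [simple, complex_model])  # simple learner decides
hard = cascade_predict(0.8, [simple, complex_model])  # cascades to complex model
```

The benefit is cost: the expensive learner is only invoked on the minority of instances the cheap learners cannot settle.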
