An anticorrelation kernel for subsystem training in multiple classifier systems

Citation

L. Ferrer, K. Sonmez and E. Shriberg, “An anticorrelation kernel for subsystem training in multiple classifier systems,” Journal of Machine Learning Research, vol. 10, pp. 2079–2114, September 2009.

Abstract

We present a method for training support vector machine (SVM)-based classification systems for combination with other classification systems designed for the same task. Ideally, a new system should be designed such that, when combined with existing systems, the resulting performance is optimized. We present a simple model for this problem and use the understanding gained from this analysis to propose a method to achieve better combination performance when training SVM systems. We include a regularization term in the SVM objective function that aims to reduce the average class-conditional covariance between the resulting scores and the scores produced by the existing systems, introducing a trade-off between such covariance and the system’s individual performance. That is, the new system “takes one for the team”, falling somewhat short of its best possible performance in order to increase the diversity of the ensemble. We report results on the NIST 2005 and 2006 speaker recognition evaluations (SREs) for a variety of subsystems. When the proposed method is applied, a combination of four systems achieves a 19% relative improvement in equal error rate (EER) over the performance obtained when the four systems are trained independently of each other.
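The core idea described above can be sketched in a few lines. The following is a minimal, hedged illustration, not the paper’s actual kernel formulation: it trains a linear SVM by subgradient descent on the hinge loss plus an L2 term, and adds a penalty on the squared class-conditional covariance between the new system’s scores and an existing system’s scores. All parameter names (`lam`, `gamma`, etc.) are illustrative assumptions.

```python
import numpy as np

def train_anticorrelated_svm(X, y, e, lam=0.1, gamma=10.0, lr=0.01, n_iter=500):
    """Illustrative sketch: linear SVM (hinge loss + L2) with an extra penalty
    on the average squared class-conditional covariance between the new
    scores s_i = w . x_i and an existing system's scores e_i.

    X: (n, d) features; y: labels in {-1, +1}; e: (n,) existing-system scores.
    Parameter names and values are assumptions for illustration only.
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        s = X @ w
        # Subgradient of the average hinge loss max(0, 1 - y_i s_i)
        viol = (y * s) < 1.0
        g = -(y[viol, None] * X[viol]).sum(axis=0) / n
        # Gradient of the L2 regularizer lam * ||w||^2
        g += 2.0 * lam * w
        # Anticorrelation penalty: for each class, penalize the squared
        # covariance between the new scores and the existing scores.
        for c in (-1, 1):
            idx = (y == c)
            Xc, ec = X[idx], e[idx]
            sc = Xc @ w
            cov = np.mean((sc - sc.mean()) * (ec - ec.mean()))
            # d(cov)/dw = mean over the class of (e_i - mean_e) * (x_i - mean_x)
            dcov = ((ec - ec.mean())[:, None] * (Xc - Xc.mean(axis=0))).mean(axis=0)
            # Gradient of the per-class term (gamma / 2) * cov^2
            g += gamma * cov * dcov
        w -= lr * g
    return w
```

Increasing `gamma` trades individual accuracy for lower class-conditional covariance with the existing scores, which is exactly the “takes one for the team” trade-off: the new system’s decision direction is pushed away from information the existing system already captures.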

Keywords: system combination, ensemble diversity, multiple classifier systems, support vector machines, speaker recognition, kernel methods

