Extending Boosting for Large Scale Spoken Language Understanding

Citation

Tur, G. (2007). Extending boosting for large scale spoken language understanding. Machine Learning, 69(1), 55-74.

Abstract

We propose three methods for extending the Boosting family of classifiers, motivated by real-life problems we have encountered. First, we propose a semi-supervised learning method for exploiting unlabeled data in Boosting. We then present a novel classification model adaptation method. The goal of adaptation is to optimize an existing model for a new target application that is similar to the previous one but may have different classes or class distributions. Finally, we present an efficient and effective cost-sensitive classification method that extends Boosting to allow for weighted classes. We evaluated these methods for call classification in the AT&T VoiceTone® spoken language understanding system. Our results indicate that the same classification performance can be obtained with 30% less labeled data when unlabeled data is exploited through semi-supervised learning. Using model adaptation, we can achieve the same classification accuracy with less than half of the labeled data from the new application. Finally, the proposed cost-sensitive classification method yields significant improvements on the "important" (i.e., higher weighted) classes without a significant loss in overall performance.
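As a rough illustration of the cost-sensitive idea described above (weighted classes in a Boosting classifier), the sketch below trains a boosted classifier with per-class sample weights so that an "important" class contributes more to the training loss. This is only a minimal sketch using scikit-learn's AdaBoost; the synthetic data, the class-cost values, and the choice of classifier are assumptions for illustration and do not reproduce the paper's exact algorithm.

```python
# Minimal sketch of cost-sensitive boosting via per-class sample weights.
# NOTE: illustrative only; not the paper's exact extension of Boosting.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

# Hypothetical 3-class data; class 1 stands in for an "important" call type.
X, y = make_classification(n_samples=1000, n_classes=3, n_informative=6,
                           random_state=0)

# Assumed class costs: the important class gets a higher weight.
class_cost = {0: 1.0, 1: 5.0, 2: 1.0}
sample_weight = np.array([class_cost[label] for label in y])

# Boosting honors the sample weights, so errors on the important class
# are penalized more heavily during training.
clf = AdaBoostClassifier(n_estimators=200, random_state=0)
clf.fit(X, y, sample_weight=sample_weight)
```

In this setup, raising the cost of a class trades some overall accuracy for better recall on that class, which mirrors the trade-off the abstract reports for the higher-weighted classes.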

