Toward Fail-Safe Speaker Recognition: Trial-Based Calibration with a Reject Option



L. Ferrer, M. K. Nandwana, M. L. McLaren, D. Castan and A. Lawson, “Toward Fail-Safe Speaker Recognition: Trial-Based Calibration with a Reject Option,” in IEEE/ACM Transactions on Audio, Speech, and Language Processing.


The output scores of most speaker recognition systems are not directly interpretable as stand-alone values. For this reason, a calibration step is usually performed on the scores to convert them into proper likelihood ratios (LRs), which have a clear probabilistic interpretation. The standard calibration approach transforms the system scores using a linear function trained on data selected to closely match the evaluation conditions. This selection, though, is not feasible when the evaluation conditions are unknown. In previous work, we proposed a calibration approach for this scenario called trial-based calibration (TBC). TBC trains a separate calibration model for each test trial, using data that is dynamically selected from a candidate training set to match the conditions of the trial. In this work, we extend the TBC method, proposing (1) a new similarity metric for selecting training data that yields significant gains over the metric proposed in the original work, (2) a new option that enables the system to reject a trial when not enough matched data is available for training the calibration model, and (3) the use of regularization to improve the robustness of the calibration models trained for each trial. We test the proposed algorithms on a development set composed of several conditions and on the FBI multi-condition speaker recognition dataset. We demonstrate that the proposed approach reduces calibration loss to values close to 0 for most conditions when matched calibration data is available for selection, and that it rejects most trials for which relevant calibration data is unavailable.
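The TBC idea described above can be illustrated with a minimal sketch. The snippet below is not the authors' implementation; it assumes each trial comes with a condition-embedding vector, uses cosine similarity as a stand-in for the paper's similarity metric, fits a per-trial linear (logistic) calibration with L2 regularization, and rejects the trial when too few matched candidates are found. All names, thresholds, and the similarity choice are illustrative assumptions.

```python
import numpy as np

def calibrate_trial(test_emb, test_score, cand_embs, cand_scores, cand_labels,
                    sim_threshold=0.5, min_trials=50, l2=1e-2):
    """Toy trial-based calibration (TBC) with a reject option.

    Selects candidate trials whose (assumed) condition embedding is similar
    to the test trial's, trains an L2-regularized linear calibration on the
    selected scores, and returns a calibrated LLR, or None to reject the
    trial when fewer than `min_trials` matched candidates are available.
    """
    # Cosine similarity between the test trial's condition embedding and
    # each candidate trial's embedding (a stand-in similarity metric).
    sims = cand_embs @ test_emb / (
        np.linalg.norm(cand_embs, axis=1) * np.linalg.norm(test_emb))
    mask = sims >= sim_threshold
    if mask.sum() < min_trials:
        return None  # reject: not enough matched calibration data

    s = cand_scores[mask]
    y = cand_labels[mask].astype(float)  # 1 = target, 0 = impostor

    # Fit the linear map a*s + b by gradient descent on the
    # L2-regularized logistic (cross-entropy) loss.
    a, b = 1.0, 0.0
    for _ in range(500):
        p = 1.0 / (1.0 + np.exp(-(a * s + b)))   # posterior of target
        a -= 0.1 * (np.mean((p - y) * s) + l2 * a)
        b -= 0.1 * np.mean(p - y)
    return a * test_score + b  # calibrated log-likelihood ratio
```

A real system would replace the cosine similarity with a learned metric over condition representations and tune the threshold and minimum-count so that rejection fires exactly when no relevant calibration data exists, which is the behavior reported in the abstract.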