Assistant Laboratory Director, Speech Technology and Research Laboratory (STAR)
Aaron Lawson, Ph.D., is Assistant Director of SRI International's Speech Technology and Research (STAR) Laboratory. His research interests include voice forensics and biometrics, language and speaker identification from speech, social media information extraction, noise robustness, and the fielding of speech systems.
At SRI, Lawson is co-PI of the SAVI team for the DARPA MediFor program and is SRI's transition lead for the DARPA RATS program. He currently leads the Human Language Technology project with the JHU Applied Physics Laboratory, forensic speaker recognition projects with the Federal Bureau of Investigation, and transition efforts with the Navy and Special Operations Forces. He was PI for SRI's LinguaKey team for DARPA Active Authentication. Past projects include IARPA REYNARD and BEST.
Prior to joining SRI, Lawson was a research scientist at Air Force Research Laboratory/RADC in the Audio Processing Group. Earlier, he was a natural language processing researcher at TextWise, LLC.
Lawson has published more than 30 papers covering speech, natural language processing, and linguistics. He holds a Ph.D. in applied linguistics from Cornell University.
Recent publications
In this work, we extend the TBC method, proposing a new similarity metric for selecting training data that results in significant gains over the one proposed in the original work.
Robust Speaker Recognition from Distant Speech under Real Reverberant Environments Using Speaker Embeddings
This article focuses on speaker recognition using speech acquired with a single distant or far-field microphone in an indoor environment.
In this study, our aim is to analyze the behavior of speaker recognition systems based on speaker embeddings with respect to different front-end features, including standard MFCCs as well as PNCCs,…
This work is a multi-organizational effort led by SRI International and Lab41 with the intent to advance the state of the art in distant-microphone approaches to signal processing and speech recognition.
Analysis of Phonetic Markedness and Gestural Effort Measures for Acoustic Speech-Based Depression Classification
In this paper, we analyze articulatory measures to gain further insight into how articulation is affected by depression.
After describing our SRICON-UAM team system submission for the NIST 2016 SRE, we benchmark these approaches on several distinctly different databases.