
Horacio Franco

SRI Author

Chief Scientist, Speech Technology and Research Laboratory

Speech & natural language publications | July 22, 2020 | Conference Paper

Wideband Spectral Monitoring Using Deep Learning

Horacio Franco, Martin Graciarena | July 22, 2020

We present a system to perform spectral monitoring of a wide band of 666.5 MHz, located within a range of 6 GHz of Radio Frequency (RF) bandwidth, using state-of-the-art deep learning approaches. The system detects, labels, and localizes in time and frequency signals of interest (SOIs) against a background of wideband RF activity. We apply a hierarchical approach. At the lower level we use a sweeping window to analyze a wideband spectrogram, which is input to a deep convolutional network that estimates local probabilities for the presence of SOIs for each position of the window. In a subsequent, higher-level processing step, these local frame probability estimates are integrated over larger two-dimensional regions that are hypothesized by a second neural network, a region proposal network, adapted from object localization in image processing. The integrated segmental probability scores are used to detect SOIs in the hypothesized spectro-temporal regions.
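
To make the two-level pipeline concrete, here is a minimal numpy sketch of the sweeping-window stage and the segmental score integration; `local_soi_probs` is a hypothetical placeholder for the trained convolutional network, and the region bounds stand in for the region proposal network's hypotheses.

```python
# Sketch of the sweeping-window stage; the scorer below is only a stand-in
# for the trained deep convolutional network.
import numpy as np

def local_soi_probs(window: np.ndarray) -> float:
    """Hypothetical stand-in for the CNN that scores one spectrogram window."""
    # A real system would run a trained model here; band energy is used as a proxy.
    return float(1.0 / (1.0 + np.exp(-window.mean())))

def sweep_spectrogram(spec: np.ndarray, win: int = 64, hop: int = 16) -> np.ndarray:
    """Slide a window along the time axis and collect local SOI probabilities."""
    probs = []
    for t in range(0, spec.shape[1] - win + 1, hop):
        probs.append(local_soi_probs(spec[:, t:t + win]))
    return np.asarray(probs)

def score_region(local_probs: np.ndarray, start: int, end: int) -> float:
    """Integrate local probabilities over a hypothesized spectro-temporal region."""
    return float(local_probs[start:end].mean())

# Toy usage with a random spectrogram (frequency bins x time frames).
spec = np.random.randn(256, 1024)
local = sweep_spectrogram(spec)
print(score_region(local, 5, 20))
```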

Speech & natural language publications | June 1, 2018 | Conference Proceeding

Voices Obscured in Complex Environmental Settings (VOiCES) corpus

Colleen Richey, Horacio Franco, Aaron Lawson, Allen Stauffer | June 1, 2018

This paper introduces the Voices Obscured in Complex Environmental Settings (VOiCES) corpus, a freely available dataset under Creative Commons BY 4.0. This dataset will promote speech and signal processing research on speech recorded by far-field microphones in noisy room conditions. Publicly available speech corpora are mostly composed of isolated speech recorded with close-range microphones. A typical approach to better represent realistic scenarios is to convolve clean speech with noise and a simulated room response for model training. Despite these efforts, model performance degrades when tested against uncurated speech in natural conditions. For this corpus, audio was recorded in furnished rooms with background noise played in conjunction with foreground speech selected from the LibriSpeech corpus. Multiple sessions were recorded in each room to cover all foreground speech-background noise combinations. Audio was recorded using twelve microphones placed throughout the room, resulting in 120 hours of audio per microphone. This work is a multi-organizational effort led by SRI International and Lab41 with the intent to push forward state-of-the-art distant-microphone approaches in signal processing and speech recognition.
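
For contrast with the recorded corpus, the simulation-based augmentation mentioned above can be sketched in a few lines of numpy; the signal lengths, SNR helper, and synthetic impulse response are illustrative assumptions rather than the corpus recording setup.

```python
# Conventional augmentation: convolve clean speech with a simulated room impulse
# response and add background noise at a target SNR. All signals are synthetic.
import numpy as np

def add_noise_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale noise so the speech-to-noise ratio matches snr_db, then mix."""
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10.0)))
    return speech + scale * noise

# Toy signals: 1 s of "speech", a decaying impulse response, and noise at 16 kHz.
sr = 16000
clean = np.random.randn(sr)
rir = np.exp(-np.linspace(0, 8, 2048)) * np.random.randn(2048)
noise = np.random.randn(sr)

reverberant = np.convolve(clean, rir)[:len(clean)]   # simulated room response
augmented = add_noise_at_snr(reverberant, noise, snr_db=10.0)
```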

Speech & natural language publications | December 1, 2017 | Conference Proceeding

Tackling Unseen Acoustic Conditions in Query-by-Example Search Using Time and Frequency Convolution for Multilingual Deep Bottleneck Features

Horacio Franco, Dimitra Vergyri | December 1, 2017

Standard keyword spotting based on Automatic Speech Recognition (ASR) cannot be used on low- and no-resource languages due to lack of annotated data and/or linguistic resources. In recent years, query-by-example (QbE) has emerged as an alternate way to enroll and find spoken queries in large audio corpora, yet mismatched and unseen acoustic conditions remain a difficult challenge given the lack of enrollment data. This paper revisits two neural network architectures developed for noise- and channel-robust ASR and applies them to building a state-of-the-art multilingual QbE system. By applying convolution in time or frequency across the spectrum, these convolutional bottleneck networks learn more discriminative deep bottleneck features. In conjunction with dynamic time warping (DTW), these features enable robust QbE systems. We use the MediaEval 2014 QUESST data to evaluate robustness against language and channel mismatches, and add several levels of artificial noise to the data to evaluate performance in degraded acoustic environments. We also assess performance on an Air Traffic Control QbE task with more realistic and higher levels of distortion in the push-to-talk domain.
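
As a rough illustration of the matching stage, the sketch below computes a DTW alignment cost between query and search-utterance feature sequences; it uses a plain full-sequence DTW with cosine distance and random placeholder features, whereas the system described above performs subsequence search over learned bottleneck features.

```python
# DTW matching over frame-level feature vectors (e.g. deep bottleneck features).
# The random features are placeholders for a trained extractor's output.
import numpy as np

def cosine_dist(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Pairwise cosine distance between two sequences of feature frames."""
    a = a / (np.linalg.norm(a, axis=1, keepdims=True) + 1e-12)
    b = b / (np.linalg.norm(b, axis=1, keepdims=True) + 1e-12)
    return 1.0 - a @ b.T

def dtw_cost(dist: np.ndarray) -> float:
    """Classic DTW: accumulate the cheapest monotonic alignment path."""
    n, m = dist.shape
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = dist[i - 1, j - 1] + min(acc[i - 1, j],
                                                 acc[i, j - 1],
                                                 acc[i - 1, j - 1])
    return acc[n, m] / (n + m)  # length-normalized alignment cost

query = np.random.randn(40, 80)      # query frames x bottleneck dimension
search = np.random.randn(300, 80)    # search-utterance frames
print(dtw_cost(cosine_dist(query, search)))
```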

Speech & natural language publications | December 1, 2017 | Conference Proceeding

Noise-robust Exemplar Matching for Rescoring Query-by-Example Search

Horacio Franco | December 1, 2017

This paper describes a two-step approach to keyword spotting in which a query-by-example (QbE) search is followed by noise-robust exemplar matching (N-REM) rescoring. In the first stage, subsequence dynamic time warping is performed to detect keywords in search utterances. In the second stage, these target frame sequences are rescored using the reconstruction errors provided by the linear combination of the available exemplars extracted from the training data. Due to data sparsity, we align the target frame sequence and the exemplars to a common frame length, and the exemplar weights are obtained by solving a convex optimization problem with non-negative sparse coding. We run keyword spotting experiments on the Air Traffic Control (ATC) database and evaluate the performance of multiple distance metrics for calculating the weights and reconstruction errors using convolutional neural network (CNN) bottleneck features. The results demonstrate that the proposed two-step keyword spotting approach provides better keyword detection compared to a baseline with only QbE search.
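
A minimal sketch of the rescoring idea follows, using plain non-negative least squares (scipy's `nnls`) as a stand-in for the sparse-coded convex optimization described above; the shapes and toy data are illustrative.

```python
# Rescore a detected segment by how well a non-negative combination of exemplars
# reconstructs it: lower reconstruction error suggests a better keyword match.
import numpy as np
from scipy.optimize import nnls

def rescore(segment: np.ndarray, exemplars: np.ndarray) -> float:
    """segment: (frames*dims,) flattened target; exemplars: (frames*dims, n_exemplars)."""
    weights, residual = nnls(exemplars, segment)  # non-negative exemplar weights
    return residual

# Toy data: segment and exemplars already aligned to a common frame length
# (20 frames of 30-dimensional features) and flattened.
frames, dims, n_exemplars = 20, 30, 8
exemplars = np.abs(np.random.randn(frames * dims, n_exemplars))
segment = exemplars @ np.abs(np.random.randn(n_exemplars))  # lies in exemplar span
print(rescore(segment, exemplars))
```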

Speech & natural language publications | August 1, 2017 | Tech Report

Leveraging Deep Neural Network Activation Entropy to Cope with Unseen Data in Speech Recognition

Horacio Franco | August 1, 2017

Unseen data conditions can inflict serious performance degradation on systems relying on supervised machine learning algorithms.  Because data can often be unseen, and because traditional machine learning algorithms are trained in a supervised manner, unsupervised adaptation techniques must be used to adapt the model to the unseen data conditions.  However, unsupervised adaptation is often challenging, as one must generate some hypothesis given a model and then use that hypothesis to bootstrap the model to the unseen data conditions.  Unfortunately, reliability of such hypotheses is often poor, given the mismatch between the training and testing datasets.  In such cases, a model hypothesis confidence measure enables performing data selection for the model adaptation. Underlying this approach is the fact that for unseen data conditions, data variability is introduced to the model, which the model propagates to its output decision, impacting decision reliability.  In a fully connected network, this data variability is propagated as distortions from one layer to the next.  This work aims to estimate the propagation of such distortion in the form of network activation entropy, which is measured over a short-time running window on the activation from each neuron of a given hidden layer, and these measurements are then used to compute summary entropy.  This work demonstrates that such an entropy measure can help to select data for unsupervised model adaptation, resulting in performance gains in speech recognition tasks.  Results from standard benchmark speech recognition tasks show that the proposed approach can alleviate the performance degradation experienced under unseen data conditions by iteratively adapting the model to the unseen data’s acoustic condition.
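
The following sketch illustrates one plausible way to compute such a summary entropy from hidden-layer activations; the window length, histogram binning, and averaging scheme are assumptions for illustration, not the report's exact configuration.

```python
# Summary entropy over a short-time running window of hidden-layer activations,
# given a (frames x neurons) activation array. Parameters are illustrative.
import numpy as np

def window_entropy(act: np.ndarray, bins: int = 16) -> float:
    """Entropy of one neuron's activations within a short-time window."""
    hist, _ = np.histogram(act, bins=bins)
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def summary_activation_entropy(activations: np.ndarray, win: int = 50, hop: int = 25) -> float:
    """Average per-neuron windowed entropy over all neurons and window positions."""
    frames, neurons = activations.shape
    ents = []
    for start in range(0, frames - win + 1, hop):
        chunk = activations[start:start + win]
        ents.extend(window_entropy(chunk[:, j]) for j in range(neurons))
    return float(np.mean(ents))

# Toy usage: 500 frames of activations from a 128-unit hidden layer.
acts = np.random.rand(500, 128)
print(summary_activation_entropy(acts))
```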

Speech & natural language publications | March 1, 2017 | Conference Proceeding

Joint modeling of articulatory and acoustic spaces for continuous speech recognition tasks

Dimitra Vergyri, Horacio Franco | March 1, 2017

Articulatory information can effectively model variability in speech and can improve speech recognition performance under varying acoustic conditions. Learning speaker-independent articulatory models has always been challenging, as speaker-specific information in the articulatory and acoustic spaces increases the complexity of the speech-to-articulatory-space inverse modeling, which is already an ill-posed problem due to its inherent nonlinearity and non-uniqueness. This paper investigates using deep neural networks (DNNs) and convolutional neural networks (CNNs) for mapping speech data into its corresponding articulatory space. Our results indicate that the CNN models perform better than their DNN counterparts for speech inversion. In addition, we used the inverse models to generate articulatory trajectories from speech for three different standard speech recognition tasks. To effectively model the articulatory features’ temporal modulations while retaining the acoustic features’ spatiotemporal signatures, we explored a joint modeling strategy to simultaneously learn both the acoustic and articulatory spaces. The results from multiple speech recognition tasks indicate that articulatory features can improve recognition performance when the acoustic and articulatory spaces are jointly learned with one common objective function.
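
A minimal PyTorch sketch of the joint-modeling idea is shown below: a shared encoder feeds an articulatory regression head and a senone classification head trained under one combined objective; layer sizes, the loss weighting, and the toy targets are illustrative assumptions, not the paper's configuration.

```python
# Joint acoustic-articulatory modeling: shared encoder, two heads, one objective.
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointAcousticArticulatoryNet(nn.Module):
    def __init__(self, feat_dim=40, artic_dim=12, n_senones=2000, hidden=512):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, hidden), nn.ReLU())
        self.artic_head = nn.Linear(hidden, artic_dim)    # articulatory trajectories
        self.senone_head = nn.Linear(hidden, n_senones)   # ASR targets

    def forward(self, x):
        h = self.encoder(x)
        return self.artic_head(h), self.senone_head(h)

model = JointAcousticArticulatoryNet()
feats = torch.randn(32, 40)                     # a batch of acoustic frames
artic_targets = torch.randn(32, 12)             # articulatory trajectory targets
senone_targets = torch.randint(0, 2000, (32,))  # frame-level senone labels

artic_pred, senone_logits = model(feats)
loss = F.cross_entropy(senone_logits, senone_targets) + \
       0.5 * F.mse_loss(artic_pred, artic_targets)  # one common objective
loss.backward()
```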

Speech & natural language publications | March 1, 2017 | Conference Proceeding

Speech recognition in unseen and noisy channel conditions

Horacio Franco, Martin Graciarena, Dimitra Vergyri | March 1, 2017

Speech recognition in varying background conditions is a challenging problem. Acoustic condition mismatch between training and evaluation data can significantly reduce recognition performance. For mismatched conditions, data-adaptation techniques are typically found to be useful, as they expose the acoustic model to the new data condition(s). Supervised adaptation techniques usually provide substantial performance improvement, but such gain is contingent on having labeled or transcribed data, which is often unavailable. The alternative is unsupervised adaptation, where feature-transform methods and model-adaptation techniques are typically explored. This work investigates robust features, feature-space maximum likelihood linear regression (fMLLR) transform, and deep convolutional nets to address the problem of unseen channel and noise conditions. In addition, the work investigates bottleneck (BN) features extracted from deep autoencoder (DAE) networks trained by using acoustic features extracted from the speech signal. We demonstrate that such representations not only produce robust systems but also that they can be used to perform data selection for unsupervised model adaptation. Our results indicate that the techniques presented in this paper significantly improve performance of speech recognition systems in unseen channel and noise conditions.
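
As an illustration of the bottleneck-feature idea, the following PyTorch sketch trains a small deep autoencoder on acoustic feature frames and reads BN features off its bottleneck layer; the architecture and dimensions are assumptions, not the paper's configuration.

```python
# Deep autoencoder with a bottleneck layer; the bottleneck output serves as a
# BN feature representation for downstream recognition or data selection.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeepAutoencoder(nn.Module):
    def __init__(self, feat_dim=40, bottleneck=24, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, bottleneck))
        self.decoder = nn.Sequential(nn.ReLU(), nn.Linear(bottleneck, hidden),
                                     nn.ReLU(), nn.Linear(hidden, feat_dim))

    def forward(self, x):
        bn = self.encoder(x)          # bottleneck representation
        return bn, self.decoder(bn)   # reconstruction of the input features

model = DeepAutoencoder()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)

feats = torch.randn(64, 40)           # a batch of acoustic feature frames
bn, recon = model(feats)
loss = F.mse_loss(recon, feats)       # unsupervised reconstruction objective
loss.backward()
optim.step()

bn_features = model.encoder(feats).detach()  # BN features for later use
```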

Speech & natural language publications | September 1, 2016 | Conference Paper

Coping with Unseen Data Conditions: Investigating Neural Net Architectures, Robust Features, and Information Fusion for Robust Speech Recognition

Horacio Franco | September 1, 2016

The introduction of deep neural networks has significantly improved automatic speech recognition performance. For real-world use, automatic speech recognition systems must cope with varying background conditions and unseen acoustic data. This work investigates the performance of traditional deep neural networks under varying acoustic conditions and evaluates their performance with speech recorded under realistic background conditions that are mismatched with respect to the training data. We explore using robust acoustic features, articulatory features, and traditional baseline features against both in-domain microphone channel-matched and channel-mismatched conditions as well as out-of-domain data recorded using far- and near-microphone setups containing both background noise and reverberation distortions. We investigate feature-combination techniques, both outside and inside the neural network, and explore neural-network-level combination at the output decision level. Results from this study indicate that robust features can significantly improve deep neural network performance under mismatched, noisy conditions, and that using multiple features reduces speech recognition error rates. Further, we observed that fusing multiple feature sets at the convolutional layer feature-map level was more effective than performing fusion at the input feature level or at the neural-network output decision level.
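
The feature-map-level fusion finding can be illustrated with a small PyTorch sketch in which each feature stream passes through its own first convolution and the resulting feature maps are concatenated along the channel axis; stream shapes and filter counts are illustrative assumptions.

```python
# Fusing two feature streams at the convolutional feature-map level rather than
# at the input-feature or output-decision level.
import torch
import torch.nn as nn

class FeatureMapFusion(nn.Module):
    def __init__(self, n_filters=32):
        super().__init__()
        self.conv_a = nn.Conv2d(1, n_filters, kernel_size=3, padding=1)  # e.g. filterbank stream
        self.conv_b = nn.Conv2d(1, n_filters, kernel_size=3, padding=1)  # e.g. robust-feature stream
        self.conv_fused = nn.Conv2d(2 * n_filters, n_filters, kernel_size=3, padding=1)

    def forward(self, stream_a, stream_b):
        maps_a = torch.relu(self.conv_a(stream_a))
        maps_b = torch.relu(self.conv_b(stream_b))
        fused = torch.cat([maps_a, maps_b], dim=1)   # fuse at the feature-map level
        return torch.relu(self.conv_fused(fused))

model = FeatureMapFusion()
stream_a = torch.randn(8, 1, 40, 100)  # batch x channel x freq bins x frames
stream_b = torch.randn(8, 1, 40, 100)
out = model(stream_a, stream_b)
print(out.shape)  # torch.Size([8, 32, 40, 100])
```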

Speech & natural language publications | September 1, 2016 | Conference Paper

Unsupervised Learning of Acoustic Units Using Autoencoders and Kohonen Nets

Dimitra Vergyri, Horacio Franco | September 1, 2016

Often, prior knowledge of subword units is unavailable for low-resource languages. Instead, a global subword unit description, such as a universal phone set, is typically used in such scenarios. One major bottleneck for existing speech-processing systems is their reliance on transcriptions. Unfortunately, the preponderance of data becoming available every day only worsens the problem, as properly transcribing it, and hence making it useful for training speech-processing models, is impossible. This work investigates learning acoustic units in an unsupervised manner from real-world speech data by using a cascade of an autoencoder and a Kohonen net. For this purpose, a deep autoencoder with a bottleneck layer at the center was trained with multiple languages. Once trained, the bottleneck-layer output was used to train a Kohonen net, such that state-level ids can be assigned to the bottleneck outputs. To ascertain how consistent such state-level ids are with respect to the acoustic units, phone-alignment information was used for a part of the data to determine whether a functional relationship indeed existed between the phone ids and the Kohonen state ids and, if so, whether such a relationship generalizes to data that are not transcribed.
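
A minimal numpy sketch of the second stage is given below: a small Kohonen (self-organizing) map is fitted to bottleneck vectors, and each frame receives a state id from its best-matching unit; the grid size, learning schedules, and random inputs are illustrative assumptions.

```python
# Kohonen map over autoencoder bottleneck vectors; each frame's unsupervised
# "acoustic unit" id is the index of its best-matching map unit.
import numpy as np

rng = np.random.default_rng(0)
bottleneck = rng.standard_normal((2000, 24))       # frames x bottleneck dimension

grid = 10                                          # 10x10 map => up to 100 state ids
weights = rng.standard_normal((grid * grid, 24))
coords = np.array([(i, j) for i in range(grid) for j in range(grid)], dtype=float)

for step, x in enumerate(bottleneck):
    lr = 0.5 * np.exp(-step / len(bottleneck))     # decaying learning rate
    sigma = 3.0 * np.exp(-step / len(bottleneck))  # shrinking neighborhood
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
    dist2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
    influence = np.exp(-dist2 / (2 * sigma ** 2))
    weights += lr * influence[:, None] * (x - weights)

# Assign a state id to every frame via its best-matching unit.
state_ids = np.argmin(np.linalg.norm(bottleneck[:, None, :] - weights[None], axis=2), axis=1)
print(state_ids[:10])
```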
