Colleen Richey

Research Linguist, Speech Technology and Research Laboratory

Publications

Speech & natural language publications · May 1, 2014 · Conference Paper

Lexical Stress Classification for Language Learning Using Spectral and Segmental Features

Victor Abrash, Kristin Precoda, Horacio Franco, Harry Bratt, Colleen Richey

We present a system for detecting lexical stress in English words spoken by English learners. The system uses both spectral and segmental features to detect three levels of stress for each syllable in a word.

Multi-modal data analytics publications · May 1, 2014

Emotion Detection in Speech Using Deep Networks

Ajay Divakaran, Colleen Richey

We propose a novel staged hybrid model for emotion detection in speech. Hybrid models exploit the strength of discriminative classifiers along with the representational power of generative models.

Speech & natural language publications · May 1, 2013

Articulatory trajectories for large-vocabulary speech recognition

Colleen Richey

We present a neural network model to estimate articulatory trajectories from speech signals where the model was trained using synthetic speech signals generated by Haskins Laboratories’ task-dynamic model of speech production.

Speech & natural language publications · March 1, 2012

Detecting Leadership and Cohesion in Spoken Interactions

Kristin Precoda, Colleen Richey

We present a system for detecting leadership and group cohesion in multiparty dialogs and broadcast conversations in English and Mandarin.

Speech & natural language publications · August 1, 2011

Using Prosodic and Spectral Features in Detecting Depression in Elderly Males

Dimitra Vergyri, Colleen Richey

In this study, we focus on speech features that can indicate the speaker’s emotional health, i.e., whether or not the speaker is depressed.

Speech & natural language publications · June 1, 2011 · Conference Paper

Detection of agreement and disagreement in broadcast conversations

Kristin Precoda, Colleen Richey

We present approaches based on Conditional Random Fields (CRFs) for detecting agreement and disagreement between speakers in English broadcast conversation shows.

Information & computer science publications · May 1, 2011 · Conference Paper

Automatic identification of speaker role and agreement/disagreement in broadcast conversation

Kristin Precoda, Colleen Richey

We present supervised approaches for detecting speaker roles and agreement/disagreement between speakers in broadcast conversation shows in three languages: English, Arabic, and Mandarin.

Speech & natural language publications · May 1, 2011

Acoustic data sharing for Afghan and Persian languages

Dimitra Vergyri, Colleen Richey, Andreas Kathol

In this work, we compare several known approaches to multilingual acoustic modeling for three languages: Dari, Farsi, and Pashto.

Speech & natural language publications · June 1, 2010 · Conference Paper

Improving language recognition with multilingual phone recognition and speaker adaptation transforms

Colleen Richey

We investigate a variety of methods for improving language recognition accuracy based on techniques in speech recognition, and in some cases borrowed from speaker recognition. First, we look at the question of language-dependent versus language-independent phone recognition for phonotactic (PRLM) language recognizers, and find that language-independent recognizers give superior performance in both PRLM and PPRLM systems. We then investigate ways to use speaker adaptation (MLLR) transforms as a complementary feature for language characterization. Borrowing from speech recognition, we find that both PRLM and MLLR systems can be improved with the inclusion of discriminatively trained multilayer perceptrons as front ends. Finally, we compare language models to support vector machines as a modeling approach for phonotactic language recognition, and find them to be potentially superior, and surprisingly complementary.
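The PRLM idea described above (phone recognition followed by language modeling) can be illustrated with a toy sketch: score a phone-token sequence against a per-language phone bigram model and pick the best-scoring language. The training sequences and language names below are invented for illustration; a real system would take phone decodings from a language-independent phone recognizer and use far richer models.

```python
from collections import defaultdict
import math

def train_bigram_lm(sequences):
    """Estimate add-one-smoothed phone bigram log-probabilities
    from a list of phone-token sequences."""
    counts = defaultdict(lambda: defaultdict(int))
    vocab = set()
    for seq in sequences:
        toks = ["<s>"] + seq + ["</s>"]
        vocab.update(toks)
        for prev, cur in zip(toks, toks[1:]):
            counts[prev][cur] += 1
    vocab_size = len(vocab)

    def logprob(prev, cur):
        c = counts[prev]
        return math.log((c[cur] + 1) / (sum(c.values()) + vocab_size))

    return logprob

def score(logprob, seq):
    """Total bigram log-likelihood of one phone sequence."""
    toks = ["<s>"] + seq + ["</s>"]
    return sum(logprob(p, c) for p, c in zip(toks, toks[1:]))

# Hypothetical phone decodings for two made-up languages.
training = {
    "lang_A": [["p", "a", "t", "a"], ["t", "a", "p", "a"], ["p", "a", "p", "a"]],
    "lang_B": [["k", "o", "s", "o"], ["s", "o", "k", "o"], ["k", "o", "k", "o"]],
}
lms = {lang: train_bigram_lm(seqs) for lang, seqs in training.items()}

# Classify a new utterance by the language whose phone LM fits best.
test_utterance = ["k", "o", "s", "o", "k", "o"]
best = max(lms, key=lambda lang: score(lms[lang], test_utterance))
print(best)
```

A PPRLM system runs several such phone recognizers in parallel, one per training language, and fuses the resulting LM scores; the abstract's finding is that language-independent recognizers work better in both configurations.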
