Technical Director, Vision and Learning Laboratory, Center for Vision Technologies
Dr. Yi Yao is a Technical Director at SRI with extensive experience in computer vision, machine learning, satellite image processing, data mining, social media exploitation, visual surveillance, object recognition, event recognition, and object tracking. She has published 50+ papers in peer-reviewed journals and conferences and holds 10+ granted patents. Dr. Yao is the PI for DARPA GoL: topological-analysis-based study of deep learning; DARPA LwLL: data-driven semi-/self-supervised learning with sparsely labeled data; NGA BIG: scene-graph-based characterization of complex geospatial entities from satellite imagery; IARPA PMSIA: economic indicator prediction using raw measurements from satellite imagery; NGA BAD AC: complex geospatial entity detection from satellite imagery; and CTTSO GRI-FN: automated mobile geolocation in GPS-denied environments. She is Co-PI for IARPA BRIAR: biometric recognition at distance and elevation; DARPA Providence: incremental learning using RF streaming data; and IARPA DIVA: complex activity recognition from complex scenes and distributed cameras. She serves as Algorithm Lead for a DARPA program on physics-informed machine learning for analysis of multi-spectral data; DARPA CAML: automated competency prediction of the perception capability of an agent; NGA HUNTER: automated geolocation from metadata-free ground hand-held images; and NGA SMART: a prototype system for visual content-based exploitation of open-source data. She is a technical advisor for IARPA TrojAI: automated detection of Trojan-affected neural networks; DARPA SAIL-ON: automated novelty detection, characterization, and adaptation; and DARPA XAI: explainable deep learning for visual question answering. She received her Ph.D. in Electrical Engineering from the University of Tennessee in 2008.
- X. Hu, X. Lin, M. Cogswell, Y. Yao, S. Jha, and C. Chen, “Trigger hunting with a topological prior for Trojan detection”, ICLR 2022.
- M. Ye, X. Lin, G. Burachas, A. Divakaran, Y. Yao, “Hybrid consistency training with prototype adaptation for few-shot learning”, CVPRw 2022.
- K. Alipour, A. Ray, X. Lin, M. Cogswell, J. Schulze, Y. Yao, and G. Burachas, “Improving users’ mental model with attention-directed counterfactual edits”, Applied AI Letters, 2021.
- Y. Gong, X. Lin, Y. Yao, T. G. Dietterich, A. Divakaran, and M. Gervasio, “Confidence calibration for domain generalization under covariate shift”, ICCV 2021, arXiv:2104.00742.
- H. A. Ayyubi, Y. Yao, and A. Divakaran, “Progressive growing of neural ODEs”, ICLRw 2020.
- K. Alipour, J. P. Schulze, Y. Yao, A. Ziskind, and G. Burachas, “A Study on Multimodal and Interactive Explanations for Visual Question Answering”, AAAIw 2020.
- P. Ghosh, Y. Yao, L. S. Davis, and A. Divakaran, “Stacked spatiotemporal graph convolutional networks for action segmentation”, WACV 2020.
- C.-H. Chen, Y. Yao, D. Page, B. Abidi, A. Koschan, and M. Abidi, “Camera handoff with adaptive resource management for multi-camera multi-object tracking”, Image and Vision Computing, vol. 28, no. 6, pp. 851-864, Jun. 2010.
- Y. Yao, C.-H. Chen, A. Koschan, and M. Abidi, “Adaptive online camera coordination for multi-camera multi-target surveillance”, Computer Vision and Image Understanding, vol. 114, no. 4, pp. 463-474, Apr. 2010.
- Y. Yao, C.-H. Chen, B. Abidi, D. Page, A. Koschan, and M. Abidi, “Can you see me now? – sensor positioning for automated and persistent surveillance”, IEEE Trans. on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 40, no. 1, pp. 101-115, Feb. 2010.
- C.-H. Chen, Y. Yao, D. Page, B. Abidi, A. Koschan, and M. Abidi, “Camera handoff and placement for automated tracking systems with multiple omnidirectional cameras”, Computer Vision and Image Understanding, vol. 114, no. 2, pp. 179-197, Feb. 2010.
- Y. Yao, B. Abidi, and M. Abidi, “3D target scale estimation and target feature segmentation for size preserving tracking in PTZ video”, Int’l Journal of Computer Vision, vol. 82, no. 3, pp. 244-263, May 2009.
- B. R. Abidi, N. R. Aragam, Y. Yao, and M. A. Abidi, “Survey and analysis of multimodal sensor planning and integration for wide area surveillance”, ACM Computing Surveys, vol. 41, no. 1, pp. 1-36, Dec. 2008.
- D. Gao, Y. Yao, F. Pan, T. Yu, B. Yu, L. Guan, W. Dixon III, B. Yanoff, D. Hamilton, T.-P. Tian, G. Brooksby, and N. Krahnstoever, “Computer vision aided target linked radiation imaging”, IEEE Conf. on Computer Vision and Pattern Recognition, Providence, RI, Jun. 2012.
- Y. Yao and G. Doretto, “Boosting for transfer learning with multiple auxiliary domains”, IEEE Conf. on Computer Vision and Pattern Recognition, San Francisco, CA, Jun. 2010.
- G. Doretto and Y. Yao, “Fast invariant descriptors based on moments of image features”, IEEE Conf. on Computer Vision and Pattern Recognition, San Francisco, CA, Jun. 2010.
- C.-H. Chen, Y. Yao, A. Drira, A. Koschan, and M. Abidi, “Cooperative mapping of multiple PTZ cameras in automated surveillance systems”, IEEE Conf. on Computer Vision and Pattern Recognition, Miami Beach, FL, Jun. 2009, pp. 1078-1084.
- Y. Yao, C.-H. Chen, B. Abidi, D. Page, A. Koschan, and M. Abidi, “Sensor planning for PTZ cameras using the probability of camera overload”, Int’l Conf. on Pattern Recognition, Tampa, FL, Dec. 2008.
Recent publications
This paper presents a new 3D time-space detector for small ships in single look complex (SLC) synthetic aperture radar (SAR) imagery, optimized for small targets around 5-15 m long that are unfocused due to target motion induced by ocean surface waves.
Error maps can indicate when a correctly attended region may be processed incorrectly leading to an incorrect answer, and hence, improve users’ understanding of those cases.
While the literature has demonstrated great success via representation learning, in this work we show that downstream-task improvements can also be achieved through appropriate design of the adaptation process.
We present novel calibration solutions via domain generalization. Our core idea is to leverage multiple calibration domains to reduce the effective distribution disparity between the target and calibration domains for improved calibration transfer without needing any data from the target domain.
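To make the idea of calibration transfer concrete, here is a minimal sketch of single-parameter temperature scaling fitted on held-out calibration data. This is a standard baseline, not the paper's multi-domain method; the function names and the grid-search range are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    """Numerically stable row-wise softmax."""
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_temperature(logits, labels, temps=np.linspace(0.5, 5.0, 46)):
    """Grid-search one temperature T that minimizes negative log-likelihood
    on a held-out calibration set (classic temperature scaling, for
    illustration only; the paper generalizes across calibration domains)."""
    best_t, best_nll = 1.0, np.inf
    for t in temps:
        p = softmax(logits / t)
        nll = -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()
        if nll < best_nll:
            best_t, best_nll = t, nll
    return best_t
```

Dividing logits by a T > 1 softens overconfident predictions; the contribution described above is choosing calibration so it transfers to an unseen target domain without target data.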
We introduce Hybrid Consistency Training to jointly leverage interpolation consistency, including interpolating hidden features, that imposes linear behavior locally and data augmentation consistency that learns robust embeddings against sample variations.
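The interpolation-consistency component can be sketched in a few lines: mix two inputs (or hidden features) and penalize the model when its prediction on the mixture deviates from the mixture of its predictions. This is a generic mixup-style sketch under assumed names (`mixup`, `interpolation_consistency_loss`, model `f`), not the paper's full training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup(x1, x2, alpha=0.75):
    """Convexly interpolate two inputs (or hidden features) with a
    Beta-distributed mixing coefficient, as in mixup-style training."""
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam

def interpolation_consistency_loss(f, x1, x2):
    """Penalty encouraging f(mix(x1, x2)) ≈ mix(f(x1), f(x2)),
    i.e., locally linear behavior of the model f (f is a stand-in
    for any feature extractor or classifier)."""
    x_mix, lam = mixup(x1, x2)
    target = lam * f(x1) + (1 - lam) * f(x2)
    return float(np.mean((f(x_mix) - target) ** 2))
```

A perfectly linear model incurs zero loss by construction; for a nonlinear network the penalty regularizes behavior between training samples, which is what makes it useful in the few-shot regime.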
We propose novel Stacked Spatio-Temporal Graph Convolutional Networks (Stacked-STGCN) for action segmentation, i.e., predicting and localizing a sequence of actions over long videos.
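For readers unfamiliar with graph convolutions, a single aggregation step over a spatio-temporal graph (nodes could be actors or objects across frames) looks like the following. This is a generic normalized graph convolution, not the Stacked-STGCN architecture itself; the function name and ReLU choice are assumptions for illustration.

```python
import numpy as np

def graph_conv(X, A, W):
    """One graph-convolution step: add self-loops to the adjacency A,
    row-normalize it, aggregate neighbor features X, then apply a
    linear projection W followed by ReLU."""
    A_hat = A + np.eye(A.shape[0])          # self-loops keep each node's own features
    D = A_hat.sum(axis=1)
    A_norm = A_hat / D[:, None]             # row-normalized neighbor averaging
    return np.maximum(A_norm @ X @ W, 0.0)  # aggregate, project, ReLU
```

Stacking several such layers lets information propagate across both space and time, which is the intuition behind using stacked spatio-temporal graph convolutions for segmenting actions in long videos.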