SRI’s Computer Science Laboratory develops leading-edge tools and methods for areas including computer security, high-assurance systems, advanced user interfaces, computer networking, robotics, biotechnology and nanotechnology.
We improve the security of critical infrastructures that support power grids, refineries, financial systems, telecommunications and the Internet.
SRI is a world leader in formal verification of computer systems. We develop and provide formal specifications for high-assurance software and hardware systems.
Bioinformatics and computational biology
We apply computational methods to the study of biological systems, developing tools, software, databases and ontologies to manage and analyze high-throughput, heterogeneous biological data.
We develop solutions that enable computer network attack, computer network exploitation, and intelligence, surveillance and reconnaissance (ISR) missions.
Deep-tech pioneer addresses 5G network security by leveraging its unique intellectual property and research, extensive experience and relationships with its spinout companies
NSF, DOD partner to advance 5G technologies and communications for U.S. military, government and critical infrastructure operators
NSF’s Convergence Accelerator awards two projects to SRI to enable 5G operators to operate securely anywhere, anytime
We are working to create the first automatic deductive synthesis of a unification algorithm. The program is extracted from a proof of the existence of an output substitution that satisfies a given logical specification. Tests are introduced via case analysis in the proof, and recursion by application of the mathematical induction principle. The automation of the proof is challenging because it combines full quantifier reasoning and induction in a single framework. The program obtained is simpler than those previously constructed manually. Improvements in theorem proving technology lead us to believe that deductive synthesis could become a viable part of software production. Also, we suggest that a theorem prover’s reasoning about itself can be regarded as a form of introspection, which may enable it to improve itself.
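For readers unfamiliar with the target artifact, the sketch below shows a conventional, hand-written syntactic unification algorithm over first-order terms. It is illustrative only and is not the synthesized program; the term representation (variables as '?'-prefixed strings, compound terms as tuples) is an assumption of this sketch.

```python
# Minimal sketch of syntactic first-order unification (illustrative only;
# a conventional hand-written version, not the synthesized program).
# Terms: variables are strings starting with '?', compound terms are tuples
# ('f', arg1, arg2, ...), and constants are plain strings.

def is_var(t):
    return isinstance(t, str) and t.startswith('?')

def walk(t, subst):
    """Follow variable bindings in the substitution."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    """Occurs check: does variable v appear in term t?"""
    t = walk(t, subst)
    if t == v:
        return True
    if isinstance(t, tuple):
        return any(occurs(v, arg, subst) for arg in t[1:])
    return False

def unify(s, t, subst=None):
    """Return a most general unifier of s and t, or None if none exists."""
    if subst is None:
        subst = {}
    s, t = walk(s, subst), walk(t, subst)
    if s == t:
        return subst
    if is_var(s):
        return None if occurs(s, t, subst) else {**subst, s: t}
    if is_var(t):
        return None if occurs(t, s, subst) else {**subst, t: s}
    if isinstance(s, tuple) and isinstance(t, tuple) \
            and s[0] == t[0] and len(s) == len(t):
        for a, b in zip(s[1:], t[1:]):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None

# Example: unify f(?x, g(?y)) with f(a, g(b))  ->  {'?x': 'a', '?y': 'b'}
print(unify(('f', '?x', ('g', '?y')), ('f', 'a', ('g', 'b'))))
```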
Interactive virtual environments are useful tools for hands-on learning or rehearsing of procedural tasks. However, task training applications typically provide a constrained course of action for the learner, forcing them down a single specific solution path. We discuss an approach in which the virtual environment is semantically instrumented in order to allow for the tracking of and reasoning about open-ended learner activity therein. Our approach leverages ontology-based knowledge representation which allows for a structured and meaningful description of virtual objects and of the learner actions that may be performed upon them. This is facilitated by the association of specific ontological classes with geometric components of the objects which populate a training exercise. These classes, together with their attributes, relationships and rules, characterize the environment and user actions in a readily understandable manner. As a result, a training system is able to observe the learner activity, render an assessment of that activity, and provide meaningful feedback to the learner. We also present an authoring tool which allows content developers to semantically annotate three-dimensional models for such an environment.
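As an illustration of the general idea (the class, attribute and function names below are hypothetical and do not reflect the paper's actual ontology or authoring-tool format), a semantically instrumented scene might associate ontological classes and attributes with geometric components and assess learner actions against simple rules:

```python
# Illustrative sketch of semantic annotation of a training scene.
# Class names, attributes and the rule are invented for this example.
from dataclasses import dataclass, field

@dataclass
class SemanticAnnotation:
    onto_class: str                      # e.g. "Valve", "PressureGauge"
    attributes: dict = field(default_factory=dict)

@dataclass
class SceneObject:
    mesh_id: str                         # geometric component in the 3D model
    annotation: SemanticAnnotation

# A training scene: geometry tagged with ontological classes and state.
scene = {
    "valve_01": SceneObject("valve_01", SemanticAnnotation("Valve", {"state": "closed"})),
    "gauge_03": SceneObject("gauge_03", SemanticAnnotation("PressureGauge", {"reading": 0})),
}

def assess_action(action, target_id, scene):
    """Return feedback for a learner action using the semantic annotations."""
    obj = scene[target_id].annotation
    if action == "open" and obj.onto_class == "Valve":
        if obj.attributes.get("state") == "closed":
            obj.attributes["state"] = "open"
            return "Correct: the closed valve was opened."
        return "Unnecessary: this valve is already open."
    return f"Action '{action}' is not defined for class {obj.onto_class}."

print(assess_action("open", "valve_01", scene))
```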
This paper presents a novel diagnosis cloud framework that enables knowledge extracted from one network to be transferred to another, together with use cases and requirements for such a framework. We describe the implementation of the framework for two specific types of models: topic models and Markov Logic Networks (MLNs).
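As a rough illustration of one half of such a framework (a hedged sketch, not the paper's implementation), a topic model fitted to diagnostic logs from one network can be reused to characterize logs from another network; the log messages below are invented:

```python
# Hedged sketch: extract a topic model (the transferable "knowledge") from
# one network's diagnostic logs and reuse it to score logs from another.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

logs_network_a = [
    "link down on interface eth0 after power fluctuation",
    "packet loss spike correlated with cpu overload on router r1",
    "dns resolution timeout during upstream outage",
    "cpu overload on router r1 triggered queue drops",
]
logs_network_b = [
    "queue drops observed while router cpu saturated",
]

# Learn topics from network A.
vectorizer = CountVectorizer(stop_words="english")
counts_a = vectorizer.fit_transform(logs_network_a)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts_a)

# Transfer: project network B's logs onto the topics learned from network A.
counts_b = vectorizer.transform(logs_network_b)
print(lda.transform(counts_b))   # topic mixture for each network B log line
```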
Twenty years ago, law enforcement organizations lobbied to require data and communication services to engineer their products to guarantee law enforcement access to all data. After lengthy debate and vigorous predictions of enforcement channels “going dark,” these attempts to regulate the emerging Internet were abandoned. In the intervening years, innovation on the Internet flourished, and law enforcement agencies found new and more effective means of accessing vastly larger quantities of data. Today we are again hearing calls for regulation to mandate the provision of exceptional access mechanisms. In this report, a group of computer scientists and security experts, many of whom participated in a 1997 study of these same topics, has convened to explore the likely effects of imposing extraordinary access mandates. We have found that the damage that could be caused by law enforcement exceptional access requirements would be even greater today than it would have been 20 years ago. In the wake of the growing economic and social cost of the fundamental insecurity of today’s Internet environment, any proposals that alter the security dynamics online should be approached with caution. Exceptional access would force Internet system developers to reverse “forward secrecy” design practices that seek to minimize the impact on user privacy when systems are breached. The complexity of today’s Internet environment, with millions of apps and globally connected services, means that new law enforcement requirements are likely to introduce unanticipated, hard to detect security flaws. Beyond these and other technical vulnerabilities, the prospect of globally deployed exceptional access systems raises difficult problems about how such an environment would be governed and how to ensure that such systems would respect human rights and the rule of law.
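To make the "forward secrecy" point concrete (an illustrative sketch, not material from the report), the design practice referred to derives a fresh session key from ephemeral key pairs that are discarded after use, so there is no long-term key that an exceptional-access mechanism could escrow to decrypt recorded traffic. The example below uses X25519 and HKDF from the Python cryptography package:

```python
# Illustrative sketch of forward secrecy: each session uses fresh ephemeral
# X25519 keys, so no long-term escrowed key can later decrypt recorded traffic.
# Requires the 'cryptography' package.
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

def session_key(my_private, peer_public):
    """Derive a symmetric session key from an ephemeral Diffie-Hellman exchange."""
    shared = my_private.exchange(peer_public)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"session").derive(shared)

# Each party generates a fresh ephemeral key pair per session...
alice_eph = X25519PrivateKey.generate()
bob_eph = X25519PrivateKey.generate()

# ...and both derive the same session key, which is discarded after use.
k_alice = session_key(alice_eph, bob_eph.public_key())
k_bob = session_key(bob_eph, alice_eph.public_key())
assert k_alice == k_bob
```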
BACKGROUND: Chagas disease is a neglected tropical disease (NTD) caused by the eukaryotic parasite Trypanosoma cruzi. The current clinical and preclinical pipeline for T. cruzi is extremely sparse and lacks drug target diversity. METHODOLOGY/PRINCIPAL FINDINGS: In the present study we developed a computational approach that utilized data from several public whole-cell, phenotypic high throughput screens that have been completed for T. cruzi by the Broad Institute, including a single screen of over 300,000 molecules in the search for chemical probes as part of the NIH Molecular Libraries program. We have also compiled and curated relevant biological and chemical compound screening data including (i) compounds and biological activity data from the literature, (ii) high throughput screening datasets, and (iii) predicted metabolites of T. cruzi metabolic pathways. This information was used to help us identify compounds and their potential targets. We have constructed a Pathway/Genome Database for T. cruzi. In addition, we have developed Bayesian machine learning models that were used to virtually screen libraries of compounds. Ninety-seven compounds were selected for in vitro testing, and 11 of these were found to have EC50 < 10 μM. We progressed five compounds to an in vivo mouse efficacy model of Chagas disease and validated that the machine learning model could identify in vitro active compounds not in the training set, as well as known positive controls. The antimalarial pyronaridine possessed 85.2% efficacy in the acute Chagas mouse model. We have also proposed potential targets (for future verification) for this compound based on structural similarity to known compounds with targets in T. cruzi. CONCLUSIONS/SIGNIFICANCE: We have demonstrated how combining chemoinformatics and bioinformatics for T. cruzi drug discovery can bring interesting in vivo active molecules to light that may have been overlooked. The approach we have taken is broadly applicable to other NTDs.
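For readers unfamiliar with this class of model, the following is a hedged sketch of fingerprint-based Bayesian virtual screening in the same general spirit (not the authors' pipeline); the SMILES strings and activity labels are placeholders:

```python
# Hedged sketch of Bayesian virtual screening: Morgan fingerprints (RDKit)
# plus a Bernoulli naive Bayes classifier (scikit-learn). The molecules and
# activity labels below are placeholders, not data from the study.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.naive_bayes import BernoulliNB

def fingerprint(smiles, n_bits=2048):
    """ECFP-like Morgan fingerprint as a 0/1 numpy vector."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=n_bits)
    return np.array(fp)

# Toy training data: SMILES with invented active (1) / inactive (0) labels.
train_smiles = ["CCO", "c1ccccc1O", "CC(=O)Oc1ccccc1C(=O)O", "CCN(CC)CC"]
train_labels = [0, 1, 1, 0]

X = np.vstack([fingerprint(s) for s in train_smiles])
model = BernoulliNB().fit(X, train_labels)

# Virtually screen a new library: rank by predicted probability of activity.
library = ["c1ccccc1N", "CCCC"]
scores = model.predict_proba(np.vstack([fingerprint(s) for s in library]))[:, 1]
for smi, p in sorted(zip(library, scores), key=lambda t: -t[1]):
    print(f"{smi}\t{p:.3f}")
```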
Every voting system examined over the past decade has had severe security vulnerabilities. Virginia’s state government recently examined the AVS WinVote and found that its vulnerabilities are more serious than those of any other voting system, allowing complete exploitation over a Wi-Fi network. The combination of vulnerabilities exhibits “weakness in depth,” rather than the “defense in depth” frequently suggested as a model. The lessons learned are applicable to other emerging technologies, including the Internet of Things.