We describe the first automatic deductive synthesis of a unification algorithm. The program is extracted from a proof of the existence of an output substitution that satisfies a given logical specification. Tests are introduced via case analysis in the proof, and recursion by application of the mathematical induction principle. Automating the proof is challenging because it combines full quantifier reasoning and induction in a single framework. The program obtained is simpler than those previously constructed manually. Improvements in theorem-proving technology lead us to believe that deductive synthesis could become a viable part of software production. We also suggest that a theorem prover’s reasoning about itself can be regarded as a form of introspection, which may enable it to improve itself.
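For readers unfamiliar with the target of the synthesis: the behavior a unification algorithm must exhibit can be sketched as below. This is a conventional hand-written first-order unification routine for illustration only, not the synthesized program described in the abstract; the term encoding (variables as strings beginning with "?", compound terms as tuples) is an assumption made for this sketch.

```python
def is_var(t):
    # Variables are strings starting with "?"; constants are bare strings;
    # compound terms are tuples (functor, arg1, ..., argn).
    return isinstance(t, str) and t.startswith("?")

def walk(t, subst):
    # Resolve a variable through the substitution chain.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    # Occurs check: does variable v appear inside term t?
    t = walk(t, subst)
    if t == v:
        return True
    if isinstance(t, tuple):
        return any(occurs(v, a, subst) for a in t[1:])
    return False

def unify(a, b, subst=None):
    # Return a most general unifier as a dict, or None on failure.
    if subst is None:
        subst = {}
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        if occurs(a, b, subst):
            return None  # no finite unifier exists
        return {**subst, a: b}
    if is_var(b):
        return unify(b, a, subst)
    if isinstance(a, tuple) and isinstance(b, tuple) \
            and a[0] == b[0] and len(a) == len(b):
        for x, y in zip(a[1:], b[1:]):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None  # clash of distinct constants or functors

# Unifying f(?x, g(?x)) with f(a, ?y) yields a substitution under
# which both terms become f(a, g(a)).
print(unify(("f", "?x", ("g", "?x")), ("f", "a", "?y")))
```

The synthesized program corresponds to a proof that such a substitution exists whenever the two terms are unifiable; the case analysis above (variable vs. compound vs. clash) is exactly where tests arise in the proof.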
Interactive virtual environments are useful tools for hands-on learning or rehearsing of procedural tasks. However, task training applications typically provide a constrained course of action for the learner, forcing them down a single specific solution path. We discuss an approach in which the virtual environment is semantically instrumented in order to allow for the tracking of and reasoning about open-ended learner activity therein. Our approach leverages ontology-based knowledge representation which allows for a structured and meaningful description of virtual objects and of the learner actions that may be performed upon them. This is facilitated by the association of specific ontological classes with geometric components of the objects which populate a training exercise. These classes, together with their attributes, relationships and rules, characterize the environment and user actions in a readily understandable manner. As a result, a training system is able to observe the learner activity, render an assessment of that activity, and provide meaningful feedback to the learner. We also present an authoring tool which allows content developers to semantically annotate three-dimensional models for such an environment.
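The association of ontological classes with geometric components, and the assessment of learner actions against class-level rules, can be illustrated with a toy sketch. The class names, attributes, and rules below are invented for illustration and are not the authors' ontology.

```python
# Hypothetical ontology: each class lists the actions it supports and
# attribute preconditions ("rules") that must hold for an action.
ONTOLOGY = {
    "Valve": {"actions": {"turn"}, "requires": {}},
    "Panel": {"actions": {"open"}, "requires": {"unlocked": True}},
}

# Scene objects: geometry identifiers annotated with an ontological
# class and current attribute values (the semantic instrumentation).
scene = {
    "valve_3": {"class": "Valve", "attrs": {}},
    "panel_1": {"class": "Panel", "attrs": {"unlocked": False}},
}

def assess(action, obj_id):
    # Observe a learner action and render an assessment with feedback.
    obj = scene[obj_id]
    cls = ONTOLOGY[obj["class"]]
    if action not in cls["actions"]:
        return f"'{action}' is not applicable to a {obj['class']}"
    for attr, needed in cls["requires"].items():
        if obj["attrs"].get(attr) != needed:
            return f"cannot '{action}' {obj_id}: requires {attr}={needed}"
    return "ok"

print(assess("turn", "valve_3"))   # a valid action
print(assess("open", "panel_1"))   # blocked: the panel is still locked
```

Because the rules live on the classes rather than on individual objects, the same assessment logic applies to any annotated model, which is what makes open-ended (rather than single-path) activity tractable to track.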
Twenty years ago, law enforcement organizations lobbied to require data and communication services to engineer their products to guarantee law enforcement access to all data. After lengthy debate and vigorous predictions of enforcement channels “going dark,” these attempts to regulate the emerging Internet were abandoned. In the intervening years, innovation on the Internet flourished, and law enforcement agencies found new and more effective means of accessing vastly larger quantities of data. Today we are again hearing calls for regulation to mandate the provision of exceptional access mechanisms. In this report, a group of computer scientists and security experts, many of whom participated in a 1997 study of these same topics, has convened to explore the likely effects of imposing extraordinary access mandates. We have found that the damage that could be caused by law enforcement exceptional access requirements would be even greater today than it would have been 20 years ago. In the wake of the growing economic and social cost of the fundamental insecurity of today’s Internet environment, any proposals that alter the security dynamics online should be approached with caution. Exceptional access would force Internet system developers to reverse “forward secrecy” design practices that seek to minimize the impact on user privacy when systems are breached. The complexity of today’s Internet environment, with millions of apps and globally connected services, means that new law enforcement requirements are likely to introduce unanticipated, hard to detect security flaws. Beyond these and other technical vulnerabilities, the prospect of globally deployed exceptional access systems raises difficult problems about how such an environment would be governed and how to ensure that such systems would respect human rights and the rule of law.
Chagas disease is a neglected tropical disease (NTD) caused by the eukaryotic parasite Trypanosoma cruzi. The current clinical and preclinical pipeline for T. cruzi is extremely sparse and lacks drug target diversity.
In the present study we developed a computational approach that utilized data from several public whole-cell, phenotypic high throughput screens that have been completed for T. cruzi by the Broad Institute, including a single screen of over 300,000 molecules in the search for chemical probes as part of the NIH Molecular Libraries program. We have also compiled and curated relevant biological and chemical compound screening data including (i) compounds and biological activity data from the literature, (ii) high throughput screening datasets, and (iii) predicted metabolites of T. cruzi metabolic pathways. This information was used to help us identify compounds and their potential targets. We have constructed a Pathway Genome Data Base for T. cruzi. In addition, we have developed Bayesian machine learning models that were used to virtually screen libraries of compounds. Ninety-seven compounds were selected for in vitro testing, and 11 of these were found to have EC50 < 10 μM. We progressed five compounds to an in vivo mouse efficacy model of Chagas disease and validated that the machine learning model could identify in vitro active compounds not in the training set, as well as known positive controls. The antimalarial pyronaridine possessed 85.2% efficacy in the acute Chagas mouse model. We have also proposed potential targets (for future verification) for this compound based on structural similarity to known compounds with targets in T. cruzi. Conclusions/Significance: We have demonstrated how combining chemoinformatics and bioinformatics for T. cruzi drug discovery can bring interesting in vivo active molecules to light that may have been overlooked. The approach we have taken is broadly applicable to other NTDs.
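The flavor of the Bayesian models used for virtual screening can be sketched with a toy Bernoulli naive-Bayes ranker over binary compound fingerprints. The fingerprints and activity labels below are made up, and this is not the authors' exact model or descriptor set; it only illustrates ranking a library by an active-vs-inactive log-likelihood ratio.

```python
import math

def fit(X, y, alpha=1.0):
    # Estimate per-bit "on" probabilities for actives (y=1) and
    # inactives (y=0), with Laplace smoothing alpha.
    n_bits = len(X[0])
    pos = [x for x, label in zip(X, y) if label == 1]
    neg = [x for x, label in zip(X, y) if label == 0]
    def bit_probs(rows):
        return [(sum(r[j] for r in rows) + alpha) / (len(rows) + 2 * alpha)
                for j in range(n_bits)]
    return bit_probs(pos), bit_probs(neg)

def score(x, p_pos, p_neg):
    # Log-likelihood ratio: higher means more "active-like".
    s = 0.0
    for xj, pp, pn in zip(x, p_pos, p_neg):
        s += math.log(pp if xj else 1 - pp) - math.log(pn if xj else 1 - pn)
    return s

# Toy library: four fingerprints, the first two labeled active.
X = [[1, 1, 0, 0], [1, 0, 1, 0], [0, 0, 1, 1], [0, 1, 1, 1]]
y = [1, 1, 0, 0]
p_pos, p_neg = fit(X, y)
ranked = sorted(range(len(X)),
                key=lambda i: score(X[i], p_pos, p_neg), reverse=True)
print(ranked)  # compounds ordered from most to least active-like
```

In a real screen, the model is trained on curated activity data and the ranked list determines which compounds (here, the 97 selected) go forward to in vitro testing.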
Every voting system examined over the past decade has had severe security vulnerabilities. Virginia’s government recently examined the AVS WinVote and found that its vulnerabilities are more serious than those of any other voting system, allowing complete exploitation over a Wi-Fi network. The combination of vulnerabilities exhibits “weakness in depth,” rather than the “defense in depth” frequently suggested as a model. The lessons learned are applicable to other emerging technologies, including the Internet of Things.
This paper investigates the foundational differences between models with discrete time and models with dense time, and the impact of that choice on analysis.
Though depression is a common mental health problem with significant impact on human society, it often goes undetected. We explore a diverse set of features based only on spoken audio to understand which features correlate with self-reported depression scores according to the Beck depression rating scale. These features, many of which are novel for this task, include (1) estimated articulatory trajectories during speech production, (2) acoustic characteristics, (3) acoustic-phonetic characteristics and (4) prosodic features. Features are modeled using a variety of approaches, including support vector regression, a Gaussian backend and decision trees. We report results on the AVEC-2014 depression dataset and find that individual systems range from 9.18 to 11.87 in root mean squared error (RMSE), and from 7.68 to 9.99 in mean absolute error (MAE). Initial fusion brings further improvement; fusion and feature selection work is still in progress.
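The two error metrics reported above are standard; for concreteness, here is how RMSE and MAE are computed between predicted and self-reported Beck depression scores. The scores below are invented toy values, not AVEC-2014 data.

```python
import math

def rmse(y_true, y_pred):
    # Root mean squared error: penalizes large errors quadratically.
    return math.sqrt(sum((t - p) ** 2
                         for t, p in zip(y_true, y_pred)) / len(y_true))

def mae(y_true, y_pred):
    # Mean absolute error: average magnitude of the prediction error.
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = [10, 22, 5, 17]   # self-reported Beck scores (toy values)
y_pred = [12, 18, 9, 15]   # regressor outputs (toy values)
print(rmse(y_true, y_pred))
print(mae(y_true, y_pred))
```

Because RMSE squares each error, a system with RMSE 9.18 vs. MAE 7.68 (as in the best individual result above) is making occasional large errors on top of a smaller typical error.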
We present an approach for detecting sensor spoofing attacks on a cyber-physical system. Our approach consists of two steps. In the first step, we construct a safety envelope of the system. Under nominal conditions (that is, when there are no attacks), the system always stays inside its safety envelope. In the second step, we build an attack detector: a monitor that executes synchronously with the system and raises an alarm whenever the system state falls outside the safety envelope. We synthesize safety envelopes using a modified machine learning procedure applied to data collected from the system when it is not under attack. We present experimental results that show the effectiveness of our approach and validate several novel features that we introduced in our learning procedure.
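The two-step scheme can be sketched in miniature. The envelope here is a simple per-variable interval with a fixed margin, learned from attack-free traces; this is an assumption made for illustration, not the paper's actual learning procedure.

```python
def learn_envelope(traces, margin=0.1):
    # Step 1: from state vectors recorded under nominal (attack-free)
    # conditions, learn a per-variable interval plus a safety margin.
    dims = len(traces[0])
    lo = [min(s[d] for s in traces) for d in range(dims)]
    hi = [max(s[d] for s in traces) for d in range(dims)]
    return [(l - margin, h + margin) for l, h in zip(lo, hi)]

def alarm(state, envelope):
    # Step 2: the monitor raises an alarm whenever any state component
    # leaves its learned interval.
    return any(not (lo <= x <= hi)
               for x, (lo, hi) in zip(state, envelope))

# Toy nominal traces of a two-variable state (e.g. flow, temperature).
nominal = [(0.9, 20.1), (1.1, 19.8), (1.0, 20.0)]
env = learn_envelope(nominal)
print(alarm((1.0, 20.0), env))   # inside the envelope: no alarm
print(alarm((3.5, 20.0), env))   # spoofed first sensor: alarm
```

The monitor runs synchronously with the system and needs only the current state and the precomputed envelope, so detection is cheap at run time; all of the learning cost is paid offline on attack-free data.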