14th International Conference on Runtime Verification
September 22–25, 2014, Toronto, Canada
Kevin Driscoll
Fellow at Honeywell Labs, USA
Murphy Strikes Again (Slides)
Abstract:
An objective of a conference keynote is to provide some rationale and motivation for the conference: Why are we here? For this conference: Why do Runtime Verification? It must be for applications critical enough to warrant the additional expense of ensuring that the application performs adequately in the presence of faults, both design faults and hardware faults. There is an interesting link between the latter and the former. In critical applications, there is often a higher density of faults in the fault-tolerance software than in the rest of the software! Three reasons for this are: (1) the higher density of complex conditional branches in this type of software; (2) the lack of understanding of all possible failure scenarios, leading to vague or incomplete requirements; and (3) the fact that this software is the last to be tested … when the funding and schedule are exhausted. My boss once said that "All system failures are caused by design faults." This is because, regardless of the requirements, critical systems should be designed to never fail. It is extremely rare for a critical system to fail in a way that was anticipated by the designers (e.g., redundancy exhaustion). NASA's C. Michael Holloway observed: "To a first approximation, we can say that accidents are almost always the result of incorrect estimates of the likelihood of one or more things." This keynote will explore the factors that lead designers to underestimate the possibility or probability of certain failures. Examples of rare, but actually occurring, failures will be given, including Byzantine faults, component transmogrification, "evaporating" software, and exhaustively tested software that still failed. The well-known Murphy's Law states: "If anything can go wrong, it will go wrong." For critical systems, the following should be added: "And, if anything can't go wrong, it will go wrong anyway."
Assaf Schuster
Professor of Computer Science,
Computer Science Department, Technion, Israel
Monitoring Big, Distributed, Streaming Data (Slides)
Abstract:
More and more tasks require efficient processing of continuous queries over scalable, distributed data streams. Examples include optimizing systems using their operational log history, mining sentiments using sets of crawlers, and data fusion over heterogeneous sensor networks. However, distributed mining and/or monitoring of global behaviors can be prohibitively difficult. The naïve solution, which sends all data to a central location, requires extremely high communication volume and thus incurs unbearable overheads in terms of resources and energy. Furthermore, such solutions require an expensive, powerful central platform, and the data transmission may violate privacy rules. An attempt to enhance the naïve solution by periodically polling aggregates is bound to fail, exposing a vicious tradeoff between communication and latency. Given a continuous global query, the solution proposed in this talk is to generate filters, called safe zones, to be applied locally at each data stream. Essentially, the safe zones represent geometric constraints which, until violated by at least one of the sources, guarantee that a global property holds. In other words, the safe zones allow for constructive quiescence: there is no need for any of the data sources to transmit anything as long as all constraints hold, with each source's local data confined to its local safe zone. The typically rare violations are handled immediately, so the latency for discovering global conditions is negligible. The safe-zone approach makes the overall system implementation, as well as its operation, much simpler and cheaper. The savings in communication volume can reach many orders of magnitude. The talk will describe a general approach for compiling efficient safe zones for many tasks and system configurations.
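To make the mechanism concrete, the following is a minimal Python sketch of safe-zone monitoring for one specific query: alerting when the global average of the local stream values reaches an upper threshold. The class names (SafeZone, StreamNode, Coordinator), the single-coordinator topology, and the simple interval-shaped zones are illustrative assumptions, not the general geometric construction from the talk.

import math

class SafeZone:
    """An interval constraint: the node stays silent while value <= hi."""
    def __init__(self, hi: float) -> None:
        self.hi = hi

    def contains(self, value: float) -> bool:
        return value <= self.hi

class StreamNode:
    """One data source. It transmits only when its value leaves its safe zone."""
    def __init__(self, node_id: int, coordinator: "Coordinator") -> None:
        self.node_id = node_id
        self.coordinator = coordinator
        self.value = 0.0
        self.zone = SafeZone(math.inf)

    def update(self, new_value: float) -> None:
        self.value = new_value
        # Constructive quiescence: nothing is sent while the constraint holds.
        if not self.zone.contains(new_value):
            self.coordinator.report_violation(self.node_id)

class Coordinator:
    """Raises an alert when the global average reaches `threshold`."""
    def __init__(self, num_nodes: int, threshold: float) -> None:
        self.threshold = threshold
        self.nodes = [StreamNode(i, self) for i in range(num_nodes)]
        self.resync()

    def resync(self) -> None:
        # The only communication-heavy step: collect every local value,
        # check the global condition, and hand out fresh safe zones.
        mean = sum(n.value for n in self.nodes) / len(self.nodes)
        if mean >= self.threshold:
            print(f"ALERT: global average {mean:.2f} >= {self.threshold}")
        # Soundness of these interval zones: if no local value rises more
        # than `slack` above its value at sync time, the mean cannot rise
        # more than `slack` either, so it cannot silently cross the
        # threshold while all nodes are quiet.
        slack = max(self.threshold - mean, 0.0)
        for n in self.nodes:
            n.zone = SafeZone(n.value + slack)

    def report_violation(self, node_id: int) -> None:
        # A single local violation triggers a resynchronization, keeping the
        # latency for detecting a true global condition negligible.
        self.resync()

# Example: three streams, alert when the global average reaches 10.0.
coord = Coordinator(num_nodes=3, threshold=10.0)
coord.nodes[0].update(4.0)    # inside its zone: no message sent
coord.nodes[1].update(12.0)   # violation -> resync; mean is 5.33, no alert
coord.nodes[2].update(25.0)   # violation -> resync; mean is 13.67, ALERT

The example shows the key property: routine updates that stay inside a zone cost no communication at all, and only zone violations trigger the expensive global resynchronization.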
Jeannette Wing
President's Professor of Computer Science,
Computer Science Department, Carnegie Mellon University, USA
Formal Methods: An Industrial Perspective (Slides)
Abstract:
Formal methods research has made tremendous progress since the 1980s, when a proof using a theorem prover was worthy of a Ph.D. thesis and a bug in a VLSI textbook was found using a model checker. Now, with advances in theorem proving, model checking, satisfiability modulo theories (SMT) solvers, and program analysis, the engines of formal methods are more sophisticated, applicable, and scalable: to a wide range of domains, from biology to mathematics; to a wide range of systems, from asynchronous systems to spreadsheets; and for a wide range of properties, from security to program termination. In this talk, I will present a few Microsoft Research stories of advances in formal methods and their application to Microsoft products and services. The use of formal methods, however, is not yet routine in industrial practice. So, I will close with outstanding challenges and new directions for research in formal methods.