14th International Conference on Runtime Verification

September 22 - September 25, 2014 Toronto, Canada


Program Phase (December 15, 2013 – April 1, 2014)


In this phase, competitors provide benchmarks to be shared among participants. Each contribution takes one of the two forms below, depending on whether it concerns programs or traces.


For programs (Java and C):


Each contribution should be structured as follows.


  1. Program package

    1. program source code

    2. script to compile

    3. script to run

    4. explanation

  2. Specification package - a collection of files, each containing:

    1. a property that contains

      1. a formal representation of it

      2. informal explanation

      3. the verdict, i.e., the expected evaluation of the property on the program

    2. instrumentation information

    3. explanation
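
For concreteness, a contribution might be laid out as follows (all file and directory names here are illustrative, none are prescribed):

benchmark/
   src/           -- program source code
   compile.sh     -- script to compile
   run.sh         -- script to run
   README         -- explanation of the program
   spec/
      property1   -- formal representation, informal explanation, verdict,
                     instrumentation information, explanation
      property2
      ...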


Each specification consists of a list of properties, together with instrumentation information and an explanation. The instrumentation information maps the events referred to in the properties to concrete program events.

A property consists of a formalization (automaton, formula, etc.), an informal description, and whether the program satisfies the property (i.e., the expected verdict).

Instrumentation is a mapping from abstract events (in the specification) to concrete events (in the program). For instance, for the HasNext property on iterators, the mapping should indicate that the hasNext() event in the property refers to a call to the hasNext() method on an Iterator object. Several concrete events may be associated with one abstract event.
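
As a minimal illustration for the HasNext property (the class name and program are hypothetical, for exposition only), the following Java program calls next() a second time without first calling hasNext(), so the expected verdict recorded in the specification package would be false:

import java.util.Arrays;
import java.util.Iterator;

public class HasNextExample {
    public static void main(String[] args) {
        Iterator<String> it = Arrays.asList("a", "b").iterator();
        if (it.hasNext()) {                 // concrete event mapped to the abstract hasNext event
            System.out.println(it.next());  // concrete event mapped to the abstract next event
        }
        System.out.println(it.next());      // next without a preceding hasNext: violates HasNext
    }
}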


Remark 1: Please bear in mind that overly comprehensive properties may discourage participants from competing on them.


Remark 2: Please avoid submitting “non-deterministic” programs, as non-determinism may interfere with verdict detection.


Remark 3: Each submitted benchmark should be stand-alone, i.e., it should not depend on any third-party program.


For traces:


Each contribution should be structured as follows.


  1. Trace (in one of the accepted log formats below)

  2. Specification package - a collection of files, each containing:

    1. a property that contains


      1. a formal representation of it

      2. informal explanation

      3. the verdict, i.e., the expected evaluation of the property on the trace

    2. instrumentation information

    3. explanation


Instrumentation is a mapping from abstract events (in the specification) to concrete events (in the trace).


Traces are sequences of named records of the form:

NAME {
   field1 : value1,
   ...
   fieldn : valuen
}


That is, an event has a name and a list of fields, each of which has a name and a value.
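
For instance, a record for a hypothetical file-open event with two fields could look like this:

FileOpen {
   file : "data.txt",
   mode : "read"
}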

Logs will be accepted in several formats, illustrated by the examples below (where an_event_name ranges over the set of possible event names, a_field_name ranges over the set of possible field names, and a_value ranges over the set of possible runtime values).


  • In XML format:

<log>
  <event>
    <name>an_event_name</name>
    <field>
      <name>a_field_name</name>
      <value>a_value</value>
    </field>
    <field>
      <name>a_field_name</name>
      <value>a_value</value>
    </field>
  </event>
  <event>
    <name>an_event_name</name>
    <field>
      <name>a_field_name</name>
      <value>a_value</value>
    </field>
    <field>
      <name>a_field_name</name>
      <value>a_value</value>
    </field>
  </event>
</log>


  • In CSV format:

an_event_name, a_field_name = a_value, a_field_name = a_value

an_event_name, a_field_name = a_value, a_field_name = a_value


  • In JSON format:

[
  {
    "name": "an_event_name",
    "fields": {
      "a_field_name": "a_value",
      "a_field_name": "a_value"
    }
  },
  {
    "name": "an_event_name",
    "fields": {
      "a_field_name": "a_value",
      "a_field_name": "a_value"
    }
  }
]
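
As a sketch of how such logs can be consumed (assuming the CSV form shown above; the class and method names are hypothetical, not part of any required interface), the following Java snippet parses one CSV line into an event name and a field map:

import java.util.LinkedHashMap;
import java.util.Map;

public class CsvEventParser {

    // Parses one CSV log line of the form
    //   an_event_name, a_field_name = a_value, a_field_name = a_value
    // into the event name and an ordered map of fields.
    // This simple split assumes that values contain no commas.
    static Map.Entry<String, Map<String, String>> parseLine(String line) {
        String[] parts = line.split(",");
        String name = parts[0].trim();
        Map<String, String> fields = new LinkedHashMap<>();
        for (int i = 1; i < parts.length; i++) {
            String[] kv = parts[i].split("=", 2);
            fields.put(kv[0].trim(), kv[1].trim());
        }
        return Map.entry(name, fields);
    }

    public static void main(String[] args) {
        Map.Entry<String, Map<String, String>> event =
                parseLine("FileOpen, file = data.txt, mode = read");
        System.out.println(event.getKey() + " " + event.getValue());
        // prints: FileOpen {file=data.txt, mode=read}
    }
}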