
Reachability analysis of continuous- and discrete-time models is a hard problem that has seen much progress in recent decades. In many cases the problem has been reduced to computing bisimulations, with limitations on the class of dynamics handled, on soundness, or on the time horizon.
In this research we focus on sound safety verification of unbounded-time Linear Time-Invariant (LTI) models with inputs, over both continuous and discrete time, using reachability analysis. We achieve this via Abstract Acceleration, which over-approximates the reach tube of a model over an unbounded time horizon by means of abstraction. The technique is applied to a number of discrete- and continuous-time models, and the results show good performance compared with state-of-the-art tools.
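For intuition about what a sound unbounded-time over-approximation looks like, the sketch below iterates an interval abstraction of a discrete-time LTI system x(k+1) = A x(k) until the bounding box stops growing. This is a deliberately simple stand-in for illustration, not the Abstract Acceleration technique itself, and the system matrix is hypothetical:

```python
import numpy as np

def interval_reach(A, lo, hi, max_iter=1000):
    """Over-approximate the unbounded-time reach set of x(k+1) = A @ x(k),
    with x(0) in the box [lo, hi], by iterating an interval abstraction
    to a fixpoint.  (Illustrative only; not Abstract Acceleration.)"""
    Ap, Am = np.maximum(A, 0.0), np.minimum(A, 0.0)  # split A by sign
    for _ in range(max_iter):
        # Sound interval image of the current box under x -> A @ x.
        nlo, nhi = Ap @ lo + Am @ hi, Ap @ hi + Am @ lo
        new_lo, new_hi = np.minimum(lo, nlo), np.maximum(hi, nhi)
        if np.allclose(new_lo, lo) and np.allclose(new_hi, hi):
            break  # fixpoint reached: the box contains all reachable states
        lo, hi = new_lo, new_hi
    return lo, hi

# Stable toy system: both state components decay towards the origin,
# so the box widens from the initial set down towards zero and stabilises.
A = np.array([[0.5, 0.0], [0.0, 0.5]])
lo, hi = interval_reach(A, np.array([1.0, 1.0]), np.array([2.0, 2.0]))
```

For an unstable system the loop simply exhausts its iteration budget, which is one reason genuine acceleration techniques, which reason about the eigenstructure of A rather than iterating, are needed for unbounded horizons.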

Max-Plus Linear (MPL) models are a class of discrete-event systems used to characterize the timing dynamics of successive events that synchronize autonomously.
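Concretely, an autonomous MPL model evolves as x(k+1) = A ⊗ x(k), where the max-plus product is (A ⊗ x)_i = max_j (A_ij + x_j): each entry of x holds the time of an event's k-th occurrence, and each entry of A a processing or transfer delay. A minimal sketch (the delays below are hypothetical):

```python
import numpy as np

def maxplus_mv(A, x):
    """Max-plus matrix-vector product: (A ⊗ x)_i = max_j (A_ij + x_j).
    Broadcasting adds x[j] to column j of A before taking row maxima."""
    return np.max(A + x, axis=1)

# Two synchronizing events; A[i][j] is the delay from event j to event i.
A = np.array([[2.0, 5.0],
              [3.0, 3.0]])
x0 = np.array([0.0, 0.0])   # times of the initial event occurrences

x1 = maxplus_mv(A, x0)      # times of the next occurrences: [5, 3]
x2 = maxplus_mv(A, x1)      # and the ones after: [8, 8]
```

Note that max plays the role of addition and + the role of multiplication, which is why the dynamics are linear in the max-plus algebra despite being nonlinear in the usual sense.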

In this work we integrate model checking techniques (for parameter synthesis over the model) with data-based approaches (for parametric Bayesian inference) in order to compute a confidence, based on data observed from the system, that the system satisfies a given specification.
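As a simplified illustration of the data-based side, suppose each observed trace is independently checked against the specification, so the checks are Bernoulli trials with unknown satisfaction probability p. Under a Beta prior, the posterior confidence that p exceeds a threshold has a closed form for integer parameters. This toy Beta-Bernoulli model is an assumption for illustration, not the parametric inference scheme used in the work:

```python
from math import comb

def confidence(k, n, threshold, a0=1, b0=1):
    """P(p > threshold | k satisfying traces out of n), with a
    Beta(a0, b0) prior on p.  Uses the Beta-Binomial identity
    P(Beta(a,b) > t) = P(Binomial(a+b-1, t) <= a-1), valid for
    integer a, b, so no special functions are needed."""
    a, b = a0 + k, b0 + (n - k)       # posterior parameters
    m = a + b - 1
    return sum(comb(m, j) * threshold**j * (1 - threshold)**(m - j)
               for j in range(a))

# With no data and a uniform prior, P(p > 0.5) is exactly 0.5;
# more satisfying traces push the confidence up.
c_prior = confidence(0, 0, 0.5)
c_weak, c_strong = confidence(5, 10, 0.5), confidence(9, 10, 0.5)
```

The confidence then drives the verification loop: data is gathered until the confidence that the specification holds (or fails) crosses a desired level.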

The future is smart: smart meters, smart buildings, smart cities. Digital technologies integrated into everyday life will give us and our descendants an environmentally sustainable and comfortable life.
Automation is key: devices capable of self-diagnosing faults and sending reports to the maintenance team; offices able to decide the best environment for your work; cities that manage power demand so that only clean energy is used.

Our research focuses on three aspects of this framework: building automation systems, solar panels, and smart metering.

We are interested in applications of, and algorithmic improvements to, CounterExample-Guided Inductive Synthesis (CEGIS). We are exploring and developing the use of different learning and verification techniques within the CEGIS architecture, and have an upcoming paper at CAV 2018 on CEGIS(T), a CEGIS algorithm that incorporates theory solvers for improved synthesis of constants. We are also interested in applying the CEGIS paradigm to a broad range of applications, and have several recent works on the synthesis of safe controllers for Linear Time-Invariant systems.
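The CEGIS loop alternates a learner, which proposes a candidate consistent with all counterexamples seen so far, with a verifier, which either certifies the candidate or returns a fresh counterexample. The sketch below runs the loop over finite candidate and input domains; the brute-force learner and verifier are stand-ins for the solver-based components, and the instance (synthesising a bound c with x*x <= c for all x in [-8, 8]) is a toy:

```python
def cegis(candidates, domain, spec, max_rounds=100):
    """Minimal CEGIS loop over finite domains.
    spec(x, c) holds iff candidate c is correct on input x."""
    cex = []                                # counterexamples seen so far
    for _ in range(max_rounds):
        # Learner: first candidate consistent with every counterexample.
        c = next((c for c in candidates
                  if all(spec(x, c) for x in cex)), None)
        if c is None:
            return None                     # no candidate is consistent
        # Verifier: exhaustive search for an input violating the spec.
        bad = next((x for x in domain if not spec(x, c)), None)
        if bad is None:
            return c                        # verified for the whole domain
        cex.append(bad)                     # refine the learner and retry
    return None

# Toy instance: synthesise c such that x*x <= c for all x in [-8, 8].
result = cegis(range(100), range(-8, 9), lambda x, c: x * x <= c)
```

On this instance the learner first proposes c = 0, the verifier returns x = -8 as a counterexample, and the second candidate c = 64 is verified, so only two rounds are needed.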

Reinforcement Learning (RL) is a well-known architecture for synthesising policies for Markov Decision Processes (MDPs). We work on extending this paradigm to the synthesis of ‘safe policies’, or, more generally, of policies under which a given linear-time property is satisfied. We convert the property into an automaton, then construct the product MDP of the automaton and the original MDP. A reward function is then assigned to the states of the product MDP according to the accepting conditions of the automaton. With this reward function, RL synthesises a policy that satisfies the property: the policy synthesis procedure is thus ‘constrained’ by the given specification. Additionally, we show that the RL procedure sets up an online value iteration method to calculate the maximum probability of satisfying the given property at any given state of the MDP. We evaluate the performance of the algorithm on a number of numerical examples.
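The construction above can be sketched on a toy instance: a two-state MDP, a two-state automaton for the property "eventually goal", and value iteration on the product to compute the maximum satisfaction probability (accepting product states carry value 1). All states, labels, and transition probabilities below are hypothetical, and plain value iteration stands in for the RL procedure:

```python
# MDP: P[state][action] is a list of (next_state, probability) pairs.
P = {0: {'a': [(0, 0.5), (1, 0.5)], 'b': [(0, 1.0)]},
     1: {'a': [(1, 1.0)],           'b': [(0, 1.0)]}}
label = {0: 'safe', 1: 'goal'}

# Deterministic automaton for "eventually goal": state 1 is an
# accepting sink, reached as soon as a 'goal'-labelled state is seen.
def delta(q, lab):
    return 1 if (q == 1 or lab == 'goal') else 0

ACCEPT = {1}

def value_iteration(iters=100):
    """Max probability of reaching an accepting state of the product MDP,
    i.e. of satisfying the property, from each product state (s, q)."""
    V = {(s, q): 0.0 for s in P for q in (0, 1)}
    for _ in range(iters):
        for (s, q) in V:
            if q in ACCEPT:
                V[(s, q)] = 1.0        # accepting: property already satisfied
                continue
            # Bellman backup: the automaton state moves with the MDP label.
            V[(s, q)] = max(
                sum(p * V[(s2, delta(q, label[s2]))] for s2, p in P[s][a])
                for a in P[s])
    return V

V = value_iteration()   # V[(0, 0)] converges to 1: action 'a' eventually
                        # reaches the 'goal'-labelled state with probability 1
```

Reading off an optimal policy is then a matter of taking, in each product state, the action that attains the maximum in the backup, which is exactly what the constrained RL procedure learns online without access to the transition probabilities.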


Department of Computer Science

Wolfson Building, Parks Rd, Oxford OX1 3QD

T: +44 (0) 1865 610767

