Computer Science Graduate Seminar
Thursday, February 13, 2020, 1:00pm
Parameter Synthesis in Markov Models
- Location: Computer Science Center, E3, Room 9222
- Speaker: Sebastian Junges, M.Sc., Chair for Computer Science 2 (Software Modeling and Verification)
Markov models comprise states with probabilistic transitions. The analysis of these models is ubiquitous and studied in fields including reliability engineering, artificial intelligence, systems biology, and formal methods. Naturally, their analysis crucially depends on the transition probabilities. Often, these probabilities are approximations based on data, or reflect configurable parts of a modelled system. To represent the uncertainty about the probabilities, we study parametric Markov models, in which the probabilities are symbolic expressions rather than concrete values.
More precisely, we consider parametric Markov decision processes (pMDPs) and, as a special case, parametric Markov chains (pMCs). Substituting concrete values for the parameters yields classical, parameter-free Markov decision processes (MDPs) and Markov chains (MCs). A pMDP thus induces uncountably many MDPs. Each MDP may satisfy reachability and reward properties, such as "the maximal probability that the system reaches an `offline' state is less than 0.01%" or "the maximal expected energy consumption is less than 20 kWh". Lifting these properties to pMDPs yields fundamental problems asking:
- "Is there an induced MDP satisfying the property?" (feasibility), and its dual
- "Do all induced MDPs satisfy the property?" (validity),
and advanced problems such as "What is a concise representation for all induced MCs satisfying the property?" We study these problems on a conceptual level, and design and implement both improved and novel algorithms.
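To make these notions concrete, consider a toy pMC (an illustrative example, not a model from the talk): a single transient state reaches a target with probability p, loops with probability q, and fails otherwise. Its reachability probability is the rational function p/(1-q); substituting values for p and q instantiates concrete MCs, and even a naive grid search sketches the feasibility question. All names below are hypothetical:

```python
def reach_probability(p: float, q: float) -> float:
    """Reachability probability of the target state in the induced MC.

    Toy pMC with one transient state s0:
      s0 --p--> target,   s0 --q--> s0,   s0 --(1-p-q)--> fail.
    Solving P = p + q * P gives the rational function P = p / (1 - q).
    """
    assert 0.0 <= p and 0.0 <= q and p + q <= 1.0, "not a well-defined instantiation"
    return p / (1.0 - q)


def feasible(threshold: float, steps: int = 100) -> bool:
    """Naive feasibility check by grid search: is there some induced MC
    whose reachability probability is below `threshold`?
    (Real tools reason symbolically rather than by sampling.)"""
    for i in range(steps + 1):
        for j in range(steps):  # j < steps keeps q < 1
            p, q = i / steps, j / steps
            if p + q <= 1.0 and reach_probability(p, q) < threshold:
                return True
    return False


print(reach_probability(0.1, 0.5))  # 0.2
print(feasible(0.0001))             # True: p = 0 already gives probability 0
```

The point of the sketch is that reachability probabilities of pMCs are rational functions of the parameters, so feasibility amounts to asking whether such a function drops below a threshold somewhere in the parameter space.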
On the conceptual side, a thorough discussion of the feasibility problem yields new results, such as:
- that answering various variants of the feasibility problem is — in terms of complexity — as hard as finding roots of a multivariate polynomial, and
- that these problems are tightly connected to the analysis of memoryless strategies in partially observable MDPs, a famous model in artificial intelligence.
Additionally, we introduce family MCs (fMCs), a subclass of pMCs with finitely many induced MCs. Among other applications, fMCs capture fundamental problems underlying the quantitative analysis of software product lines and the sketching of probabilistic programs.
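A minimal sketch of the finite-family view (again with a toy model and illustrative names, not the tools from the talk): when every parameter ranges over a finite domain, the family induces only finitely many MCs, so one can, in principle, enumerate them and collect the members satisfying a property. The thesis studies far more scalable techniques than this brute force:

```python
from itertools import product

# Hypothetical family MC: each parameter ranges over a finite domain,
# so the family induces only finitely many concrete MCs.
PARAMETER_DOMAINS = {"p": [0.1, 0.5, 0.9], "q": [0.0, 0.25, 0.5]}


def reach_probability(p: float, q: float) -> float:
    # Toy pMC whose reachability probability is the rational
    # function p / (1 - q) (one transient state, loop probability q).
    return p / (1.0 - q)


def satisfying_members(threshold: float) -> list:
    """Brute-force representation of all induced MCs satisfying
    'the reachability probability is at most `threshold`'."""
    names = sorted(PARAMETER_DOMAINS)
    members = []
    for values in product(*(PARAMETER_DOMAINS[n] for n in names)):
        instantiation = dict(zip(names, values))
        if reach_probability(**instantiation) <= threshold:
            members.append(instantiation)
    return members


for member in satisfying_members(0.2):
    print(member)  # the three members with p = 0.1
```

The "concise representation" problem asks for something far smaller than this explicit list, e.g. a symbolic description of the satisfying region of the parameter space.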
On the algorithmic side, we present and analyse improved variants of previously known approaches, and combine them to meet practical needs. Their analysis inspired three new and orthogonal approaches, which utilise advances in convex optimisation and adapt prominent ideas such as inductive synthesis and abstraction-refinement to our particular setting. All methods efficiently exploit advances in the off-the-shelf analysis of MDPs.
On the empirical side, the new methods improve considerably on the state of the art, handling hundreds of parameters and millions of states.
All the approaches we present have been implemented in open-source tools.
The computer science lecturers invite all who are interested to attend.