Tuesday, September 5, 2023, 2:00 p.m.
Verification of Multi-Objective Markov Models
- Tim Quatmann, M.Sc. - Lehrstuhl Informatik 2 (Software Modeling and Verification)
- Location: Informatikzentrum, E3, 2nd floor, Room 9222
Probabilistic systems evolve based on environmental events that occur with a certain probability. For such systems to perform well, we are often interested in multiple objectives, i.e., quantitative performance measures such as the probability of a failure or the expected time until task completion. Sometimes these objectives conflict with each other: minimizing the failure probability may mean that completing the task takes longer. Compromises need to be found.
We consider Markov models — particularly Markov decision processes (MDPs) and Markov automata (MAs). These state-based modeling formalisms describe a system in its random environment. Starting from an initial state, the transition behavior in MDPs is determined by probabilistic and nondeterministic choices. MAs further extend MDPs with exponentially distributed continuous-time delays. Rewards can be attached to states or transitions to model system quantities such as energy consumption, productivity, or monetary costs. Objectives are formally specified by a mapping from (infinite) system executions to the value of interest, e.g., the total accumulated cost or the average energy consumption. The expected value of an objective is defined once the nondeterminism is resolved using a strategy — intuitively reflecting the choices of a system controller. Different strategies induce different expected objective values. Multi-objective verification of MDPs and MAs analyzes the interplay between the considered objectives and identifies which trade-offs between expected objective values are possible, i.e., achievable by some strategy.
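To make the notions of strategy and expected objective value concrete, here is a minimal toy sketch (not taken from the talk): a two-state decision in an MDP where a memoryless strategy fixes one action per state, inducing a Markov chain whose expected total reward is computed by value iteration. All state, action, and reward names are invented for illustration.

```python
# Toy MDP: state -> action -> list of (successor, probability).
# "fast" reaches the goal quickly but may fail; "safe" is slower but never fails.
transitions = {
    "s0":   {"fast": [("goal", 0.7), ("fail", 0.3)],
             "safe": [("s0", 0.5), ("goal", 0.5)]},
    "goal": {"stay": [("goal", 1.0)]},
    "fail": {"stay": [("fail", 1.0)]},
}
# Reward (e.g., time spent) collected when taking an action in a state.
reward = {("s0", "fast"): 1.0, ("s0", "safe"): 2.0,
          ("goal", "stay"): 0.0, ("fail", "stay"): 0.0}

def expected_total_reward(strategy, iterations=1000):
    """Value iteration on the Markov chain induced by a memoryless strategy."""
    value = {s: 0.0 for s in transitions}
    for _ in range(iterations):
        value = {s: reward[(s, strategy[s])]
                    + sum(p * value[t] for t, p in transitions[s][strategy[s]])
                 for s in transitions}
    return value["s0"]
```

The two strategies already exhibit the trade-off described above: choosing "fast" in s0 yields a smaller expected time but a positive failure probability, while "safe" eliminates failures at the cost of a larger expected time.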
We study practically efficient methods to compute the set of achievable solutions. For this, we establish a general framework and its instantiation for (undiscounted) total reachability reward objectives, long-run average reward objectives, and reward-bounded objectives. We propagate the errors made by approximate methods, yielding sound under- and over-approximations. We further consider multi-dimensional quantiles that ask under which reward constraints a given objective value is achievable. Finally, we investigate a setting in which the strategies must be simple, i.e., non-randomized and with limited memory access. All presented approaches are integrated into the state-of-the-art probabilistic model checker Storm. An extensive evaluation of this implementation on a broad set of multi-objective benchmarks shows that our approaches scale to large models with millions of states.
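As a rough illustration of what "the set of achievable solutions" means (a simplification, not the talk's algorithm — tools like Storm approximate the achievable set via weighted-sum queries rather than enumeration), the sketch below takes a hypothetical list of objective-value pairs induced by different strategies and filters out the dominated ones, leaving the Pareto-optimal trade-offs:

```python
# Hypothetical achievable points for two minimized objectives:
# (failure probability, expected completion time), one point per strategy.
points = [(0.30, 1.0), (0.00, 4.0), (0.20, 2.0), (0.25, 2.5)]

def pareto_front(pts):
    """Keep the points not dominated in both (minimized) objectives."""
    return sorted(p for p in pts
                  if not any(q != p and q[0] <= p[0] and q[1] <= p[1]
                             for q in pts))
```

Here (0.25, 2.5) is dominated by (0.20, 2.0), which fails less often and finishes sooner; the remaining points each represent a genuine compromise between the two objectives.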
Invitation extended by: the lecturers of the Department of Computer Science