Models for Probabilistic Programs with an Adversary

While working on a Hoare logic for probabilistic programs [1], we encountered an interesting choice in how to define the operational semantics of the underlying probabilistic imperative language PRIMP. We could either define commands as mappings from states to states and then “lift” them to apply to distributions, or define commands over state distributions as a whole. Consider, for example, the conditional if b then c1 else c2. We could apply it to every state in the distribution’s support, where b evaluates to either true or false, or we could apply c1 to the entire sub-distribution on which b is true and c2 to the complementary sub-distribution. We showed in our Coq development that the two approaches are equivalent. Interestingly, when we looked at the non-deterministic choice c1 ⊔ c2, the equivalence broke down. If we reduced c1 ⊔ c2 to c1 or c2 for the whole distribution at once, we prevented the non-determinism from reacting to the random events that had already occurred, and this allowed us to verify many properties that wouldn’t hold if we reduced c1 ⊔ c2 at the level of individual states.
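To make the distinction concrete, here is a minimal sketch in Haskell (this is our own illustration, not the Coq/PRIMP formalization; the list-of-weighted-states representation and all names are ours). It lifts state-level commands to distributions and contrasts the two places at which if and ⊔ can be resolved.

```haskell
-- Sketch only: distributions as finite lists of (probability, state) pairs.
type Prob   = Rational
type State  = [(String, Integer)]        -- a simple variable store
type Dist a = [(Prob, a)]

-- Lift a state-level command to act on a whole distribution.
lift :: (State -> Dist State) -> Dist State -> Dist State
lift c d = [ (p * q, s') | (p, s) <- d, (q, s') <- c s ]

-- Conditional, resolved per state: each state in the support picks its branch.
ifState :: (State -> Bool) -> (State -> Dist State) -> (State -> Dist State)
        -> Dist State -> Dist State
ifState b c1 c2 = lift (\s -> if b s then c1 s else c2 s)

-- Conditional, resolved per distribution: split on b, run each branch on its
-- sub-distribution, and recombine.  For `if`, this agrees with ifState.
ifDist :: (State -> Bool) -> (State -> Dist State) -> (State -> Dist State)
       -> Dist State -> Dist State
ifDist b c1 c2 d =
  lift c1 [ x | x@(_, s) <- d, b s ] ++ lift c2 [ x | x@(_, s) <- d, not (b s) ]

-- Non-deterministic choice, resolved per distribution: the adversary commits
-- to one branch for the whole distribution, so only two outcomes are possible.
choiceDist :: (State -> Dist State) -> (State -> Dist State)
           -> Dist State -> [Dist State]
choiceDist c1 c2 d = [ lift c1 d, lift c2 d ]

-- Non-deterministic choice, resolved per state: the adversary picks a branch
-- separately for every state in the support, and so can react to random
-- events that have already occurred (2^n outcomes for a support of size n).
choiceState :: (State -> Dist State) -> (State -> Dist State)
            -> Dist State -> [Dist State]
choiceState c1 c2 d =
  [ concat [ [ (p * q, s') | (q, s') <- pick s ] | ((p, s), pick) <- zip d picks ]
  | picks <- mapM (const [c1, c2]) d ]   -- every assignment of a branch to a state
```

As an illustration of the difference (our own example): flip a fair coin into x and then run y := 0 ⊔ y := 1. Under choiceDist, the adversary cannot see the coin and Pr[x = y] = 1/2 in both possible outcomes; under choiceState, the adversary can match y to x in every state and force Pr[x = y] = 1.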

We soon realized that the distinction between these evaluation strategies corresponds to the distinction between the Arthur-Merlin (AM) and Interactive Proof (IP) complexity classes. AM uses public coins, meaning that the adversary (the prover) sees the outcome of every random event. By contrast, in IP the verifier's coins are hidden and can be used to thwart the prover, as in the simple protocol for verifying that two graphs are non-isomorphic: the verifier randomly selects one of the graphs, scrambles its vertices, and asks the prover to identify which graph it was.
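For concreteness, here is a sketch of one round of that graph non-isomorphism verifier (again our own illustration, with a hypothetical edge-list representation; not part of the PRIMP development). The coin i and the permutation are the verifier's private randomness: if they were public, the prover could answer correctly even when the graphs are isomorphic.

```haskell
import System.Random (randomRIO)
import Data.List (permutations)

-- Sketch only: graphs as edge lists over vertices 0 .. n-1 (small n).
type Graph = [(Int, Int)]

-- Relabel the vertices of a graph according to a permutation.
relabel :: [Int] -> Graph -> Graph
relabel perm = map (\(u, v) -> (perm !! u, perm !! v))

-- One round of the non-isomorphism verifier against a prover strategy.
-- The coin `i` and the permutation stay private: the prover only sees the
-- scrambled graph.  If the graphs are non-isomorphic, an unbounded prover
-- can always recover i; if they are isomorphic, it can only guess.
verifyRound :: Int -> Graph -> Graph -> (Graph -> Int) -> IO Bool
verifyRound n g1 g2 prover = do
  i <- randomRIO (1, 2 :: Int)
  k <- randomRIO (0, product [1 .. n] - 1)
  let perm      = permutations [0 .. n - 1] !! k
      scrambled = relabel perm (if i == 1 then g1 else g2)
  pure (prover scrambled == i)
```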

It turns out that there are a variety of interesting operational semantics for programs combining probability and non-determinism (we’ve identified eight so far) which are appropriate for domains ranging from security to game theory. All of these can be thought of as encoding what information is available to a non-deterministic adversary. We identified a few of these models in a recent ICFP poster competition.

Five different models of adversary strength.

This raises two questions: which models are worth exploring in detail, and how many are we missing?

Models for Probabilistic Programs with an Adversary: Extended Abstract, Slides

Robert Rand and Steve Zdancewic, University of Pennsylvania

[1] Rand, R. and S. Zdancewic, “VPHL: A verified partial-correctness logic for probabilistic programs”, Mathematical Foundations of Programming Semantics, 2015.
