Abstract
In this talk, we present the teleportation algorithm and the Markov chain importance sampling algorithm. These two algorithms share the principle of obtaining a chain targeting a given distribution through a simple transformation of a Markov chain targeting an auxiliary distribution. Markov chain importance sampling is based on decimation and replication procedures, which allow the chain to move between modes while replicating points in the vicinity of the modes.
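
As a rough illustration of the replication/decimation idea (a minimal sketch, not the construction presented in the talk): a chain run under an auxiliary distribution rho is turned into a stream that approximately targets pi by keeping each state a random number of times with mean proportional to its importance weight pi(x)/rho(x). The bimodal target, the tempered auxiliary density, and the stochastic-rounding rule below are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def log_pi(x):
        # target: mixture of two well-separated Gaussians (illustrative choice)
        return np.logaddexp(-0.5 * (x + 4.0) ** 2, -0.5 * (x - 4.0) ** 2)

    def log_rho(x):
        # auxiliary distribution: tempered (flattened) version of the target,
        # easier for a random-walk chain to cross between modes
        return 0.3 * log_pi(x)

    def rw_metropolis(log_target, x0, n, step=2.0):
        # plain random-walk Metropolis chain targeting log_target
        xs = np.empty(n)
        x, lp = x0, log_target(x0)
        for t in range(n):
            y = x + step * rng.standard_normal()
            lq = log_target(y)
            if np.log(rng.random()) < lq - lp:
                x, lp = y, lq
            xs[t] = x
        return xs

    # 1) run a chain targeting the auxiliary distribution rho
    chain = rw_metropolis(log_rho, 0.0, 20_000)

    # 2) importance weights pi/rho, normalised so the mean replication count is ~1
    logw = log_pi(chain) - log_rho(chain)
    w = np.exp(logw - logw.max())
    w *= len(w) / w.sum()

    # 3) decimation/replication: each state is kept floor(w) + Bernoulli(frac(w)) times,
    #    so low-weight states are dropped and high-weight states are duplicated
    counts = np.floor(w + rng.random(len(w))).astype(int)
    resampled = np.repeat(chain, counts)

    print("auxiliary-chain mean:", chain.mean())
    print("replicated-stream mean (approx. E_pi[X] = 0):", resampled.mean())

States in regions where pi dominates rho are duplicated, while states that rho over-visits are thinned out, which is the decimation/replication behaviour described above.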
The teleportation algorithm allows us to diversify the points around the modes, and therefore complements Markov chain importance sampling. We show that, under weak conditions, essential properties such as the law of large numbers, geometric ergodicity and the central limit theorem are transported through these two operations. We will also present some ideas for sequentially combining these two algorithms in order to progressively transform a Markov chain targeting a standard distribution into a chain targeting the target distribution, through a sequence of intermediate distributions.
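
As a very loose sketch of this sequential combination, written in population form for readability (the talk works with a single chain, and the rejuvenation step below is plain Metropolis standing in for the diversification move, not the teleportation algorithm itself): points drawn from a standard distribution are pushed through a sequence of intermediate distributions, alternating replication/decimation between consecutive levels with a few diversification moves at each new level. The geometric bridge, the temperature schedule and all tuning constants are assumptions made for the example.

    import numpy as np

    rng = np.random.default_rng(1)

    def log_pi(x):
        # final target: well-separated Gaussian mixture (illustrative choice)
        return np.logaddexp(-0.5 * (x + 4.0) ** 2, -0.5 * (x - 4.0) ** 2)

    def log_ref(x):
        # standard distribution the sequence starts from
        return -0.5 * x ** 2

    def log_level(x, beta):
        # intermediate distribution: geometric bridge between reference and target
        return (1.0 - beta) * log_ref(x) + beta * log_pi(x)

    def metropolis_steps(x, beta, n_steps=10, step=1.0):
        # a few random-walk Metropolis moves at level beta to diversify the copies
        lp = log_level(x, beta)
        for _ in range(n_steps):
            y = x + step * rng.standard_normal(x.shape)
            lq = log_level(y, beta)
            accept = np.log(rng.random(x.shape)) < lq - lp
            x = np.where(accept, y, x)
            lp = np.where(accept, lq, lp)
        return x

    betas = np.linspace(0.0, 1.0, 11)
    x = rng.standard_normal(5_000)          # points from the standard distribution

    for b_prev, b_next in zip(betas[:-1], betas[1:]):
        # replication/decimation: reweight from level b_prev to level b_next
        logw = log_level(x, b_next) - log_level(x, b_prev)
        w = np.exp(logw - logw.max())
        w *= len(w) / w.sum()
        counts = np.floor(w + rng.random(len(w))).astype(int)
        x = np.repeat(x, counts)
        # diversification: a few Markov moves at the new level spread the duplicates
        x = metropolis_steps(x, b_next)

    print("final sample mean (approx. E_pi[X] = 0):", x.mean())
    print("fraction of points in the right-hand mode:", np.mean(x > 0))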