Iterative simulation techniques are becoming standard tools in Bayesian statistics, a notable example being the Gibbs sampler, whose draws form a Markov chain. Standard practice is to run the simulation until convergence is approached, in the sense that the draws appear to be stationary. At that point, the approximately stationary draws can be used to estimate the target distribution. However, when the distributions involved are normal and the draws form a Markov chain, the target distribution can be reliably estimated by maximum likelihood (ML) using draws obtained before convergence to the target distribution. This fact suggests that the normal-based ML estimates can be exploited to estimate the mean and covariance matrix of an approximately normal target distribution before convergence is reached, and that these estimates can be used to define a restarting distribution for the simulation. Here, we describe the needed technology and explore its relevance to practice. The tentative conclusion is that the Markov-normal restarting procedure can be computationally advantageous when the target distribution is nearly normal, especially in massively parallel or distributed computing environments, where many sequences can be run for the same effective cost as one sequence.
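As a minimal illustration of the idea (a sketch under simplifying assumptions, not the paper's implementation), consider a univariate Gaussian AR(1) chain, the simplest Markov-normal case. Even when the chain is started far from its target, conditional ML estimation of the autoregressive parameters from the early, pre-convergence draws recovers the stationary mean and variance, which can then define a normal restarting distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

# AR(1) chain: x_t = a + b*x_{t-1} + e_t, with e_t ~ N(0, s2).
# Its stationary (target) law is N(a/(1-b), s2/(1-b^2)).
a, b, s2 = 2.0, 0.8, 1.0
mu_true = a / (1.0 - b)            # 10.0
var_true = s2 / (1.0 - b**2)       # about 2.78

# Simulate a short chain started far from the target,
# so the draws are clearly pre-convergence at first.
n = 200
x = np.empty(n)
x[0] = 50.0
for t in range(1, n):
    x[t] = a + b * x[t - 1] + rng.normal(scale=np.sqrt(s2))

# Conditional ML for (a, b, s2) reduces to least squares of x_t on x_{t-1}.
X = np.column_stack([np.ones(n - 1), x[:-1]])
coef, *_ = np.linalg.lstsq(X, x[1:], rcond=None)
ahat, bhat = coef
resid = x[1:] - X @ coef
s2hat = resid @ resid / (n - 1)

# Implied estimates of the stationary mean and variance of the target.
mu_hat = ahat / (1.0 - bhat)
var_hat = s2hat / (1.0 - bhat**2)

# Use the fitted normal approximation as a restarting distribution:
# draw fresh starting points for several parallel sequences.
restarts = rng.normal(mu_hat, np.sqrt(var_hat), size=8)
```

The multivariate version replaces the scalar regression with a vector autoregression, with stationary mean $(I-B)^{-1}a$ and stationary covariance solving a discrete Lyapunov equation; the chain name `x`, the parameter values, and the number of restarts here are illustrative choices only.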
Key words: the EM algorithm; the Gibbs sampler; Markov chain Monte Carlo; the multivariate t distribution; time series.