
Mixtures of Gaussian process priors

Stochastic process priors have, compared to priors over parameters $\xi $, the advantage of implementing a priori knowledge explicitly in terms of the function values $v(x)$. Gaussian processes, in particular, always correspond to simple quadratic error surfaces, i.e., log-concave densities. While this is technically very convenient, it is at the same time a strong restriction. Arbitrary prior processes, however, can easily be built as mixtures of Gaussian processes without losing the advantage of an explicit prior implementation [32,33,63,67]. (We point out that using a mixture of Gaussian process priors does not restrict $v$ to a mixture of Gaussians.)

A mixture of $M$ Gaussian processes with component means $v_k$ and inverse component covariances $\lambda{\bf K}_k$ reads

\begin{displaymath}
p_0(v)
=\sum_k^M p(v,k)
=\sum_k^M p(k) \,p_0(v\vert k)
=\sum_k^M \frac{p(k)}{Z_k}\,
e^{-\frac{\lambda}{2}<\!v-v_k\,\vert\,{\bf K}_k\,\vert\,v-v_k\!>}
\end{displaymath} (40)

with $Z_k$ = $\left(\det \frac{\lambda}{2\pi} {\bf K}_k\right)^{-\frac{1}{2}}$ and mixture probabilities $p(k)$. The parameter $\lambda $ plays the role of an inverse mixture temperature. Analogous to annealing techniques, changing $\lambda $ allows one to control the degree of concavity of the mixture [33,63].
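
To make the structure of Eq. (40) concrete, the following minimal Python sketch evaluates $\ln p_0(v)$ for a potential discretized on $n$ grid points, summing over the components stably with a log-sum-exp. All concrete choices (grid size, component means $v_k$, operators ${\bf K}_k$, mixture weights $p(k)$) are illustrative assumptions, not taken from the text; the normalizations $Z_k$ are exactly those given above, assuming positive definite ${\bf K}_k$.

\begin{verbatim}
import numpy as np
from scipy.special import logsumexp

def log_mixture_prior(v, means, Ks, pk, lam):
    """ln p_0(v) of Eq. (40): log-sum-exp over
    ln p(k) - ln Z_k - (lam/2) <v-v_k|K_k|v-v_k>."""
    log_terms = []
    for v_k, K_k, p_k in zip(means, Ks, pk):
        d = v - v_k
        quad = 0.5 * lam * (d @ K_k @ d)
        # ln Z_k = -(1/2) ln det( (lam/(2 pi)) K_k )
        _, logdet = np.linalg.slogdet(lam / (2 * np.pi) * K_k)
        log_Z_k = -0.5 * logdet
        log_terms.append(np.log(p_k) - log_Z_k - quad)
    return logsumexp(log_terms)

# Illustrative two-component mixture on n = 5 grid points.
# K_k: discrete negative Laplacians, shifted to be positive definite.
n = 5
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
Ks = [L + 0.1 * np.eye(n), L + 0.5 * np.eye(n)]
means = [np.zeros(n), np.linspace(0.0, 1.0, n)]
pk = [0.5, 0.5]

v = np.full(n, 0.3)
for lam in (0.1, 1.0, 10.0):
    print(lam, log_mixture_prior(v, means, Ks, pk, lam))
\end{verbatim}

For growing $\lambda $ the log-sum-exp becomes dominated by the component with the smallest quadratic form $<\!v-v_k\,\vert\,{\bf K}_k\,\vert\,v-v_k\!>$, while for small $\lambda $ the component surfaces blur into a single smooth one; this is the annealing behavior referred to above.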

