

Gaussian processes and smooth potentials

In this section we include, in addition to the likelihood terms, a priori information in the form of a prior density $p_0(v)$. Having specified $p_0(v)$, a Bayesian approach aims at calculating the predictive density (3). The functional integral in Eq. (3) can be calculated by Monte Carlo methods or, as we will do in the following, in saddle point approximation, i.e., by selecting the potential with maximal posterior. According to Eq. (1), the posterior density of $v$ is proportional to the product of training likelihood and prior

\begin{displaymath}
p(v\vert D) \propto p_0(v) \prod_i <\vert\phi (x_i)\vert^2>
.
\end{displaymath} (33)

Hence, the maximum likelihood approximation we have discussed in the last section is equivalent to a maximum posterior approximation under the assumption of a uniform prior.
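The equivalence can be illustrated numerically. The following toy sketch (a one-parameter stand-in, not the paper's quantum likelihood; all values are illustrative) compares the maximizer of the log posterior under a flat prior with the maximum likelihood point, and shows how an informative Gaussian prior pulls the maximum toward its reference value:

```python
import numpy as np

# Toy illustration: MAP under a flat prior coincides with maximum
# likelihood; an informative Gaussian prior shifts the maximum
# toward the prior mean (here 0). All numbers are illustrative.

v = np.linspace(-3.0, 3.0, 601)           # candidate parameter values
log_lik = -0.5 * (v - 1.0)**2 / 0.25      # illustrative log likelihood, peaked at v = 1
log_prior_flat = np.zeros_like(v)         # uniform (flat) prior
log_prior_gauss = -0.5 * v**2             # Gaussian prior centered at v_0 = 0

ml = v[np.argmax(log_lik)]                           # maximum likelihood
map_flat = v[np.argmax(log_lik + log_prior_flat)]    # MAP, flat prior
map_gauss = v[np.argmax(log_lik + log_prior_gauss)]  # MAP, Gaussian prior

print(ml, map_flat, map_gauss)  # flat-prior MAP equals ML; Gaussian MAP is pulled toward 0
```

With the flat prior the two maximizers coincide exactly, since adding a constant to the log posterior does not move its maximum.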

Technically, the most convenient priors are Gaussian processes, which we have already introduced in Eq. (8) for regression models. For $v$, such a prior reads

\begin{displaymath}
p_0(v)
=
\left(\det \frac{\lambda}{2\pi}{\bf K}_0 \right)^{\frac{1}{2}}
e^{-\frac{\lambda}{2}
<\!v-v_0\,\vert\,{\bf K}_0\,\vert\,v-v_0\!>}
,
\end{displaymath} (34)

with mean $v_0$, representing a reference potential or template for $v$, and a real symmetric, positive (semi-)definite covariance operator $(1/\lambda){\bf K}_0^{-1}$, acting on potentials $v$ and not on wave functions $\phi_\alpha$. The operator ${\bf K}_0$ defines a scalar product and thus a distance measuring the deviation of $v$ from $v_0$. The most common priors are smoothness priors, where ${\bf K}_0$ is taken as a differential operator. (In that case ${\bf K}_0$ defines a Sobolev distance.) Examples of smoothness-related inverse prior covariances are the negative Laplacian ${\bf K}_0 = -\Delta$, which we have already met in Eq. (10), or operators with higher derivatives, like a ``Radial Basis Functions'' prior with pseudo-differential operator ${\bf K}_0 = \exp(-\sigma_{\rm RBF}^2\Delta/2)$.
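As a concrete sketch, the quadratic form $\frac{\lambda}{2}<\!v-v_0\vert{\bf K}_0\vert v-v_0\!>$ in Eq. (34) can be evaluated for a potential discretized on a grid, with ${\bf K}_0 = -\Delta$ approximated by finite differences. The discretization, grid, and test potentials below are illustrative choices, not taken from the paper:

```python
import numpy as np

# Hedged sketch: evaluate the log of the Gaussian-process prior (34),
# up to its normalization constant, for a potential v discretized on a
# uniform grid. K_0 = -Laplacian via finite differences (Dirichlet ends).

def negative_laplacian(n, dx):
    """Finite-difference matrix for K_0 = -Delta."""
    return (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / dx**2

def log_prior(v, v0, K0, lam):
    """-(lambda/2) <v - v0 | K_0 | v - v0>, i.e. log p_0(v) + const."""
    d = v - v0
    return -0.5 * lam * d @ K0 @ d

x = np.linspace(0.0, 1.0, 51)
K0 = negative_laplacian(len(x), x[1] - x[0])
v0 = np.zeros_like(x)                 # reference potential (template)
v_smooth = 0.1 * np.sin(np.pi * x)    # smooth deviation from v0
v_rough = 0.1 * np.random.default_rng(0).standard_normal(len(x))

# The smoothness prior penalizes the rough potential far more strongly.
print(log_prior(v_smooth, v0, K0, lam=1.0))
print(log_prior(v_rough, v0, K0, lam=1.0))
```

The rough potential has large finite differences, so its Sobolev-type distance from $v_0$, and hence its prior penalty, is orders of magnitude larger than that of the smooth one.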

Finally, we want to mention that the prior density itself can also be parameterized, making it more flexible. Parameters of the prior density, also known as hyperparameters, are in a Bayesian framework included as integration variables in Eq. (3) or, in maximum posterior approximation, in the maximization of Eq. (5) [22,26]. Hyperparameters allow one to transform the point-like maxima of Gaussian priors into submanifolds of optimal solutions. For a Gaussian process prior, for example, the mean or reference potential $v_0$ and the covariance ${\bf K}_0^{-1}/\lambda$ can thus be adapted to the data [32].
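As a minimal sketch of such an adaptation, consider treating the scale $\lambda$ of the prior (34) as a hyperparameter and maximizing the log prior density over $\lambda$ for fixed $v$. Since $\log p_0(v) = \frac{n}{2}\log\lambda - \frac{\lambda}{2}Q + {\rm const}$ with $Q = <\!v-v_0\vert{\bf K}_0\vert v-v_0\!>$ in $n$ dimensions, the maximizing value is $\lambda = n/Q$. The grid and test potential below are illustrative, and this joint maximization is only a stand-in for the full Bayesian treatment of hyperparameters:

```python
import numpy as np

# Hedged sketch: adapt the hyperparameter lambda of the Gaussian prior
# (34) by maximizing the (normalized) log prior for fixed v. The
# stationarity condition n/(2*lambda) - Q/2 = 0 gives lambda = n / Q.

def optimal_lambda(v, v0, K0):
    d = v - v0
    Q = d @ K0 @ d                    # <v - v0 | K_0 | v - v0>
    return len(v) / Q

n = 21
dx = 1.0 / (n - 1)
K0 = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / dx**2
x = np.linspace(0.0, 1.0, n)
v0 = np.zeros(n)                      # reference potential
v = 0.05 * np.sin(np.pi * x)          # illustrative deviation from v0

lam = optimal_lambda(v, v0, K0)
print(lam)
```

In the full Bayesian framework $\lambda$ would instead be integrated over; the closed-form maximizer here merely illustrates how a hyperparameter trades prior strength against the observed deviation from the template.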


Joerg_Lemm 2000-06-06