
**J. C. Lemm**

**February 2, 2000**

Institut für Theoretische Physik I, Universität Münster, 48149 Münster, Germany

Using a new Bayesian method
for solving inverse quantum problems,
potentials of quantum systems
are reconstructed from coordinate measurements in
non-stationary states.
The approach is based on two basic inputs:
1. a likelihood model,
providing the probabilistic description of the measurement process
as given by the axioms of quantum mechanics, and
2. additional *a priori* information
implemented in the form of stochastic processes over potentials.

PACS numbers: 03.65.-w, 02.50.Rj, 02.50.Wp

The first step to be done when applying quantum mechanics
to a real-world system is the reconstruction of its Hamiltonian
from observational data.
Such a reconstruction, also known as an inverse problem,
constitutes a typical example of empirical learning.
Whereas the determination of potentials
from spectral and from scattering data
has been studied in much detail in
inverse spectral and inverse scattering theory
[1,2],
this Paper describes
the reconstruction of potentials
by measuring particle positions in coordinate space
for finite quantum systems in *time-dependent* states.
The presented method can easily be generalized to
other forms of observational data.

In recent years much effort has been devoted
to many other practical empirical learning problems,
including, just to name a few,
prediction of financial time series,
medical diagnosis, and image or speech recognition.
This has also led to a variety of new learning algorithms,
which should in principle also be applicable
to inverse quantum problems.
In particular, this Paper shows how the
*Bayesian* framework [3]
can be applied to solve
problems of inverse time-dependent quantum mechanics (ITDQ).
The presented method generalizes a recently introduced approach
for stationary quantum systems
[4,5].
Compared to stationary inverse problems,
the observational data in time-dependent problems
are related more indirectly to the potential,
which in general makes such problems more difficult to solve.

Specifically, we will study the following type of observational data: Preparing a particle in an eigenstate of the position operator with coordinates $x_0$ at time $t_0$, we let this state evolve in time according to the rules of quantum mechanics and measure its new position at time $t_1$, finding a value $x_1$. Continuing from this measured position $x_1$, we measure the particle position again at time $t_2$, and repeat this procedure until $n$ data points at times $t_1,\ldots,t_n$ have been collected. We thus end up with observational data of the form $D = \{(x_i, t_i)\,|\,0 \le i \le n\}$, where $x_i$ is the result of the $i$-th coordinate measurement, $\Delta t_i = t_i - t_{i-1}$ the time interval between two subsequent measurements, and $x_{i-1}$ the coordinates of the previous observation (or preparation) at time $t_{i-1}$.

We will discuss in particular systems with time-independent Hamiltonians of the form $H = T + V$, consisting of a standard kinetic energy term $T$ and a local potential $V(x,x') = \delta(x-x')\,v(x)$, with $x$ denoting the position of the particle. In that case, the aim is the reconstruction of the function $v(x)$ from observational data $D$. (The restriction to local potentials simplifies the numerical calculations. Nonlocal Hamiltonians can be reconstructed similarly.)
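To make this data model concrete, the measurement chain can be simulated on a coordinate lattice. The following sketch is purely illustrative (lattice size, spacing, potential, and time step are assumptions, not values from the paper): it discretizes $H = T + V$ with a finite-difference kinetic term and samples successive positions from the transition probabilities $|\langle x_i|e^{-iH\Delta t}|x_{i-1}\rangle|^2$.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# Coordinate lattice (illustrative values)
N, dx = 50, 0.2
x = dx * np.arange(N)

# Discretized Hamiltonian H = T + V: finite-difference Laplacian + local potential
T = -0.5 * (np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
            - 2.0 * np.eye(N)) / dx**2
v_true = 0.5 * (x - x.mean())**2          # example local potential v(x)
H = T + np.diag(v_true)

# Transition matrix p(x_i | x_{i-1}) = |<x_i| exp(-i H dt) |x_{i-1}>|^2
dt = 0.1
U = expm(-1j * H * dt)
P = np.abs(U)**2                          # columns sum to 1, since U is unitary

# Repeated position measurements: prepare at x_0, evolve, measure, restart
i0 = N // 2                               # initial preparation site
data = [i0]
for _ in range(20):
    data.append(rng.choice(N, p=P[:, data[-1]]))
```

Because $U$ is unitary, each column of `P` is a normalized probability distribution over the next measured position, so the chain `data` is exactly the kind of observational record $D$ described above.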

Setting up a Bayesian model
requires the definition of two probabilities:
1. the probability $p(D|v)$
to measure data $D$ given potential $v$,
which,
for $D$ considered fixed,
is also known as the *likelihood* of $v$,
and 2. a prior probability $p(v)$
implementing available *a priori* information
concerning the potential to be reconstructed.

Referring to a *maximum a posteriori approximation* (MAP),
we understand those potentials $v$
to be solutions of the
reconstruction problem
which maximize $p(v|D)$, i.e., the
*posterior* probability of $v$ given all available data $D$.
The basic relation is then Bayes' theorem,
according to which
$p(v|D) \propto p(D|v)\, p(v)$.

One possibility is to choose a parametric ansatz for the potential $v$. In that case, an additional prior term is often not included (so the MAP becomes a maximum likelihood approximation). In the following, we concentrate on nonparametric approaches, which are less restrictive than their parametric counterparts. Their large flexibility, however, makes it essential to include (nonuniform) priors. Corresponding nonparametric priors are formulated explicitly in terms of the function $v(x)$ [6]. Indeed, nonparametric priors are well known from applications to regression [7], classification [8], general density estimation [9], and stationary inverse quantum problems [4,5]. It is the likelihood model, discussed next, which is specific to ITDQ.

According to the axioms of quantum mechanics
the probability that a particle is found at
position $x_i$ at time $t_i$,
provided the particle has been at $x_{i-1}$ at time $t_{i-1}$,
is given by

$$p(x_i|x_{i-1}, v) = \big|\langle x_i|\, U(t_i, t_{i-1})\, |x_{i-1}\rangle\big|^2, \qquad (1)$$

where $\langle x_i| U |x_{i-1}\rangle$ are matrix elements of the time evolution operator

$$U(t_i, t_{i-1}) = e^{-i (t_i - t_{i-1}) H} \qquad (2)$$

(in units with $\hbar = 1$). Expanding in eigenfunctions $\phi_\alpha$ of $H$, with $H\phi_\alpha = E_\alpha \phi_\alpha$,

$$\langle x_i|\, U(t_i, t_{i-1})\, |x_{i-1}\rangle = \sum_\alpha \phi_\alpha(x_i)\, \phi_\alpha^*(x_{i-1})\, e^{-i E_\alpha \Delta t_i}, \qquad (3)$$

so that the likelihood becomes

$$p(x_i|x_{i-1}, v) = \Big| \sum_\alpha \phi_\alpha(x_i)\, \phi_\alpha^*(x_{i-1})\, e^{-i E_\alpha \Delta t_i} \Big|^2. \qquad (4)$$

Clearly, it is straightforward to modify (1) for measuring observables different from the particle position. It is also interesting to note that the transition probabilities (1) define a Markoff process with $p_i(x_i) = \int\! dx_{i-1}\, p(x_i|x_{i-1}, v)\, p_{i-1}(x_{i-1})$. For real eigenfunctions $\phi_\alpha$, i.e., for a real Hamiltonian with real boundary conditions, they obey the relation $p(x|x', v) = p(x'|x, v)$. It follows that the detailed balance condition, $p(x|x', v)\, p_{\rm st}(x') = p(x'|x, v)\, p_{\rm st}(x)$, is fulfilled for a uniform $p_{\rm st}$, which therefore represents the stationary state of the Markoff process of repeated position measurements.
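The symmetry and detailed-balance properties just stated are easy to verify numerically for a real lattice Hamiltonian; a minimal sketch (discretization and potential are illustrative assumptions):

```python
import numpy as np
from scipy.linalg import expm

# Real symmetric lattice Hamiltonian (real potential, real boundary conditions)
N, dx, dt = 40, 0.25, 0.2
x = dx * np.arange(N)
T = -0.5 * (np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
            - 2 * np.eye(N)) / dx**2
H = T + np.diag(np.sin(x))                # any real local potential works here

P = np.abs(expm(-1j * dt * H))**2         # transition probabilities, cf. Eq. (1)

# Real H (hence real eigenfunctions) implies a symmetric transition matrix ...
assert np.allclose(P, P.T)
# ... so detailed balance holds with a uniform stationary distribution:
p_st = np.full(N, 1.0 / N)
assert np.allclose(P * p_st, (P * p_st).T)
```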

Having defined the likelihood model of ITDQ,
in the next step a prior for $v$ has to be chosen.
A convenient
nonparametric prior is a Gaussian

$$p(v) \propto \exp\Big( -\frac{\lambda}{2}\, \big\langle v - v_0 \,\big|\, {\bf K}_0 \,\big|\, v - v_0 \big\rangle \Big), \qquad (5)$$

with mean $v_0$, inverse covariance $\lambda {\bf K}_0$, and regularization parameter $\lambda$.
If available, it is useful to include
some information about the ground state energy $E_0$,
which helps to determine the depth of the potential.
This can, for example,
be a noisy measurement $E_D$ of the ground state energy
which, assuming Gaussian noise with variance $\sigma_E^2$,
is implemented by

$$p(E_D|v) \propto \exp\Big( -\frac{\big(E_D - E_0(v)\big)^2}{2\sigma_E^2} \Big). \qquad (6)$$

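In discretized form the Gaussian prior and the energy term reduce to simple quadratic and scalar expressions. The sketch below uses an illustrative squared-first-difference operator as smoothness-related inverse covariance; the concrete ${\bf K}_0$, the hyperparameter values, and all names are assumptions, not the paper's choices:

```python
import numpy as np

N, dx = 50, 0.2
lam = 1.0                                   # regularization parameter lambda

# Illustrative smoothness-related inverse covariance K0: a discrete
# first-difference operator squared, penalizing differences between
# neighbouring potential values
Dop = (np.diag(np.ones(N - 1), 1) - np.eye(N))[:-1] / dx
K0 = Dop.T @ Dop

v0 = np.zeros(N)                            # prior mean (reference potential)

def neg_log_prior(v):
    """(lambda/2) <v - v0 | K0 | v - v0>, cf. Eq. (5), up to a constant."""
    d = v - v0
    return 0.5 * lam * d @ K0 @ d

def neg_log_energy_term(v, E_data, sigma_E=0.1):
    """Gaussian noisy measurement of the ground state energy, cf. Eq. (6)."""
    T = -0.5 * (np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
                - 2 * np.eye(N)) / dx**2
    E0 = np.linalg.eigvalsh(T + np.diag(v))[0]
    return (E_data - E0)**2 / (2 * sigma_E**2)

# A constant shift costs nothing; a linear ramp is penalized only through its slope
v_ramp = np.linspace(0.0, 1.0, N)
cost = neg_log_prior(v_ramp)
```

With this difference-operator choice the prior is invariant under constant shifts of the potential, which is precisely why the energy term (6) is useful for fixing the depth of the potential.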
Combining
(5)
and
(6)
with (1) for $n$
repeated coordinate measurements
starting from an initial position $x_0$,
we obtain for the posterior

$$p(v|D) \propto p(v)\, p(E_D|v) \prod_{i=1}^{n} p(x_i|x_{i-1}, v), \qquad (7)$$

with stationarity equation

$$0 = \frac{\delta \log p(v|D)}{\delta v(x)}. \qquad (8)$$
The functional derivative $\delta p(x_i|x_{i-1},v)/\delta v(x)$ can, according to Eq. (4), be obtained from the derivatives of eigenfunctions and eigenvalues. The still required $\delta E_\alpha/\delta v(x)$ and $\delta \phi_\alpha/\delta v(x)$ can then be found by calculating the functional derivative of the eigenvalue equation $H\phi_\alpha = E_\alpha \phi_\alpha$. Using first-order perturbation theory for nondegenerate, normalized $\phi_\alpha$,

$$\frac{\delta E_\alpha}{\delta v(x)} = |\phi_\alpha(x)|^2, \qquad \frac{\delta \phi_\alpha(x')}{\delta v(x)} = \sum_{\gamma \neq \alpha} \frac{\phi_\gamma^*(x)\, \phi_\alpha(x)}{E_\alpha - E_\gamma}\, \phi_\gamma(x'). \qquad (12)$$

Collecting the results gives the functional derivative of the posterior (7).

Inserting Eq. (13) into Eq. (10) and Eq. (15) into Eq. (11), a MAP solution for the potential can be found by iterating the stationarity equation (8) numerically on a lattice. Clearly, such a straightforward discretization can only be expected to work for a low-dimensional variable $x$. Higher-dimensional systems usually require additional approximations [5].
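The central perturbation-theory ingredient, $\delta E_\alpha/\delta v(x) = |\phi_\alpha(x)|^2$, has a direct lattice analogue, $\partial E_0/\partial v_j = |\phi_0(x_j)|^2$ for a normalized eigenvector, which can be checked against a finite-difference derivative (illustrative sketch):

```python
import numpy as np

# Lattice Hamiltonian H = T + diag(v)
N, dx = 40, 0.25
x = dx * np.arange(N)
T = -0.5 * (np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
            - 2 * np.eye(N)) / dx**2
v = 0.3 * (x - x.mean())**2

def ground_energy(v):
    return np.linalg.eigvalsh(T + np.diag(v))[0]

E, phi = np.linalg.eigh(T + np.diag(v))
phi0 = phi[:, 0]                           # normalized ground state, sum(phi0**2) = 1

# Lattice analogue of the functional derivative: dE0/dv_j = phi0_j**2
grad_analytic = phi0**2

# Finite-difference check at every lattice site
eps = 1e-6
grad_fd = np.array([
    (ground_energy(v + eps * np.eye(N)[j]) - ground_energy(v - eps * np.eye(N)[j]))
    / (2 * eps)
    for j in range(N)
])
assert np.allclose(grad_fd, grad_analytic, atol=1e-5)
```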

As the next step,
we want to check the numerical feasibility of a nonparametric reconstruction
of the potential $v$
for a one-dimensional quantum system.
For that purpose, we choose
a system with a given true potential $v_{\rm true}(x)$.

Besides a noisy energy measurement of the form (6)
we include a Gaussian prior (5)
with a smoothness-related inverse covariance ${\bf K}_0$.

The prior is optimized with respect to its hyperparameters by maximizing the ``extended likelihood''. Finally, the stationarity equation (8) is solved by iterating according to

$$v^{(r+1)} = v^{(r)} + \eta\, {\bf A}^{-1}\, \frac{\delta \log p(v|D)}{\delta v}\bigg|_{v = v^{(r)}}, \qquad (19)$$

with step width $\eta$ and a positive definite iteration matrix ${\bf A}$.

The resulting nonparametric ITDQ solution $v_{\rm rec}$ (see Fig. 3) is a reasonable reconstruction of $v_{\rm true}$, and clearly better than the best parametric approximation. It is only in the flat area near the right border where, due to missing and unrepresentative data, the reconstruction differs significantly from the true potential.

Fig. 4 compares the sum over empirical transition probabilities, as derived from the observational data, with the corresponding true and reconstructed transition probabilities. Due to the summation over data points with different $x_{i-1}$, the quantities shown in Fig. 4 do not present the complete information which is available to the algorithm. Hence, Fig. 5 depicts the corresponding quantities for a fixed $x_{i-1}$. In particular, Fig. 5 compares the reconstructed transition probability (1) with the corresponding empirical and true transition probabilities for a particle having been at time $t_{i-1}$ at position $x_{i-1}=1$. The ITDQ algorithm returns an approximation for all such transition probabilities.

Figs. 4 and 5 show that the reconstructed potential $v_{\rm rec}$ tends to produce a better approximation of the empirical probabilities than the true potential $v_{\rm true}$. Indeed, the error on the data, i.e., the negative log-likelihood $-\sum_i \log p(x_i|x_{i-1}, v)$, being a canonical error measure in density estimation, is smaller for $v_{\rm rec}$ than for $v_{\rm true}$. A smaller $\lambda$, i.e., a lower influence of the prior, produces a still smaller data error. At the same time, however, the reconstructed potential becomes more wiggly for smaller $\lambda$, the symptom of the well known effect of ``overfitting''. The (true) generalization error, i.e., the expectation of the negative log-likelihood under the true model [with uniform $p(x_{i-1})$], on the other hand, can never be smaller for the reconstructed potential than for $v_{\rm true}$. As is typical for most empirical learning problems, the generalization error shows a minimum as a function of $\lambda$. It is this minimum which gives the optimal value for $\lambda$. Knowledge of the true model allows us in our case to calculate the generalization error exactly. If, as usual, the true model is not known, classical cross-validation [6] and bootstrap [11] techniques can be used to approximate the generalization error as a function of $\lambda$ empirically.
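With the true model known, both error measures can be evaluated exactly on a lattice. The following sketch (all names and values are illustrative assumptions) computes the negative log-likelihood on sampled data and the expected negative log-probability under the true model with uniform $p(x_{i-1})$; by Gibbs' inequality the latter is minimal for the true potential itself:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
N, dx, dt = 20, 0.3, 0.3
x = dx * np.arange(N)
T = -0.5 * (np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
            - 2 * np.eye(N)) / dx**2
v_true = 0.1 * (x - x.mean())**2
P_true = np.abs(expm(-1j * dt * (T + np.diag(v_true))))**2

def data_error(v, data):
    """Negative log-likelihood -sum_i log p(x_i|x_{i-1}, v) on given data."""
    P = np.abs(expm(-1j * dt * (T + np.diag(v))))**2
    return -sum(np.log(P[data[i], data[i - 1]]) for i in range(1, len(data)))

def gen_error(v):
    """Expected negative log p(x|x', v) under the true model, uniform p(x')."""
    P = np.abs(expm(-1j * dt * (T + np.diag(v))))**2
    return -(P_true * np.log(P)).sum() / N

# Sampled chain of repeated position measurements
data = [N // 2]
for _ in range(200):
    data.append(rng.choice(N, p=P_true[:, data[-1]]))

# By Gibbs' inequality the generalization error is minimal for v_true itself
v_wiggly = v_true + 0.5 * np.sin(7 * x)
assert gen_error(v_true) <= gen_error(v_wiggly)
```

Note also that constant shifts of the potential leave all transition probabilities, and hence both error measures, unchanged, which again shows why energy information of the form (6) is needed to fix the depth of the potential.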

As an alternative to optimizing $\lambda$ or other hyperparameters, one can integrate over them [10]. Similarly, studying the feasibility of a Bayesian Monte Carlo approach, in contrast to the MAP approach of this paper, would certainly be interesting.

In summary, this Paper has presented a new method to solve inverse problems for time-dependent quantum systems. The approach, based on a Bayesian framework, is able to handle quite general types of observational data. Numerical calculations proved to be feasible for a one-dimensional model.