A Semi-Automatic Method for Resolving Occlusions in Augmented Reality

Computing the viewpoints

Our approach to motion computation takes advantage of 3D knowledge of the scene as well as 2D/2D correspondences over time. The viewpoints are recovered by minimizing a reprojection error:

$$\min_{p} f(p)$$
A complete explanation is given here.

To understand how the uncertainty on the viewpoints is estimated, it suffices to know that the viewpoints $p$ are recovered by minimizing a function $f(p)$.
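As an illustration of this minimization step, here is a minimal sketch in Python. The pinhole camera model, the synthetic 3D points, the intrinsic parameters and the use of `scipy.optimize.least_squares` are all assumptions standing in for the paper's actual cost function, not the method itself:

```python
# Hypothetical sketch: recovering a viewpoint p = (rx, ry, rz, tx, ty, tz)
# by minimizing a reprojection error f(p) over known 3D points.
# Camera model, intrinsics and data are assumptions for illustration.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(p, X, fx=800.0, fy=800.0, cx=320.0, cy=240.0):
    """Project 3D points X (N,3) into the image under viewpoint p (6,)."""
    R = Rotation.from_rotvec(p[:3]).as_matrix()
    Xc = X @ R.T + p[3:]                 # points in the camera frame
    u = fx * Xc[:, 0] / Xc[:, 2] + cx
    v = fy * Xc[:, 1] / Xc[:, 2] + cy
    return np.column_stack([u, v])

# Synthetic ground truth: a known viewpoint and its 2D observations
rng = np.random.default_rng(0)
X = rng.uniform([-1, -1, 4], [1, 1, 6], size=(20, 3))  # 3D model points
p_true = np.array([0.1, -0.05, 0.02, 0.3, -0.1, 0.5])
m = project(p_true, X)                   # observed 2D points

def residuals(p):
    # f(p) is the sum of squared entries of this residual vector
    return (project(p, X) - m).ravel()

p_star = least_squares(residuals, x0=np.zeros(6)).x
```

On this noise-free synthetic data, the optimizer recovers `p_true` up to numerical precision; with real image measurements, `p_star` is only the minimizer of the reprojection error, which is exactly why the uncertainty analysis below is needed.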

Estimating the uncertainty on the viewpoints

It is reasonable to assume that values of $f$ almost as low as $f(p^*)$ would satisfy us as much as $f(p^*)$ itself. Indeed, we generally observe that slightly different viewpoints give similar reprojection errors. This gives rise to an $\epsilon$-indifference region in $p$ space described by:

$$f(p) - f(p^*) \leq \epsilon$$

In a sufficiently small neighborhood of $p^*$, we may approximate $f$ by means of the first few terms of its Taylor expansion:

$$f(p) \approx f(p^*) + \nabla f(p^*)^\top (p - p^*) + \frac{1}{2}(p - p^*)^\top H(p^*)\,(p - p^*)$$

where $H(p^*)$ is the Hessian of $f$ computed at $p = p^*$.
As $p^*$ is the minimum of $f$, the gradient vanishes at the optimum, so:

$$f(p) - f(p^*) \approx \frac{1}{2}(p - p^*)^\top H(p^*)\,(p - p^*)$$
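This quadratic approximation can be checked numerically. The sketch below substitutes a toy 6-parameter function for the real reprojection error (an assumption made purely for illustration) and builds the Hessian by central finite differences:

```python
# Minimal sketch: near the minimum p*, f(p) - f(p*) is well approximated
# by (1/2)(p - p*)^T H(p*) (p - p*), since the gradient vanishes at p*.
# The function f below is a toy stand-in, not the paper's cost.
import numpy as np

def f(p):
    # Toy stand-in for the reprojection error, minimized at p* = 0
    A = np.diag([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
    return p @ A @ p + 0.1 * np.sum(p**4)

def hessian(f, p, h=1e-4):
    """Central finite-difference Hessian of f at p."""
    n = len(p)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i, e_j = np.eye(n)[i] * h, np.eye(n)[j] * h
            H[i, j] = (f(p + e_i + e_j) - f(p + e_i - e_j)
                       - f(p - e_i + e_j) + f(p - e_i - e_j)) / (4 * h**2)
    return H

p_star = np.zeros(6)
H = hessian(f, p_star)
d = 1e-2 * np.ones(6)           # small displacement away from p*
quad = 0.5 * d @ H @ d          # quadratic prediction of f(p*+d) - f(p*)
```

For this toy function, `quad` matches `f(p_star + d) - f(p_star)` to within the size of the neglected higher-order terms, which is precisely the regime in which the indifference region derived next is valid.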

The indifference region is then defined by:

$$\frac{1}{2}(p - p^*)^\top H(p^*)\,(p - p^*) \leq \epsilon$$

which is the equation of a 6-dimensional ellipsoid. The next figure shows the indifference regions for the translation parameters over a sequence:
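One convenient way to extract such an ellipsoid from the Hessian is its eigendecomposition: along the eigenvector $v_i$ with eigenvalue $\lambda_i$, the semi-axis length is $\sqrt{2\epsilon/\lambda_i}$. In the sketch below both the Hessian and $\epsilon$ are made-up example values, not quantities computed from real data:

```python
# Hypothetical sketch: the set (1/2)(p - p*)^T H (p - p*) <= eps is an
# ellipsoid centred at p*; the eigendecomposition of H gives its axes.
import numpy as np

eps = 0.5
H = np.diag([2.0, 8.0, 18.0, 32.0, 50.0, 72.0])   # assumed Hessian at p*

# On the boundary along eigenvector v_i: (1/2) * lam_i * a_i**2 = eps,
# hence semi-axis length a_i = sqrt(2 * eps / lam_i)
lam, V = np.linalg.eigh(H)      # lam sorted in ascending order
axes = np.sqrt(2.0 * eps / lam) # longest axis = least-constrained direction
```

The longest axis corresponds to the smallest eigenvalue, i.e. the viewpoint parameter (or combination of parameters) that the image measurements constrain least, which is what the figure's per-parameter indifference intervals visualize.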