Tutorial 2: Throwing Computer Vision Overboard: How to Handle Underwater Light Attenuation and Refraction

Anne Jordt, University of Kiel, Germany
Kevin Köser, GEOMAR Helmholtz Centre for Ocean Research, Germany

Besides professional survey cameras mounted on autonomous or remotely operated underwater vehicles, there is now a huge variety of DSLRs and action cameras for divers, and even waterproof cell phones, for taking photos and videos underwater. At first sight, it might seem that putting a camera in a waterproof housing (with a glass port to look outside) would already allow it to be used "in the usual way", e.g. for measuring, mapping, and reconstruction.

However, several challenges have to be addressed. First, due to the different optical densities of air, glass, and water, light rays are refracted at the interface (the "port"), which violates the pinhole camera model. In addition, the port can act as a lens itself and change the geometry of the focal surface. Second, water is transparent mainly in the blue/green part of the visible spectrum, while red, infrared, and virtually all other electromagnetic radiation is strongly attenuated or blocked completely, leading to a distance-dependent color cast. On top of that, backscattering and forward scattering of light can degrade image quality if not modeled properly.
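To make the refraction issue concrete, the minimal sketch below traces a single pinhole ray through a flat port using Snell's law. The refractive indices, port orientation, and ray angle are illustrative assumptions, not parameters from the tutorial:

```python
import numpy as np

# Approximate refractive indices (assumed values for illustration).
N_AIR, N_GLASS, N_WATER = 1.0, 1.49, 1.33

def refract(d, n, n1, n2):
    """Refract unit direction d at a surface with unit normal n
    (pointing toward the incoming ray), following Snell's law."""
    cos_i = -np.dot(n, d)
    sin2_t = (n1 / n2) ** 2 * (1.0 - cos_i ** 2)
    if sin2_t > 1.0:
        return None  # total internal reflection, no transmitted ray
    cos_t = np.sqrt(1.0 - sin2_t)
    return (n1 / n2) * d + (n1 / n2 * cos_i - cos_t) * n

# A pinhole ray leaving the camera at 30 degrees to the optical axis
# hits a flat port whose normal is the z-axis.
d = np.array([np.sin(np.radians(30)), 0.0, np.cos(np.radians(30))])
normal = np.array([0.0, 0.0, -1.0])  # points back toward the camera

d_glass = refract(d, normal, N_AIR, N_GLASS)
d_water = refract(d_glass, normal, N_GLASS, N_WATER)
print(d, d_water)  # the in-water ray is bent to ~22 degrees
```

The bent in-water ray no longer passes through a single center of projection, which is exactly why the pinhole model breaks down for flat-port housings.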
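The distance-dependent color cast can likewise be sketched with a simple Beer-Lambert attenuation model; the per-channel coefficients below are rough illustrative assumptions, not measured water properties:

```python
import numpy as np

# Per-channel attenuation coefficients in 1/m (assumed values for
# illustration): red is absorbed far faster than green or blue.
BETA_RGB = np.array([0.40, 0.07, 0.04])

def attenuate(color, distance_m):
    """Beer-Lambert-style attenuation of an RGB color over a water path."""
    return color * np.exp(-BETA_RGB * distance_m)

white = np.array([1.0, 1.0, 1.0])
for dist in (1.0, 5.0, 10.0):
    print(dist, attenuate(white, dist))
# With these coefficients, after 10 m the red channel drops to about 2%
# while blue retains about 67% -- the distance-dependent color cast.
```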

In this tutorial we provide an overview of these challenges and review approaches to solving them, with a particular focus on geometric problems related to imaging models, single-camera structure from motion, and mapping. We will start from the basics, so everybody should be able to follow; for the geometric parts, however, attendees should have a basic understanding of classical multiple view geometry (standard camera calibration, projection matrices, epipolar geometry, and ideally structure from motion).