Unsupervised learning of depth estimation and visual odometry for sparse light field cameras

Published in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2021

Recommended citation: Digumarti, S. T., Daniel, J., Ravendran, A., Griffiths, R., & Dansereau, D. G. (2021, September). "Unsupervised learning of depth estimation and visual odometry for sparse light field cameras." In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). https://roboticimaging.org/Papers/digumarti2021unsupervised.pdf

Abstract

While an exciting diversity of new imaging devices is emerging that could dramatically improve robotic perception, the challenges of calibrating and interpreting these cameras have limited their uptake in the robotics community. In this work we generalise techniques from unsupervised learning to allow a robot to autonomously interpret new kinds of cameras. We consider emerging sparse light field (LF) cameras, which capture a subset of the 4D LF function describing the set of light rays passing through a plane. We introduce a generalised encoding of sparse LFs that allows unsupervised learning of odometry and depth. We demonstrate that the proposed approach outperforms monocular, stereo, and conventional techniques for dealing with 4D imagery, yielding more accurate odometry and depth maps and delivering these with metric scale. We anticipate that our technique will generalise to a broad class of LF and sparse LF cameras, and enable unsupervised recalibration for coping with shifts in camera behaviour over the lifetime of a robot. This work represents a first step toward streamlining the integration of new kinds of imaging devices in robotics applications.
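The abstract mentions that a sparse LF camera samples a subset of the 4D light field and that the views are combined into a generalised encoding for a learning pipeline. The paper's actual encoding is not reproduced here; the sketch below only illustrates one common way such an input might be assembled, stacking the captured sub-aperture views and their camera-plane coordinates along the channel axis. All names, shapes, and the triangular three-view layout are assumptions for illustration.

```python
import numpy as np

def encode_sparse_lf(views, positions):
    """Stack a sparse set of sub-aperture views into one network input.

    views:     (N, H, W, 3) array, one RGB image per captured (s, t) sample
    positions: (N, 2) array of the (s, t) coordinate of each view

    Returns an (H, W, 3*N + 2*N) tensor: the RGB channels of every view,
    followed by each view's (s, t) coordinates broadcast over the image,
    so a network can associate each view with its location on the plane.
    """
    n, h, w, _ = views.shape
    # Interleave views along the channel axis: pixel (i, j) holds the RGB
    # values of all N views at that location.
    rgb = views.transpose(1, 2, 0, 3).reshape(h, w, 3 * n)
    # Repeat the flat (s, t) coordinate list at every pixel.
    coords = np.broadcast_to(positions.reshape(1, 1, -1), (h, w, 2 * n))
    return np.concatenate([rgb, coords.astype(views.dtype)], axis=-1)

# Toy example: a 3-view sparse light field (e.g. a triangular camera array).
views = np.random.rand(3, 4, 4, 3).astype(np.float32)
positions = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]], dtype=np.float32)
encoded = encode_sparse_lf(views, positions)
print(encoded.shape)  # (4, 4, 15)
```

This channel-stacked form is only one plausible input layout; the paper's contribution is a generalised encoding designed to transfer across different sparse LF camera geometries.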