ACML 2020 🇹🇭

A foreground detection algorithm for Time-of-Flight cameras adapted to dynamic integration time adjustment and multipath distortions

By Detong Chen

Abstract

Two scenarios often arise in the use of a Time-of-Flight (ToF) camera: one requires dynamic adjustment of the integration time to avoid overexposure, and the other involves multipath distortions. In both scenarios, the pixel values of the depth map and the intensity map change suddenly and greatly, which affects ToF-based applications that require foreground detection. Traditional foreground detection algorithms do not adapt well to these scenarios, because they are sensitive to sudden large changes in pixel values and to the manually chosen threshold on pixel-value differences. This paper therefore proposes a pixel-insensitive and threshold-free algorithm, an end-to-end model based on deep learning, to handle these scenarios. It takes two intensity maps captured by a ToF camera as input, where one intensity map serves as the background and the other as the contrast; their actual difference, i.e., the foreground, serves as the label. A deep network is then trained to detect the foreground from these inputs and labels. To learn this pattern, datasets are collected across various scenes with multiple ToF cameras, and the training data are enlarged by applying a series of random transformations to the foreground and introducing two-dimensional Gaussian noise. Experiments show that the new algorithm stably detects the foreground under different circumstances, including the two scenarios mentioned above.
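
The abstract does not give implementation details, so the snippet below is only a minimal sketch of the kind of setup it describes: a small PyTorch encoder-decoder that stacks the background and contrast intensity maps as two input channels and predicts a per-pixel foreground probability, plus a hypothetical augmentation step (random flip and two-dimensional Gaussian noise). All class names, layer sizes, and parameters are illustrative assumptions, not the authors' published model.

```python
# Illustrative sketch only -- the paper does not publish its architecture.
# A tiny encoder-decoder that takes the background and contrast intensity
# maps as two input channels and predicts a foreground probability map.
import torch
import torch.nn as nn


class ForegroundNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: downsample the stacked (background, contrast) pair.
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: upsample back to the input resolution, one mask channel.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, background, contrast):
        x = torch.cat([background, contrast], dim=1)   # (N, 2, H, W)
        return torch.sigmoid(self.decoder(self.encoder(x)))  # foreground probability


def augment(intensity, sigma=0.05):
    """Hypothetical augmentation: random horizontal flip plus 2-D Gaussian noise.
    The paper applies its transformations to the composited foreground; this
    simplified version perturbs the whole intensity map for illustration."""
    if torch.rand(1).item() < 0.5:
        intensity = torch.flip(intensity, dims=[-1])
    return intensity + sigma * torch.randn_like(intensity)


if __name__ == "__main__":
    net = ForegroundNet()
    bg = torch.rand(1, 1, 240, 320)           # background intensity map
    ct = augment(torch.rand(1, 1, 240, 320))  # contrast intensity map
    mask = net(bg, ct)                        # (1, 1, 240, 320), values in [0, 1]
    print(mask.shape)
```

In a training loop of this form, the predicted mask would be compared against the labeled foreground (the actual difference between the two intensity maps) with a per-pixel loss such as binary cross-entropy; the exact loss and training schedule are not stated in the abstract.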