Appearance Change Prediction for Long-Term Autonomy in Changing Environments
Changing environments pose a serious problem to current robotic systems aiming at long-term operation. While place recognition systems perform reasonably well in static or low-dynamic environments, the severe appearance changes that occur between day and night, between different seasons, or under different local weather conditions remain a challenge.
We propose to learn to predict the changes in an environment. Our key insight is that the scene changes that occur are in part systematic, repeatable, and therefore predictable.
State-of-the-art approaches to place recognition attempt to directly match two scenes even if they have been observed under extremely different environmental conditions. This is error-prone and leads to poor recognition results. Instead, we propose to predict how the query scene (the winter image) would appear under the same environmental conditions as the database images (summer). This prediction process uses a dictionary that exploits the systematic nature of the seasonal changes and is learned from training data.
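To make this concrete, the sketch below shows where such a prediction step could sit in a simple place recognition pipeline. The descriptor and matching scheme (downsampled, normalised image patches compared by sum of squared differences) are illustrative assumptions rather than the matcher used in the paper, and predict_fn is a placeholder for the SP-ACP prediction described further down.

```python
import numpy as np
from skimage.transform import resize

def holistic_descriptor(img, size=(32, 32)):
    # A simple whole-image descriptor: downsampled, mean/std-normalised patch.
    # (For illustration only; the matcher used in the paper may differ.)
    patch = resize(img, size, anti_aliasing=True)
    return (patch - patch.mean()) / (patch.std() + 1e-8)

def recognize_place(winter_query_img, summer_database_imgs, predict_fn):
    # Instead of matching the winter query directly against the summer
    # database, first predict its appearance under summer conditions ...
    predicted = predict_fn(winter_query_img)
    q = holistic_descriptor(predicted)
    # ... and only then match against the database (smallest SSD wins).
    dists = [np.sum((q - holistic_descriptor(db)) ** 2)
             for db in summer_database_imgs]
    return int(np.argmin(dists))
```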
Superpixel-based Appearance Change Prediction (SP-ACP)
SP-ACP is a first implementation of this idea. It consists of a training phase and a prediction phase.
Step I: SP-ACP training
During training, the images are first segmented into superpixels and a descriptor is calculated for each superpixel. These descriptors are then clustered to obtain a vocabulary of visual words for each condition. In a final step, a dictionary that translates between the two vocabularies is learned. This is possible because pixel-accurate correspondences between the input images are known.
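The following is a rough Python sketch of this training phase. It assumes greyscale, pixel-aligned winter/summer image pairs, SLIC superpixels, a small resampled patch as superpixel descriptor, and k-means vocabularies; the concrete parameters (patch size, number of words, number of superpixels) are illustrative and not taken from the paper.

```python
import numpy as np
from skimage.segmentation import slic
from skimage.transform import resize
from sklearn.cluster import MiniBatchKMeans

PATCH = (8, 8)       # assumed superpixel descriptor size (not from the paper)
NUM_WORDS = 256      # assumed vocabulary size per condition (not from the paper)

def describe_segments(labels, img):
    """Describe every labelled segment by a small, resampled grey-value patch
    of its bounding box (an assumed descriptor; the paper's may differ)."""
    descs = []
    for s in np.unique(labels):
        ys, xs = np.nonzero(labels == s)
        patch = img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
        descs.append(resize(patch, PATCH, anti_aliasing=True).ravel())
    return np.asarray(descs)

def train_spacp(winter_imgs, summer_imgs):
    """Learn one visual vocabulary per condition and a winter-to-summer word
    dictionary from pixel-aligned greyscale image pairs."""
    w_descs, s_descs = [], []
    for w_img, s_img in zip(winter_imgs, summer_imgs):
        labels = slic(w_img, n_segments=500, start_label=0, channel_axis=None)
        # The image pair is pixel-accurately aligned, so the same superpixel
        # mask selects corresponding regions under both conditions.
        w_descs.append(describe_segments(labels, w_img))
        s_descs.append(describe_segments(labels, s_img))
    w_descs, s_descs = np.vstack(w_descs), np.vstack(s_descs)

    # Cluster the descriptors of each condition into its own vocabulary.
    w_vocab = MiniBatchKMeans(n_clusters=NUM_WORDS, random_state=0).fit(w_descs)
    s_vocab = MiniBatchKMeans(n_clusters=NUM_WORDS, random_state=0).fit(s_descs)

    # Dictionary: for every winter word, the summer word it co-occurs with
    # most often over the aligned superpixel pairs.
    counts = np.zeros((NUM_WORDS, NUM_WORDS))
    for w_word, s_word in zip(w_vocab.predict(w_descs), s_vocab.predict(s_descs)):
        counts[w_word, s_word] += 1
    return w_vocab, s_vocab, counts.argmax(axis=1)
```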
Step II: SP-ACP prediction
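Roughly speaking, the query image is segmented and described in the same way as during training; each superpixel is assigned to its closest word in the query-condition vocabulary, the learned dictionary translates that word into a word of the database-condition vocabulary, and the superpixel region is rendered with that word's appearance. The sketch below continues the training sketch above (it reuses slic, describe_segments and PATCH) and paints each superpixel with the mean patch of the predicted word; it is an approximation for illustration, not the exact procedure from the paper.

```python
def predict_summer_appearance(winter_img, w_vocab, s_vocab, dictionary):
    """Predict how a greyscale winter query image might look under summer
    conditions (rough sketch; reuses the helpers defined above)."""
    labels = slic(winter_img, n_segments=500, start_label=0, channel_axis=None)
    descs = describe_segments(labels, winter_img)
    prediction = np.zeros_like(winter_img, dtype=float)
    for s, desc in zip(np.unique(labels), descs):
        # Assign the winter superpixel to its closest winter word ...
        w_word = w_vocab.predict(desc[None, :])[0]
        # ... translate it into a summer word via the learned dictionary ...
        s_word = int(dictionary[w_word])
        word_patch = s_vocab.cluster_centers_[s_word].reshape(PATCH)
        # ... and paint the superpixel with that word's mean appearance.
        ys, xs = np.nonzero(labels == s)
        h, w = ys.max() - ys.min() + 1, xs.max() - xs.min() + 1
        filled = resize(word_patch, (h, w), anti_aliasing=True)
        prediction[ys, xs] = filled[ys - ys.min(), xs - xs.min()]
    return prediction
```

A predicted image obtained this way can then be handed to a matching pipeline such as the one sketched further above, in place of the raw winter query.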
SP-ACP results
The following videos accompanied our 2013 ECMR paper and demonstrate the prediction capabilities of the first version of the proposed system on the Nordland dataset. The original video footage was produced by NRKbeta and is available under a Creative Commons licence. For further results, please have a look at the publications at the bottom of this page.
Related Publications