
Appearance Change Prediction for Long-Term Autonomy in Changing Environments

Changing environments pose a serious problem for current robotic systems aiming at long-term operation. While place recognition systems perform reasonably well in static or low-dynamic environments, the severe appearance changes that occur between day and night, between seasons, or between different local weather conditions remain a challenge.

We propose to learn to predict the changes in an environment. Our key insight is that many of the scene changes that occur are systematic and repeatable, and therefore predictable.

State-of-the-art approaches to place recognition attempt to match two scenes directly, even if they were observed under extremely different environmental conditions. This is error-prone and leads to poor recognition results. Instead, we propose to predict how the query scene (the winter image) would appear under the same environmental conditions as the database images (summer). This prediction uses a dictionary that exploits the systematic nature of the seasonal changes and is learned from training data.
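
The following minimal Python sketch illustrates the idea (it is not the original implementation): the winter query is first translated into a predicted summer image and only then compared against the summer database. The function predict_appearance stands in for the learned SP-ACP model described below, and the holistic descriptor is a simple downsampled grayscale patch chosen purely for illustration.

```python
import numpy as np
from skimage.color import rgb2gray
from skimage.transform import resize

def holistic_descriptor(img, size=(32, 32)):
    """Whole-image descriptor: a normalised, downsampled grayscale patch."""
    small = resize(rgb2gray(img), size, anti_aliasing=True)
    return (small - small.mean()) / (small.std() + 1e-8)

def best_match(query_winter, summer_database, predict_appearance):
    """Index of the summer database image that best matches the winter query."""
    predicted_summer = predict_appearance(query_winter)  # SP-ACP prediction step
    q = holistic_descriptor(predicted_summer).ravel()
    scores = [float(q @ holistic_descriptor(db).ravel()) for db in summer_database]
    return int(np.argmax(scores))
```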

Superpixel-based Appearance Change Prediction (SP-ACP)

SP-ACP is a first implementation of this idea. It consists of a training phase and a prediction phase:

Step I: SP-ACP training

SP-ACP learning a dictionary between images under different environmental conditions (e.g. winter and summer).

During training, the images are first segmented into superpixels and a descriptor is computed for each superpixel. These descriptors are then clustered to obtain a vocabulary of visual words for each condition. In a final step, a dictionary that translates between the two vocabularies is learned. This is possible because the training images provide known pixel-accurate correspondences.
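
A rough sketch of such a training phase is given below. It is an illustrative simplification, not the original code: it uses SLIC superpixels from scikit-image, plain mean-colour descriptors, k-means vocabularies, and a co-occurrence table as the translation dictionary, and it assumes each winter/summer training pair is pixel-aligned so that a single segmentation can be reused for both images.

```python
import numpy as np
from skimage.segmentation import slic
from sklearn.cluster import KMeans

def superpixel_descriptors(img, segments):
    """One descriptor (here simply the mean RGB colour) per superpixel label."""
    return np.array([img[segments == s].mean(axis=0) for s in np.unique(segments)])

def train_spacp(winter_imgs, summer_imgs, n_words=100):
    """Learn per-condition vocabularies and a winter-to-summer word dictionary."""
    desc_w, desc_s = [], []
    for w_img, s_img in zip(winter_imgs, summer_imgs):
        segments = slic(w_img, n_segments=300)  # reuse segmentation: images are aligned
        desc_w.append(superpixel_descriptors(w_img, segments))
        desc_s.append(superpixel_descriptors(s_img, segments))
    desc_w, desc_s = np.vstack(desc_w), np.vstack(desc_s)

    vocab_w = KMeans(n_clusters=n_words, n_init=10).fit(desc_w)  # winter vocabulary
    vocab_s = KMeans(n_clusters=n_words, n_init=10).fit(desc_s)  # summer vocabulary

    # Translation dictionary: for every winter word, the summer word that
    # co-occurs with it most often across the aligned training superpixels.
    counts = np.zeros((n_words, n_words), dtype=int)
    np.add.at(counts, (vocab_w.labels_, vocab_s.labels_), 1)
    dictionary = counts.argmax(axis=1)
    return vocab_w, vocab_s, dictionary
```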

Step II: SP-ACP prediction

SP-ACP predicting the appearance of a query image under different environmental conditions: How would the current winter scene appear in summer?

To predict the summer appearance of a winter query image, the image is first segmented into superpixels and a descriptor is computed for each segment. Based on this descriptor, each superpixel is classified as one of the visual words from the winter vocabulary. This word-image representation is then translated into the vocabulary of the target condition (e.g. summer) using the dictionary learned during the training phase. The result is a synthesized image that predicts the appearance of the winter query scene in summer.
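
The prediction step can be sketched in the same simplified setting as the training code above (again an illustration, not the original implementation): each superpixel of the winter query is assigned its nearest winter word, translated to a summer word through the learned dictionary, and rendered with that summer word's cluster centre colour.

```python
import numpy as np
from skimage.segmentation import slic

def predict_summer_appearance(winter_img, vocab_w, vocab_s, dictionary):
    """Synthesize the predicted summer appearance of a winter query image."""
    segments = slic(winter_img, n_segments=300)
    predicted = np.zeros(winter_img.shape, dtype=float)
    for s in np.unique(segments):
        mask = segments == s
        desc = winter_img[mask].mean(axis=0, keepdims=True)      # superpixel descriptor
        winter_word = vocab_w.predict(desc)[0]                   # classify as winter word
        summer_word = dictionary[winter_word]                    # translate via dictionary
        predicted[mask] = vocab_s.cluster_centers_[summer_word]  # paint summer appearance
    return predicted
```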

SP-ACP results

The following videos accompanied our 2013 ECMR paper and demonstrate the prediction capabilities of the first version of the proposed system on the Nordland dataset. The original video footage was produced by NRKbeta and is available under a Creative Commons licence. For further results, please see the publications at the bottom of this page.

Related Publications
