Tuesday 28 April 2015

Building Location Models for Visual Place Recognition

This paper deals with the task of appearance-based mapping and place recognition. Previous work has generally defined the scope of a location either as a discrete pose or as a loosely defined sequence of poses, leading to problems with perceptual aliasing and path invariance, respectively. Here, we present a unified framework for defining, modelling and recognizing places in a way that is directly tied to the underlying structure of features in the environment. A covisibility map of the environment is maintained incrementally over time: visual landmarks form the nodes of a graph and are connected whenever they are seen together. When queried, relevant places are retrieved as clusters from this map, and a novel probabilistic observation model is used to evaluate place recognition. Because retrieval operates on the landmark covisibility structure, it adapts to a given query and inherently copes with trajectory variations. In addition, the chosen generative model is designed to be robust to observation errors, mapping errors, perceptual aliasing, and parameter sensitivity. Validation is provided through a variety of tests on real-world datasets, comparing the behaviour of the proposed approach to representative state-of-the-art methods (namely FAB-MAP and SeqSLAM).
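
To make the covisibility idea concrete, here is a minimal Python sketch of how such a map might be maintained and queried. This is not the authors' implementation: the class and method names are illustrative, landmarks are assumed to be identified by integer IDs, and the retrieval step is a plain graph expansion rather than the paper's query-adaptive clustering or probabilistic scoring.

```python
# Minimal sketch (not the paper's code) of an incrementally maintained
# covisibility map: visual landmarks are graph nodes, and two landmarks
# are connected whenever they are observed in the same image.
from collections import defaultdict
from itertools import combinations

class CovisibilityMap:
    def __init__(self):
        # adjacency[a][b] counts how often landmarks a and b were seen together
        self.adjacency = defaultdict(lambda: defaultdict(int))

    def add_observation(self, landmark_ids):
        """Update the graph with the set of landmarks detected in one image."""
        for a, b in combinations(set(landmark_ids), 2):
            self.adjacency[a][b] += 1
            self.adjacency[b][a] += 1

    def retrieve_place(self, query_landmarks, hops=1):
        """Return the cluster of landmarks covisible with the query,
        using a simple breadth-first expansion up to `hops` steps."""
        cluster = set(query_landmarks)
        frontier = set(query_landmarks)
        for _ in range(hops):
            frontier = {n for l in frontier for n in self.adjacency[l]} - cluster
            cluster |= frontier
        return cluster

# Hypothetical usage: each call corresponds to one image's detected landmarks.
cmap = CovisibilityMap()
cmap.add_observation([1, 2, 3])
cmap.add_observation([2, 3, 4])
print(cmap.retrieve_place([2]))   # -> {1, 2, 3, 4}
```

In the paper, such retrieved clusters would then be scored against the query image with the proposed generative observation model; the sketch above only illustrates the covisibility structure itself.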



from robot theory http://ift.tt/1JQPfoC
