Visual odometry is a key technology for robust and accurate navigation of unmanned aerial vehicles in a number of low-altitude applications (<120 m), particularly in environments where access to a global positioning system is available but not guaranteed. Navigating via vision alone reduces dependence on a global positioning system and other global navigation satellite systems, enhancing navigation robustness even in the presence of jamming, spoofing, or long dropouts. To date, however, most demonstrations of visual odometry are in close proximity to the ground or other structures, and are often implemented as a monocular camera combined with inertial sensing, rather than vision alone, to account for scale drift. Stereo visual odometry has received little attention for applications beyond 30 m altitude due to the generally poor performance of stereo rigs at these extremely small baseline-to-depth ratios, otherwise termed long-range stereo.

This paper demonstrates stereo visual pose estimation at altitudes of up to 120 m above ground level on a small fixed-wing unmanned aerial vehicle by adapting the traditional stereo visual odometry paradigm to explicitly account for inaccurate triangulation and poorly observed scale from the stereo baseline. In addition, issues related to long-range stereo such as biased sensing are investigated to justify the approach, and a novel bundle adjustment algorithm is presented that is capable of handling vibration-induced structural deformation between the cameras. This is achieved by continually optimizing the stereo transform within a set of inequality bounds.

Results are presented demonstrating the algorithm on field-gathered data from a 2 m wingspan fixed-wing unmanned aerial vehicle flying at 30-120 m altitude over a 6.5 km trajectory.
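To see why long-range stereo is hard, note that for a rectified stereo pair depth is Z = fB/d, so a fixed disparity error grows quadratically in depth when mapped to a depth error. The sketch below illustrates this with hypothetical numbers (a 1 m baseline, 800 px focal length, and 0.3 px matching noise are assumptions, not values from the paper):

```python
def depth_std(depth_m, baseline_m, focal_px, disp_noise_px):
    """First-order depth uncertainty for a rectified stereo pair.

    Depth Z = f*B/d, so a disparity error sigma_d maps to
    sigma_Z ~= (Z**2 / (f * B)) * sigma_d, i.e. quadratic in depth.
    """
    return (depth_m ** 2 / (focal_px * baseline_m)) * disp_noise_px

# Same rig at close range vs. the 30-120 m altitudes considered here.
for z in (5.0, 30.0, 120.0):
    print(f"Z = {z:6.1f} m -> sigma_Z ~ {depth_std(z, 1.0, 800.0, 0.3):.2f} m")
```

With these assumed numbers the depth uncertainty grows from under a centimetre at 5 m to several metres at 120 m, which is why triangulated landmarks at altitude constrain scale so weakly.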
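The bounded stereo-transform idea can be sketched in one dimension: rather than trusting a fixed calibrated baseline, re-estimate it from the observations but clamp the result to inequality bounds around the nominal calibration. The scalar-baseline model, the 1 m nominal value, and the ±2% flex bound below are illustrative assumptions; the paper optimizes the full stereo transform inside bundle adjustment.

```python
import random

random.seed(0)
f_px, b_nominal = 800.0, 1.0            # focal length (px), calibrated baseline (m)
b_true = 1.012                          # vibration-deformed "true" baseline (assumed)
depths = [random.uniform(30.0, 120.0) for _ in range(50)]
disps = [f_px * b_true / z + random.gauss(0.0, 0.1) for z in depths]

# The model d = (f/Z) * b is linear in b, so the least-squares
# estimate is closed-form: b* = sum(d * f/Z) / sum((f/Z)^2).
a = [f_px / z for z in depths]
b_ls = sum(d * ai for d, ai in zip(disps, a)) / sum(ai * ai for ai in a)

# Enforce the inequality bounds by projecting onto [b_lo, b_hi].
flex = 0.02 * b_nominal                 # allow +/-2% structural flex (assumed)
b_hat = min(max(b_ls, b_nominal - flex), b_nominal + flex)
print(f"unconstrained {b_ls:.4f} m, bounded estimate {b_hat:.4f} m")
```

The bounds keep the estimated transform from drifting arbitrarily when the observations constrain it weakly, while still absorbing small structural deformation between the cameras.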
from robot theory http://ift.tt/1Gl9fkH