大工至善|大学至真分享 http://blog.sciencenet.cn/u/lcj2212916

Blog Post

[Repost] [UAV] [2012.01] Vision-Based Navigation for Micro Helicopters

Viewed 175 times · 2020-9-25 16:38 | Category: Research Notes | Source: Repost

This is a doctoral dissertation from ETH Zurich, Switzerland (author: Stephan M. Weiss), 222 pages in total.

 

This dissertation studies the problems of vehicle state estimation and sensor self-calibration that arise when navigating a micro helicopter in large and initially unknown environments. While proper vehicle state estimation enables long-term navigation, sensor self-calibration renders the vehicle a power-on-and-go system without the need for prior, offline calibration steps. Because of the inherent payload limitation of airborne vehicles, especially micro helicopters, and the restricted availability of global localization information in indoor or urban environments, we focus on vision-based methods to achieve our goals. Despite this focus on processing visual cues, we also analyze a general and modular approach that allows the use of a variety of different sensor types.

 

In vision-based methods, the camera needs special attention. In contrast to other sensors, vision sensors typically yield vast amounts of information, which requires complex strategies before it can be used in real time on computationally constrained platforms. We show that a map-based visual odometry strategy derived from a state-of-the-art structure-from-motion framework is particularly suitable for locally stable, pose-controlled flight. Issues concerning drift and robustness are analyzed and discussed with respect to the original framework. A map-less strategy based on optical flow and inertial constraints is presented to mitigate other map-related issues such as map loss and initialization procedures. While in the map-based approach the camera acts as a real-time pose sensor, in the map-less approach it acts as a real-time body-velocity sensor capable of velocity-controlling the vehicle. We show that the two approaches can be combined in a unifying and complementary framework to overcome each other's weaknesses. In all cases, the algorithms are able to run in real time on computationally constrained platforms.
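
The map-less idea of using the camera as a body-velocity sensor can be illustrated with a minimal sketch (assumptions and names are mine, not the thesis's implementation): a pinhole camera in pure translation over features with known depth, where each feature's optical flow constrains the metric velocity.

```python
import numpy as np

def body_velocity_from_flow(flow, depths, f):
    """Estimate the camera's lateral velocity from optical flow.

    Assumes a pinhole camera of focal length f (pixels) translating
    parallel to the image plane; for a static point at depth Z the
    flow is  u = -(f / Z) * v_xy,  so each tracked feature gives one
    estimate of v_xy, which we average.

    flow:   (N, 2) flow vectors in pixels/s
    depths: (N,)   metric depths of the tracked features
    """
    return -np.mean(flow * depths[:, None], axis=0) / f
```

In the thesis the metric scale comes from fusing flow with inertial constraints rather than from known per-feature depths; the sketch only shows why optical flow, once scaled, yields a metric body velocity.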

 

For aerial vehicle navigation, we study a statistical and modular sensor fusion strategy. We focus on optimal state estimation of the vehicle in order to avoid additional, computationally heavy control approaches. In particular, we discuss the metric recovery of an arbitrarily scaled pose or body velocity measured by the camera, as well as the recovery of the different drifts the vision-based strategies suffer from. Our modular approach allows the camera to be treated as a black-box sensor, which keeps the computational complexity of the sensor fusion strategy constant and applicable to large environments and long-term navigation. The modularity of the system also allows the addition of a variety of different sensor types.
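
A one-dimensional toy version of this fusion (the function name and simplifications are hypothetical, not the thesis's code) shows how treating the camera as a black box that returns an arbitrarily scaled position z = L·p lets an EKF with an augmented scale state recover the metric scale:

```python
import numpy as np

def ekf_step(x, P, a, z, dt, q=1e-3, r=1e-2):
    """One EKF cycle for state x = [position p, velocity v, visual scale L].

    The camera is a black box returning z = L * p; the IMU provides the
    metric acceleration a used in the prediction step.
    """
    # Prediction: constant-acceleration kinematics, scale L is constant
    F = np.array([[1, dt, 0], [0, 1, 0], [0, 0, 1]], float)
    x = F @ x + np.array([0.5 * a * dt**2, a * dt, 0.0])
    P = F @ P @ F.T + q * np.eye(3)
    # Update with the scaled visual position z = L * p
    p_, L = x[0], x[2]
    H = np.array([[L, 0.0, p_]])        # Jacobian of h(x) = L * p
    S = H @ P @ H.T + r
    K = (P @ H.T) / S
    x = x + (K * (z - L * p_)).ravel()
    P = (np.eye(3) - K @ H) @ P
    return x, P
```

Given sufficient acceleration excitation, the scale state is observable and can converge toward the true value; the same structure extends to 6-DoF poses while keeping the filter's complexity independent of the environment size.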

 

Based on a concise nonlinear observability analysis using differential geometry, we isolate the drift states and unobservable modes of a system with a given sensor setup and provide extensive insight into the observable modes. We discuss the observability of both the online state estimation of the vehicle and the self-calibration of the system; the latter includes all inter-sensor calibration states as well as the intrinsic inertial characteristics. For the unobservable modes, we highlight their dimensions in the state space by analyzing the continuous symmetries and indistinguishable regions of the system. Based on this, we present the effects of combining a variety of sensors in order not only to estimate the vehicle state but also to recover the different drift and (inter-)sensor calibration states of the system.
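
The flavor of such an analysis can be reproduced on a toy system (my example, not the thesis's full vehicle model): a 1-D state [p, v, L] with scaled position measurement h = L·p. Stacking the gradients of successive Lie derivatives of h along the dynamics gives an observability codistribution whose rank reveals when the scale is observable:

```python
import sympy as sp

p, v, L, a = sp.symbols('p v L a')
x = sp.Matrix([p, v, L])
f = sp.Matrix([v, a, 0])        # dynamics: known acceleration a, constant scale
h = sp.Matrix([L * p])          # camera measures an arbitrarily scaled position

# Stack gradients of successive Lie derivatives of h along f
rows = []
Lh = h
for _ in range(3):
    grad = Lh.jacobian(x)
    rows.append(grad)
    Lh = grad * f               # next Lie derivative
O = sp.Matrix.vstack(*rows)

print(O.rank())                 # 3: fully observable when a != 0
print(O.subs(a, 0).rank())      # 2: scale unobservable without acceleration
```

The rank drops from 3 to 2 when a = 0, i.e. without acceleration excitation the visual scale is an unobservable mode; the dissertation derives this kind of insight for the full multi-sensor vehicle model.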

 

The theoretical analysis of the nonlinear, time-continuous multi-sensor system is necessary, but it does not provide a sufficient condition for the observability of the implemented linearized, time-discrete system. We therefore analyze the convergence behavior in extensive simulations. Our approach is further verified and tested by implementing it on real hardware and running it on real data. We perform these tests indoors in a controlled manner, with accurate ground truth for verification, and additionally perform tests outdoors in a large environment. These outdoor experiments prove the validity of the presented real-time, on-board vision algorithms that turn the camera into a black-box sensor. They also demonstrate the capability of the entire sensor fusion framework to turn an aerial vehicle into a power-on-and-go system for real-world, large-scale, and long-term operations. The insights gained from the theoretical analysis are used to handle sensor drift and sensor dropouts, up to switching between completely different sensor suites in flight, as is required during indoor-outdoor transitions. This makes the dissertation a thorough work that proceeds from initial sensor preparation through detailed theory and careful implementation to successful real-world testing for tomorrow's autonomous aerial vehicles.
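
The modular handling of dropouts and in-flight suite switches can be sketched structurally (hypothetical class and function names, not the thesis's framework): each sensor is an independent module carrying its own linear measurement model, so a dropout or a suite switch simply changes which modules contribute updates.

```python
import numpy as np

class SensorModule:
    """One sensor as a self-contained linear measurement model (H, R)."""
    def __init__(self, H, R):
        self.H, self.R = np.atleast_2d(H), np.atleast_2d(R)

    def update(self, x, P, z):
        # Standard Kalman update using only this module's model
        S = self.H @ P @ self.H.T + self.R
        K = P @ self.H.T @ np.linalg.inv(S)
        x = x + K @ (z - self.H @ x)
        P = (np.eye(len(x)) - K @ self.H) @ P
        return x, P

def fuse(x, P, readings):
    """readings: list of (module, z); dropped-out sensors simply don't appear."""
    for module, z in readings:
        x, P = module.update(x, P, z)
    return x, P
```

Switching from an outdoor suite (e.g. GPS plus camera) to an indoor one then amounts to swapping the list of active modules; the filter core never changes.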

 


 

1. Introduction

2. The Camera as a Motion Sensor

3. Modular Sensor Fusion: State Estimation and Sensor Self-Calibration

4. Results: Performance Evaluation

5. Discussion and Conclusion

Appendix: Quantitative Results on Real Platforms







http://blog.sciencenet.cn/blog-69686-1252071.html
