Navigation using sensor fusion

Sensor fusion, also known as multi-sensor data fusion (MSDF), is the combination of sensory data, or data derived from sensory data, from different sources in order to achieve better information than would be possible if these sources were used individually. The term better in this case refers to the data and can mean more accurate, more noise tolerant, more complete, more tolerant of sensor failure, or of reduced uncertainty. There are many different issues that require consideration when performing sensor fusion, such as data alignment, data association, fusion, inference and sensor management (Loebis et al. 2002). The fusion process can also be further categorised by the different levels at which it can take place. A commonly used categorisation identifies four fusion levels: signal, pixel, feature and symbol; these are discussed in more detail in (Loebis et al. 2002; Luo et al. 2002).

All sensors available for underwater vehicle navigation have their own advantages and disadvantages. Sensor fusion techniques allow data from many sources to be fused to improve the overall navigation accuracy and reliability while taking advantage of the available sensors' complementary attributes. A well established sensor fusion application is the combination of a Doppler velocity log (DVL) and an inertial navigation system (INS) (Kinsey et al. 2006). This fusion is used to combat the issue of INS integration drift: small errors in the measurement of acceleration and angular velocity are integrated into progressively larger errors in velocity, which are compounded into still greater errors in position. Inertial measurement units (IMUs) are therefore typically aided by another type of sensor, such as DVL velocity measurements or position measurements from GPS or acoustic navigation systems, to correct errors in the IMU state estimate and limit the effect of integration drift. Whitcomb et al. reported preliminary results from the first deployment of an early prototype combining long baseline (LBL) acoustic network positioning and Doppler navigation (Whitcomb et al. 1999). This system was later extended by Kinsey et al., who identified that solutions and experimental results for underwater vehicle navigation in the horizontal (x-y) plane were particularly rare in the literature (Kinsey & Whitcomb 2004). The resulting system, DVLNAV, supports many of the sensors available on today's UUVs, including DVL, LBL, compass, depth sensors, altimeters and GPS. Results demonstrated that the system provides more accurate navigation at a higher precision and update rate than LBL alone, while also showing that accurate estimates of sound velocity, heading and attitude significantly improve the accuracy of Doppler-based position estimates.
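
To make the drift-correction idea concrete, the sketch below shows a minimal one-dimensional Kalman filter in which dead-reckoned IMU accelerations are periodically corrected by DVL velocity measurements. It is an illustrative example only: the state model, noise values and update rates are assumed for the sake of the example and are not taken from the systems cited above.

```python
import numpy as np

def predict(x, P, accel, dt, q_accel):
    """Dead-reckoning step: integrate a (noisy) IMU acceleration.

    x = [position, velocity], P = 2x2 covariance. Without corrections,
    acceleration errors integrate into velocity and then position error.
    """
    F = np.array([[1.0, dt],
                  [0.0, 1.0]])
    B = np.array([0.5 * dt**2, dt])
    x = F @ x + B * accel
    Q = q_accel * np.outer(B, B)           # process noise from accel uncertainty
    P = F @ P @ F.T + Q
    return x, P

def update_dvl(x, P, v_meas, r_dvl):
    """Correction step: fuse a DVL bottom-track velocity measurement.

    The velocity observation bounds the velocity error, which in turn
    limits the growth of the position error (integration drift).
    """
    H = np.array([[0.0, 1.0]])             # DVL observes velocity only
    S = H @ P @ H.T + r_dvl
    K = P @ H.T / S                         # Kalman gain (scalar innovation)
    x = x + (K * (v_meas - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Example (assumed rates): 100 Hz IMU prediction, 5 Hz DVL corrections
x, P = np.zeros(2), np.eye(2) * 0.1
for k in range(1000):
    x, P = predict(x, P, accel=0.02, dt=0.01, q_accel=1e-3)
    if k % 20 == 0:
        x, P = update_dvl(x, P, v_meas=0.2, r_dvl=1e-3)
```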

Loebis et al. published a review of MSDF and its application to UUV navigation (Loebis et al. 2002). It was concluded that accurate navigation cannot be performed by one navigation system alone and that the best way to improve accuracy is to implement MSDF between a number of complementary navigation systems. A method of cable tracking that uses MSDF between an INS, GPS and vision-based dead reckoning is also proposed, but to the authors' knowledge no results for the system have been published to date. Nicosevici et al. presented a classification of currently used sensor fusion techniques with a focus on their application to UUV navigation (Nicosevici et al. 2004). Many of the systems reviewed implement the extended Kalman filter for sensor data fusion. The main conclusions drawn from the literature for sensor fusion implementation are, first, to be aware of the goal of the sensor fusion (the improvement brought by the system) and, second, to be aware of the constraints imposed by the sensors involved (sensor data model, etc.).
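
Since many of the systems reviewed rely on the extended Kalman filter, a generic predict/update skeleton is sketched below. The motion model f, measurement model h and their Jacobians are placeholders to be supplied for each sensor; the structure simply illustrates where the sensor data model and noise assumptions (the constraints mentioned above) enter the fusion. It is not the implementation of any of the cited systems.

```python
import numpy as np

def ekf_step(x, P, u, z, f, h, F_jac, H_jac, Q, R):
    """One generic extended Kalman filter cycle (illustrative skeleton only).

    f, h         : nonlinear motion and measurement models (user-supplied)
    F_jac, H_jac : their Jacobians evaluated at the current estimate
    Q, R         : process and measurement noise covariances (sensor data model)
    """
    # Predict: propagate the state estimate through the motion model
    x_pred = f(x, u)
    F = F_jac(x, u)
    P_pred = F @ P @ F.T + Q

    # Update: correct the prediction with the sensor measurement z
    y = z - h(x_pred)                       # innovation
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```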

While vision-based sensor fusion techniques are growing in popularity for terrestrial robot navigation applications (Dong-Xue et al. 2007; Jia et al. 2008), very little literature exists for underwater vision-based sensor fusion, as most of the navigation applications reviewed rely purely on optical information (Eustice 2005). One of the few underwater vision-based sensor fusion techniques is proposed by Balasuriya et al. to tackle the issues of cable tracking when the cable becomes invisible to the camera for a short period of time, and of correct cable selection in the presence of multiple possibilities (Balasuriya & Ura 2001). A combination of image data, a prior map of the cable location and inertial data are fused together in order to implement reliable cable tracking. Testing of the algorithm using the Twin-Burger 2 AUV in a test tank showed that the sensor fusion greatly improved system performance. Majumder et al. describe an algorithm that takes advantage of low-level sensor fusion techniques in order to provide a more robust scene description by combining both vision and sonar information (Majumder et al. 2001). Huster et al. propose a system to improve station keeping by using accelerometer and gyrocompass measurements as well as monocular vision displacements to counteract drift from a fixed location. The use of inertial measurements also reduces the amount of visual information that must be extracted by the vision system, resulting in a simpler and more robust solution (Huster et al. 2002). While vision-based motion estimation techniques rely on the fusion of altitude measurements from sensors to estimate metric displacement (Cufi et al. 2002), Eustice (Eustice 2005) also takes advantage of other sensor information (attitude) in order to overcome many of the challenging issues involved in visual SLAM-based navigation in an unstructured environment.
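
As an illustration of the altitude-aided scaling used in vision-based motion estimation (Cufi et al. 2002), the sketch below converts an image-plane feature displacement into a metric displacement using a fused altimeter reading. It assumes a simple pinhole camera looking straight down at a roughly planar seafloor; the function name and parameter values are illustrative and are not taken from the cited work.

```python
def pixel_to_metric_displacement(du_px, dv_px, altitude_m, focal_px):
    """Scale an image-plane feature displacement to a metric vehicle
    displacement using an altimeter reading (pinhole approximation,
    downward-looking camera over a roughly planar seafloor).

    du_px, dv_px : mean feature displacement between frames, in pixels
    altitude_m   : altitude above the seabed from the altimeter
    focal_px     : camera focal length expressed in pixels
    """
    scale = altitude_m / focal_px        # metres per pixel at the seafloor
    dx_m = du_px * scale
    dy_m = dv_px * scale
    return dx_m, dy_m

# e.g. a 12-pixel shift at 2 m altitude with an 800-px focal length (assumed values)
print(pixel_to_metric_displacement(12.0, -5.0, 2.0, 800.0))
```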

The authors of this chapter, Horgan et al., propose a real-time navigation system for a UUV that takes advantage of the complementary performance of a sensor suite comprising a DVL, a compass, a depth sensor and altimeter sensors together with a feature-based visual motion estimator (Horgan et al. 2007). The compass and the depth sensor are used to bound the drift of the heading and depth estimates respectively. The altimeter is required in order to translate the feature displacements measured from the images into the metric displacements of the robot. While the robot must rely on DVL navigation above a certain altitude, where vision is less effective, DVL measurements can be complemented with higher-frequency, accurate motion estimates from the vision system when navigating close to the seafloor. When a vehicle comes close to the seabed the DVL can drop out due to its minimum blanking range; however, at such short ranges vision systems are at their most effective. From the reviewed papers it is apparent that sensor fusion can greatly improve robot navigation accuracy while also decreasing the need for expensive individual sensors. However, there is a relative lack of publications in the area, which can be explained by the fact that sensor fusion can be quite difficult to implement because sensors have different physical properties, data types, update rates and resolutions. MSDF that takes advantage of visual information is an appealing prospect because vision has attributes complementary to many commercially available sonar sensors: a vision system's performance improves with decreasing range, making it a very good candidate for near-intervention underwater missions. Very little research has taken place into fusion between inertial and vision measurements, but the authors believe that vision is a viable solution for aiding INS in a near-seabed environment where acoustic positioning may be prone to inaccuracy, e.g. in channels, caves or wrecks.
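
A minimal sketch of the altitude-based hand-over between DVL and vision described above is given below. The thresholds and blending rule are assumed purely for illustration and do not reflect the actual policy of the system in (Horgan et al. 2007).

```python
def select_velocity_source(altitude_m, dvl_velocity, vision_velocity,
                           dvl_min_range_m=1.0, vision_max_range_m=3.0):
    """Choose (or blend) velocity estimates based on altitude.

    Illustrative policy only; the thresholds are assumed values. Below the
    DVL's minimum blanking range the DVL drops out and vision is used;
    above the range where vision is effective, the DVL is used; in between
    the two estimates are simply averaged.
    """
    if dvl_velocity is None or altitude_m < dvl_min_range_m:
        return vision_velocity                 # DVL unavailable near the seabed
    if vision_velocity is None or altitude_m > vision_max_range_m:
        return dvl_velocity                    # vision ineffective at altitude
    return tuple(0.5 * (d + v) for d, v in zip(dvl_velocity, vision_velocity))

# e.g. 1.8 m altitude with both sources available (assumed numbers)
print(select_velocity_source(1.8, (0.21, 0.02), (0.19, 0.03)))
```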
