Benjamin C Davis and David M Lane

Ocean Systems Laboratory, Heriot-Watt University


1. Introduction

System integration and validation of embedded technologies has always been a challenge, particularly in the case of autonomous underwater vehicles (AUVs). The inaccessibility of the remote environment, combined with the cost of field operations, has been the main obstacle to the maturity and evolution of underwater technologies. Additionally, the analysis of embedded technologies is hampered by data processing and analysis time lags caused by low-bandwidth data communications with the underwater platform. This makes real-world monitoring and testing challenging for the developer/operator, who is unable to react quickly, or in real time, to stimuli from the remote platform.

This chapter discusses the different testing techniques useful for unmanned underwater vehicles (UUVs) and gives example applications where appropriate. Later sections go into more detail about a novel framework called the Augmented Reality Framework (ARF) and its application to improving pre-real-world testing facilities for UUVs. To begin with, some background is given on autonomous underwater vehicles and on current testing techniques and their uses. An AUV (Healey et al., 1995) is a type of UUV. The difference between AUVs and remotely operated vehicles (ROVs) is that AUVs employ intelligence, such as sensing and automatic decision making, allowing them to perform tasks autonomously, whereas ROVs are controlled remotely by a human with communications running down a tether. AUVs can operate for long periods of time without communication with an operator, as they run a predefined mission plan. An operator can design missions for multiple AUVs and monitor their progress in parallel, whereas ROVs require at least one pilot per vehicle controlling it continuously. Provided the AUV technology is mature enough to execute a task as well as an ROV can, the cost of using AUVs should be drastically lower. AUVs have no tether or physical connection with surface vessels, and are therefore free to move without restriction around or inside complex structures. AUVs can also be smaller and have lower-powered thrusters than ROVs because they do not have to drag a tether behind them; tethers can be thousands of metres in length for deep-sea missions and consequently very heavy. In general, AUVs require less infrastructure than ROVs: an ROV usually requires a large ship and crew to operate, whereas an AUV is easier to deploy and recover.

In general, autonomous vehicles (Zyda et al., 1990) can go where humans cannot or do not want to; in more relaxed terms, they are suited to "the dull, the dirty, and the dangerous". One of the main driving forces behind AUV development is the automation of potentially tedious tasks which take a long time to do manually and therefore incur large expenses. These include oceanographic surveys, oil/gas pipeline inspection, cable inspection and the clearing of underwater minefields. Such tasks can be monotonous for humans and can also require expensive ROV pilot skills. AUVs are well suited to labour-intensive or repetitive tasks, and can perform their jobs faster and with higher accuracy than humans. The ability to venture into hostile or contaminated environments makes AUVs particularly useful and cost efficient.

AUVs highlight a more specific problem. Underwater vehicles are expensive because they have to cope with the extremely high pressures of the deepest oceans (the pressure increases by roughly one atmosphere every 10 m). The underwater environment itself is both hazardous and inaccessible, which increases the cost of operations due to the necessary safety precautions. The cost of real-world testing, the later phase of the testing cycle, is therefore particularly high in the case of UUVs. Couple this with poor communications with the remote platform (due to slow acoustic methods) and debugging becomes very difficult and time consuming. This incurs huge expenses or, more likely, places large constraints on the amount of real-world testing that can feasibly be done. For environments which are hazardous or inaccessible, such as sea, air and space, it is paramount that large amounts of unnecessary real-world testing be avoided. Ideally, mixed reality testing facilities should be available for pre-real-world testing of the platform. However, due to the expense of creating specific virtual reality testing facilities, adequate pre-real-world tests are not always carried out. This leads to failed projects crippled by costs or, worse, a system which is unreliable due to inadequate testing.

Different testing mechanisms can be used to keep real-world testing to a minimum. Hardware-in-the-loop (HIL), Hybrid Simulation (HS) and Pure Simulation (PS) are common pre-real-world testing methods. However, the testing harness created is usually very specific to the platform. This creates a problem when the user requires testing of multiple heterogeneous platforms in heterogeneous environments. Normally this requires many specific test harnesses, but creating them is often time consuming and expensive. Therefore, large numbers of integration tests are left until real-world trials, which is less than ideal. Real-world testing is not always feasible due to the high cost involved, so it is beneficial to test the systems in a laboratory first. One method of doing this is pure simulation (PS) of data for each of the platform's systems. This is not a very realistic scenario, as it does not test the actual system as a whole and only focuses on individual systems within a vehicle. The problem with PS alone is that system integration errors can go undetected until later stages of development, when different modules are first tested working together. Problems found late in the testing cycle are harder to detect and more costly to rectify. Therefore, as many tests as possible should be done in a laboratory. A thorough testing cycle for a remote platform would include HIL, HS and PS testing scenarios. For example, an intuitive testing harness for HIL or HS would include: a 3D virtual world with customisable geometry and terrain, allowing for operator observation; a sensor simulation suite providing exteroceptive sensor data which mimics the real-world data interpreted by higher-level systems; and a distributed communication protocol to allow swapping of real for simulated systems running in different locations.
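The idea of swapping real for simulated systems can be sketched as a common sensor interface that higher-level code depends on, so that a simulated module driven by a virtual world is indistinguishable from the real hardware. The class and function names below are hypothetical illustrations, not part of ARF:

```python
# Sketch of a swappable sensor interface for a HIL/HS harness.
# All names here are illustrative assumptions, not ARF's actual API.
from abc import ABC, abstractmethod


class RangeSensor(ABC):
    """Common interface: higher-level code cannot tell real from simulated."""

    @abstractmethod
    def read(self) -> float:
        """Return range to the nearest obstacle in metres."""


class RealSonar(RangeSensor):
    """Placeholder for the real device driver."""

    def read(self) -> float:
        raise NotImplementedError("would read from acoustic hardware")


class SimulatedSonar(RangeSensor):
    """Driven by a virtual world instead of hardware."""

    def __init__(self, virtual_ranges):
        self._ranges = iter(virtual_ranges)

    def read(self) -> float:
        return next(self._ranges)


def obstacle_ahead(sensor: RangeSensor, threshold: float = 5.0) -> bool:
    # Higher-level logic under test, unaware of the sensor's origin.
    return sensor.read() < threshold


sim = SimulatedSonar([12.0, 4.2])
assert obstacle_ahead(sim) is False  # 12.0 m away: clear
assert obstacle_ahead(sim) is True   # 4.2 m away: obstacle detected
```

In a distributed harness the same interface would sit behind the communication protocol, so real and simulated modules could be swapped even when running in different locations.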
Thorough testing of the remote platform is usually left until later stages of development because creating a test harness for every platform can be complicated and costly. When designing a testing harness, it is therefore important that it is re-configurable and generic enough to accommodate all required testing scenarios. The ability to extend the harness with specialised modules is also important, so that it can be used to test specialised systems. What is required is a dynamic, extensible testing framework that allows the user to create modules in order to produce the testing scenario quickly and easily for their intended platform and environment.

2. Methods of testing

Milgram's Reality-Virtuality continuum (Takemura et al., 1994), shown in Figure 1, depicts the continuum from reality to virtual reality and all the hybrid stages in between. The hybrid stages between real and virtual are known as augmented reality (Behringer et al., 2001) and augmented virtuality. The hybrid reality concepts are built upon by the ideas of Hardware-in-the-loop (HIL) and Hybrid Simulation (HS). Figure 1 shows how the different types of testing conform to the different types of mixed reality in the continuum. There are four different testing types:

1. Pure Simulation (PS) (Ridao et al., 2004) - testing of a platform's modules on an individual basis before being integrated onto the platform with other modules.

2. Hardware-in-the-loop (HIL) (Lane et al., 2001) - testing of the real integrated platform is carried out in a laboratory environment. Exteroceptive sensors such as sonar or video, which interact with the intended environment, may have to be simulated to fool the robot into thinking it is in the real world. This is very useful for integration testing, as the entire system can be tested as a whole, allowing any system integration errors to be detected in advance of real-world trials.

3. Hybrid Simulation (HS) (Ridao et al., 2004; Choi & Yuh, 2001) - testing the platform in its intended environment in conjunction with some simulated sensors driven from a virtual environment. For example, virtual objects can be added to the real world and the exteroceptive sensor data altered so that the robot treats something in the sensor dataset as real. This type of system is used if some higher-level modules are not yet reliable enough to be trusted to behave as intended using real data. Consequently, fictitious data is used instead, augmented with the real data, and fed to the higher-level systems. Thus, if a mistake is made it does not damage the platform. An example of this is discussed in Section 4.2.

4. Real-world testing - this is the last stage of testing. When all systems are trusted, the platform is ready for testing in the intended environment. All implementation errors should have been fixed in the previous stages, otherwise this stage is very costly. For this stage to be as useful as possible, the system designers and programmers need reliable, intuitive feedback in a virtual environment about what the platform is doing, otherwise problems can be very hard to see and diagnose.

ARF provides functionality across all stages of the continuum, allowing virtually any testing scenario to be realised. For this reason it is referred to as a mixed reality framework. In the case of Augmented Reality, simulated data is added to the real-world perception of some entity. For example, sonar data on an AUV could be altered so that it contains fictitious objects, i.e. objects which are not present in the real world but which are present in the virtual world. This can be used to test the higher-level systems of an AUV, such as obstacle detection (see the obstacle detection and avoidance example in Section 4.2). A virtual world is used to generate synthetic sensor data which is then mixed with the real-world data. The virtual world has to be kept in precise synchronization with the real world; this is commonly known in Augmented Reality as the registration problem. The accuracy of registration depends on the accuracy of the positioning/navigation systems onboard the platform. Registration is a well-known problem with underwater vehicles when trying to match different sensor datasets to one another for visualisation. Accurate registration is paramount for displaying the virtual objects in the correct position in the simulated sensor data.
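As a minimal illustration of the registration step, the sketch below (a simplified 2D case, assuming the platform's navigation system reports position and heading in a shared world frame) transforms a virtual object's world coordinates into the vehicle's frame before it would be injected into simulated sensor data. The function name is hypothetical:

```python
import math


def world_to_vehicle(obj_xy, vehicle_xy, vehicle_heading):
    """Transform a virtual object's world position into the vehicle frame.

    vehicle_heading is in radians, measured from the world x-axis.
    Any error in the reported pose shifts the virtual object in the
    simulated sensor data -- the registration problem in miniature.
    """
    dx = obj_xy[0] - vehicle_xy[0]
    dy = obj_xy[1] - vehicle_xy[1]
    # Rotate the offset by the negative heading to enter the vehicle frame.
    c, s = math.cos(-vehicle_heading), math.sin(-vehicle_heading)
    return (c * dx - s * dy, s * dx + c * dy)


# Vehicle at (10, 0) heading 90 degrees (along world y); virtual object at (10, 5).
x, y = world_to_vehicle((10.0, 5.0), (10.0, 0.0), math.pi / 2)
# The object appears 5 m straight ahead along the vehicle's own x-axis.
assert abs(x - 5.0) < 1e-9 and abs(y) < 1e-9
```

A real implementation would work in three dimensions with full attitude (roll, pitch, yaw), but the dependence on accurate navigation data is the same.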

Fig. 1. Reality Continuum combined with Testing Types.


Augmented Virtuality is the opposite of augmented reality i.e. instead of being from a robot's/person's perspective it is from the virtual world's perspective - the virtual world is augmented with real world data. For example, real data collected by an AUV's sensors is rendered in real time in the virtual world in order to recreate the real world in virtual reality. This can be used for Online Monitoring (OM) and operator training (TR) (Ridao et al., 2004). This allows an AUV/ROV operator to see how the platform is situated in the remote environment, thus increasing situational awareness.

In Hybrid Simulation the platform operates in the real environment in conjunction with some sensors being simulated in real time by a synchronized virtual environment. Similar to Augmented Reality, the virtual environment is kept in synchronization using position data transmitted from the remote platform. Thus simulated sensors are attached to the virtual platform and moved around in synchronization with the real platform. Simulated sensors collect data from the virtual world and transmit the data back to the real systems on the remote platform. The real systems then interpret this data as if it were real. It is important that simulated data is very similar to the real data so that the higher level systems cannot distinguish between the two. In summary, the real platform's perception of the real environment is being augmented with virtual data. Hence HS is inherently Augmented Reality. An example of a real scenario where AR testing procedures are useful is in obstacle detection and avoidance in the underwater environment by an AUV. See Obstacle detection and avoidance example in Section 4.2.
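One cycle of this loop can be sketched as follows, under the assumption that pose updates arrive as simple (x, y) tuples and that the simulated sensor answers range queries against a list of virtual obstacle positions. All names are hypothetical illustrations:

```python
# Minimal sketch of one hybrid-simulation cycle: the virtual platform is
# moved to the real platform's reported pose, then a simulated sensor
# samples the virtual world from that pose.
import math

# Hypothetical virtual world contents: one virtual obstacle at (20, 0).
VIRTUAL_OBSTACLES = [(20.0, 0.0)]


def simulated_range(pose):
    """Range from the synchronized virtual platform to the nearest virtual obstacle."""
    return min(math.hypot(ox - pose[0], oy - pose[1])
               for ox, oy in VIRTUAL_OBSTACLES)


def hybrid_simulation_step(real_pose):
    # 1. Register: move the virtual platform to the real platform's reported pose.
    virtual_pose = real_pose
    # 2. Sense: query the virtual world from that pose.
    # 3. Return data for the real higher-level systems to consume as if real.
    return simulated_range(virtual_pose)


# As the real vehicle closes in, the simulated ranges shrink accordingly.
readings = [hybrid_simulation_step(p)
            for p in [(0.0, 0.0), (10.0, 0.0), (15.0, 0.0)]]
assert readings == [20.0, 10.0, 5.0]
```

In practice the returned data would be formatted to match the real sensor's output (e.g. a sonar ping profile) so that the higher-level systems cannot distinguish it from real data.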

Hardware-in-the-Loop (HIL) is another type of mixed reality testing technique. It allows the platform to be tested in a laboratory instead of in its intended environment. This is achieved by simulating all required exteroceptive sensors using a virtual environment. Virtual sensor data is then sent to the real platform's systems in order to fool them; in essence, this is simply virtual reality for robots. Concurrently, the outputs of the higher-level systems which receive the simulated data can be relayed back and displayed in the virtual environment for operator feedback. This helps show the system developer that the robot is interpreting the simulated sensor data correctly. HIL requires that all sensors and systems that interact directly with the environment are simulated. Vehicle navigation systems are a good example, since these use exteroceptive sensors, actuators and motors to determine position. Using simulated sensors means that the developer can specify exactly the data which will be fed into the systems being tested; this is difficult to do reliably in the real environment, where too many external factors cannot be easily controlled. Because the virtual environment is also augmented with feedback data from the platform for observation, HIL can be Augmented Virtuality as well as merely virtual reality for the platform.
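For the navigation example, a simulated navigation system in HIL can dead-reckon position from the exact actuator commands the tester specifies, giving fully controlled, repeatable inputs that would be hard to guarantee in the real environment. This is a hypothetical sketch, not ARF code:

```python
# Sketch of a simulated dead-reckoning navigation system for HIL testing.
# Position is integrated from commanded surge speed and heading, so the
# tester controls exactly what the systems under test will see.
import math


def dead_reckon(start_xy, commands, dt=1.0):
    """Integrate (speed m/s, heading rad) commands into a position estimate."""
    x, y = start_xy
    for speed, heading in commands:
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
    return x, y


# Two seconds east at 1 m/s, then two seconds north at 1 m/s.
cmds = [(1.0, 0.0), (1.0, 0.0), (1.0, math.pi / 2), (1.0, math.pi / 2)]
x, y = dead_reckon((0.0, 0.0), cmds)
assert abs(x - 2.0) < 1e-9 and abs(y - 2.0) < 1e-9
```

Because the inputs are fully scripted, the same test run can be replayed after every code change, which is precisely what uncontrollable real-world factors prevent.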

Consequently, HIL and HS are both deemed to be Mixed Reality concepts, thus any testing architecture for creating the testing facilities should provide all types of mixed reality capabilities and be inherently distributed in nature.
