3. Vision-based data acquisition techniques

In order to determine a mathematical model of an underwater vehicle suitable for control purposes, a great number of sensors can be used to acquire the necessary data, e.g. inertial measurement units, positioning systems, etc. One of the cheapest and simplest methods for determining the mathematical model parameters is to use vision-based techniques to determine the vehicle's position. Once the position has been determined, the data can be used to calculate higher-order derivatives and thus the dynamic model parameters.
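The text does not specify the differentiation scheme, but the idea can be sketched with central finite differences, assuming position samples taken at a fixed sample time (all values below are illustrative):

```python
import numpy as np

# Hypothetical position samples (m) taken at a fixed sample time Ts (s).
Ts = 0.1
t = np.arange(0.0, 2.0, Ts)
x = 0.5 * t**2          # e.g. constant-acceleration motion along one axis

# First and second derivatives via central finite differences.
v = np.gradient(x, Ts)  # velocity estimate
a = np.gradient(v, Ts)  # acceleration estimate
print(round(a[len(a) // 2], 3))  # recovers the assumed acceleration, 1.0
```

With noisy real measurements the differentiation amplifies noise, which is why filtering (discussed in Section 3.2) precedes parameter estimation.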

3.1 Laboratory apparatus

An interesting vision-based laboratory apparatus for UV parameter identification was introduced by Ridao et al. (2004). It is based on a coded floor pattern at the bottom of the laboratory pool. The apparatus was used with the URIS underwater vehicle, which is equipped with a downward-facing camera; the vehicle was placed in a swimming pool with a specifically "coded" floor pattern, see Fig. 5a). By applying image analysis to the frames obtained from the onboard camera, the vehicle's position can be uniquely determined.


Fig. 5. a) Floor pattern used for URIS UV identification, b) mapping of the swimming pool from a perspective to orthogonal view

The pattern consists of black and grey dots on a white surface. Places without dots are surrounded by global marks. Each global mark is unique and can be decoded from the combination of black and grey dots marked with P. In addition, dots marked with O are used to determine the orientation of the vehicle. After applying the decoding algorithm, the vehicle's position within the laboratory pool can be determined. This data is then used for determining the dynamic model of the vehicle. For details on the method, the reader is referred to Ridao et al. (2004) and references therein. Although this method is innovative, its downside is the complexity of the algorithm used for determining the position of the vehicle.

Another approach is to use an external camera placed next to the pool. This way the vehicle can be detected in subsequent frames and its model can be determined. The method used in Chen (2008) is based on placing the camera so that a perspective view of the pool is obtained. A schematic representation is shown in Fig. 5b), where points A, B, C and D mark the edges of a frame and the coordinate system with points (xi, yi) is the view of the pool within the frame. In order to obtain an orthogonal projection of the pool (such that the coordinate system is orthogonal), a projective transformation has to be performed: points (xi, yi) have to be mapped to points (ui, vi). This operation distorts the frame, so the "upper" part of the pool has worse resolution than the "lower" part. In order to obtain satisfactory identification results, the camera should be placed in such a way that even the frame segment with the worst resolution still gives good results.
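In homogeneous coordinates, such a perspective-to-orthogonal mapping is a homography. A minimal sketch of computing it from the four pool corners, assuming illustrative pixel and pool coordinates (the true calibration values are not given in the text):

```python
import numpy as np

def homography(src, dst):
    """Solve for the 3x3 homography H mapping src[i] -> dst[i] (DLT, 4 point pairs)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(A))
    return Vt[-1].reshape(3, 3)   # null vector of A, up to scale

def warp(H, p):
    """Map an image point p = (x, y) through H into pool coordinates."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[0] / q[2], q[1] / q[2]

# Illustrative corners A, B, C, D of the pool as seen in the frame (a trapezoid)
src = [(100, 50), (540, 50), (620, 430), (20, 430)]
# and their assumed orthogonal pool coordinates in metres.
dst = [(0, 0), (4, 0), (4, 3), (0, 3)]

H = homography(src, dst)
print(warp(H, src[0]))  # corner A maps to approximately (0, 0)
```

Every detected vehicle position in the perspective frame can then be mapped through H before the identification step, which is exactly where the resolution loss in the "upper" part of the frame appears: distant pixels cover a larger pool area.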

The method the authors have used is based on placing a webcam directly above the swimming pool, as in Fig. 6a (Miskovic et al., 2007a). This way the orthogonalization of the pool view is avoided and the algorithm itself is simpler. It should be mentioned that this method can be used to identify mathematical models of both surface marine vessels and underwater vehicles. In order to ensure easier detection of the vehicle within the camera view, a marker is placed on top of the ROV so that its position and orientation within the camera frame can easily be extracted from the recorded video (Fig. 6b). Since depth cannot be detected with a camera positioned like this, the identification procedure can be performed only in the horizontal plane, considering surge, sway and yaw.

Fig. 6. a) Laboratory setup for marine vehicle model identification and b) a frame from the webcam placed above the pool

3.2 Data acquisition

The scheme of the data acquisition system is shown in Fig. 7. The 'Synchronization' block ensures that a frame is recorded and control signals are sent once every sample time (100 ms). Once synchronization is achieved, the procedure can be described as follows:

• Acquire an RGB image from the camera and separate it into its red, green and blue components;

• Convert the image to a binary equivalent in which detection of the red color results in a logical 1 (white) and everything else results in a logical 0 (black). The result of this operation is shown in Fig. 8a;

• Find the centroid of the group of white pixels - this is the position of the ROV within the camera frame;

• Find the orientation of the group of white pixels - this is the orientation of the ROV within the camera frame. The result of this analysis is shown in Fig. 8b, where the original camera image is augmented with the ROV's position (green circle) and orientation (blue line);

• Apply inverse kinematics to the data to obtain the linear and angular speeds required for model identification.
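The thresholding, centroid and orientation steps above can be sketched as follows. This is a minimal illustration assuming an RGB frame given as a NumPy array and ad-hoc red thresholds; the authors' actual thresholds and camera interface are not specified in the text:

```python
import numpy as np

def detect_marker(rgb, thresh=100):
    """Binarize (red -> 1, else 0), then compute centroid and orientation."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    mask = (r > thresh) & (r > g + 50) & (r > b + 50)  # assumed "red" test
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()                      # centroid = ROV position
    # Orientation from the principal axis of the white-pixel distribution,
    # using second-order central moments.
    mu20 = ((xs - cx) ** 2).mean()
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
    return (cx, cy), theta

# Synthetic test frame: a horizontal red bar on a black background.
frame = np.zeros((120, 160, 3), dtype=np.uint8)
frame[58:62, 40:120, 0] = 255
pos, angle = detect_marker(frame)
print(pos, angle)  # centre of the bar, orientation 0 rad (horizontal)
```

Note that the principal-axis orientation is ambiguous by 180°; in practice an asymmetric marker shape resolves the ambiguity.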

An example of velocities obtained from camera data is shown in Fig. 8c. Raw camera data are inherently noisy, so they should be filtered.
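The chapter does not specify the filter used, so as a sketch under that caveat, a simple option is a first-order low-pass (exponential smoothing) running at the 100 ms sample time:

```python
import numpy as np

def lowpass(samples, Ts=0.1, tau=0.5):
    """First-order low-pass: y[k] = a*y[k-1] + (1-a)*u[k], with a = tau/(tau+Ts)."""
    a = tau / (tau + Ts)
    y = np.empty(len(samples), dtype=float)
    y[0] = samples[0]
    for k in range(1, len(samples)):
        y[k] = a * y[k - 1] + (1 - a) * samples[k]
    return y

# Illustrative noisy measurement: true surge speed 0.3 m/s plus sensor noise.
rng = np.random.default_rng(0)
u = 0.3 + 0.05 * rng.standard_normal(200)
filtered = lowpass(u)
print(filtered[-1])  # close to the true 0.3 m/s
```

The time constant tau trades noise rejection against lag; too large a value distorts the very dynamics the identification is trying to capture.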

Fig. 7. Video-based data acquisition scheme
