Path planning

Regardless of the control architecture chosen, some form of planning is typically necessary to maintain the safety and efficiency of the system. Planning can be done both before and during the mission (replanning). By replanning the path, the vehicle is able to account for changes in its internal status and in the environment. The plan typically takes the form of an explicit path to follow, but it can alternatively be an artificial potential field that guides the vehicle (Tan et al., 2004). At the very least, the plan may consist of a sequence of behaviour parameter sets for a reactive controller. Replanning is then done by changing the path, adding or removing repulsors or attractors in the artificial potential field, or changing the parameter set sequence. Regardless of the implementation, it is important that the planning mechanism is able to keep the vehicle away from hazardous areas, as well as to achieve the goals of the mission.

The required level of path plan detail is dependent on the control architecture of the vehicle. A purely deliberate architecture needs a fully detailed plan, while a reactive architecture requires no other plan than perhaps a sequence of behaviour parameters. A hybrid architecture will need a plan only for the top, deliberate layer, while the lower reactive layer will take care of small adjustments and anti-collision. Thus, the hybrid architecture yields a simpler planning task than the deliberate architecture.

The premises for planning are also set by the performance of the navigation system. If the navigation uncertainty is large, a robust plan with good clearance from obstacles is necessary. Furthermore, fine detail in a plan is of little use if the vehicle does not have sufficient navigation capability to follow it. Navigation performance is thus a critical feature for autonomous systems that rely on planning. If a plan can be specified relative to the vehicle's current position, local (relative) navigation accuracy is the important parameter. This is advantageous, as local navigation is more accurate than global navigation.

A variety of methods is available for generating explicit paths (Lavalle, 2006). One method is to convert the free area of the environment into a graph, as shown in Fig. 13. A graph search algorithm can then be applied to find a favourable path according to some optimization criterion. An example is Dijkstra's algorithm (Dijkstra, 1959), which finds the shortest path between two points. Robustness to navigation uncertainty can be introduced by shrinking the free areas with safety margins. If the environment is not known in advance, a dynamic graph search algorithm can be employed, such as the D* algorithm (Lavalle, 2006).

Fig. 13. Converting an area with an ellipsoid obstacle into a graph with 4 vertices.
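
As a minimal illustration of the graph search step, the Python sketch below runs Dijkstra's algorithm on a small hand-made graph of the kind suggested by Fig. 13. The vertex names and edge costs are hypothetical, and any safety margins are assumed to already be reflected in which edges exist and how long they are.

import heapq

def dijkstra(graph, start, goal):
    """Shortest path on a weighted graph given as {vertex: {neighbour: cost}}."""
    queue = [(0.0, start, [start])]      # (cost so far, current vertex, path)
    visited = set()
    while queue:
        cost, vertex, path = heapq.heappop(queue)
        if vertex == goal:
            return cost, path
        if vertex in visited:
            continue
        visited.add(vertex)
        for neighbour, edge_cost in graph[vertex].items():
            if neighbour not in visited:
                heapq.heappush(queue, (cost + edge_cost, neighbour, path + [neighbour]))
    return float("inf"), []              # goal unreachable

# Hypothetical four-vertex graph around an obstacle (edge costs in metres).
graph = {
    "A": {"B": 120.0, "C": 90.0},
    "B": {"A": 120.0, "D": 60.0},
    "C": {"A": 90.0, "D": 150.0},
    "D": {"B": 60.0, "C": 150.0},
}
print(dijkstra(graph, "A", "D"))         # -> (180.0, ['A', 'B', 'D'])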

A path may be required to be feasible, i.e. it must adhere to the dynamic constraints of the vehicle. Such constraints must then be included in the graph search algorithm. One way to do this is to transform the environment into a multidimensional space where each point is a possible configuration (state) of the vehicle. This space is known as the configuration space, or state space (Lavalle, 2006). If this space is discretized, a graph search algorithm is straightforward to implement. Creating a configuration space, and transforming obstacles into it, is not a trivial task and may require significant computational resources. Furthermore, for complex vehicles the configuration space has many dimensions (one for each degree of freedom), increasing the time needed to perform a successful search. Due to the complexity of the configuration space, dynamic obstacles are also hard to handle.
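
The sketch below indicates, under assumed parameters, one way such dynamic constraints can be encoded in a discretized configuration space: each state is a position and a heading index, and successors are generated by motion primitives (a straight segment or a minimum-radius arc), so that a graph search over these states returns a feasible path. The step length, turn radius and heading resolution are illustrative, and obstacle checks and snapping positions back onto a lattice are omitted.

import math

STEP = 5.0          # straight segment length in metres (assumed)
TURN_RADIUS = 15.0  # assumed minimum turn radius in metres
HEADINGS = 8        # number of discrete headings (45 degree steps)

def successors(state):
    """Successor states reachable with one motion primitive, with edge costs."""
    x, y, h = state
    theta = 2 * math.pi * h / HEADINGS
    dtheta = 2 * math.pi / HEADINGS
    moves = []
    for dh in (-1, 0, 1):                          # right turn, straight, left turn
        new_h = (h + dh) % HEADINGS
        if dh == 0:                                # straight segment
            cost, chord, direction = STEP, STEP, theta
        else:                                      # arc at the minimum turn radius
            cost = TURN_RADIUS * dtheta            # edge cost = arc length
            chord = 2 * TURN_RADIUS * math.sin(dtheta / 2)
            direction = theta + dh * dtheta / 2    # chord direction of the arc
        nx, ny = x + chord * math.cos(direction), y + chord * math.sin(direction)
        moves.append(((nx, ny, new_h), cost))
    return moves

print(successors((0.0, 0.0, 0)))                   # expansions from the origin, heading east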

It is also possible to use dynamic programming to solve the path-finding problem. Methods from optimization theory (Nocedal & Wright, 1999) can then be applied to find a plan that is optimized with respect to some objective function (e.g. by minimizing the amount of energy used). The objective function may include constraints, and the optimization methods do not depend on a discretization of the environment. There are, however, several drawbacks to using optimization algorithms as well. The optimization problem is not trivial to formulate, and the algorithms can spend considerable time and computational resources on computing a solution, which may still not be globally optimal. Further, most algorithms require a feasible starting path, which again may be difficult to compute. For underwater vehicles, the necessity of a strictly feasible path should be carefully considered. The environment is typically a large 3D volume, sparsely populated with obstacles and rarely adequately known in advance. In this case, safety margins around the path will keep the vehicle safe even if dynamic constraints force it to move off the path. Relaxing the dynamic constraints on the path in this way considerably simplifies path planning.
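
A rough sketch of the optimization-based approach is given below, assuming SciPy is available: intermediate waypoints are adjusted to minimize path length plus a penalty for coming closer than a safety radius to a single circular obstacle. The obstacle, the penalty weight and the choice of solver are illustrative, and, as noted above, the result is not guaranteed to be globally optimal.

import numpy as np
from scipy.optimize import minimize

# Illustrative 2-D setting: one circular obstacle and fixed start/goal points.
start, goal = np.array([0.0, 0.0]), np.array([100.0, 0.0])
obstacle, radius = np.array([50.0, 5.0]), 20.0   # assumed obstacle centre and safety radius

def cost(flat_waypoints):
    """Path length plus a penalty for points closer to the obstacle than the safety radius."""
    pts = np.vstack([start, flat_waypoints.reshape(-1, 2), goal])
    length = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))
    dist = np.linalg.norm(pts - obstacle, axis=1)
    penalty = np.sum(np.maximum(0.0, radius - dist) ** 2)
    return length + 100.0 * penalty                # penalty weight chosen arbitrarily

# Initial guess: waypoints on the straight line between start and goal.
x0 = np.linspace(start, goal, 7)[1:-1].ravel()
result = minimize(cost, x0, method="Nelder-Mead", options={"maxiter": 5000})
print(result.x.reshape(-1, 2))                     # optimized intermediate waypoints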

The planning complexity can also be reduced through implicit path planning, which does not create an explicit path for the vehicle to follow, but rather transforms the environment into an artificial potential field (Tan et al., 2004). In this field, obstacles and other forbidden areas serve as repulsors, pushing the vehicle away from them, while goals serve as attractors, pulling the vehicle towards them. The vector sum of the influences on the vehicle acts as input to the vehicle's low-level control system. This method is robust to errors in global navigation and is quick to compute. However, like all planning algorithms, it depends on either a completely known environment or on sensors capable of detecting obstacles. Further, the vehicle can become trapped in local minima if care is not taken to remove them (or to detect entrapment). Fig. 14 shows an artificial potential field where the goal is located next to a wall.

Fig. 14. An artificial potential field with a goal position beside a wall. Note the local minimum on the opposite side of the wall.
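
The sketch below shows one simple way such a potential field can be evaluated; the gains and influence radius are assumed values. The goal contributes an attractive term, nearby obstacles contribute repulsive terms, and the normalized vector sum is the direction that would be handed to the low-level control system.

import numpy as np

K_ATT, K_REP, INFLUENCE = 1.0, 500.0, 30.0         # illustrative gains and influence radius

def potential_field_direction(position, goal, obstacles):
    position, goal = np.asarray(position, float), np.asarray(goal, float)
    force = K_ATT * (goal - position)               # attractive component
    for obs in obstacles:
        diff = position - np.asarray(obs, float)
        dist = np.linalg.norm(diff)
        if 0.0 < dist < INFLUENCE:                  # repulsion acts only nearby
            force += K_REP * (1.0 / dist - 1.0 / INFLUENCE) * diff / dist**3
    return force / np.linalg.norm(force)            # unit direction vector

# Vehicle at the origin, goal 100 m east, one obstacle inside the influence radius.
print(potential_field_direction([0.0, 0.0], [100.0, 0.0], [[20.0, 5.0], [60.0, -10.0]]))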

For mission goals other than simply transiting to a position, path planning can become considerably more complex. An example is area coverage, in which an area is to be mapped or imaged with a sensor (e.g. a sonar, camera or echo-sounder). If a graph search algorithm is used, the problem resembles the travelling salesperson problem (Bonabeau et al., 1999). Dynamic programming and artificial potential fields also become rather complex for more advanced tasks. In this case, it can be advantageous to make use of assumptions and heuristics in the path planning. For coverage planning, which is an area of much research interest (Bourgeois et al., 1999; Hert et al., 1996; Huang, 2001; Choset, 2001), the simple lawnmower pattern is typically suitable. The coverage planning problem for an obstacle-free environment is then reduced to finding the line spacing, orientation and altitude of the pattern. Obstacles can be dealt with by dividing the environment into obstacle-free cells (Huang, 2001).
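
As an illustration, the sketch below generates waypoints for a lawnmower pattern over an obstacle-free rectangular area once the line spacing has been chosen. The area dimensions and spacing are arbitrary, and the orientation and altitude would in practice be chosen from bathymetry, currents and sensor geometry.

def lawnmower_waypoints(width, height, line_spacing):
    """Waypoints for a lawnmower pattern over an axis-aligned rectangle at the origin."""
    waypoints = []
    x, heading_up = 0.0, True
    while x <= width:
        # Run a full track, then step sideways by one line spacing.
        ys = (0.0, height) if heading_up else (height, 0.0)
        waypoints.append((x, ys[0]))
        waypoints.append((x, ys[1]))
        heading_up = not heading_up
        x += line_spacing
    return waypoints

# Example: 200 m x 500 m area with 40 m track spacing (e.g. from the sonar swath width).
for wp in lawnmower_waypoints(200.0, 500.0, 40.0):
    print(wp)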

If the path-finding algorithm is not too computationally expensive, it is possible and advantageous to replan the path during the mission. The ability to replan enables the vehicle to account for internal and external changes, and removes the requirement of an a priori known environment. For example, in coverage planning with a lawnmower pattern, the pattern can be replanned in each turn to optimize the track spacing based on actual seafloor coverage, or the search direction based on discovered bathymetry and sea currents (Hagen et al., 2007). The path must also be replanned to account for inputs from other parts of the system, such as when the navigation system requests a GPS fix. Some missions cannot, by definition, have an explicit, planned path for the vehicle. An example is adaptive data collection, where the vehicle may perform additional measurements on objects found in the sensor data or may follow a sensor data gradient (e.g. temperature, salinity or plankton density). In both cases, the resulting path is completely dependent on the sensed environment. Such missions must be goal-oriented. If planning is employed, replanning is necessary each time the gradient estimate is updated or an object is found. The frequency of replanning may place severe limitations on the computational time allowed for the planning algorithm, especially in the case of gradient following. If the limitations are too strict, the planning task must be reduced by giving more responsibility to the reactive layer.
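
As a rough sketch of the gradient-following case, the example below fits a local linear model to recent sensor samples and replans the heading along the estimated gradient. The linear model, the sample set and the least-squares fit are illustrative choices, not a prescribed method.

import math
import numpy as np

def gradient_heading(positions, measurements):
    """Replanned heading for gradient following (e.g. temperature or salinity).

    A local linear model z = a*x + b*y + c is fitted to recent samples by least
    squares; the replanned heading points along the estimated gradient (a, b).
    This would be recomputed every time the gradient estimate is updated, which
    bounds the time available for the planning step.
    """
    positions = np.asarray(positions, float)
    A = np.column_stack([positions, np.ones(len(positions))])
    (a, b, _), *_ = np.linalg.lstsq(A, np.asarray(measurements, float), rcond=None)
    return math.atan2(b, a)                        # heading of steepest increase (rad)

# Four recent samples of a scalar field; the heading points towards higher values.
print(gradient_heading([[0, 0], [10, 0], [0, 10], [10, 10]], [10.0, 10.5, 10.2, 10.7]))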
