The idea is to randomly generate a point in the search region and use it as the starting point for an iterative process. If successful, the iterative process returns an approximate feasible point, to within a specified tolerance. We can repeat this procedure until a feasible point is found, or we can repeat it a fixed number of times and, among the approximate feasible points found (if any), select the one with the minimum objective function value. The algorithm here follows the second pattern.
Algorithm 2 (Random search with iterative location)
DO for I=1 to Ntries
1. Randomly generate a point in the search region.
2. Take the point generated in step 1 as an initial guess and call
   a routine that iteratively finds an approximate feasible point.
3. Check if the output in step 2 is an approximate feasible point.
END DO
End Algorithm 2
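The loop above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the names (`random_search`, `find_feasible`), the box-shaped search region `[lo, hi]`, and the convention that the iterative routine returns a point together with a success flag are all assumptions made for the example.

```python
import numpy as np

def random_search(find_feasible, lo, hi, objective, n_tries, rng=None):
    """Algorithm 2 sketch: repeat random starts, keep the approximate
    feasible point with the smallest objective value (None if no try
    succeeds).  `find_feasible` is assumed to return (point, converged)."""
    rng = np.random.default_rng(rng)
    best, best_val = None, np.inf
    for _ in range(n_tries):
        # Step 1: randomly generate a point in the box search region.
        x0 = rng.uniform(lo, hi)
        # Step 2: iteratively locate an approximate feasible point.
        x, ok = find_feasible(x0)
        # Step 3: keep it only if it is approximately feasible and better.
        if ok and objective(x) < best_val:
            best, best_val = x, objective(x)
    return best
```

Returning `None` when every try fails lets the caller distinguish "no feasible point found" from a genuine result.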
In the above algorithm, step 2 is the core part; most of the execution time will be spent there, so the efficiency of step 2 determines the efficiency of the entire algorithm. To perform step 2, our routine takes advantage of a generalized Newton method for underdetermined systems. The method is iterative, with locally quadratic convergence under normal conditions (for details, see [7]). The iterations proceed according to the following formula:
$$x_{k+1} = x_k - J(x_k)^{+} F(x_k), \qquad k = 0, 1, 2, \ldots,$$

where $F(x)$ is the vector of equality constraint functions, $J(x_k)$ is the
Jacobian matrix of $F$ at $x_k$, and $J(x_k)^{+}$ is the pseudoinverse
(Moore-Penrose inverse) of $J(x_k)$.
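The iteration can be sketched in a few lines. This is an illustrative implementation of the formula, not the routine from [7]; the function names, tolerance, and iteration cap are assumptions. For an underdetermined system ($m < n$ constraints), `numpy.linalg.pinv` supplies the Moore-Penrose pseudoinverse, so each step is the minimum-norm Newton correction.

```python
import numpy as np

def generalized_newton(F, J, x0, tol=1e-10, max_iter=50):
    """Iterate x_{k+1} = x_k - J(x_k)^+ F(x_k) until ||F(x)|| <= tol.

    F : callable returning the m-vector of constraint residuals.
    J : callable returning the m-by-n Jacobian of F.
    Returns (x, converged).
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) <= tol:
            return x, True
        # Moore-Penrose pseudoinverse gives the minimum-norm Newton step.
        x = x - np.linalg.pinv(J(x)) @ Fx
    return x, bool(np.linalg.norm(F(x)) <= tol)
```

For example, finding a point on the unit circle ($x_1^2 + x_2^2 = 1$, one equation in two unknowns) converges in a handful of iterations from a generic starting point.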
The handling of inequality constraints and bound constraints is another important issue, since they also directly affect the efficiency of our routine for step 2. We treat inequality and bound constraints in the next section.