One issue that often confuses users is the difference between local and global optimization in Maple. I'll give just an overview here and will explore specific optimization topics more deeply in future blog entries. Please note that I'm covering only the numeric optimization methods; I'll leave discussion of Maple's exact methods to others more knowledgeable in those areas.

The Optimization package is built into Maple and is available to all users. It is primarily a local optimization package, mostly using routines provided by the Numerical Algorithms Group (NAG). There is just one exception: a global branch-and-bound method is available for univariate problems with finite bounds and no general constraints, accessible through the NLPSolve command.
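
For example, a bounded univariate problem such as the toy one below falls into that category. As I recall, the branch-and-bound method is applied automatically in this situation, and you can also request it explicitly through the method option:

> with(Optimization):
> NLPSolve(x*sin(x), x = 0..10);                          # global minimum on [0, 10]
> NLPSolve(x*sin(x), x = 0..10, method = branchandbound); # same, requested explicitly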

"Local optimization" means that the method attempts to find a local minimum, and there is no guarantee that you will get the global minimum for the problem. There are some cases (convex problems like linear programs) where the local minimum found will in fact be the global minimum. However, for most situations, don't expect "the best answer" when using a local optimization algorithm.

Local optimization algorithms generally depend on derivatives of the cost function and constraints to aid in the search. Thus, there are strict requirements on the input. For example, NLPSolve requires that these functions be real-valued and twice continuously differentiable. Occasionally, NLPSolve will work even when the input does not fulfill these requirements, but there is no guarantee it will get the expected answer. I often get questions like the following: "NLPSolve works just fine for Case A, which does not satisfy the input requirements, but it fails for Case B, which is only slightly different. Is there a bug in the program?" The answer is often: "NLPSolve is working as expected. It could have failed for Case A but did not, because you were lucky."
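
As a made-up illustration, the objective below is continuous but not differentiable everywhere, so it violates the stated requirements. The default derivative-based method may or may not succeed on it; the derivative-free nonlinearsimplex method (available for unconstrained problems) is a safer match, though it is still a local method:

> with(Optimization):
> f := abs(x) + abs(y - 1):
> NLPSolve(f, initialpoint = {x = 2, y = 2});                            # might work, no guarantee
> NLPSolve(f, initialpoint = {x = 2, y = 2}, method = nonlinearsimplex); # derivative-free alternative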

Local optimization also depends on an initial point. Even though we made the initial point optional for convenience, you really should provide one when there are multiple local minima: which minimum is returned depends on the initial point you provide.
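
Here is a small made-up example with two local minima. Depending on where you start the search via the initialpoint option, you get a different answer (the locations in the comments are approximate):

> with(Optimization):
> f := x^4 - 4*x^2 + x + y^2:
> NLPSolve(f, initialpoint = {x = 1, y = 1});   # shallower local minimum near x = 1.4
> NLPSolve(f, initialpoint = {x = -1, y = 1});  # deeper minimum near x = -1.5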

Maplesoft also offers the Global Optimization Toolbox (GOT), which is a separate add-on to the Maple product. If you really do need a global solution, this is the product you should use. If a local solution will suffice or you have a convex problem, then use the Optimization package, as those algorithms are more efficient for that purpose.

The GOT accepts a much wider range of input than the local methods. For the default multi-start adaptive random search algorithm, only continuity of the objective and constraint functions is required. Again, people often ask me why it works "correctly" with discontinuous functions, and I can only reply that I'm glad it worked, but I can't guarantee it will work on a different example.
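
Assuming the toolbox is installed, a typical call looks like the sketch below; the oscillatory objective is just an illustration. Note that the GOT expects finite bounds on all the variables:

> with(GlobalOptimization):
> GlobalSolve(x^2 + 10*sin(5*x) + y^2 + 10*cos(4*y), x = -5..5, y = -5..5);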

A common misconception about the GOT is that "it always returns the best answer." A more accurate description is that "it returns the best answer found before the termination criteria are met." Because this is a numeric solver using randomized search, not an exact one, the search for the "best answer" continues only until the maximum number of function evaluations has been reached, a time limit has expired, or there has been no change in the cost function over the last N evaluations. The documentation contains a more precise description of the termination criteria and how to adjust the parameters.
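
As a sketch, these are the termination controls as I recall them from the GlobalSolve help page (evaluationlimit, noimprovementlimit, timelimit); please check the documentation for the exact option names and defaults in your version:

> with(GlobalOptimization):
> GlobalSolve(x^2 + 10*sin(5*x), x = -5..5,
>             evaluationlimit = 100000,    # cap on objective evaluations
>             noimprovementlimit = 10000,  # stop after this many evaluations with no improvement
>             timelimit = 60);             # wall-clock limit in seconds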

Having laid out all these limitations, I should say that, in practice, the GOT has proven to be very robust and effective across a wide range of applications. As mentioned, it is frequently successful even in cases where it is not theoretically guaranteed to work. Its engine is the established LGO solver authored by Janos Pinter.

