acer

MaplePrimes Activity


These are replies submitted by acer

@jm64 When you enter,

plot(Phiinverse(1/2^a), a=1..56);

you are asking Maple to evaluate the procedure `Phiinverse` using the unassigned symbolic name `a` (which as yet has no numeric value). That produces a symbolic expression RootOf(erf(_Z)*2^a+2^a-2)*2^(1/2) which (after quite a while) is what gets plotted. The numerical solving of that may encounter some difficulties (using fsolve, I suspect), as you saw.

What you might do instead is either call plot and pass it the operator form, or use uneval quotes to delay the evaluation of Phiinverse until `plot` is ready to start evaluating it for numeric values of parameter `a`. Eg,

Phiinverse :=  x -> Statistics:-Quantile(Normal(0, 1), x):
Phiinverse2 :=  x -> Statistics:-Quantile(Normal(0, 1), x, numeric):

plot('Phiinverse'(1/2^a), a=1..56);
plot(['Phiinverse'(1/2^a), 'Phiinverse2'(1/2^a)], a=1..56);

plot( a->Phiinverse(1/2^a), 1..56);
plot([a->Phiinverse(1/2^a), a->Phiinverse2(1/2^a)], 1..56);

Notice how the range argument differs above, for those uses.
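The premature-versus-delayed evaluation issue is not specific to Maple. Here is a small Python sketch (the routine and class names are made up for illustration) of the same failure mode, and of the operator-form fix:

```python
def plot_like(f, xs):
    # stands in for operator-form plot: it calls f only at numeric points
    return [f(x) for x in xs]

def needs_number(a):
    # hypothetical numeric-only routine: it cannot handle an unassigned
    # symbolic argument, much like numeric solving of RootOf(...) with a free name
    if not isinstance(a, (int, float)):
        raise TypeError("no numeric value for the parameter yet")
    return 1 / 2**a

class UnassignedName:
    # stand-in for Maple's unassigned name `a`
    pass

# Premature evaluation: needs_number is called on the symbolic stand-in
# before any "plotting" begins, and fails.
try:
    plot_like(needs_number(UnassignedName()), [1, 2, 3])
except TypeError:
    pass

# Operator form: pass the procedure itself; evaluation is delayed until
# numeric points are supplied.
ys = plot_like(needs_number, [1, 2, 3])
# ys == [0.5, 0.25, 0.125]
```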

@alex_01 Note that what you've shown is a Warning message, not an Error message (an error stops the calculation, while a warning prints a message and still returns a result).

One simple answer is that you can suppress Warning messages with the setting interface(warnlevel).
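The warning-versus-error distinction works the same way in most languages. A Python sketch (the solver names are made up) of both behaviors, including suppression analogous to interface(warnlevel):

```python
import warnings

def qp_like_solver():
    # a warning reports a condition but the computation still completes
    # and returns a result...
    warnings.warn("necessary conditions met but sufficient conditions not satisfied")
    return 42

def infeasible_solver():
    # ...whereas an error aborts the computation
    raise RuntimeError("no feasible solution found")

with warnings.catch_warnings():
    warnings.simplefilter("ignore")  # analogue of suppressing via interface(warnlevel)
    result = qp_like_solver()
# result == 42, despite the (suppressed) warning
```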

This particular warning is arising out of a call to a NAG routine. Unfortunately, userinfo messages for Optimization don't show the particular NAG function used, in the way that infolevel[LinearAlgebra]:=1 does, say. (I'll submit a suggestion, as an SCR.) But a little debugging reveals that the Maple Warning is being emitted by Optimization:-External:-E04NFA.

> showstat((Optimization::External)::E04NFA,38..49);

(Optimization::External):-E04NFA := proc(x, istate, probtype, n, nclin, a, bl, bu, cvec, h, QPHESS, prmod)
local uselib, Extcall, lda, ldh, lcvec, clambda, liwork, iwork, ifail, lwork, work, obj, iter, iuser, ruser, ax;
       ...
  38   if ifail = 1 then
  39     if probtype = "LP" then
  40       userinfo(1,'Optimization',`non-unique global minimum found`)
         else
  41       Warnings:-Issue("necessary conditions met but sufficient conditions not satisfied")
         end if
       elif ifail = 2 then
  42     Warnings:-Issue("problem appears to be unbounded")
       elif ifail = 3 then
  43     error "no feasible solution found"
       elif ifail = 4 then
  44     Warnings:-Issue("limiting number of iterations reached")
       elif ifail = 5 then
  45     error "maximal degrees of freedom for expansion of reduced Hessian is too small"
       elif ifail = 6 then
  46     error "invalid input parameter to external routine"
       elif ifail = 7 then
  47     error "invalid problem type"
       elif ifail = ('Undefined') then
  48     error "overflow during computation"
       elif ifail = 0 then
  49     NULL
       else
         ...
       end if;
       ...
end proc

The value of the error-level variable `ifail`, after the external call to NAG function e04nfa, is 1. The precise meaning of this can be looked up in NAG's online documentation (section 6 of this page). This seems to indicate that, when H is symmetric positive semidefinite (as is your case), the returned solution may be optimal but not unique. (I'll submit another SCR, for an extra check or more clarity in this situation).

acer

 

@alex_01 You've forgotten to adjust the objective by that factor of 2.
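For readers following along: solvers that take the objective in the common convention (1/2)*x^T.H.x + c^T.x need H doubled when the objective was written as x^T.H.x. A quick NumPy check (not Maple; the numbers are arbitrary) of that bookkeeping:

```python
import numpy as np

# A solver using the (1/2)*x^T.H.x + c^T.x convention needs H doubled
# when your objective was written as x^T.H.x.
H = np.array([[2.0, 1.0],
              [1.0, 3.0]])
x = np.array([1.0, 2.0])

objective = x @ H @ x                  # the objective as originally written
solver_form = 0.5 * (x @ (2 * H) @ x)  # what the solver computes after H -> 2*H
# objective == solver_form == 18.0
```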

@alex_01 You're most welcome.

If I could belabour the notions: when LP or QP problems are passed to Optimization some internal routines of that package analyse the input, dice it and slice it, and produce something like that very same "Matrix form" before passing out to the compiled external solvers.

So supplying the problem in Matrix form has a first benefit that it allows you to formulate a problem without explicitly generating the symbolic equations and formulas for the constraints, bounds, and objective. A float[8] Matrix H takes up much less space than does W^%T.H.W where W is a Vector of named elements (variables).

A second benefit is that, by supplying the data in Matrix form, the package does not have to analyse the symbolic expressions and convert them to Matrix form before passing them along. That overhead is removed.

And the reason that such forms are used is that the compiled (C language) external solvers can evaluate the objective or constraints at a numeric (multidimensional) point very quickly with only the pure numeric data at hand. No callback into Maple to evaluate any symbolic formula is needed or done, for the LP and QP class of problem. The external solvers already have compiled functions to quickly evaluate the objective or constraints -- because their form is known and hard-coded for this class of problem.

The remaining end-user concern is to figure out how (and to remember) not to form the symbolic expressions, either before or after calling QPSolve, where avoidable. Even the Vector W of symbolic variable names may not be needed.

We've seen that a 600 variable QP problem can be solved pretty quickly, and fits reasonably nicely in memory. How large are your largest QP problems?

One problematic scenario for current Optimization is a very large dimension, sparse, QP problem where in principle the nonzero data could fit in memory, but not if it has to be cast as a full `rectangular` storage set of Matrices and Vectors.
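To put rough numbers on that rectangular-storage constraint, here is a NumPy/SciPy sketch (not Maple, and the diagonal Hessian is just an extreme illustrative case) comparing full float64 (float[8]) storage with sparse storage:

```python
import numpy as np
from scipy import sparse

n = 20_000                      # a large-dimension QP Hessian
dense_bytes = n * n * 8         # full float64 rectangular storage

# The same Hessian held sparsely (here, a diagonal) stores only nonzeros
# plus index arrays.
I = sparse.identity(n, format="csr", dtype=np.float64)
sparse_bytes = I.data.nbytes + I.indices.nbytes + I.indptr.nbytes

# dense: 3.2 GB; sparse: well under 1 MB
print(dense_bytes, sparse_bytes)
```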

I've branched off comments that contained (basic, new, or functionality) questions about how to use Maple commands.

Links to branched items are above (just below the end of the original parent Post).

acer

Tell us which part of this, if any, you disagree with or dispute.

You have a quadratic programming (QP) problem with linear constraints.

Your Matrix Cov is symmetric positive definite and so your objective is convex. Your linear constraints bound your objective from below. So you don't need to resort to an expensive global solver. Optimization:-QPSolve should do fine.

QPSolve finds the answer for your example in less than 1/10th sec on a fast machine, about 10 times faster than NLPSolve (which will use evalhf for this example to evaluate objective and constraints, and compiled C for the external solver).

The most efficient way to solve this in current Maple is to use the Matrix-form of input to QPSolve. This means that all data and coefficients get passed as data in float[8] rtables. It means you just supply Cov, and not W^%T.Cov.W, as the objective. In your MCone.mw attachment you did not use this more efficient form; instead you used the more commonplace and less efficient algebraic-form.

When using that Matrix-form for the QP solver there is no need to form the objective or the constraints as explicit symbolic formula or (in)equalities. This lets you encapsulate much larger problems, using less memory.
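Maple's Matrix-form is specific to QPSolve, but the underlying idea -- handing the solver raw numeric coefficient arrays instead of symbolic formulas -- can be sketched in Python with SciPy (a generic local minimizer here, not a dedicated QP solver):

```python
import numpy as np
from scipy.optimize import minimize

# Objective supplied purely as numeric arrays (the analogue of Matrix-form):
# minimize (1/2) x^T H x + c^T x, whose minimum is at x = (1, 2).
H = np.array([[2.0, 0.0],
              [0.0, 2.0]])
c = np.array([-2.0, -4.0])

# No symbolic expression is ever formed; the coefficient arrays suffice.
res = minimize(lambda x: 0.5 * x @ H @ x + c @ x, x0=np.zeros(2))
# res.x is approximately [1.0, 2.0]
```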

For this kind of problem, DirectSearch's global solver will likely be less efficient than GlobalOptimization (which would use evalhf). And GlobalOptimization in turn would be far less efficient than a compiled local solver like NLPSolve (which would be ok to use here, due to convexity, etc). And NLPSolve in turn would be measurably less efficient than a specialized compiled QP solver (even one using an active-set based algorithm) which would evaluate objective and constraints in compiled C code (possible, because coefficients in hardware rtables are all that's needed!). And for very large problems, or when absolute speed is critical, the Matrix-form of QPSolve is much more efficient than the algebraic-form which uses the explicit symbolic formulae.

An interior-point method for LP problems is not going to give you an interior-point method for QP problems. So an implementation of the Affine Scaling algorithm is not going to help with your QP problem (even when cast as a Second-Order Cone Problem, or SOCP).

But why care so much about SOCP anyway? What problem of yours is too large for Matrix-form QPSolve, or not solved fast enough by it? Do you have a concrete portfolio optimization problem for which you can demonstrate that (the most efficient currently possible) Matrix-form QPSolve does not have good enough performance? Can you upload it?

Suppose that you demonstrate a SOC Problem for which Matrix-form QPSolve is too slow. You can see from the Wikipedia page for SOCP that one of the existing semidefinite solvers (CSDP) is due to an original project containing H. Wolkowitz, a math professor at U Waterloo who is also listed in the reference paper of the newer LP interior-point method of LPSolve. You could submit an SCR for CSDP in future Maple.

Or perhaps you could compile a shared library for CSDP now, and connect to it using wrapperless external calling. (I have not tried this.)

acer

Z:=evalhf(exp(354+I));

    2.97086840260185732*10^153 + 4.62685339914530199*10^153*I

evalhf(1/Z);

    9.82630470844021329*10^(-155) - 1.53035628577376246*10^(-154)*I

Z:=evalhf(exp(355+I));

    8.07565759353578019*10^153 + 1.25770915178406420*10^154*I

evalhf(1/Z);

                               0.

I think that some hardware float libraries (eg. cephes? I forget offhand. Axel probably knows) might be able to handle boundary-case reciprocals like this with less trouble (ie. without dividing by abs(Z), say, or by taking more special care).

evalhf(abs(Z));

                        Float(infinity)
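The same boundary behavior is easy to reproduce outside Maple. This Python sketch contrasts a naive reciprocal (conjugate over |z|^2, whose denominator overflows) with CPython's scaled complex division, which takes exactly the kind of special care mentioned above:

```python
import cmath

z = cmath.exp(355 + 1j)   # |z| ~ 1.5e154: representable, but |z|^2 is not

# Naive reciprocal: conj(z) / |z|^2. The squared components are each finite,
# but their sum overflows to inf, so the result collapses to 0
# (the same behavior as the evalhf example above).
d = z.real * z.real + z.imag * z.imag   # ~ e^710: overflows to inf
naive = complex(z.real / d, -z.imag / d)

# CPython's complex division scales the operands (Smith's algorithm),
# so the very same reciprocal comes out finite and nonzero.
careful = 1 / z
# naive == 0, while careful * z is approximately 1
```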

acer

@okoolo You can see a list of the possible outputs from Statistics:-Fit in the Help system, here and here.

 

One of the key aspects that makes the Affine Scaling interior point method important is that it can be implemented efficiently.

The above implementation is, unfortunately, quite inefficient. It produces a lot of temporary objects which must be memory-managed. The collectible garbage consists of many unnecessary temporary non-scalar objects (eg. Matrices, Vectors, and lists). Every line but one in the do-loop involves avoidable creation of one or more Matrix/Vector/list objects, and managing that memory is costly. An efficient implementation at the Maple "Library" level would involve operating in-place on float[8] datatype Matrices and Vectors (or Arrays).
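The in-place idea carries over directly to NumPy (used here only as an analogue of float[8] rtables): the same numeric update can either churn out temporary arrays or reuse one buffer, creating no garbage:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 1_000_000)   # analogue of a float[8] Vector
addr_before = x.__array_interface__["data"][0]

# Out-of-place, x = 2.0*x + 1.0, would allocate two temporary arrays.
# In-place, the same update reuses x's buffer:
np.multiply(x, 2.0, out=x)
np.add(x, 1.0, out=x)

addr_after = x.__array_interface__["data"][0]
# addr_after == addr_before: the buffer was updated in place
```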

Also, what is posted is not the Affine Scaling algorithm, which handles an arbitrary LPP. It's a hard-coded example for just a single given LPP.

Readers might be interested to know that Maple 15 has an interior point method implementation suitable for larger, sparse LPPs.

acer
