acer

MaplePrimes Activity


These are replies submitted by acer

Thanks. The error comes, I suppose, from the objective returning an unevaluated Int when the numeric quadrature fails. A better error message would be something like "nonnumeric value of the objective encountered" or similar.

Now, for the Int expression form obtained by passing an unquoted call to E (even if there is no `evalf` in that F), the internal Optimization routines will supply the wrapping evalf. But the numeric quadrature might still fail. One could rewrite F to use evalf(Int(...)) and check whether a numeric value is returned. If not, the quadrature could be retried with a looser epsilon tolerance. And if that too fails, then perhaps some fake large value such as infinity might be returned, although that could take away the smoothness of the objective which most of the NLP solvers expect. A reduced range for the variables might also help in this respect.
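Something along those lines might look like this rough sketch, where `integrand`, `a`, and `b` stand in for whatever is actually inside F, and the looser tolerance and the fallback value are arbitrary choices:

Fsafe := proc(c0, c1, c2)
  local val;
  # `integrand`, `a`, and `b` are placeholders for the actual integrand and range
  # first attempt at the default accuracy
  val := evalf(Int(integrand(x, c0, c1, c2), x = a .. b));
  if not type(val, numeric) then
    # retry the quadrature with a looser accuracy tolerance
    val := evalf(Int(integrand(x, c0, c1, c2), x = a .. b, epsilon = 1e-5));
  end if;
  # if it still fails, return a large fake value (note that this can
  # destroy the smoothness which most of the NLP solvers expect)
  if type(val, numeric) then val else Float(infinity) end if;
end proc: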

acer

Hmm. Shouldn't it instead be this,

> P:=(n,delta)->1-Statistics:-CDF(Statistics:-RandomVariable(
>       NonCentralFRatio(3,3*n,delta)),
>       Statistics:-Quantile(Statistics:-RandomVariable(
>       FRatio(3,3*n)),0.95,numeric),numeric):

> P(1000,0.82);
                                 0.1028025004

It's interesting to see it with delta being a variable argument, though, for the sake of plotting.
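For example, picking a range for delta merely for illustration, one could do,

plot(d -> P(1000, d), 0 .. 5);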

acer

Hi Robert,

That Warning about first-order conditions is a (somewhat little-known) bug that crops up when using the operator-form calling sequence of NLPSolve. The (Trackered) problem seems to relate to numeric computation of derivatives, i.e. gradients of objectives and Jacobians of constraints.

I have had some prior success with two types of workaround for this bug. One is to use expression form rather than operator form. In this case, passing unquoted E(c0, c1, c2) to NLPSolve results in an unevaluated integral, which seems "good enough" to serve as an expression for this particular need.

The other type of workaround I've used before is to manually and explicitly supply procedures for the gradient (and jacobian if necessary for non-simple constraints). For that, I have previously used fdiff, because in such cases I expect that NLPSolve has already tried codegen:-Gradient (A.D.) and failed. I have previously mailed examples of such to Joe R., but I'd like to find time to post them here too.
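Roughly, that approach looks something like the sketch below, where `F` stands for the three-argument objective procedure. The gradient procedure receives the current point as a Vector V, writes the partials into the Vector W, and gets passed via the objectivegradient option. (The exact calling-sequence details would of course need adjusting for the actual problem.)

gradF := proc(V::Vector, W::Vector)
  local i;
  for i from 1 to 3 do
    # numeric partial derivative of F w.r.t. its i-th argument
    W[i] := fdiff(F, [i], [V[1], V[2], V[3]]);
  end do;
  NULL;
end proc:

Optimization:-NLPSolve(F, objectivegradient = gradF,
    initialpoint = [.1035805, 1.001279, -.561381]);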

Of course, I too replaced `int` with `Int` inside the `F` procedure body.

I also specified the variables in my NLPSolve call, i.e.,

> Optimization:-NLPSolve(E(c0, c1, c2),
>        variables=[c0,c1,c2],
>        initialpoint=[c0=.1035805,c1=1.001279,c2=-.561381]);
Error, (in Optimization:-NLPSolve) complex value encountered

I haven't yet investigated whether this last error is genuine, whether some tiny imaginary component in the numeric integral result might just be a negligible artefact of floating-point computation, or how to work around it.
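If the imaginary component does turn out to be mere float noise, one conceivable workaround (only a sketch, not something I've verified for this problem) would be to strip it before the solver sees the value, along the lines of,

# fnormal rounds tiny floats to zero, and simplify(...,zero) then
# removes the resulting 0.*I term from the value handed back to the solver
Eclean := (c0, c1, c2) -> simplify(fnormal(evalf(E(c0, c1, c2))), zero):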

acer

Shouldn't the call to Probability in proc f instead be to ST:-Probability?

acer

It seems plausible that there is some (other) example for which the "I" terms are not yet collected in the simplify@evalc result, while the direct solve result is not in the desired form.

Anyway, this is all very simple tweaking, so arguing about it is mere quibbling. And beauty is in the eye of the beholder. So, yet another quibble: someone else might prefer the result of,

map((normal@Re)+(normal@Im)*I,simplify(evalc([sol])));

acer

While the random 10x10 Matrix is quite likely to have a full set of linearly independent eigenvectors, it could of course just be that the OP used RandomMatrix for illustration. It may be that, for the OP's typical problems, a defective Matrix as input is more likely. (We don't yet know.)

Such decomposition approaches to computing the floating-point exponential generally fail for defective Matrices. See Method 14 of section 6 of the well-known paper by Moler and Van Loan. I doubt that this method gets "better" by applying it to the mixed float-symbolic problem, which is what we have here for an unassigned symbol `t`. (For a float Matrix, one might be able to compute the condition number quickly, as an initial test.)
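For a purely float Matrix, such an initial test might look something like the following sketch (the cutoff value is arbitrary),

with(LinearAlgebra):
A := evalf(RandomMatrix(10, 10)):
(vals, vecs) := Eigenvectors(A):
# a huge condition number for the eigenvector Matrix suggests it is
# close to defective, and V . diag(exp(lambda*t)) . V^(-1) should
# not be trusted
if ConditionNumber(vecs) > 1e8 then
  print("ill-conditioned eigenvector basis; use another method");
end if;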

Naturally, I fully expect that Robert is aware of all this. I only mention it for the possible benefit of others.

acer

"MMA", sometimes given as "Mma", is a popular abbreviation for Mathematica,

acer

"MMA", sometimes given as "Mma", is a popular abbreviation for Mathematica,

acer

Are N.T.'s papers on these topics purely about numeric (and not symbolic) computation? Maple doesn't seem too slow for that. It seems to be the symbolic computation and the interpolation (even though the float eigenvalues used here are computed very quickly) that get bogged down.

acer
