These are replies submitted by acer

Let's set aside evalhf for a moment, along with forcing a particular quadrature method, and consider the numerical difficulties. One can crank up the working precision of `evalf/int` while keeping the numerical quadrature accuracy tolerance loose, supposedly independently of Digits.

> integrand:=proc(t)
>     return subs({_n = n, _t = t}, eval(proc(x)
>     local k;
>         return x*(1 - add(binomial(_n, k)*(1/10*x - _t/10 + 1/2)^k*
>         (1/2 - 1/10*x + _t/10)^(_n - k), k = 0 .. ceil(10 - 1/5*x) - 2));
>     end proc));
> end proc:
>
> n:=7:
> numericIntegral:=t->evalf(Int(integrand(t),t-5..t+5,
>        digits=DD,epsilon=EE)):
>
> # These first 2 plots suggest a smooth monotonic function,
> # which might be expected to cross the x-axis "nicely".
> Digits,DD,EE:=10,10,1.0*10^(-3):
> plot(numericIntegral,5.0..5.000000001);

> Digits,DD,EE:=10,100,1.0*10^(-3):
> plot(numericIntegral,5.0..5.000000001);

> # What, then, should we make of these?
> Digits,DD,EE:=20,100,1.0*10^(-3):
> plot(numericIntegral,5.0..5.000000001);

> Digits,DD,EE:=100,100,1.0*10^(-3):
> plot(numericIntegral,5.0..5.000000001);

Moreover,

> Digits,DD,EE:=100,100,1.0*10^(-3):
> fsolve(numericIntegral,4.0..5.0): evalf[40](%);
                   4.768430018711191230674619128944015414870

> numericIntegral(%%): evalf[40](%);
                                                            -116
               0.1947217706342926765829506543174745536626 10

> plot(numericIntegral,4.0..5.0); # looks ok
> plot(numericIntegral,4.76..4.80); # looks ok

> Digits,DD,EE:=10,10,1.0*10^(-3):
> plot(numericIntegral,4.0..5.0); # the mess

So, can the jitter in numericIntegral be put down to insufficient working precision combined with too loose an accuracy tolerance during the numeric quadrature?

Note: I believe I got the same results even with a liberal sprinkling of forget(evalf) and forget(`evalf/int`).
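
For reference, clearing the relevant remember tables between runs looks like this (a minimal sketch):

> forget(evalf): forget(`evalf/int`):
> numericIntegral(4.77);  # re-run after the remember tables are cleared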

acer

No worries.

Does it work for you if you simply issue,

> f();

without wrapping it in a DocumentTools:-Do call?

I assumed that anyone would run the proc `f` that I'd provided. If one doesn't call it, a proc cannot do much.

acer

Obviously a Plot0 component is needed, since my posted code accesses one. I explicitly mentioned the need for a Plot0 component in my original reply.

Also, for me the code works "as is" in Maple 13.01, updating the image without any extra DocumentTools:-Do. In fact, that's the whole point: DocumentTools:-Do does not have the ability to refresh incrementally.
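
For anyone reconstructing the setup, the incremental updating is roughly of this kind (a sketch only; it assumes a Plot component named "Plot0", and the 'refresh' option of DocumentTools:-SetProperty, which may not be present in all versions):

for i from 1 to 50 do
    # push a new plot into the component, forcing an immediate redraw
    DocumentTools:-SetProperty("Plot0", 'value',
        plot(sin(x + i/10.0), x = 0 .. 2*Pi), 'refresh' = true);
end do: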

acer

I think that one can make this approach faster without having to reduce Digits or the 'digits' option of evalf/Int. This might be done by ensuring that the result (a procedure!) returned by calling the integrand procedure is evalhf'able. That may be accomplished like so,

integrand:=proc(t)
    return subs({_n = n, _t = t}, eval(proc(x)
    local k;
        return x*(1 - add(binomial(_n, k)*(1/10*x - _t/10 + 1/2)^k*
        (1/2 - 1/10*x + _t/10)^(_n - k), k = 0 .. ceil(10 - 1/5*x) - 2));
    end proc));
end proc:

One can test that. (Using your original, the following would produce an error from evalhf, about lexical scoping.)

> n:=45:
> f:=integrand(10):
> evalhf(f(7));
                              4.91780284967950010

The reason this can be faster is that some of the compiled, external quadrature routines first try to make callbacks into Maple (to evaluate the integrand at a given point) in the faster evalhf mode.

With lexical scoping avoided, I got the following sort of performance,

> numericIntegral:=t->evalf(Int(integrand(t),t-5..t+5,epsilon=1e-5,method =_d01ajc)):

> st:=time():RootFinding:-NextZero(numericIntegral,-10);time()-st;
                                 -1.064932177
 
                                     0.601
 
> n:=35:
> st:=time():RootFinding:-NextZero(numericIntegral,-10);time()-st;
                                 -1.352617315
 
                                     0.695

Using the original, with evalhf disabled due to the lexical scoping in the proc returned by the integrand routine, I got the following performance, fifteen times slower.

> integrand:=proc(t)
>     return proc(x)
>      local k;
>         return x*(1 - add(binomial(n, k)*(1/10*x - t/10 + 1/2)^k*
>         (1/2 - 1/10*x + t/10)^(n - k), k = 0 .. ceil(10 - 1/5*x) - 2));
>     end proc;
> end proc:

> n:=45:
> f:=integrand(10):
> evalhf(f(7));
Error, lexical scoping is not supported in evalhf
> evalf(f(7));
                                  4.917802850

> st:=time():RootFinding:-NextZero(numericIntegral,-10);time()-st;

                                 -1.064932177
 
                                     9.065

Bytes-used went up by about 650MB during that last computation, i.e., garbage collection from the software-float callbacks in the evalf fallback mode. Now, I was unable to use fsolve with my approach, so I used NextZero instead. (I haven't investigated why.) I consider that an acceptable tradeoff.

acer

Thanks. The error presumably comes from the objective returning an unevaluated Int when the numeric quadrature fails. A better error message would be something like "nonnumeric value of the objective encountered" or similar.

Now, for an expression-form Int obtained by passing an unquoted call to E (even if there is no `evalf` in that F), the internal Optimization routines will supply the wrapping evalf. But the numeric quadrature might still fail. One could rewrite F, using evalf(Int(...)), to check whether a numeric value is returned. If not, it could be retried with a looser epsilon tolerance. And if that too fails, then some large fake value such as infinity might be returned, although that could take away the smoothness of the objective, which most of the NLP solvers expect. A reduced range for the variables might also help in this respect.
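
A rough sketch of that retry idea (the integrand, range, and tolerances here are placeholders, not the original problem's):

F := proc(c0, c1, c2)
    local r;
    # first attempt, at the tighter tolerance
    r := evalf(Int(exp(-c0*x^2) + c1*x + c2, x = 0 .. 1, epsilon = 1e-9));
    if not type(r, numeric) then
        # retry with a looser accuracy tolerance
        r := evalf(Int(exp(-c0*x^2) + c1*x + c2, x = 0 .. 1, epsilon = 1e-5));
    end if;
    if type(r, numeric) then r;
    else Float(infinity);  # last resort; this may destroy smoothness of the objective
    end if;
end proc: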

acer

Hmm. Shouldn't it instead be this,

> P:=(n,delta)->1-Statistics:-CDF(Statistics:-RandomVariable(
>       NonCentralFRatio(3,3*n,delta)),
>       Statistics:-Quantile(Statistics:-RandomVariable(
>      FRatio(3,3*n)),0.95,numeric),numeric):

> P(1000,0.82);
                                 0.1028025004

It's interesting to see it with delta as a variable argument, though, for the sake of plotting.
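
For instance (a sketch; n = 1000 is just an illustrative choice),

> plot(delta -> P(1000, delta), 0 .. 5, labels = ["delta", "P"]);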

acer

Hi Robert,

That Warning about first-order conditions is a (somewhat little-known) bug that crops up when using the operator-form calling sequence of NLPSolve. The (Trackered) problem seems to relate to the numeric computation of derivatives, i.e., gradients of objectives and Jacobians of constraints.

I have had some prior success with two types of workaround for this bug. One is to use expression form rather than operator form. In this case, passing an unquoted E(c0, c1, c2) to NLPSolve results in an unevaluated integral, which seems "good enough" to serve as an expression for this particular need.

The other type of workaround I've used before is to manually and explicitly supply procedures for the gradient (and for the Jacobian, if necessary for non-simple constraints). For that, I have previously used fdiff, because in such cases I expect that NLPSolve has already tried codegen:-Gradient (automatic differentiation) and failed. I have previously mailed examples of such to Joe R., but I'd like to find time to post them here too; a rough sketch of the idea follows.
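
As a minimal sketch of that fdiff-based gradient workaround (using the Matrix-form calling sequence of NLPSolve and a hypothetical two-variable objective, not this thread's problem):

obj := proc(V :: Vector)
    # hypothetical objective, returned as a numeric quadrature
    evalf(Int(exp(-V[1]*x^2) + V[2]*x, x = 0 .. 1));
end proc:
objfw := (a, b) -> obj(Vector([a, b])):  # scalar-argument wrapper for fdiff
objgrad := proc(V :: Vector, G :: Vector)
    G[1] := fdiff(objfw, [1], [V[1], V[2]]);  # partial w.r.t. the first slot
    G[2] := fdiff(objfw, [2], [V[1], V[2]]);  # partial w.r.t. the second slot
    NULL;
end proc:
Optimization:-NLPSolve(2, obj, 'objectivegradient' = objgrad,
                       'initialpoint' = Vector([1.0, 1.0]));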

Of course, I too replaced `int` with `Int` inside the `F` procedure body.

I also specified the variables in my NLPSolve call, i.e.,

> Optimization:-NLPSolve(E(c0, c1, c2),
>        variables=[c0,c1,c2],
>        initialpoint=[c0=.1035805,c1=1.001279,c2=-.561381]);
Error, (in Optimization:-NLPSolve) complex value encountered

I haven't yet investigated whether this last error is truthful, whether some tiny imaginary component in the numeric integral's result might just be a negligible artefact of floating-point computation, or how to work around such.
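
If it did turn out to be a mere float artefact, one could imagine guarding against it roughly like this (a sketch; the tolerance is arbitrary, and E is the objective from this thread):

Esafe := proc(a, b, c)
    local r;
    r := evalf(E(a, b, c));
    # discard the imaginary component only when it is negligibly small
    if type(r, complex(numeric)) and abs(Im(r)) < 1e-10 * max(1.0, abs(r)) then
        Re(r);
    else
        r;
    end if;
end proc: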

acer

Shouldn't the call to Probability in proc f instead be to ST:-Probability?
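
For illustration (a sketch; the binding ST := Statistics and the proc body are my own guesses at the setup):

ST := Statistics:
X := ST:-RandomVariable(Normal(0, 1)):
f := proc()
    # an unqualified Probability(...) here would reference the global name;
    # the package export must be reached as ST:-Probability
    ST:-Probability(X > 1, 'numeric');
end proc:
f();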

acer
