acer

MaplePrimes Activity


These are replies submitted by acer

Thanks, Alec. The minor mystery is now solved (see your Edit).

But it does puzzle me that evalf/Int can do so much extra (symbolic?) work, digging away at the integrand when it is supplied in expression form. I suppose that passing the integrand as a procedure/operator merely tricks it into treating the integrand as a black box that cannot be probed. It's a little worrisome that one has to use a cheap trick, or know which forced method is appropriate, in order to disable that analysis. It might be nicer to have an optional parameter that controls it, e.g. discont=false (assuming that I'm right and that it is discontinuity checking that makes the difference).
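For illustration (a made-up integrand with a singularity, not the one from this thread), the two calling forms differ like so:

> # Expression form: evalf/Int can inspect the integrand symbolically,
> # e.g. probing for the singularity at x=1.
> evalf(Int(1/sqrt(abs(x - 1)), x = 0 .. 2));

> # Operator form: the integrand is a black-box procedure, so no
> # symbolic analysis (including discont probing) gets attempted.
> evalf(Int(x -> 1/sqrt(abs(x - 1)), 0 .. 2));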

acer

Are there any ideas for a more lightweight interface to mapleprimes which might be more convenient on a mobile device (e.g. a smartphone)?

I am thinking about the sorts of issues that one encounters when accessing a dense site from a mobile device. There are a few sites (google, yahoo, wikipedia, facebook) where a specialized entry-point can make a world of difference.

Accessing the present (v.1) mapleprimes isn't that great on a smartphone. I am wondering whether the imminent v.2 mapleprimes might eventually offer a better experience.

acer

Hi Alec,

On my 64-bit Linux Maple 13.01, the operator form I showed and the forced _d01amc method took very similar times. Trying each many times, always in fresh sessions (plotting from 2..100, say), showed as much timing variation for either alone as between the two of them.

I have found that the disable-discont-probing-using-operator trick can avoid timing blowups for some finite ranges too. Using forced methods like _d01amc for semi-infinite ranges, or _d01akc for oscillating integrands, etc., means that one has to remember or look up their names and purposes. So I tend to advise trying the operator form first for this trick (if one believes that avoiding the discont checks carries little risk).
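A sketch of the two alternatives, using a stand-in integrand rather than the one from this thread:

> # Forced NAG method for a semi-infinite range (one must know its name):
> evalf(Int(exp(-x)/(1 + x^2), x = 0 .. infinity, method = _d01amc));

> # Operator form: no method name to remember, and discont probing is skipped.
> evalf(Int(x -> exp(-x)/(1 + x^2), 0 .. infinity));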

I saw plot output for only the range 100..110 using either method. But I think I might know what you mean. In the Standard GUI, a very small value is not plotted if a much larger value is also shown. So, for example, plotting f (or PP) from 100 to 200 shows the plotted red line only up to about 118 or so. This seems to be a Standard plotting bug. The same thing happens when plotting from only 200 to 300, or from only 300 to 400. I wonder if it's a single-precision display issue, or something similar. It seems to have trouble whenever the plotted red line varies by more than a factor of about 10^8. The problem does not seem to occur in the commandline interface.
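A hypothetical probe of that scale limit (not the f from this thread, just a curve whose values span far more than a factor of 10^8):

> # If the 10^8 guess is right, the Standard GUI may stop drawing this
> # curve once its values fall below roughly 1e-8 of the maximum shown.
> plot(x -> exp(-x), 100 .. 200);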

acer

It was slightly more interesting when the arguments could be passed separately, as opposed to this code, where the maple command has to be preformatted. Here, some user or other higher-level program must format or type out the maple command. It's really just answering a different (IMO easier) question.

I guess it depends on who's using it, and for what purpose and in what manner.

For example, this code could be more awkward to work with in a scenario where various commandline arguments were intended to be injected at multiple disjoint locations between distinct, given maple code fragments.

acer

The HDF5 technology suite is not just a storage format. It includes a library which implements an application programming interface (API) in C. And Maple has a C API too (via external calling or custom wrappers).

So it seems to me that it shouldn't be too hard to get Maple to talk to some parts of the HDF5 library, via Maple's external calling interface. For example, one might have Maple access the HDF5 library's H5TBread_table function.

For other aspects of HDF5 structure (groups, B-trees), one natural question is: what do you want them to end up as in Maple? For pure datasets, the matter seems simpler.
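As a rough, untested sketch of the external-calling direction (the library name and the hid_t sizes below are assumptions about a particular HDF5 build; H5TBread_table itself has a more involved signature, so this merely opens a file):

> # Hypothetical define_external wrapper for H5Fopen; assumes libhdf5.so
> # is findable and that hid_t is a 64-bit integer in this HDF5 build.
> H5Fopen := define_external('H5Fopen',
>        'filename'::string,
>        'flags'::integer[4],
>        'fapl_id'::integer[8],
>        'RETURN'::integer[8],
>        'LIB' = "libhdf5.so"):
> fid := H5Fopen("data.h5", 0, 0);  # H5F_ACC_RDONLY=0, H5P_DEFAULT=0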

acer

Let's set aside evalhf for a moment, along with any forced quadrature method, and try to consider the numerical difficulties. One can crank up the working precision of `evalf/int` while keeping the numeric quadrature accuracy requirement loose, supposedly independently of Digits.

> integrand:=proc(t)
>     return subs({_n = n, _t = t}, eval(proc(x)
>     local k;
>         return x*(1 - add(binomial(_n, k)*(1/10*x - _t/10 + 1/2)^k*
>         (1/2 - 1/10*x + _t/10)^(_n - k), k = 0 .. ceil(10 - 1/5*x) - 2));
>     end proc));
> end proc:
>
> n:=7:
> numericIntegral:=t->evalf(Int(integrand(t),t-5..t+5,
>        digits=DD,epsilon=EE)):
>
> # These first 2 plots suggest a smooth monotonic function,
> # which might be expected to cross the x-axis "nicely".
> Digits,DD,EE:=10,10,1.0*10^(-3):
> plot(numericIntegral,5.0..5.000000001);

> Digits,DD,EE:=10,100,1.0*10^(-3):
> plot(numericIntegral,5.0..5.000000001);

> # What, then, should we make of these?
> Digits,DD,EE:=20,100,1.0*10^(-3):
> plot(numericIntegral,5.0..5.000000001);

> Digits,DD,EE:=100,100,1.0*10^(-3):
> plot(numericIntegral,5.0..5.000000001);

Moreover,

> Digits,DD,EE:=100,100,1.0*10^(-3):
> fsolve(numericIntegral,4.0..5.0): evalf[40](%);
                   4.768430018711191230674619128944015414870

> numericIntegral(%%): evalf[40](%);
               0.1947217706342926765829506543174745536626e-116

> plot(numericIntegral,4.0..5.0); # looks ok
> plot(numericIntegral,4.76..4.80); # looks ok

> Digits,DD,EE:=10,10,1.0*10^(-3):
> plot(numericIntegral,4.0..5.0); # the mess

So, can the jitter in numericIntegral be put down to insufficient working precision combined with too loose an accuracy tolerance during the numeric quadrature?

Note: I think I got the same results even with a liberal sprinkling of forget(evalf) and forget(`evalf/int`).

acer

No worries.

Does it work for you if you simply issue,

> f();

without wrapping it in a DocumentTools:-Do call?

I assumed that anyone would run the proc `f` that I'd provided. If one doesn't call it, a proc cannot do much.

acer

Obviously a Plot0 component is needed, since my posted code accesses one. I explicitly mentioned the need for a Plot0 component in my original reply.

Also, for me the image updates with the code "as is", in Maple 13.01, without any extra DocumentTools:-Do. In fact, that's the whole point: DocumentTools:-Do does not have the ability to refresh incrementally.
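(A hedged aside, and not the code from my original reply: in Maple versions that provide DocumentTools:-SetProperty, an incremental refresh of an embedded Plot component named "Plot0" can be sketched roughly like this.)

> with(DocumentTools):
> for i from 1 to 10 do
>     # refresh=true forces the GUI to redraw the component on each pass,
>     # which is the incremental behaviour that plain Do cannot provide.
>     SetProperty("Plot0", 'value', plot(sin(i*x), x = 0 .. 2*Pi),
>                 'refresh' = true);
> end do: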

acer

I think that one can make this approach faster without having to reduce Digits or the 'digits' option of evalf/Int. This can be done by ensuring that the result (itself a procedure!) of calling the integrand procedure is evalhf'able. That may be accomplished like so,

integrand:=proc(t)
    # The subs injects the current values of n and t into the returned
    # procedure, so it contains no lexically scoped names and thus
    # remains evalhf'able.
    return subs({_n = n, _t = t}, eval(proc(x)
    local k;
        return x*(1 - add(binomial(_n, k)*(1/10*x - _t/10 + 1/2)^k*
        (1/2 - 1/10*x + _t/10)^(_n - k), k = 0 .. ceil(10 - 1/5*x) - 2));
    end proc));
end proc:

One can test that. (Using your original, the following would produce an error from evalhf, about lexical scoping.)

> n:=45:
> f:=integrand(10):
> evalhf(f(7));
                              4.91780284967950010

The reason this can be faster is that some of the compiled, external quadrature routines first try to make callbacks into Maple (to evaluate the integrand at a given point) in the faster evalhf mode.

With lexical scoping avoided, I got the following sort of performance,

> numericIntegral:=t->evalf(Int(integrand(t),t-5..t+5,epsilon=1e-5,method=_d01ajc)):

> st:=time():RootFinding:-NextZero(numericIntegral,-10);time()-st;
                                 -1.064932177
 
                                     0.601
 
> n:=35:
> st:=time():RootFinding:-NextZero(numericIntegral,-10);time()-st;
                                 -1.352617315
 
                                     0.695

Using the original, with evalhf disabled due to the lexical scoping in the proc returned by the integrand routine, I got the following performance, fifteen times slower.

> integrand:=proc(t)
>     return proc(x)
>      local k;
>         return x*(1 - add(binomial(n, k)*(1/10*x - t/10 + 1/2)^k*
>         (1/2 - 1/10*x + t/10)^(n - k), k = 0 .. ceil(10 - 1/5*x) - 2));
>     end proc;
> end proc:

> n:=45:
> f:=integrand(10):
> evalhf(f(7));
Error, lexical scoping is not supported in evalhf
> evalf(f(7));
                                  4.917802850

> st:=time():RootFinding:-NextZero(numericIntegral,-10);time()-st;

                                 -1.064932177
 
                                     9.065

Bytes-used went up by about 650MB during that last computation, i.e. garbage collection during the software evalf fallback-mode callbacks. Now, I was unable to use fsolve with my approach, so I used NextZero instead. (I haven't investigated why.) I consider that an acceptable tradeoff.

acer

Thanks. The error comes from the objective returning an unevaluated Int, I suppose, when the numeric quadrature fails. A better error message would be something like "nonnumeric value of the objective encountered" or similar.

Now, for an Int expression form due to passing an unquoted call to E (even if there is no `evalf` in that F), the internal Optimization routines will supply the wrapping evalf. But the numeric quadrature might still fail. One could rewrite F, using evalf(Int(...)), to check whether a numeric value is returned. If not, the quadrature could be retried with a looser epsilon tolerance. And if that too fails, then some fake large value such as infinity might be returned, although that could take away the smoothness of the objective, which most of the NLP solvers expect. A reduced range for the variables might also help in this respect. A rough sketch is below.
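A rough sketch of that retry scheme (hypothetical names throughout: `integrand` and its 0..1 range are stand-ins for the thread's actual E and F, and the tolerance ladder is arbitrary):

> F := proc(a)
>     local r, eps;
>     # Try progressively looser quadrature tolerances until evalf/Int
>     # actually returns a number rather than an unevaluated Int.
>     for eps in [1e-8, 1e-5, 1e-3] do
>         r := evalf(Int(x -> integrand(a, x), 0 .. 1, 'epsilon' = eps));
>         if type(r, numeric) then return r; end if;
>     end do;
>     # Fallback: a fake large value, at the cost of objective smoothness.
>     return Float(infinity);
> end proc: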

acer
