acer


MaplePrimes Activity


These are replies submitted by acer

Right. Good job. For a few major releases now, non-evalhf'able commands can be executed within an evalhf'd procedure if they are wrapped in an `eval` call.

The (faster) evalhf callback from the openmaple API may be usable, then, with this trick. But the evaluation of the particular piece inside that extra `eval` is not actually interpreted under evalhf. The extra `eval` is behaving like a temporary escape from evalhf back to Maple's regular interpreter.

Exceptions to this behaviour include module member calls, like A:-B(blah), for which evalhf will still complain. One way to get around that is to instead call eval(H(blah)) where H is another procedure which itself calls A:-B.
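As a rough sketch of that escape trick (assuming `ithprime` is not in evalhf's supported set, which I believe is the case):

```
# Sketch only: `ithprime` is not evalhf'able, but wrapping its call
# in `eval` escapes temporarily back to the regular interpreter.
p := proc(n)
  local r;
  r := eval(ithprime(trunc(n)));  # temporary escape from evalhf
  return r + 0.5;
end proc:

evalhf(p(10));  # works; without the eval wrapper evalhf would complain
```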

acer

@icegood As far as I know, the `LerchPhi` command is not evalhf'able.

As an illustration of my points, here is a comparison of two ways to get a 4th derivative as a procedure or operator.

The first way, obtaining operator `Fxxxx`, is quite similar to what you do in your worksheet. The alternative way involves just using the `diff` command, with `unapply` used on the result just once.

The alternative way, obtaining operator `otherFxxxx`, takes about 1/1000th of the time to produce the derivative operator, and evaluates it numerically at a point about 100 times faster than the original way's `Fxxxx` operator.

restart:

expr:=1/7*sin(5/7*x+exp(3/7*x))/exp(2/7*(1/7*sin(5/7*(1/11*sin(5/11*x
      +exp(3/11*x))/exp(2/11*((1/11*sin(5/11*x+exp(3/11*x))/exp(2/11*x)))
      +1/7*ln(3/7*x)))))+1/7*ln(3/7*x)):

F:=unapply(expr,x):

Fx:=CodeTools:-Usage( unapply(evalf(simplify(diff(F(x),x))),x) ):
memory used=13.92MiB, alloc change=11.37MiB, cpu time=249.00ms, real time=244.00ms

Fxx:=CodeTools:-Usage( unapply(evalf(simplify(diff(Fx(x),x))),x) ):
memory used=62.31MiB, alloc change=29.74MiB, cpu time=780.00ms, real time=771.00ms

Fxxx:=CodeTools:-Usage( unapply(evalf(simplify(diff(Fxx(x),x))),x) ):
memory used=0.75GiB, alloc change=47.87MiB, cpu time=11.08s, real time=11.09s

Fxxxx:=CodeTools:-Usage( unapply(evalf(simplify(diff(Fxxx(x),x))),x) ):
memory used=7.03GiB, alloc change=363.68MiB, cpu time=4.75m, real time=4.76m

otherFxxxx:=CodeTools:-Usage( unapply((diff(expr,x,x,x,x)),x) ):
memory used=328.11KiB, alloc change=0 bytes, cpu time=0ns, real time=4.00ms

CodeTools:-Usage( Fxxxx(2.3) );
memory used=89.73MiB, alloc change=0 bytes, cpu time=1.37s, real time=1.38s

                         -0.8163268164

CodeTools:-Usage( otherFxxxx(2.3) );
memory used=1.55MiB, alloc change=0 bytes, cpu time=16.00ms, real time=19.00ms

                         -0.8163268116

The alternative way is so fast that it could also be used to produce separate operators for each of the 1st, 2nd, 3rd, and 4th derivatives, assigning each of those to an operator as well. But each subsequent derivative would be produced by applying `diff` to the previous expression rather than to function applications, and there would be no dubious combined `simplify` and `evalf` actions going on. Keeping each of the four derivative expressions, and producing four operators, should be only about four times slower than the alternative shown above, not a thousand times slower.

You can mess around with option `numeric` on all your original `unapply` calls. But I believe that the approach is still fundamentally misguided.

Especially unfortunate is using `simplify` on a symbolic expression which contains floating-point coefficients, as this often has the opposite effect and produces a much longer expression rather than a simpler one. But for your example, with all those LerchPhi calls, it might even be `simplify` alone which is the biggest problem.

Here is the generation of distinct operators for all the derivatives from 1st through 4th, but without the evalf@simplify combination:

restart:

expr:=1/7*sin(5/7*x+exp(3/7*x))/exp(2/7*(1/7*sin(5/7*(1/11*sin(5/11*x
      +exp(3/11*x))/exp(2/11*((1/11*sin(5/11*x+exp(3/11*x))/exp(2/11*x)))
      +1/7*ln(3/7*x)))))+1/7*ln(3/7*x)):

otherFxxxx:=CodeTools:-Usage( unapply((diff(expr,x,x,x,x)),x) ):
memory used=386.71KiB, alloc change=255.95KiB, cpu time=0ns, real time=8.00ms

st:=time():
F:=unapply(expr,x):
Fx_expr:=diff(F(x),x):
Fx:=unapply(Fx_expr,x): # don't create, if not to be used
Fxx_expr:=diff(Fx(x),x):
Fxx:=unapply(Fxx_expr,x): # don't create, if not to be used
Fxxx_expr:=diff(Fxx(x),x):
Fxxx:=unapply(Fxxx_expr,x): # don't create, if not to be used
Fxxxx_expr:=diff(Fxxx(x),x):
time()-st;
                             0.015

Fxxxx:=CodeTools:-Usage( unapply(Fxxxx_expr,x) ):
memory used=512 bytes, alloc change=0 bytes, cpu time=16.00ms, real time=3.00ms

CodeTools:-Usage( otherFxxxx(2.9) );
memory used=1.57MiB, alloc change=1.25MiB, cpu time=21.00ms, real time=22.00ms

                          -3.567059374

CodeTools:-Usage( Fxxxx(2.9) );
memory used=1.52MiB, alloc change=1.25MiB, cpu time=16.00ms, real time=17.00ms

                          -3.567059374

acer

@Samir Khan Thanks!

(Grist for the efficiency mill.)

Please forgive me if I've missed it in some hidden code block or region in that uploaded worksheet, but I don't see any code to reproduce these images in Maple.

I don't see how anyone could properly assess whether Maple can produce such plots or images using a reasonable amount of time and memory resources, without source code or a worksheet which successfully reproduces the results.

acer

This is very good news for Maple.

The first few areas I would suggest are,

  • Advanced plotting: The toolset is quite powerful, but since there will always be many custom needs there will never be a canned solution for every task and problem. Showing how the plotting facilities can do wonders, when combined with even a little programming (not a dirty word), would be a great addition.

  • Applied DEs: Applied problems with symbolic DE solutions, or at least some measure of mathematically driven symbolic analysis. (One topic that comes to mind is Control. Another is Delay DEs.)
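As a tiny hedged illustration of the "plotting plus a little programming" idea (the curve family here is a hypothetical stand-in):

```
# Build a family of related curves programmatically and display together.
curves := [seq(plot(sin(k*x)/k, x = 0 .. 2*Pi), k = 1 .. 6)]:
plots:-display(curves);
```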

acer

@epostma Did you intend to include the option 'output'='residualsumofsquares' in the `Fit` call in procedure `minsumsq`?

Like this, say:

minsumsq := c -> Statistics:-ExponentialFit(ln~([3.05, 3.1, 3.75] -~ c),
                                            [.74e-4, .1806e-3, .584e-4],
                                            output=residualsumofsquares):

plot(minsumsq,-10..3.04);

Just because a fitting problem does not have finite optimal parameter values does not mean that there is no limiting curve for the fit that is continuous between the original independent data points. For example, the constant function x -> 0.00009207055709 may approximately "fit the bill" in such a limiting sense.
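For instance, one could overlay that limiting constant on the three data points (a sketch only, using the constant value given above):

```
# Plot the three data points together with the limiting constant "fit".
pts := [[3.05, 0.74e-4], [3.1, 0.1806e-3], [3.75, 0.584e-4]]:
plots:-display(
  plot(pts, style = point, symbol = solidcircle, symbolsize = 15),
  plot(0.00009207055709, x = 3 .. 4)
);
```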

All pretty moot, anyway, for 3 data points and only conjecture as to what the poster wanted.

P.S. If the error weighting were not via exponential fitting then there might be finite optimal parameter values here.

I suspect that what Axel is driving at is,

expr:=R*D^2 / ( R*D^2 + s*L + s^2*R*L*C );

                                           2         
                                        R D          
                        expr := ---------------------
                                   2          2      
                                R D  + s L + s  R L C


1/expand(1/expr);

                                      1        
                              -----------------
                                          2    
                                  s L    s  L C
                              1 + ---- + ------
                                     2      2  
                                  R D      D   

acer

Which operating system are you running?

When this problem occurs, is it always the case that a worksheet has already been opened with Maple 15 and minimized to the tray? If so, can you maximize it? If you cannot maximize it with the mouse pointer, and you are on MS-Windows, can you bring it forward by holding <Alt> and pressing <Tab>?

acer

@PatrickT Please do follow the link in the Comment by pagan. It shares themes with that fast-complex-argument-images Post -- the fastest way to assemble high quality, high density images of mathematical functions in Maple is to act inplace upon hardware datatype rtable with the Compiler.

Other techniques for computing fractal images used in some Application Center uploads (`option hfloat`, or Threads or Grid -- often submitted to use some method that was in vogue) are alone not nearly as fast or efficient.

It may be possible to combine Threads/Task and compiled routines. But since it's not obvious that the Compiler's external-call runtime is thread-safe, it might be necessary to use a Poor Man's Compiler [here & here]. Different techniques are needed for two cases: when the user has already pre-Compiled the mathematical function which must then be used to populate the rtable (eg. for regular plot creation), and when the function and rtable-populating are combined into one single procedure (eg. fractal image creation).
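A minimal sketch of that inplace-on-hardware-rtable pattern (the procedure body here is a hypothetical stand-in computation, not any particular fractal):

```
# Populate a float[8] Array inplace using a Compiled procedure.
f := proc(A::Array(datatype = float[8]), n::integer)
  local i;
  for i from 1 to n do
    A[i] := sin(6.2831853*i/n);  # stand-in for the real per-point work
  end do;
  A[n];  # compiled procedures return a scalar
end proc:

cf := Compiler:-Compile(f):
A := Array(1 .. 1000, datatype = float[8]):
cf(A, 1000):
```

The point of acting inplace on a hardware-datatype rtable is that no Maple-level garbage is produced per element; the compiled loop writes doubles directly into the Array's storage.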

I'd like to finish and submit a posting on the first of those two: fast generation of high density plots/images using a pre-Compiled procedure.

One day this may all be automatic... and people reading these esoteric methods may feel a strange sensation similar to what we can now get re-reading old books on specialized non-digital exposed-film-camera techniques.

None of this relates directly to your projection problem. I asked about curves-vs-surfaces because I was immediately worried about the overlay. I haven't thought of an effective solution to that issue.

acer
