acer

MaplePrimes Activity


These are replies submitted by acer

> ee := Int((2.50*10^6+27.*f^2)*f^(57/50)*(21.
>   +221.*M^(2/3)*f^(2/3))/(2.27*10^18+2.45*10^13*f^2
>   -1.16*10^14*f^(107/50)+5.43*10^10*f^(207/50)
>   -1.20*10^6*f^(307/50)+13.*f^(407/50)), f = 10. .. 2670.):

> evalf(IntegrationTools:-Expand(ee));
                                 -6                     (2/3)
                  0.2305099822 10   + 0.00005166625109 M

> # Since you changed the range to floats, value() also works
> value(IntegrationTools:-Expand(ee));
                                 -6                     (2/3)
                  0.2305099822 10   + 0.00005166625109 M
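
For what it's worth, here is a hedged sketch (with a made-up toy integrand of the same shape) of why Expand helps: it expands the integrand, splits the resulting sum, and pulls the M-dependent factor outside, leaving purely float-range integrals that evalf can then handle numerically.

> # toy integrand (hypothetical), same structure as the one above
> ee2 := Int((1. + M^(2/3)*f)/(1. + f^2), f = 0. .. 1.):
> IntegrationTools:-Expand(ee2);  # a float Int plus M^(2/3) times another float Int
> evalf(%);                       # a number plus a number times M^(2/3)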

acer

It can be edited to work in such cases (although sometimes it's tricky).

> gdiff:=(f,x)->thaw(value(subs([x=freeze(x),x^(-1)=1/freeze(x)],Diff(f,x)))):

> gdiff( 5*x^2+5/x^2+y, x^2 );
                                        5
                                   5 - ----
                                         4
                                        x

I'll leave differentiating 5/x^4+y w.r.t. x^2 to your imagination. It wouldn't be very neat if done with the previously mentioned technique of converting to atomic identifiers with the mouse.
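
If you do want to see that case, here is a hedged sketch (an untested variant of the same freeze/thaw idea): extend the substitution list so that higher powers of x are also rewritten in terms of the frozen x^2 before differentiating.

> # sketch only: also map x^4 and 1/x^4 onto powers of the frozen name
> u := freeze(x^2):
> thaw(value(subs([x^2 = u, 1/x^2 = 1/u, x^4 = u^2, 1/x^4 = 1/u^2],
>                 Diff(5/x^4 + y, x^2))));  # should give -10/x^6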

acer

I've enjoyed the posts too.

I suspect that multithreading could become one of the Great Ways to improve Maple performance, both for "users' code" and for Maple's own Library.

Even if the speedup is only linear (as a function of the number of cores), it is appropriate to start now on what is clearly very difficult underpinning development, even if the average number of cores does not reach the hundreds for some years.

nb. Other Great Ways include use of the Compiler (another technology that deserves enhancement), producing less garbage, and reducing computational complexity.

acer

Underscores also cause a problem at present: ?spec_eval_rules does not get turned into a working help link here.

acer

It likely would not come up, but in general one would need to be more careful if p could be a hardware scalar float.

> f := proc() option hfloat;
> local p;
>   p:=evalf((1+sqrt(5))/2-1,20);
>   p := 2.0 * p;
>   print(p);
>   op(1,p), # oops
>   cat(op(StringTools:-Split(convert(p,string),".")));
> end proc:
>
> f();
                               1.23606797749979

                     123606797749978981, "123606797749979"

> op(1, HFloat(1.23606797749979) );
                              123606797749979003
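
One hedged way to guard against that (a sketch, just packaging the string-based step from the proc above) is to always extract the decimal digits via the printed form, so it doesn't matter whether p ended up as a hardware or a software float:

> # sketch: take digits from the string form rather than from op(1,p)
> digitsof := p -> cat(op(StringTools:-Split(convert(p, string), "."))):
> digitsof(HFloat(1.23606797749979));  # "123606797749979", as in the output above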

acer

Rather than call a Maple operator on every character (using Remove), it might be better to simply Split the string at the character to be removed, and then concatenate afterwards.

(**) cat(op(StringTools:-Split("1.6180339887498948482",".")));
                            "16180339887498948482"

acer

I think that it would be useful to delineate the current status and restrictions of Threads in Maple.

For example,

  • What makes a Maple code routine Thread-unsafe? (There are lots of ways to cause global effects, so a clear list would help. This earlier post doesn't make it clear, sorry.)
  • Which low-level Library routines are Thread-unsafe (e.g. limit, is, signum)? Such routines might not be usable inside one's own Threaded code.
  • What blocks all Threads (e.g. garbage collection, object uniquification in the kernel, anything else)? These would affect performance and create bottlenecks that may be unavoidable at present.
  • Which actions can occur in only one Thread at a time (e.g. external calling to Maple's own auxiliary compiled libraries, or to a Compiler:-Compile'd routine)? I mean which actions block other Threads from doing the same thing, as opposed to the separate question of what is Thread-safe.
  • Can two Threads write to two different embedded components, with interleaved timing? Is Typesetting Thread-safe enough for that?
  • The debugger status was mentioned in another post.

The question of Thread-safety of the Maple Library is very tricky. There may be a characterization of what is currently, typically Thread-safe, for example along the lines of a symbolic vs numeric distinction. People are going to want to write Threaded code which is not simply plain arithmetic and use of kernel built-ins. But a lot of symbolic Library code is not Thread-safe, and a lot of numeric Library code might call externally (and block duplicated access to the external libraries). Characterizing what sort of code can be successfully Threaded would be very useful. Revising such a description with each released improvement would also help.
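
As a concrete (hedged) illustration of that distinction, here is a minimal sketch, assuming a version in which the Threads package is available: plain float arithmetic mapped across Threads is the sort of thing that tends to be workable today, while mapping a symbolic Library routine such as int or is across Threads is exactly the kind of usage whose safety is unclear.

> # likely workable today: plain arithmetic on floats, in parallel
> Threads:-Map(x -> x^2 + 1.0/x, [seq(evalf(i), i = 1 .. 1000)]):
>
> # status much less clear at present: a symbolic Library routine per Thread
> # Threads:-Map(j -> int(t^j*exp(-t), t = 0 .. infinity), [seq(j, j = 1 .. 8)]);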

acer

I was thinking of trying that, but haven't found the spare time yet.

I also haven't read through that linked .pdf yet. But (after helping out the Optimization call to handle the proc a little) I saw that the supplied data was generating some complex (nonreal) values in the `ln` call. That made Optimization balk. So I haven't figured out whether the implementation of the algorithm is faulty, or how to handle such occasional complex returns. I notice that the proc does produce a plot. It's inconvenient to fiddle with it while it is so slow -- another reason I wanted to optimize/Compile the implementation.

It came up in another thread a while back that Compiling/evalhf'ing a proc that used Statistics (Mean, Variance, or samples) didn't work. A workaround was to replace such Statistics calls with explicit (finite) formulae or even with `add` calls.
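
A minimal sketch of that workaround (hypothetical procedure names, assuming the data sits in a float[8] Vector): replace the Statistics:-Mean call with an explicit loop or add, so that the procedure contains only constructs that evalhf and Compiler:-Compile can digest.

> # sketch: hand-rolled mean, compilable, instead of Statistics:-Mean
> mymean := proc(V::Vector(datatype=float[8]), n::integer)
>   local i::integer, s::float;
>   s := 0.0;
>   for i from 1 to n do
>     s := s + V[i];
>   end do;
>   s/n;
> end proc:
> cmymean := Compiler:-Compile(mymean):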

acer

Your second (1-line) way is a procedure call. The first way could also be made into a proc, and also be 1 line.

Your second way requires calling dsolve/numeric each time. That involves repeating all sorts of setup, initialization, deduction of problem class, algorithmic switching due to problem class, and so on. That is all unnecessary when one knows that the problem form will stay the same and that only some parameter will change. One purpose of the parameters option is to remove that duplicated, unnecessary overhead.

The overhead costs both time and memory.

    |\^/|     Maple 13 (X86 64 LINUX)
._|\|   |/|_. Copyright (c) Maplesoft, a division of Waterloo Maple Inc. 2009
 \  MAPLE  /  All rights reserved. Maple is a trademark of
 <____ ____>  Waterloo Maple Inc.
      |       Type ? for help.
> ans:=(a,c)->dsolve({diff(y(x),x)+sin(x^2)*y(x)=a,y(0)=c},y(x),numeric):
> h:=rand(100):
> st,bu:=time(),kernelopts(bytesalloc):
> for ii from 1 to 1000 do
>   ans(h(),h())(5.7);
> end do: time()-st,kernelopts(bytesalloc)-bu;
                                5.424, 90161024
  
> quit
 
 
    |\^/|     Maple 13 (X86 64 LINUX)
._|\|   |/|_. Copyright (c) Maplesoft, a division of Waterloo Maple Inc. 2009
 \  MAPLE  /  All rights reserved. Maple is a trademark of
 <____ ____>  Waterloo Maple Inc.
      |       Type ? for help.
> ans:=dsolve({diff(y(x),x)+sin(x^2)*y(x\
> )=a,y(0)=c},y(x),numeric,parameters=[a,c]):
> h:=rand(100):
> st,bu:=time(),kernelopts(bytesalloc):
> for ii from 1 to 1000 do
>   ans(parameters=[h(),h()]);
>   ans(5.7);
> end do: time()-st,kernelopts(bytesalloc)-bu;
                                1.613, 38003920

    |\^/|     Maple 13 (X86 64 LINUX)
._|\|   |/|_. Copyright (c) Maplesoft, a division of Waterloo Maple Inc. 2009
 \  MAPLE  /  All rights reserved. Maple is a trademark of
 <____ ____>  Waterloo Maple Inc.
      |       Type ? for help.
> ans:=dsolve({diff(y(x),x)+sin(x^2)*
> y(x)=a,y(0)=c},y(x),numeric,parameters=[a,c]):
> h:=rand(100):
> nans:=proc(a,b,x) ans(parameters=[a,b]); ans(x); end proc:
> st,bu:=time(),kernelopts(bytesalloc):
> for ii from 1 to 1000 do
>   nans(h(),h(),5.7);
> end do: time()-st,kernelopts(bytesalloc)-bu;
                                1.632, 38003920

There may also be some technical reasons underneath. For example, using globals is not thread-safe.

acer

Hopefully, this is of some use. There may be easier ways.

> interface(warnlevel=0):
> with(DETools): with(plots): with(plottools):
> Diam := .4: m := 100: g := 9.8: c1 := 0.155e-3*Diam:
> c2 := .22*Diam^2: ynot := 50000*.3048: Wx := 60*.44704: Wy := 0:
> Vwx := diff(x(t), t)+Wx:
> Vwy := diff(y(t), t)+Wy:
> theta := arctan(Vwy/Vwx):
> Vwmag := sqrt(Vwx^2+Vwy^2):
> Fdragmag := -c1*Vwmag-c2*Vwmag^2:
> Fdragx := Fdragmag*cos(theta):
> Fdragy := Fdragmag*sin(theta):
> yeq := m*(diff(y(t), t, t)) = -m*g+Fdragy:
> xeq := m*(diff(x(t), t, t)) = Fdragx:
> ics := (D(x))(0) = ground, (D(y))(0) = 0, y(0) = 50000*.3048, x(0) = 0:
> ohthehumanity := dsolve({ics, xeq, yeq}, numeric, [y(t), x(t)],
>                         output = listprocedure):
> hite := rhs(ohthehumanity[2]):
> pos := rhs(ohthehumanity[4]):
>
> Z:=proc(GG,TT) global ground;
>   if not(type(GG,numeric)) then
>     return 'procname'(args);
>   else
>     ground:=GG;
>   end if;
>   pos(TT)^2 + hite(TT)^2;
> end proc:
>
> Zgrad := proc(V::Vector,W::Vector)
>   W[1] := fdiff( Z, 1, [V[1],V[2]] );
>   W[2] := fdiff( Z, 2, [V[1],V[2]] );
>   NULL;
> end proc:
>
> sol:=Optimization:-Minimize(Z, 'objectivegradient'=Zgrad,
>   'method'='nonlinearsimplex' ,initialpoint=[100,100]);
                                             [107.518346261289281]
           sol := [0.0000301543312111207006, [                   ]]
                                             [104.668066005965386]
 
>
> pos(sol[2][2]), hite(sol[2][2]);
                0.00454084653245279135, 0.00308788665268533435
 
> ground,sol[2][2];
                   107.518346261289281, 104.668066005965386
 
>
> Digits:=30:
> sol:=Optimization:-Minimize(Z, 'objectivegradient'=Zgrad,
>   'method'='nonlinearsimplex' ,initialpoint=[100,100]);
sol := [
 
                                       -13  [107.518073058989098792559364875]
    0.898257246759777661510402862893 10   , [                               ]]
                                            [104.668080856614961089921927934]
 
>                                                                                 
> pos(sol[2][2]), hite(sol[2][2]);
                                    -6                          -6
            -0.405056976848783279 10  , -0.339596677889630882 10
 

EDIT: the objective gradient routine `Zgrad` is not necessary when using the nonlinearsimplex method (which doesn't use derivatives, if I recall correctly). I created it only because it helps when using some of Minimize's other 'method's. I had to try various methods before I found one that got the result nicely. Minimize's ProcedureForm or MatrixForm calling sequences sometimes run into trouble unless an objective gradient (and/or constraintjacobian) routine is provided explicitly for "black-box" objective (and/or constraint) procedures which cannot be handled automatically by codegen[GRADIENT] (and/or codegen[JACOBIAN]). The same result is returned by simply,

Optimization:-Minimize(Z,
  'method'='nonlinearsimplex' ,initialpoint=[100,100]);

The nature of routine Z is to find a matching pair of 'ground' and time values for which both `hite` and `pos` (Y and X) are simultaneously zero.

acer
