These are Posts that have been published by acer

A few weeks ago I mentioned the ncrunch comparison of "mathematical programs for data analysis" in a comment in another thread.  There is now a new, 5th release of that review. The systems reviewed are:

  • GAUSS
  • Maple
  • Mathematica
  • Matlab
  • O-Matrix
  • Ox
  • SciLab

The review is skewed towards statistical computation and data manipulation, but it includes several interesting comparisons of the major computer algebra systems (CAS).

There is a comparative performance section, and the worksheets used for that benchmarking are available for download. Here is the Maple worksheet, which was used with Maple 11.

There have been a few posts on MaplePrimes about numerically solving systems of procedures. The most recent of those, until now, was this.

Here's some code to implement the method. Since the basic algorithm is very simple, I've added a few bells and whistles as optional arguments.

The essence of it is as follows. The number of procedures must match the number of parameters taken by each and every procedure. The routine makes up to maxtries attempts, each starting from a randomly chosen point, and performs at most maxiter Newton iterations per attempt. A solution is accepted only if the norm of the last change in the vector (point) x is less than xtol, and if the forward error norm(F(x)) is less than ftol. The jacobian of F may optionally be supplied as a Matrix of procedures, or a method for computing the jacobian may be specified. The methods are fdiff, which uses only Maple's numerical differentiation routine fdiff, and hybrid, which attempts symbolic differentiation via Maple's D[] operator and falls back to fdiff via the nifty evalf@D equivalence.
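
As a quick illustration of those two routes to a jacobian entry (this snippet is purely for exposition; it reuses the example procedure f defined further below, and the sample point is arbitrary):

# The hybrid route differentiates with D and then evaluates in floating-point;
# the fdiff route is purely numerical. Both should agree to roundoff here.
f := proc(x,y) x^3 - sin(y); end proc:
evalf( D[2](f)(1.0, 2.0) );   # symbolic partial derivative w.r.t. y, evaluated numerically
fdiff( f, [2], [1.0, 2.0] );  # numerical partial derivative w.r.t. y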

I have not profiled it, though I have tried to set up the jacobian construction so that it reuses the same rtable for each instantiation. I have not gone really low-level with external calling to the numeric solvers. Nor have I tried to leverage fast hardware double-precision solutions as jumping-off points for extended-precision polishing (e.g. see here).

I've added some examples, after the code below. Feel free to comment or mention bugs. I'd really like a better name for it.

 

 

restart:

NRprocs:=proc(
  funlist::{list(procedure),Vector(procedure)}
, {allowcomplex::truefalse:=false}
, {initialpoint::Vector:=NoUserValue}
, {maxtries::posint:=NoUserValue}
, {maxiter::posint:=30}
, {xtol::realcons:=5.0*10^(-Digits)}
, {ftol::realcons:=1.0*10^(-trunc(2*Digits/3))}
, {method::{identical(hybrid,fdiff)}:=':-hybrid'}
, {jacobian::{Matrix({operator,procedure}),identical(NoUserValue)}:=':-NoUserValue'}
, $
)
local a, b, currXseq, dummy, dummyseq, F, i, ii, J, j, jj,
      MaxTries, numvars, oldX, thisJ, thistry, varnumset, X;

  varnumset := {seq(`if`(not(member('call_external',{op(3,eval(F))})),
                         nops([op(1,eval(F))]),NULL), F in funlist)};
  if nops(varnumset) > 1 then
    error "expecting procedures all taking the same number of arguments";
  elif nops(varnumset) = 1 then
    numvars := varnumset[1];
  else
    numvars := nops(funlist);
  end if;

  if numvars <> nops(funlist) then
    error "number of procedures %1 does not match number of their parameters %2",
          nops(funlist), numvars;
  end if;

  if initialpoint <> 'NoUserValue' then
    oldX := Vector(numvars,(i)->evalf(initialpoint[i]),':-storage'=':-rectangular',
                   ':-datatype'=`if`(allowcomplex,'complex'(':-float'),':-float'));
    if maxtries = ':-NoUserValue' then
      MaxTries := 1;
    elif maxtries > 1 then
      error "maxtries must be 1 when initialpoint is supplied";
    else
      MaxTries := maxtries;
    end if;
  else
    if maxtries = ':-NoUserValue' then
      MaxTries := 20;
    else
      MaxTries := maxtries;
    end if;
  end if;

  if jacobian = ':-NoUserValue' then
    # This is only built once, so there should be a way to make
    # it take a little more time and make a J that evaluates quicker.
    # Using evalhf is just one possibility.
    dummyseq:=seq(dummy[i],i=1..numvars);
    if method=':-hybrid' then
      J:=Matrix(nops(funlist),numvars,
              (i,j)->unapply('evalf'(D[j](funlist[i])(dummyseq)),[dummyseq]));
    else # method=fdiff
      J:=Matrix(nops(funlist),numvars,
              (i,j)->unapply(fdiff(funlist[i],[j],[dummyseq]),[dummyseq]));
    end if;
  else
    J:=jacobian;
  end if;
  thisJ := Matrix(nops(funlist),numvars,
                  ':-datatype'=`if`(allowcomplex,'complex'(':-float'),':-float'));

  X := Vector(numvars,':-datatype'=`if`(allowcomplex,'complex'(':-float'),':-float'));

  userinfo(1,'NRprocs',`maximal number of tries is `, MaxTries);
  for thistry from 1 to MaxTries do

    if thistry > 1 or initialpoint = ':-NoUserValue' then
      if allowcomplex = true then
        oldX := LinearAlgebra:-RandomVector(numvars,
                              ':-density'=1,':-generator'=-1.0-1.0*I..1.0+1.0*I,
                              ':-outputoptions'=[':-datatype'='complex'(':-float')]);
      else
        oldX := LinearAlgebra:-RandomVector(numvars,
                              ':-density'=1,':-generator'=-1.0..1.0,
                              ':-outputoptions'=[':-datatype'=':-float']);
      end if;
    end if;

    userinfo(1,'NRprocs',`initial point : `, convert(oldX,list));
    userinfo(1,'NRprocs',`maximal number of iterations is `, maxiter);
    try
      for ii from 1 to maxiter do
        currXseq:=seq(oldX[jj],jj=1..numvars);
        for i from 1 to nops(funlist) do
          for j from 1 to numvars do
            thisJ[i,j] := J[i,j](currXseq);
          end do;
        end do;
        # copy oldX into X, so that increment can be added to X inplace.
        ArrayTools:-Copy(numvars,oldX,0,1,X,0,1);
        LinearAlgebra:-LA_Main:-VectorAdd(
               X, LinearAlgebra:-LA_Main:-LinearSolve(
                   thisJ,
                   Vector(nops(funlist),
                     (i)->evalf(funlist[i](currXseq))),
                   ':-method'=':-none',':-free'=':-NoUserValue',':-conjugate'=true,
                   ':-inplace'=false,':-outputoptions'=[],':-methodoptions'=[]),
               1,-1,':-inplace'=true,':-outputoptions'=[]);
        userinfo(2,'NRprocs',`at iteration`,ii, convert(X,list));
        if LinearAlgebra:-LA_Main:-Norm(
             # oldX is no longer needed, so can overwrite it inplace.
             LinearAlgebra:-LA_Main:-VectorAdd(
               oldX,X,-1,1,':-inplace'=true,':-outputoptions'=[]),
             infinity,':-conjugate'=true) < xtol
         and max(seq(abs(F(seq(X[jj],jj=1..numvars))),F in funlist)) < ftol then
          return X;
        end if;
        ArrayTools:-Copy(numvars,X,0,1,oldX,0,1);
      end do:
    catch "singular matrix","inconsistent system","unable to store":
      ii := ii - 1;
      next;
    end try;
  end do;
  return 'procname'(args);
end proc:

f:=proc(x,y) x^3-sin(y); end proc:

g:=proc(x,y)
local sol;
  sol:=fsolve(q^5+x*q^2+y*q=0,q,complex):
  Re(sol[1]+1);
end proc:

                                                                                

fl:=[f,g]:

NRprocs(fl);

Vector[column]([[-.897503424482696], [3.94965547519010]])

if type(%,Vector) then

  max(seq(abs(fl[i](seq(%[j],j=1..nops(fl)))),i=1..nops(fl)));

end if;

HFloat(8.881784197001252e-16)

 

Digits:=32:

NRprocs(fl);

if type(%,Vector) then

  max(seq(abs(fl[i](seq(%[j],j=1..nops(fl)))),i=1..nops(fl)));

end if;

Digits:=10:

Vector[column]([[-.89750342448269837150872333768519], [3.9496554751901143338368997374538]])

0.1e-31

 

infolevel[NRprocs]:=1:

sol := NRprocs([f,g],initialpoint=<5.0, 6.0>);

if type(%,Vector) then

  max(seq(abs(fl[i](seq(%[j],j=1..nops(fl)))),i=1..nops(fl)));

end if;

infolevel[NRprocs]:=0:

NRprocs: maximal number of tries is  1
NRprocs: initial point :  [HFloat(5.0) HFloat(6.0)]
NRprocs: maximal number of iterations is  30

sol := Vector(2, {(1) = -.8975034244826993, (2) = 3.9496554751901174}, datatype = float[8])

HFloat(4.440892098500626e-16)

 

infolevel[NRprocs]:=2:

sol := NRprocs(fl,initialpoint=<-1, -2>);

if type(%,Vector) then

  max(seq(abs(fl[i](seq(%[j],j=1..nops(fl)))),i=1..nops(fl)));

end if;

infolevel[NRprocs]:=0:

NRprocs: maximal number of tries is  1

NRprocs: initial point :  [HFloat(-1.0) HFloat(-2.0)]
NRprocs: maximal number of iterations is  30
NRprocs: at iteration 1 [HFloat(-0.973448865779436) HFloat(-1.973448865779436)]
NRprocs: at iteration 2 [HFloat(-0.9727013515968866) HFloat(-1.9727013515968865)]
NRprocs: at iteration 3 [HFloat(-0.9727007668592758) HFloat(-1.9727007668592722)]
NRprocs: at iteration 4 [HFloat(-0.9727007668589176) HFloat(-1.972700766858918)]

sol := Vector(2, {(1) = -.9727007668589176, (2) = -1.972700766858918}, datatype = float[8])

HFloat(1.1102230246251565e-16)

sol := NRprocs(fl);

if type(%,Vector) then

  max(seq(abs(fl[i](seq(%[j],j=1..nops(fl)))),i=1..nops(fl)));

end if;

sol := Vector(2, {(1) = -.9727007668589175, (2) = -1.9727007668589187}, datatype = float[8])

HFloat(6.661338147750939e-16)

sol := NRprocs(fl,ftol=1e-8);

if type(%,Vector) then

  max(seq(abs(fl[i](seq(%[j],j=1..nops(fl)))),i=1..nops(fl)));

end if;

sol := Vector(2, {(1) = -.8975034244826969, (2) = 3.9496554751901094}, datatype = float[8])

HFloat(3.3306690738754696e-16)

sol := NRprocs(fl,initialpoint=<-I,I>,allowcomplex=true);

if type(%,Vector) then

  max(seq(abs(fl[i](seq(%[j],j=1..nops(fl)))),i=1..nops(fl)));

end if;

sol := Vector(2, {(1) = -.13507476585608832-.9226174570773964*I, (2) = 2.865935679636024-.7040591486912059*I}, datatype = complex[8])

HFloat(2.268185639309195e-12)

sol := NRprocs(fl,allowcomplex=true);

if type(%,Vector) then

  max(seq(abs(fl[i](seq(%[j],j=1..nops(fl)))),i=1..nops(fl)));

end if;

sol := Vector(2, {(1) = .3864813801943507-.6844281432614738*I, (2) = -.5067551915310455+0.15920328832212956e-1*I}, datatype = complex[8])

HFloat(8.281264562981505e-12)

# deliberate errors
NRprocs([f,g,(a,b)->a+b]);

NRprocs([(a,b,c)->a*b*c]);

Error, (in NRprocs) number of procedures 3 does not match number of their parameters 2

Error, (in NRprocs) number of procedures 1 does not match number of their parameters 3

fl:=[x->cos(x)-x]:

CodeTools:-Usage( NRprocs(fl) );

if type(%,Vector) then

  max(seq(abs(fl[i](seq(%[j],j=1..nops(fl)))),i=1..nops(fl)));

end if;

memory used=372.34KiB, alloc change=0 bytes, cpu time=15.00ms, real time=7.00ms, gc time=0ns

Vector[column]([[.739085133215161]])

HFloat(0.0)

fl:=[(x,y,z)->x*z+2*y^2-sqrt(z),(x,y,z)->z+x*y,(x,y,z)->x^3+y*z]:

CodeTools:-Usage( NRprocs(fl) );

if type(%,Vector) then

  max(seq(abs(fl[i](seq(%[j],j=1..nops(fl)))),i=1..nops(fl)));

end if;

memory used=3.28MiB, alloc change=0 bytes, cpu time=63.00ms, real time=66.00ms, gc time=0ns

Vector[column]([[.414213562373095], [-.414213562373095], [.171572875253810]])

HFloat(1.1102230246251565e-16)

jfl:=Matrix(3,3,(i,j)->D[j](fl[i])):

CodeTools:-Usage( NRprocs(fl,jacobian=jfl) );

if type(%,Vector) then

  max(seq(abs(fl[i](seq(%[j],j=1..nops(fl)))),i=1..nops(fl)));

end if;

memory used=0.64MiB, alloc change=0 bytes, cpu time=15.00ms, real time=13.00ms, gc time=0ns

Vector[column]([[.414213562373095], [-.414213562373095], [.171572875253810]])

HFloat(5.551115123125783e-17)

fl:=[proc(x::float,y::float,z::float) x*z+1.9*y^2-z^2; end proc,
     proc(x::float,y::float,z::float) z+0.9*x*y; end proc,
     proc(x::float,y::float,z::float) x^3+0.9*y*z; end proc]:

st,ba,bu:=time(),kernelopts(bytesalloc),kernelopts(bytesused):

jfl:=Matrix(3,3,(i,j)->D[j](fl[i])):

sol:=CodeTools:-Usage( NRprocs(fl,jacobian=jfl,maxtries=50) );

if type(sol,Vector) then

  max(seq(abs(fl[i](seq(sol[j],j=1..nops(fl)))),i=1..nops(fl)));

end if;

memory used=9.20MiB, alloc change=0 bytes, cpu time=187.00ms, real time=182.00ms, gc time=0ns

sol := Vector(3, {(1) = 2.1111111111111107, (2) = -2.3456790123456783, (3) = 4.4567901234567895}, datatype = float[8])

HFloat(3.552713678800501e-15)

fl:=[proc(x::float,y::float,z::float) x*z+1.8*y^2-z^2; end proc,
     proc(x::float,y::float,z::float) z+0.8*x*y; end proc,
     proc(x::float,y::float,z::float) x^3+0.8*y*z; end proc]:

jfl:=Matrix(3,3,(i,j)->D[j](fl[i])):

cfl:=[seq(Compiler:-Compile(fl[i]),i=1..nops(fl))]:

cjfl:=map(t->`if`(type(t,procedure),Compiler:-Compile(t),t),jfl):

sol:=CodeTools:-Usage( NRprocs(cfl,jacobian=cjfl) );

if type(sol,Vector) then

  max(seq(abs(fl[i](seq(sol[j],j=1..nops(fl)))),i=1..nops(fl)));

end if;

memory used=41.43MiB, alloc change=0 bytes, cpu time=920.00ms, real time=884.00ms, gc time=78.00ms

sol := Vector(3, {(1) = -1.2499999999999998, (2) = 1.5624999999999998, (3) = 1.5624999999999996}, datatype = float[8])

HFloat(4.440892098500626e-16)

NRprocs([f,g],method=hybrid);

Vector[column]([[-.972700766858918], [-1.97270076685891]])

[f, g](seq(a, a=%)); # check whether it's a root

[HFloat(0.0), HFloat(7.771561172376096e-16)]

NRprocs([f,g],method=fdiff);

Vector[column]([[-.972700766858918], [-1.97270076685892]])

[f, g](seq(a, a=%)); # check whether it's a root

[HFloat(-1.1102230246251565e-16), HFloat(0.0)]

 

 

Download fsolve_procs.mw

acer

In a comment in a MaplePrimes thread, Jacques mentioned an old suggestion of Kahan's that numerical computations should return an estimate of conditioning alongside a result.

I mentioned in this comment an approach for numerically estimating (all) the roots of a univariate polynomial with real or complex numeric coefficients, based upon computing the eigenvalues of a companion matrix. Below is some rough code to implement that idea, which also returns condition-number estimates associated with the eigenvalues.
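
As a bare sketch of the underlying idea (this is not the posted code, which additionally reports the conditioning estimates; the example polynomial here is arbitrary):

# Roots of a monic univariate polynomial as eigenvalues of its companion matrix.
p := x^3 - 2*x - 5:
C := LinearAlgebra:-CompanionMatrix(p, x):      # exact companion matrix of p
Cf := Matrix(C, datatype = complex[8]):         # convert to hardware floats
LinearAlgebra:-Eigenvalues(Cf);                 # numeric roots of p
# For the conditioning part, LinearAlgebra:-EigenConditionNumbers is one
# possible route; its output form may vary with your Maple version.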

I include an example of the badly conditioned Wilkinson's polynomial. It is possible that better results could be obtained by using a Lagrange basis representation of that polynomial, but I didn't try to figure out how that would work in an analogous way. The standard Maple utility, fsolve, has no problem with this example.
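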

All this week:

This forum question led to a discussion of a bitwise magazine review that compared Mathematica 5.2 and Maple 10. In that review the author struggled to get the following numeric integral to compute accurately and quickly in Maple.

evalf(Int(BesselJ(0, 50001*x)*x*exp(I*(355*x^2*1/2)), x = .35 .. 1));

Below, I reproduce an attempt at computing an accurate result quickly in Maple. I'm copying it here because that thread got quite long and messy.
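
For context only, here is one hedged line of attack (not necessarily the attempt reproduced from that thread): split the complex integrand into its real and imaginary parts with evalc, and hand each piece to a quadrature method intended for oscillatory integrands. The _d01akc method name is an assumption about what is available in your Maple version, and this is not guaranteed to be either fast or accurate for so rapid an oscillation.

# Split into real and imaginary parts and integrate each with an
# oscillatory-integrand method. Purely illustrative, not the thread's solution.
ig := BesselJ(0, 50001*x)*x*exp(I*355*x^2/2):
re_part := evalf( Int( evalc(Re(ig)), x = 0.35 .. 1, method = _d01akc ) ):
im_part := evalf( Int( evalc(Im(ig)), x = 0.35 .. 1, method = _d01akc ) ):
re_part + I*im_part;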
