acer

Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are replies submitted by acer

Hi Axel,

I'm not sure that it is ok to measure the accuracy of the approximant by evaluating it under Digits=18 of radix-10 working precision. The Questioner mentioned using the result as C source.

Please pardon any mistakes, but here is some attempt at accuracy measurement:

 

restart:

H := proc(z) option inline;
 (.466433940928848490e-3+(1.11523043757015811+(-1.72989528300231501
 +(4.49729788979802074+(-.319393610864014770+(.545253820724829192e-2
 -.228858722931549036e-4*z)*z)*z)*z)*z)*z)/(.139930182278641728e-2
 +(3.34513159198135990+(-6.52796503942793992+(15.5613951451798833+
 (-6.34533642326600689+(.374356830875784296+(-.590832471659976894e-2
 +(.235408172559887276e-4+.461516094568396518e-9*z)*z)*z)*z)*z)*z)*z)*z);
end proc:
approx:= proc(x::float) option inline; H(x^2)*x^3+x end proc:
capprox:=Compiler:-Compile(approx):

J := proc(x::float)
  local t1;
  t1 := x^2;
  (.4290978931547805+(-0.2150922379775017e-1+(0.1194845227917730e-4
   +(0.1351777597551553e-6+0.1467942640582486e-8*t1)*t1)*t1)*t1)*t1*x
   /(1.287293679464341+(-.5794451431789617+0.2339492595289925e-1*t1)*t1)
   +x;
end proc:
cJ:=Compiler:-Compile(J):

populate_approx:=proc(V::Vector(datatype=float[8]),
                      N::integer[4])
  local i::integer[4], q::float[8], x::float[8];
  q:=Pi/(4.0*N);
  for i from 1 to N do
     x:=(i-1.0)*q;
     V[i]:=approx(x);
  end do:
  NULL:
end proc:
cpopulate_approx:=Compiler:-Compile(populate_approx):

populate_J:=proc(V::Vector(datatype=float[8]),
                 N::integer[4])
  local i::integer[4], q::float[8], x::float[8], t1::float[8];
  q:=Pi/(4.0*N);
  for i from 1 to N do
    x:=(i-1.0)*q;
    t1 := x^2;
    V[i]:=(.4290978931547805+(-0.2150922379775017e-1+(0.1194845227917730e-4
           +(0.1351777597551553e-6+0.1467942640582486e-8*t1)*t1)*t1)*t1)*t1*x
           /(1.287293679464341+(-.5794451431789617+0.2339492595289925e-1*t1)*t1)
           +x;
  end do:
  NULL:
end proc:
cpopulate_J:=Compiler:-Compile(populate_J):

populate_approx_diff:=proc(V::Vector,N::posint)
   local i, q, x;
   q:=evalf(Pi/(4.0*N));
   for i from 1 to N do
      x:=(i-1.0)*q;
      V[i]:=abs(capprox(x)-tan(x));
   end do:
   NULL:
end proc:

populate_J_diff:=proc(V::Vector,N::posint)
   local i, q, x, t1;
   q:=evalf(Pi/(4.0*N));
   for i from 1 to N do
      x:=(i-1.0)*q;
      V[i]:=abs(cJ(x)- tan(x));
   end do:
   NULL:
end proc:

 

We can compute a large number of values using these candidate replacements for the tangent function. By inlining the formulae right into procedures that do the calculations in-place on float[8] Vectors, we can use Compiler:-Compile and obtain compiled (from C) programs that run until finished without pausing to call back to Maple.

So these next timings illustrate how quickly the function can do (nc=10^7) executions when compiled from C source. There is just the initial overhead of making the external call, but no extra overhead per individual execution of the compiled tangent candidate functions.

 

Unsurprisingly, compiled `J` is faster to populate the double-precision array, as it does fewer arithmetic operations than does compiled `approx`. (Another potential benefit of "smaller" code may be that it's easier to shoehorn into the limited space on an embedded device.)

 

nc:=10^7:

v1:=Vector[row](nc,datatype=float[8]):
CodeTools:-Usage( cpopulate_approx(v1,nc) );

memory used=0.51KiB, alloc change=0 bytes, cpu time=375.00ms, real time=368.00ms

v2:=Vector[row](nc,datatype=float[8]):
CodeTools:-Usage( cpopulate_J(v2,nc) );

memory used=0.51KiB, alloc change=0 bytes, cpu time=234.00ms, real time=238.00ms

 

Now we wish to compute the absolute errors of our candidate functions.

 

If we compared with the values of tan(x) computed at the same hardware double precision then we would not be taking into account that values from the double-precision builtin `tan` also have some error. We want to compare our candidate functions against more correct values of tangent, and avoid inadvertently compounding any errors in the builtin. Hence we should raise the working precision when computing the baseline tangent values, and it's also a good idea to do the subtraction at higher precision. Below, we'll use Digits=30, which ought to suffice.

 

But it's important to note that the candidate replacements for tangent should themselves not be run under higher working precision, since the end goal is to use them in compiled C, say. The easiest way to get their values is to run Maple's own Compiler:-Compile against them, which generates something similar to CodeGeneration[C] output and then compiles/links it into a shared library that Maple can access via its external calling. All of that process is normally hidden by the Compiler.

 

Digits:=30:

nc:=10^5:

v1diff:=Vector(nc,datatype=float[8]):
populate_approx_diff(v1diff,nc):

v2diff:=Vector(nc,datatype=float[8]):
populate_J_diff(v2diff,nc);

 

We can find the largest absolute error in both of those collections of computed values. These are very close, about 2.5 ulps each.

 

max(v1diff), max(v2diff);

0.252205478780976976e-15, 0.248056073686935993e-15

 

We can compute the average absolute error in both of those collections of computed values. Again, they are quite close.

 

[add(t, t in v1diff), add(t, t in v2diff)]/nc: evalf[5](%)[];

0.28429e-16, 0.28208e-16

 

We can also plot the two collections of absolute errors.

 

xpts:=Vector(nc,i->evalhf((i-1.0)*Pi/(4.0*nc)),datatype=float[8]):

plots:-display(Array([
    plot(Matrix([xpts,v1diff],datatype=float[8]),style=point,
         view=0..3e-16,symbolsize=1),
    plot(Matrix([xpts,v2diff],datatype=float[8]),style=point,
         view=0..3e-16,symbolsize=1)
                      ]));

 

Download compiledcompared2.mw

acer

Code that produced the second procedure:

restart:

corra,corrm:=x,x^3:

usefun:=proc(T) local t;
   Digits:=70;
   t:=sqrt(T);
   evalf((tan(t)-eval(corra,x=t))/eval(corrm,x=t));
end proc:

Digits:=19:

this:=numapprox:-minimax('usefun'(x),x=0..sqrt(Pi/3),[4,2],1,'maxerror'):

ans:=subs(x=x^2,this)*corrm+corra:
tanfun:=unapply(evalf[16](ans),[x::float],'proc_options'=[]):
tanfun:=codegen[optimize](tanfun);

Note that generating such approximations at a high working precision (e.g. Digits=19, or 24, or 40, etc.) and then rounding the coefficients to 16 (or 18) decimal digits is a bit dodgy. The final rounded procedure will not give the same accuracy as the original one with the high-precision coefficients, and that is even more so if the rounded procedure is compiled while the original is run at some higher working precision. So the value of `maxerror` as computed above is not a promise of accuracy for the final approximant. It's not even clear that finding the pair [m,n] of degrees for numerator & denominator which produces a minimal `maxerror` will lead to the most accurate procedure after rounding coefficients to 16 or so places.


@PatrickT If the Array of color values has fewer entries than are needed for the grid or mesh of the plot structure's surface representation then one can get that error message from the GUI, and the surface may even be rendered as entirely black.

If you create a 3D plot with m*n surface data points (by passing the grid=[m,n] option to plot3d, say) then the float[8] Array of color values has to contain at least three times as many values, i.e. m*n*3 entries. Each data point on the surface gets a triple of values when coloring the surface by the RGB scheme. These triples can be taken from the RGB layers of an image Array.

If you start off with a 600x400 pixel image then you'll need to do at least these things: 1) create a 3D plot using the grid=[m,n] option where m and n are in a same ratio as 600 and 400, and 2) resize the image's Array to match the dimensions m*n*3. That's why I like using ImageTools:-Preview, since it rejigs the Array, and can set the size to match what's in the plot.

You should be able to use single forward-slashes instead of double backslashes in Maple on Windows. It works for me.

Perhaps you were using the Classic GUI when you avoided implosion during the animation.

The image of planet Mars which your comment cites (by URL) is not in the right projection for the simple coords=cylindrical approach that John showed. And it's not just that the view of Mars does not have the right projection: the image also has a lot of extraneous white space in its wide side borders, text, etc. All of that would get used in the transferral, and of course that's not what you want.

The site that John mentioned also has images of other planets of our solar system.

 

restart:

 

http://www.vendian.org/mncharity/dir3/planet_globes/TemporaryURL/mars0_src_smaller.jpg

im:=ImageTools:-Read(cat(kernelopts('homedir'),
                     "/Downloads/mars0_src_smaller.jpg")):

n := 128;

128

p:=ImageTools:-Preview(ImageTools:-Transpose(im),2*n,n):

q:=plot3d(1, x=0..2*Pi, y=0..Pi, coords=spherical, style=surface,
          grid=[2*n,n]):

plots:-display(PLOT3D(MESH(op([1,1],q), op([1,4..-1],p)), op(2..-1,q)));

Download marsglobe1.mw 

 

@Axel Vogt Thanks, Axel. I believe that you cited that document once before (somewhere), because at the time it got me interested in Horner's scheme for Matrix polynomial evaluation (at a point value for variable x), which I have code for somewhere... And that could lead to Estrin's form, and parallelization. But I digress.

I did not divide by x^3, and only subtracted the x term, when computing the initial minimax on the road to the result above. (But my motivation was also the Taylor expansion...) A slight modification pretty easily got me what appears to be a more accurate double-precision performer, as this shorter result,

proc(x::float)
 local t1;
 t1 := x^2;
 (.4290978931547805+(-0.2150922379775017e-1+(0.1194845227917730e-4
  +(0.1351777597551553e-6+0.1467942640582486e-8*t1)*t1)*t1)*t1)*t1*x
  /(1.287293679464341+(-.5794451431789617+0.2339492595289925e-1*t1)*t1)
  +x;
end proc

Getting some reasonably simple result that could attain, oh say, 0.6 ulps maximal error on [0..Pi/4] would be nice to show as Maple code.

acer


I'm not sure whether the grid/cell structure in use is related to the differing behaviours. The attached worksheet gives another variable view, and overlays with the `outlines` option. It seems to me that for some values of the parameters there may be visual indications of a tie-in between the cell structure and the behaviours. But as Alejandro mentioned, the source code in question may be compiled and unavailable to view, so that can only be conjecture.

interpexplore.mw

acer

Perhaps this simpler example, animated, reveals what is going on.

As the two parallel lines (seen when `a` is farther away from the value 0) get closer together there is a point when something curious happens.

The distance between the computed cells in each of the two solution lines becomes greater than (something like) the distance between those two lines. And for `a` closer to 0 than that critical value, the routine appears to identify the solution as a set of islands instead of as two lines. Points from both lines suddenly start getting identified as being within the same cell. (I'm not sure whether "cell" is the right term here. It is used in the ?implicitplot documentation, and possibly this mirrors what happens in the Arrays in the CURVES section of the generated plot structure.)

This effect can be delayed a bit by using a huge `numpoints` and a large `gridrefine` setting. But eventually, as the constant term gets smaller, the curiosity occurs.

And then, as `a` continues to shrink toward 0 value, those solution islands shrink to points (and might even vanish).

plots:-animate(plots:-implicitplot,[(x-y)^2+a, x = 0 .. 1,
                                     y = 0 .. 1],
               a=-0.01..0.0, frames=100);

(This is essentially the same as Preben's animation, simplified and with a different slope.)

acer

@ThU Well, if you only ever want the shared colour coding scheme to be ZHUE (hue determined by z-value) then there is a much easier way. The stuff I wrote above is something I cooked up with more general coloring schemes in mind.

 

restart:
f1 := (x, y) -> sin((1/2)*y+(1/2)*x):

f2 := f1+1:

plots:-display(plot3d(f1, 0..10, 0..20),
               plot3d(f2, 0..10, 0..20),shading=zhue,
               orientation=[30,70,10]);

 

Download sharedzhue.mw


@Markiyan Hirnyk Yes, I thought that was obvious.

I changed f2 from f1+2 to be f1+1, so that there was overlap. I think that the overlap illustrates better that the colors match for any given shared z-value, since with overlap more z-values (than just the singleton z=1) are shared.

And, of course, maxz and minz must get changed accordingly, if they are to be force fed for an example. I gave the appropriate maxz and minz for my example, with the different f2=f1+1.

This also relates to having a more general scheme (mine given at top being non-bug-free, I'm sure) which can extract the maximal and minimal z-values automatically.


@ThU I tried to show a more general mechanism, where one does not know in advance what the max and min values -- global over both f1 and f2 combined -- will be. Hence I dug it out of the initial plots. If one knows those in advance then the coding can be simpler.

And it can also be done without all the `op` and data structure manipulation. And it can look a little simpler with expressions instead of operators (...I think). Note that I didn't take any special care for efficiency before, either.

 

restart:

f1 := sin((1/2)*y+(1/2)*x):
f2 := f1 + 1:

maxz, minz := 2, -1:

p1 := plot3d(f1, x=0..10, y=0..20,
             color=[(maxz-f1)/(maxz-minz), 0.75, 0.75,
                    colortype=HSV]):
p2 := plot3d(f2, x=0..10, y=0..20,
             color=[(maxz-f2)/(maxz-minz), 0.75, 0.75,
                    colortype=HSV]):

plots:-display(p1,p2,axes=box,orientation=[25,70,10]);

restart:

f1 := sin((1/2)*y+(1/2)*x):
f2 := f1 + 1:

maxz, minz := 2, -1:

F := fun -> plot3d(fun, x=0..10, y=0..20,
                 color=[(maxz-fun)/(maxz-minz), 0.75, 0.75,
                         colortype=HSV]):

plots:-display(F(f1),F(f2),axes=box,orientation=[25,70,10]);

 

 

Download colorfun2b.mw


I would say it even more strongly: it is a much worse idea to try to integrate and differentiate an interpolated set of discrete data values than it is to work directly with the differential system.

So, if one has an ode system in Maple then do not first extract discrete values of the solution, then interpolate, and then integrate/differentiate/root-find/etc. It is generally a terrible idea to do so, from a numerical standpoint.

For integration and differentiation it is likely a better idea in general to augment the ode system, and one should be able to find several posts about that on this site. For multiple root-finding of some expression in terms of the solution it is generally better to make the extra effort and use dsolve,events.

Suppose that you want to integrate the solution to some ode. Three possible ways are: 1) generate discrete data, interpolate, and integrate that interpolating result, 2) obtain a procedure from dsolve/numeric and then use evalf/Int on that, and 3) augment the ode with a new variable and a new equation (which sets that new variable equal to the derivative of what you want integrated) and call dsolve/numeric on the augmented ode.

Of these ways, 1) is by far the worst, numerically, in general.

I believe that 3) is generally better than 2), numerically, because the dsolve/numeric engine knows best how to do its own error analysis and step-size control dynamically to attain any requested tolerance. Whereas passing a piecewise interpolating procedure (as returned by dsolve/numeric) on to some numeric integrator introduces an unnecessary break in the informational flow. The evalf/Int integrator will be generally restricted in its accuracy by the fixed quality of the returned dsolve/numeric procedure.

Differentiation near the end-points of the original region of interest will be especially difficult to do effectively using an interpolating scheme from a precomputed set of data from the numerically solved ode.

acer
