acer

MaplePrimes Activity


These are replies submitted by acer

I realized that an obvious improvement would be to generate each try's initial point inside a complex hyperbox specified by the user.

As originally written, each initial point is taken from the hyperbox [-1.0-1.0*I..1.0+1.0*I]^n or [-1.0..1.0]^n. That's going to be weak in general, since the set of points in that box might not intersect the set of points which converge to some of the roots.

In a related way, it would be an improvement to allow the user to supply the bounds of that box, so that the search could also be restricted to roots lying inside it.

So an additional optional parameter, a list of Maple ranges, could specify the box used for both of those purposes. For each dimension, the ends of the range would specify the lower-left and upper-right (possibly complex) corners.

Picking a finite box well, when not specified in the arguments, will be tricky.
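
Just to make the idea concrete, here is a rough sketch (my own, with made-up names, not the posted routine) of drawing one random initial point from a box given as a list of ranges, one range per dimension:

> # Sketch: draw a random initial point from a hyperbox given as a list of
> # ranges, where each range's endpoints are the lower-left and upper-right
> # (possibly complex) corners for that dimension.
> randstart := proc(box::list(range))
>   local i, lo, hi, t1, t2, pt;
>   pt := Vector(nops(box));
>   for i to nops(box) do
>     lo := evalf(lhs(box[i]));
>     hi := evalf(rhs(box[i]));
>     t1 := rand()/10.0^12;   # crude uniform deviates in 0..1
>     t2 := rand()/10.0^12;
>     pt[i] := Re(lo)+t1*(Re(hi)-Re(lo)) + (Im(lo)+t2*(Im(hi)-Im(lo)))*I;
>   end do;
>   pt;
> end proc:
>
> randstart([-1.0-1.0*I .. 1.0+1.0*I, -2.0 .. 2.0]);

Something like that could be called once per try, and restricting accepted roots to the same box would handle the second purpose.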

acer

The question of how the Array will eventually get used is important here.

Do you want to shift the cost to element assignment time, or to access time? Do you want to pay in access/assignment time, or in storage? Do you expect most entries, or only a few, to get accessed or assigned at some point?

I believe that rtable indexing-functions are flexible enough (sometimes with ingenuity required) to control those aspects and to choose where to put the cost.
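
For example, here is a small sketch of my own (the name is made up, and I'm assuming the usual user-defined `index/` calling convention of an index list, the rtable, and a value list on assignment) which shifts the cost entirely to access time by computing entries on demand instead of storing them:

> # Sketch: entries are computed when read (nargs=2) and never stored, so the
> # cost is paid per access rather than at assignment time or in storage.
> `index/onaccess` := proc(idx::list, A, val::list)
>   local i;
>   if nargs = 2 then
>     evalf(1/add(i, i in idx));   # e.g. a Hilbert-like entry, computed on demand
>   else
>     error "entries of this Array are computed, not assigned";
>   end if;
> end proc:
>
> H := Array(1..5, 1..5, 'onaccess'):
> H[2,3];   # should return evalf(1/5)

A memoizing variant could instead assign the computed value into the Array's storage on first access, which moves part of the cost back to storage.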

acer

I posted a routine for this here.

acer

Hi Axel,

I was interested in this method for a few reasons. But none of those is a really sharp focus, so the blog entry serves as a rough reminder to me too! Here are some of those points of interest for me:

It's supposedly the method that Matlab's `roots` function uses. It can give an idea of the conditioning of the roots, which isn't something that one sees often. It uses a technique very similar to that used by RootFinding:-BivariatePolynomial, which is functionality that I like (it can find all the roots!) but whose source code could, I suspect, be made much more efficient. I think the technique and theory are neat, even if it'll never be the fastest in Maple or the most robust method around.

I just like polynomial rootfinders.
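
For reference, here is a bare-bones sketch of that basic idea (nothing to do with the blog post's actual code): the roots of a monic univariate polynomial are the eigenvalues of its companion matrix.

> # Sketch: polynomial rootfinding via the eigenvalues of the companion matrix.
> p := x^4 - 3*x^3 + 2*x + 5:
> C := LinearAlgebra:-CompanionMatrix(p, x):
> LinearAlgebra:-Eigenvalues(evalf(C));

The result can be compared against fsolve(p, x, 'complex').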

acer

The original poster mentioned that the vector should be "not of a known length or known values." Your example does not match that abstract description.

acer

It occurred to me that someone might want such a fast hardware J0 to be utilized even in cases where the original `BesselJ` function was hard-bound into some other Maple Library routine.

By that I mean the case where some other Library routine already had a reference to BesselJ in it when it was read from source and saved into the Maple Library. In such a case it is the global name BesselJ which was bound, prior to that other routine being saved. So it wouldn't change that routine's behaviour if, say, I loaded a module which exported its own BesselJ; that would only affect the instances where I typed BesselJ in my session, with that export's binding in effect. It wouldn't change the behaviour of routines which were saved with the global name already referenced in them. To affect those routines I would instead have to redefine the global BesselJ name.

So below I show an example of that. I would advise that you don't run the code unless you understand what it does. I wouldn't want you to inadvertently wreck the BesselJ in your installed Maple Library.

What the code below does is unprotect BesselJ and, in its place, save a version that calls the fast external library when n=0 and x::numeric, and in all other cases calls the original BesselJ. The original BesselJ can still be referenced because a copy of it is made right at the start.

Since external calls may be session dependent, it shouldn't be possible to properly save the call_external itself. So instead the Maple-level entry routine to the fast external function will have to redefine itself the first time it gets used in a new session. This is brought about with a ModuleLoad action. See here for some comments on that.

> hwBJ := module()
> option package;
> export hwBesselJ0;
> local ModuleLoad;
>   ModuleLoad := proc()
>     # The OS libm.so may have double-precision Bessel J0.
>     unprotect(hwBesselJ0);
>     hwBesselJ0 := define_external('j0',
>        'r'::float[8],'RETURN'::float[8] ,
>        LIB="/usr/lib64/libm.so"):
>     protect(hwBesselJ0);
>   end proc;
> end module:
>
> __origBesselJ := copy(eval(BesselJ)):
> __origBesselJ(0,5.6);
                                 0.02697088469
>
> unprotect(BesselJ):
> BesselJ := proc(n,x)
>   if n=0 and Digits<=trunc(evalhf(Digits))
>    and type(x,'numeric') then
>     hwBJ:-hwBesselJ0(x);
>   else
>     __origBesselJ(n,x);
>   end if;
> end proc:
>
> march(create,"hwBJ.mla");
> libname:="hwBJ.mla",libname;
          libname := "hwBJ.mla", "/home/acer/maple/maple11.02/lib"

Now I can save all three parts: the module which sets up the call_external and then redefines its own export to use it, the copy of the original BesselJ, and my new modified BesselJ.

> savelib(hwBJ);
> savelib(__origBesselJ);
> savelib(BesselJ);

Now I can restart, and test it. I want it to use the hardware J0 when Digits is not high and the input is real and numeric, but otherwise I want the usual BesselJ behaviour. If I were being really careful I would have tested the hardware J0 to find out for what range of values it is accurate, and then restricted its use to that range.

> restart:
> libname:="hwBJ.mla",libname;
          libname := "hwBJ.mla", "/home/acer/maple/maple11.02/lib"
 
>
> BesselJ(0,100.0);
                             0.0199858503042231184
 
> evalf(BesselJ(0,17));
                                 -0.1698542522
 
> BesselJ(0,y);
                                 BesselJ(0, y)
 
> BesselJ(1,100.0);
                                -0.07714535201
 
> Digits:=30: BesselJ(0,100.0); Digits:=10:
                       0.0199858503042231224242283909508
 
>
> evalhf(eval(BesselJ(0,17)));
                             -0.169854252151183549

At this point I should mention that I don't offhand know of places where BesselJ is actually hardcoded into Library sources. But these methods should be feasible for other special functions as well. And I do know that, for example, GAMMA is coded into all sorts of routines including some for floating-point calculations.

It might be fun to work out robust, platform-independent Maple code to reliably replace a useful key set of special functions (when used at hardware or lower precision, and over ranges where they are accurate).

acer

It's not really like how the GMP (GnuMP) library is used: GMP is tied quite tightly into the Maple kernel, which is linked directly against it, and its data structures don't appear directly at the Maple Library level.

It is much closer to how the NAG library is used from the Maple Library, where an external library specialized in numerics is dynamically opened. The data may be visible at the Library level (hardware arrays and, now, even hardware scalars), and some Library routine decides on usage according to problem type (algorithmic switching, often invisible to the user).

acer

I'm sorry that I don't understand the physical problem well enough to comment on your alternative method.

You asked how to get the operator body. You had this,

power:=theta -> 4*p[0]*BesselJ(1, k*r*sin(theta))^2/(k^2*r^2*sin(theta)^2);

Since the only place that theta is used in `power` is in the calls to `sin` (which return unevaluated when passed a name), you could just call power(theta).

> power(theta);
                                                       2
                      4 p[0] BesselJ(1, k r sin(theta))
                      ----------------------------------
                               2  2           2
                              k  r  sin(theta)

Trying to be fancy...


> op(map(FromInert, subs({_Inert_PARAM(1)=_Inert_NAME("theta")},
> indets( ToInert(eval(power)), specfunc(anything,_Inert_STATSEQ) ))));
                                                       2
                      4 p[0] BesselJ(1, k r sin(theta))
                      ----------------------------------
                               2  2           2
                              k  r  sin(theta)

acer

At the author's special request due to licensing concerns, this comment has been replaced by

this blog entry

Will

Sure. What surprised me here was that both input objects started off as the table reference p[0], and to both I did the same context-menu action (or Format menu), and the result was two atomic identifiers which Maple did not see as the same name.

acer
