acer

32333 Reputation

29 Badges

19 years, 319 days
Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are replies submitted by acer

There are a few things that might be holding Maple back here.

One of those is that the compiled external (NAG) routine d01akc, which specializes in oscillatory non-singular integrands, has a `max_num_subint` parameter which is not exposed at the user level in Maple. The accuracy tolerances are exposed, via evalf/Int's `epsilon` parameter, but the maximal number of allowed subintervals is not. So that specialized routine will fail for an integral behaving like BesselJ(0,50001*x) when it attempts to use more than 200 subintervals (a hard-coded default value). With only 200 subintervals, one can request an epsilon of only about 5e-2 or so and still have the integrand at hand succeed. I didn't try to subvert Maple to test whether that routine could handle 50001*x in reasonable time, if allowed a very high number of subintervals.
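As a rough illustration of the same failure mode outside Maple (this uses SciPy's QUADPACK wrapper, not the NAG routine; its `limit` keyword is an analogue of `max_num_subint`, and a much smaller oscillation frequency is used so the sketch runs quickly):

```python
import warnings
from scipy import integrate, special

# Oscillatory integrand in the same spirit as the post's BesselJ(0,50001*x);
# a smaller frequency keeps this illustration fast.
a = 101.0
f = lambda x: special.j0(a * x)

# Cap the adaptive scheme at a single subinterval: it cannot resolve the
# oscillations and issues an IntegrationWarning (cf. d01akc's cap of 200).
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    crude, crude_err = integrate.quad(f, 0.0, 1.0, limit=1)

# With a generous subinterval budget the same integral converges.
val, err = integrate.quad(f, 0.0, 1.0, limit=500)
```

With `limit=1` the adaptive scheme hits its cap immediately and warns; with a generous budget the same integral converges to a small error estimate.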

Not having a super fast compiled hardware floating-point BesselJ0 may also affect the performance. Axel Vogt made some interesting posts and comments here a while back about fast BesselK and using that for numeric quadrature.

acer

I suspect that he was referring to g, the acceleration due to gravity, or to a force on a body of mass m under such acceleration.

acer

If we're lucky, someone will explain its relationship to the lambda calculus in detail.

Jacques has written in the past that, "Maple's unapply is the same as Church's lambda abstraction operator."

The help-page ?unapply says,

- The unapply command implements the lambda-expressions of lambda calculus.

For reference see ``An Implementation of Operators for Symbolic Algebra
Systems'' by G.H. Gonnet, SYMSAC July 1986.

What I wonder is whether the help ever claimed that, "The scoping behaviour of unbound names is not the same in the lambda calculus," and if so how that might be true.
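One place where scoping of unbound names visibly diverges from capture-avoiding substitution in the pure lambda calculus is late binding of free names: a Python lambda (like a Maple procedure built by unapply over a name that was unassigned at the time) looks its free names up at call time, so rebinding them afterwards changes the result. A small Python illustration:

```python
# Illustrative Python analogue, not Maple: a free (unbound) name in a
# lambda is looked up at call time, not captured at definition time.
y = 1
f = lambda x: x + y   # y is free here, not a parameter

first = f(1)          # uses the current binding y = 1

y = 10                # rebinding the free name...
second = f(1)         # ...changes what f returns
```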

acer

I was sitting there, wondering why eval(foo:-`:-2`) didn't work. But of course it's a local name, so I can't eval what I type in as the global name.

> restart:
> read("bar.m");
> tmp:=[anames()];
                               tmp := [foo:-:-2]
 
> dismantle(tmp[1]);
 
NAME(4): foo:-`:-2` #[modulename = foo]
 
> eval(foo:-`:-2`);
Error, `foo` does not evaluate to a module
> eval(tmp[1]);
                   proc() print("can you see it?") end proc

So thank you very much. It is march('extractfile',...) that does what I was after.

acer

Yes, that is what I was trying to figure out, thanks.

There is a (not insurmountable) difficulty if the .mla archive has many module members stored in it, as those all appear as ":-XXX.m" where XXX is a posint when listed by march. There may be many such archive members, but I don't mind searching, if it can be done programmatically. The key thing for me is that I don't want the module to be referenced and the ModuleLoad routine to get run.

In my simple experiment, I could extract ":-2.m" to a file, which I called "bar.m". It was the only module member in the foo.mla archive.

But I don't see how I can "load" that bar.m file. I can `read`() it, but then what would I look for?

> restart:
> read("bar.m");
> anames();
                                   foo:-:-2
 
> lprint(%);
foo:-`:-2`
> eval(foo:-`:-2`);
Error, `foo` does not evaluate to a module

I guess that this is a special case of a wider question: how can one view, individually, the contents of the ":-XXX.m" module members that are stored in a .mla archive?

acer

No, stopat() and trace() don't take effect until after the ModuleLoad executes, as can be shown by experiment.

But I realized a little later that printlevel can be set high before the module name is first accessed and its ModuleLoad tripped.

> restart:
> libname:="./foo.mla",libname:
> kernelopts(opaquemodules=false):
> printlevel:=1000:
> foo();
{--> enter ModuleLoad, args =
                               "can you see it?"
 
<-- exit ModuleLoad (now at top level) = }
{--> enter evalapply, args = module () local ModuleLoad; option package;
ModuleLoad := proc () print("can you see it?") end proc; end module, []
{--> enter type/attributed, args = module () local ModuleLoad; option package;
ModuleLoad := proc () print("can you see it?") end proc; end module, generic
                                     false
 
<-- exit type/attributed (now in evalapply) = false}
          (module() local ModuleLoad; option package;  end module)()
 
<-- exit evalapply (now at top level) = module () local ModuleLoad; option
package; ModuleLoad := proc () print("can you see it?") end proc; end module()
}
          (module() local ModuleLoad; option package;  end module)()

acer

The one-time cost of defining all the StringTools exports as their session-dependent call_externals may be negligible compared to the cost of the initial dlopen of the mstring dynamic library.

Here is a crude illustration, done in order in a fresh TTY session.

> st:=time():
> try StringTools:-Join(): catch: end try:
> time()-st;
                                     0.002
 
> st:=time():
> try
> for j in exports(StringTools) do
>   StringTools[j]():
> end do:
> catch: end try:
> time()-st;
                                     0.001

> st:=time():
> try
> for j in exports(StringTools) do
>   StringTools[j]():
> end do:
> catch: end try:
> time()-st;
                                      0.

Ignoring effects due to try..catch overhead and the cost of raising errors, and assuming that the timer is accurate at such small granularity, it looks like the initial dlopen costs about two-thirds as much as initializing/redefining all of the (approximately 200) StringTools exports.

It may only be for packages with many exports (each needing redefinition) that having ModuleLoad deal with them all at once on first access might be undesirable. So it may be reasonable to have StringTools:-ModuleLoad initialize them all, without any change to the persistent-store mechanism.
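The amortization argument can be sketched generically in Python (an illustration of one-time initialization on first access, not Maple's actual persistent-store mechanism): all of a package's members are bound together on the first touch, so only the first access pays the setup cost.

```python
import time

class LazyPackage:
    """Binds every export on the first attribute access, analogous to a
    ModuleLoad that initializes all of a package's exports at once."""

    def __init__(self, names):
        self._names = list(names)
        self.init_runs = 0              # how many times the one-time setup ran

    def _initialize(self):
        self.init_runs += 1
        time.sleep(0.01)                # stand-in for the one-time dlopen cost
        for n in self._names:
            # Each export becomes a trivial callable, standing in for a
            # session-dependent call_external definition.
            setattr(self, n, lambda n=n: n)

    def __getattr__(self, name):
        # Only reached when `name` is not yet bound on the instance.
        if name in self._names:
            self._initialize()
            return self.__dict__[name]
        raise AttributeError(name)

pkg = LazyPackage(["Join", "Split", "Trim"])
r1 = pkg.Join()    # first access pays the whole initialization cost
r2 = pkg.Split()   # subsequent accesses find the bindings already made
```

The setup runs exactly once, on the first access, no matter which export is touched first.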

acer

Yes, these waters can get muddy. I guessed that he was having problems with implicit multiplication because I had to fill in the missing `*`s when using the 1D input from the properties of his post's gif image. But I wasn't sure that it wasn't an artefact of some conversion by MaplePrimes. So I just filled in the corrections and added a sidenote. Implicit multiplication, and 2D Math input in general, is harder to debug when the code is not one's own, because of such additional ambiguities. I suppose that his claim about failing with combine was stronger evidence.

May I ask, how did you yourself get his expression into Maple? Is there some easier way, when the 2D Math gets put up here in a post as an image?

acer

I ran the above code with Digits=32, in Maple 11 on 64bit Linux. After 1880 iterations through the loop Maple claimed to have allocated 80 million words of memory while the OS `top` utility showed the resident memory allocation slowly climb to 750MB.

That indicates a memory leak, likely related to software-float external calling. It may be similar to what was reported here for Maple 10.

acer

It's not true that, "when the number of equations equals the number of unknowns there was a unique solution."

There are three possibilities, when there are n linear equations in n unknowns.

The first possibility is that there are no solutions. The system of equations is usually called inconsistent in this situation. An example could be something like this,

x + y = 2;
2*x + 2*y = 11;

A second possibility is that there are infinitely many solutions. This is often called an underdetermined system. It can arise when one of the equations is a multiple of another (or a linear combination of several others), so there is not enough data to pin the variables down to unique solution values. A simple example is this,

x + y = 2;
2*x + 2*y = 4;

The third possibility is that there is a unique solution.

Linear Algebra is the discipline of mathematics that formalizes all of the above. Representing the linear multivariate equations as a Matrix, and manipulating that object, provides neat ways to determine which of the three situations above (regarding the number of possible solutions) holds for the data at hand.
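The three cases can be distinguished mechanically by comparing the rank of the coefficient Matrix with the rank of the augmented Matrix (the Rouché–Capelli criterion). Here is a small NumPy sketch of that test (an illustration in Python; in Maple one would reach for the LinearAlgebra package):

```python
import numpy as np

def classify(A, b):
    """Classify the n x n linear system A x = b by comparing ranks."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float).reshape(-1, 1)
    rank_A = np.linalg.matrix_rank(A)
    rank_Ab = np.linalg.matrix_rank(np.hstack([A, b]))
    if rank_A < rank_Ab:
        return "inconsistent: no solutions"
    if rank_A < A.shape[1]:
        return "underdetermined: infinitely many solutions"
    return "unique solution"

# The two examples from the text, plus a uniquely solvable system.
no_sol  = classify([[1, 1], [2, 2]], [2, 11])
inf_sol = classify([[1, 1], [2, 2]], [2, 4])
unique  = classify([[1, 1], [1, -1]], [2, 0])
```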

acer
