MaplePrimes Activity


These are replies submitted by acer

I can search for the word nebuchadnezzar and get to this thread. So some of my posts have been processed so as to allow searching.

It could be that the reply containing shake and evalr wasn't so processed because the terms were in URLs, or were highlighted, or because some mechanism doesn't work as it once did. Or, possibly, because it seemed negative (but isn't that a slightly paranoid supposition?).

acer

Why are all those procs being created inside the counter loop? That means that Maple has to recreate the actual procedure bodies with each iteration through the loop, which is inefficient.

Also, when there are that many global variables in all the procs, that is another sign that something is amiss.
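
Just to sketch the pattern (with made-up names, not your actual code): define the procedure once, outside the loop, and pass in whatever changes as an argument.

N := 1000: total := 0:
# wasteful: the arrow procedure gets reconstructed on every pass
for k to N do
    f := x -> x^2 + k;
    total := total + f(k);
end do:

# better: define one procedure, once, and pass the changing value in
g := (x, c) -> x^2 + c:
total := 0:
for k to N do
    total := total + g(k, k);
end do: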

acer

Why would you want the routine to be simpler, for its own sake? For these kinds of routines, it can pay off to take the time and make it super efficient and super fast, even if that doesn't make it simple.

I suspect that the following internal "helper" routines, which ImageTools:-Rotate uses to do the actual work, could be made more efficient.

kernelopts(opaquemodules=false):
eval(ImageTools:-rotate:-rotate_90);
eval(ImageTools:-flip:-flipHorz);
eval(ImageTools:-flip:-flipVertInplace);

They generally have doubly nested loops (one over the number of layers and one over the width) with calls to ArrayTools:-Copy on the inside. External calls are expensive enough that this is not optimal. It would likely be more efficient to put all of that in C instead (where function calls are much cheaper). Alternatively, it could all be rewritten as procs with option autocompile and/or option hfloat, or be made evalhf'able. I suspect that they can't easily be passed to the Compiler while they contain ArrayTools calls, which are call_external. What I'm trying to say is this: ArrayTools:-Copy is a compiled routine, sure, and that is fast. But simply calling ArrayTools:-Copy many times (for each column, for each layer) is in itself a cost that accumulates, and one which could be avoided by instead having the entire routine be compiled.

In other words, if you want best performance then make it all run (somehow) as compiled code. Avoid Maple function calls.
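
Here is a rough, hypothetical sketch of that last point: a horizontal flip written with explicit scalar loops, assuming an m x n x k float[8] Array, so that after Compiler:-Compile the whole operation runs as a single external call rather than as one ArrayTools:-Copy call per column per layer. The names flipHorzC and cflip are made up, and this is not the actual ImageTools code.

flipHorzC := proc(img::Array(datatype=float[8]),
                  res::Array(datatype=float[8]),
                  m::integer, n::integer, k::integer)
    local i, j, c;
    for c to k do
        for i to m do
            for j to n do
                # mirror each row: column j goes to column n+1-j
                res[i, n+1-j, c] := img[i, j, c];
            end do;
        end do;
    end do;
    return 0;
end proc:

cflip := Compiler:-Compile(flipHorzC):

img := Array(1..512, 1..512, 1..3, datatype=float[8]):
res := Array(1..512, 1..512, 1..3, datatype=float[8]):
cflip(img, res, 512, 512, 3):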

The ImageTools:-rotate:-rotate_180 routine works by doing both a flip and a flop. That cost might be cut in half by doing the rewrite directly. But perhaps that could only use fast vendor BLAS if dcopy admits a negative increment argument, and so a single straight piece of O(n^2) C code might be slower than twice the work done using cache-tuned BLAS. You'd have to benchmark it.
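
A rough timing harness for that could be as simple as the sketch below. A blank 512x512x3 float[8] C_order Array stands in for a real image, and I'm assuming the angle argument is in degrees; any candidate rewrite would be timed the same way on the same data.

img := Array(1..512, 1..512, 1..3, datatype=float[8], order=C_order):
N := 50:
st := time[real]():
for i to N do
    ImageTools:-Rotate(img, 180);
end do:
(time[real]() - st)/N;   # average wall-clock seconds per call, stock routine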

To its credit, the ImageTools:-Rotate routine does allow inplace operation on an optional output argument (as I described above) and thus allows for no unnecessary garbage production. That is a very good thing, because garbage generation and collection (like Maple function calls) is another big potential efficiency hit.

acer

I do not know the exact answer, offhand.

But consider that Maple does things in base 10, and for arithmetic and trig (and some other common special functions) it specifies its accuracy (in ulps) in terms of base-10 digits. So a floating-point library which works at arbitrary precision in base 2 is going to behave differently.

For Maple to get the same radix-10 accuracy as at present, but somehow utilizing mpfr, it would have to work out, for each atomic computation, the radix-2 precision, accuracy, and guard digits equivalent to what it needs in radix 10, or vice versa. Maybe I don't know what I'm talking about.
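
Back of the envelope, matching d decimal digits takes roughly d*log[2](10) bits, plus guard digits. The sketch below just computes that figure; it is not what Maple or mpfr actually do internally.

equiv_bits := d -> ceil(d*evalf(log[2](10))):
equiv_bits(10);    # 10 decimal digits need about 34 bits
equiv_bits(100);   # 100 decimal digits need about 333 bits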

The latest timing comparison that I see at that site is for mpfr v2.4.0 (though the latest release seems to be v2.4.2), and that run is compared against Maple 12. Older timings are also there (e.g. here, against Maple 9.01). So one can see something about how its performance has changed relative to the competition.

It's difficult to see immediately exactly how Maple might have improved versus itself, since the timings are reported as run on different machines. Of course, if you wanted to see how Maple had progressed, you could run the Maple source of their timing comparison on the same machine against various releases.

acer

He asked twice. And you answered in the other thread. See here.

The answer is yes, it can be done in Maple.

acer

Sorry, I typed that wrongly. It's a thread in the group sci.math.symbolic, not comp.soft-sys.math.maple.

But the links should still work correctly.

acer

I believe that I can say, based on these replies, that the current justification of this particular parser difference is not easy (for any of us) to remember. That is telling.

But also, couldn't the 2D Math parser be made nicer here? The posted example is about pasting plaintext 1D input into Standard while in 2D input mode. In 1D Maple notation, the backslash does not mean set-minus. So why can't the 2D Math parser recognize it as having its 1D Maple notation meaning and parse it appropriately, rather than parsing it using the 2D Math syntax? Why can't the GUI and the 2D Math parser handle it the same way that the 1D parser does, which for this example would result in a single concatenated long line?

There are lots of examples where the 2D Math parser will interpret pasted 1D input in a special way, and adjust accordingly. Here is one nice example.

a/b/c;

If the above code is pasted in, while in 2D Math input mode, it gets parsed like a*(1/b)*(1/c). In other words, it gets parsed quite differently than if one types those symbols in by hand. So the 2D Math parser is quite capable of distinguishing pasted 1D plain text.

Why couldn't the 2D Math parser be taught how to handle backslash in pasted 1D input as the line-continuation character (when valid, and not as an escape symbol inside a string)?
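
For reference, this is the 1D behaviour I mean (as I understand it): entered in 1D Maple notation, the trailing backslash is just a continuation, so the lines get concatenated.

# the 1D parser treats the trailing backslash as a line continuation,
# so these two lines are read as the single statement  x := 123456;
x := 123\
456;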

acer

I agree wholeheartedly, Joe, that information such as this should be in the help system. But it's lacking.

I'd like to see new help-pages showing usage of the various types of procedure parameters. But it should not be the existing ?parameter_classes page, which like many help-pages reads as drily as a unix man-page. That existing set of pages describing procedures is also much more about writing procedures than about actually using them.

There is the ?colondash help-page, but I can hardly bear to look at it, since it absurdly tries to explain away using indexed package[member] syntax when package:-member is so obviously more often superior.
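
For what it's worth, the usual small illustration of that point is a sketch like the following, where the global name Transpose gets deliberately clobbered.

M := Matrix([[1, 2], [3, 4]]):
Transpose := 5:                  # clobber the global name
LinearAlgebra:-Transpose(M);     # still fine: :- does not evaluate the member name
LinearAlgebra[Transpose](M);     # no longer works: the index evaluates to 5 first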

I'd like to see a whole set of different new help-pages, for each class of parameter. And then each parameter in the calling sequence of each routine's help-page could itself be a link directly to the relevant usage page.

acer

Look at the help-page for the DiscreteTransforms package ( ?DiscreteTransforms ) in your Maple 11.

I find it unhelpful that the Online Help pages do not mention the version in which a command or package was introduced (as the individual online help pages of some other major commercial math products do). It is far more difficult than it ought to be to discover that DiscreteTransforms was introduced in Maple 9. See here.

acer

You don't have it quite right. Yes, the Mathworks' own symbolic toolbox now uses MuPAD as the engine. But Maplesoft's current Maple Toolbox for Matlab (MTM) is a drop-in replacement, and one can configure Matlab to use either as the symbolic engine.

I don't have proof, but I'd be surprised if the form & strength of the symbolic toolbox in Matlab did not change when the underlying default engine was switched. I'd be surprised if some users' worksheets didn't start producing new and slightly different results (which might be a problem...). Using the MTM as the symbolic engine might mitigate such differences (but I have no examples).

See here for more info, which mentions that MTM is being offered free for owners of current Maple.

acer

As fascinating as the rationale for one's personal choice of Maple interface is, perhaps that topic could be started in a new blog post or forum thread? I feel confident that other people might have things to add on that. But it doesn't really pertain to 1D vs 2D parsing.

cheers,
acer
