I suggest a Linux operating system on hardware compatible with the x86-64 architecture.
In particular I would suggest the Athlon64 X2 Socket AM2 running a 64-bit SuSE Linux distribution (eg 10.1). The AMD Opteron is also a nice choice, and allows for SMP, though it is more expensive. If you can wait a little while, you could consider the AMD quad-core Opteron.
Some advantages of such a setup are:
- remote login to run Maple is easy and reliable
- the ATLAS BLAS used by Maple are well tuned for this architecture
- GnuMP (gmp) now has some assembler in it for this architecture
- you can install a 64-bit OS with the 32-bit runtime libraries, after which both 64-bit and 32-bit Maple can be installed
Make sure that you can run a version of Maple that supports the GlobalOptimization toolbox. You may choose to try it at some later date, say to optimize parameters in some Simulink model. It is supported on 32-bit Linux.
The Core2 Duo is attractive, but it's not clear that Maple is quite as well optimized for it yet.
I know that I run the risk of being flamed, but I advise against OSX. Running remote X-apps is more work, it doesn't support Maple's Classic interface, and judging by reports here and on usenet it runs into more Maple 'issues' than does Linux.
Apart from having reasonably good BLAS from the Intel Math Kernel Library (MKL), I see few if any reasons to run Maple on 32-bit Windows for serious computation. The remote access of Linux/UNIX is hard to beat. Multiple instances of Maple will run on a Linux machine without blinking an eye -- something that I believe is hardly true of Windows.
acer
I notice that .mla and .hdb are not present on that site.
acer
Thanks. As it happens, I had already read those help-pages, but I appreciate the comment.
I have not found the evaluation behaviour of tables under discussion documented in any help-page. The quote that I included above actually describes only the opposite behaviour, for other types. It ought to be more clearly documented.
I have not checked the manuals on this point. I know that there is quite a bit inside the Advanced Programming Guide which cannot be found in any help-page. I feel that is another more general problem which ideally should be corrected.
acer
Perhaps the (documented) type with_unit could serve instead of has_unit.
For example,
> a := Unit('m'), 4*Unit('cm'), 9*Unit('ft'), 16*Unit('N')/Unit('cm'):
> op(map(subsindets, [a], with_unit, z->convert(combine(z,units),unit_free) ));
1, 1/25, 3429/1250, 1600
Or, it might be tried in Joe's nice UnitsToUnity routine.
acer
I heard that Mma 6 is using Qt for its GUI on Linux/Unix. If so, then I wonder whether it's very snappy.
acer
I didn't mean to imply that it was necessarily different from all other objects. But it's natural to imagine that someone would want, as a data structure, an object which would not suffer from unwanted evaluations.
Prior to kernel-based parameter processing (which is a very recent development) there were a slew of ways to inadvertently get unwanted evaluations of data or parameter options. Consider optional keyword parameters: what else but a string could be used as such a keyword and not be at risk of extra evaluation? How could one make a keyword an unassignable name? Look at the sorts of evaluation that typical use of ProcessOptions makes. I was thinking about ways to pass data around and shield it from unwanted evaluation.
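For instance (a tiny made-up procedure, just to illustrate the keyword point): a name used as an option keyword can be clobbered by a top-level assignment, whereas a string cannot.

f := proc()
    # report whether the keyword was seen among the arguments
    if member("method", [args]) then
        "keyword found"
    else
        "keyword not found"
    end if;
end proc:

method := 42:    # a top-level assignment clobbers the name
f(method);       # the argument evaluates to 42 before f sees it
f("method");     # the string is immune to that evaluation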
So far, the rtable looks not so bad.
acer
Of course, yes, backwards compatibility is very important, so the behaviour cannot be changed now.
But perhaps the documentation could be improved? Or am I just not finding it? The help-page ?eval has only the passage below, with no mention of this greater-than-one-level evaluation behaviour for entries of table parameters within procedures.
"For example, if x := y and y := 1 in a Maple session, what is the value of x; ? In an interactive session where x and y are global variables, x would evaluate to 1 and we would say that x is ``fully evaluated''. For one-level evaluation, we would use the command eval(x, 1) which would in this case yield y. However, inside a Maple procedure, if x is a local variable or a parameter, then x evaluates to y and we would say x evaluated ``one level''."
So, in the above, there is no mention of this issue.
This 2-level evaluation rule for last-name-eval parameters like table also isn't mentioned on ?updates,v40 or ?lastnameevaluation, as far as I could see.
Assigning the table to a local, in the procedure that accesses the entries, is then a way to avoid the extra evaluation. Thank you for that. It would be nice to use rtables instead, but one of the great things about a table is that it may be resized at any time. That's not possible with rtables, so it's difficult to be efficient with them when the final data set size is not known at initial creation time. The other great thing about tables is that they may be indexed by things other than integers, but that's not so for the rtable.
It would be nice to have something like a table, with resizability and the ability to be indexed more loosely, but without last-name-eval, and whose entries evaluate only one level on access when it is a procedure parameter or a local.
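Just to illustrate those two points (a trivial sketch, nothing to do with the original example): a table grows and accepts non-integer indices on assignment, while an Array (rtable) has fixed bounds, so "resizing" it really means building a new one and copying.

T := table():
for i to 5 do T[i] := i^2 end do:      # the table grows as entries are assigned
T["label"] := "non-integer index":     # loose indexing is also fine

A := Array(1..3, [1, 4, 9]):
# A[4] := 16;                          # this would be an out-of-bounds error
B := Array(1..6):                      # "resizing" means creating a new rtable
for i to 3 do B[i] := A[i] end do:     # and copying the old entries across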
acer
Thanks very much for the response.
My example's array parameter was actually assigned to a local (y), but only in the outer procedure. I suppose you are saying that, in any deeper nesting of procedure calls, the table parameter would again have to be assigned to a local? That is, assigned to a local, in each inner procedure in which 1-level eval was wanted? That seems onerous.
I still don't see why the level of evaluation for the elements of a table is deemed correct. Running an example with a list, instead of a table, produces exp(0) even from inside the inner procedure. I would claim that the level of evaluation of the table entry, over and above what is needed to accommodate last-name-eval, is wrong. It would be right if it produced the same result as accessing the entry does in the list case.
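For instance, something along these lines (a rough sketch; the comments describe the behaviour being discussed here, which may differ in other versions):

demo := proc()
    local inner, T, L;
    inner := proc(x) x[1] end proc;
    T := table([1 = 'exp(0)']);   # the entry is stored unevaluated
    L := ['exp(0)'];              # the same unevaluated entry in a list
    inner(T), inner(L);
end proc:
demo();   # per the behaviour described above: 1 from the table, exp(0) from the list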
Even if one accepts the rationale (which I don't, sorry) it still seems like an overly expensive hack, to get around the fact that last-name-eval tables don't get their contents fully evaluated when first passed in from the top-level. It means that extra evaluation is done upon each and every subsequent access of each table entry, instead of just once per entry up front.
Wouldn't it be better to allow the programmer to choose whether to evaluate the table entries fully (just the once up front, or...)? I can see that it's tricky, of course. Suppose one wants to definitely not fully evaluate all the entries, and that one also wants somehow to get the level of evaluation that you described of some particular table entry. A mechanism for that is desirable. Having such a mechanism always take place is less desirable, although that is the current state of affairs -- excluding hacks to get around hacks.
The many test failures that might occur when changing the evaluation rule for table parameters presumably occur because code was written to work around the current behaviour. Such test failures can't be much of a justification, in and of themselves. But the behaviour still seems hackish, and makes Maple's evaluation rules more complicated. I wouldn't know where to find them in the help-pages, other than ?updates,v40 .
acer
You should be able to set the cutoff size up to which Matrices get all their entries printed. For example,
interface(rtablesize=15);
Also, you might be able to increase the working precision of the code, for more accurate results. You could try, for example,
Digits := trunc(evalhf(Digits));
at the start of the code, to set the working precision to just under the level that still allows some hardware double precision in the computations. But be warned that as Digits increases so too may the execution time and memory allocation.
acer
Thanks for the explanation. I did know the 'scalar' example behaviour, and expected it yes.
But this table case, which disobeys the evaluation rules for local variables, bothers me. It's a bug, I don't see the behaviour documented anywhere as a special case, and it can cause problems and catch the unwary. I would bet that not all of Maple's own library routines are prepared to work around this case, either.
I have not yet been able to imagine situations in which this 'hack' would be necessary. I wonder how extensive such situations could be (and whether they are so crucial as to make the reported bug 'worth it').
acer
A few things could be mentioned about LinBox (which I assume you're citing by referring to Zhendong Wan's webpage) and Maple.
Some LinearAlgebra routines, such as Determinant and CharacteristicPolynomial, are more efficient in Maple 11 on integer datatype dense Matrices, through internal use of LinearAlgebra[Modular]. See the help-page ?updates,Maple11,efficiency for more details.
For example, in Maple 9.5.1 Determinant of a 200x200 datatype=integer Matrix takes 27sec and allocates 27MB on my machine. In Maple 10 it takes 13sec and allocates 46MB. But in Maple 11 it takes 1.4sec and allocates only 4.5MB.
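For anyone who wants to run that sort of comparison themselves, here is one simple way (the random integer Matrix is only illustrative; your numbers will of course differ):

with(LinearAlgebra):
M := RandomMatrix(200, 200, generator = -99..99, datatype = integer):
st := time():  ba := kernelopts(bytesalloc):
Determinant(M):
time() - st;                    # seconds elapsed
kernelopts(bytesalloc) - ba;    # bytes allocated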
So, some inroads have begun to be made for dense exact integer cases, and this relative performance comparison chart is a bit out of date.
Another thing to notice about LinBox is that there are a great many people listed as contributors. It took a few people three years before they produced their first "stable" 1.0 version, and two more years to get their current version 1.1 performance.
acer
Unless the datatype is a hardware type, or a software float type, the underlying data structure for a storage=sparse rtable is, I believe, a Maple sparse table.
If instead, for other datatype= choices, the data structure were similar to that used for the hardware or float datatypes, then some start might be made on sparse exact linear algebra. I mean storage in a triple of C arrays: two integer arrays for the row and column indices, and one for, say, ALGEB pointers.
There'd still be a lot of work to do, but this might be a start.
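Just to make the idea concrete, here is a toy Maple-level emulation of that coordinate-style storage (this is not how rtables are implemented internally; the three containers and the scanning accessor are purely illustrative):

nnz  := 3:
rows := Vector(nnz, [1, 2, 5], datatype = integer[4]):   # row indices
cols := Vector(nnz, [4, 2, 1], datatype = integer[4]):   # column indices
vals := [3/7, x^2 + 1, sqrt(2)]:                         # exact (non-float) entries

getentry := proc(i, j)          # linear scan over the stored triples
    local k;
    for k to nnz do
        if rows[k] = i and cols[k] = j then return vals[k] end if;
    end do;
    0;                          # entries not stored are zero
end proc:

getentry(2, 2);                 # returns x^2 + 1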
I wanted to suggest this for datatype=rational, but that could mean a lot of work up front for somebody, to make sure that such rtables continue to work as usual all through Maple. Maybe it would be "easier" if it were done for some new combination, like storage=sparse,datatype=algeb. Gosh, I don't know.
It's also good to be realistic. Sparse exact linear algebra could be a good addition for Maple while also sitting low on a great many people's priority lists. If that's true, then maybe someone other than Maplesoft might try to do it. Most if not all of the knowledge needed to do it is in the external-calling details in the advanced programming guide.
acer
Thanks very much, Axel, for the explanation.
The timings in the worksheet indicate that your Maple-language implementations, the translations, might be 4-5 times faster than Maple's own when run outside of evalhf, is that right? And far more importantly, your implementations are evalhf'able!
The BesselK1 implementation seems to be quite accurate, except perhaps for very small arguments, judging by the graph.
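One quick way to look at both the accuracy and the evalhf speedup might be something like the following (myBesselK1 is just a placeholder name for the translated routine):

# relative error of the translation against the builtin, over a modest range
plot(x -> abs(evalhf(myBesselK1(x)) - BesselK(1, x))/abs(BesselK(1, x)),
     0.01 .. 10, labels = ["x", "relative error"]);

# crude timing comparison at hardware precision
pts := [seq(0.01*i, i = 1 .. 2000)]:
st := time():  seq(BesselK(1, x), x = pts):          time() - st;
st := time():  seq(evalhf(myBesselK1(x)), x = pts):  time() - st;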
I find this to be very exciting.
acer