[Replying as a separate item, as I did not want to mix up too many points at once]
You give a good description of the fundamental sources of non-determinism in Maple, and the reasoning behind most of it. This should be put into an (advanced) FAQ.
However, Maple also has some pointless non-determinism: for example, one can call 'lcoeff' without specifying any variables, so Maple picks a variable ordering for you (effectively at random, i.e., based on memory addresses). I really can't think of any sensible algorithm which both needs the 'leading coefficient' and does not need that to be well-defined. This use of 'lcoeff' (and its friends) introduces pointless non-determinism. Worse, 'normal' internally uses exactly this flavour of lcoeff, so that between restarts, normal(1/(a-b)) will sometimes stay as is and sometimes pull out a -1. I claim that 'unit normal' for multivariate ratpolys is ill-defined if you do not have a variable ordering.

In any case, having normal pull out a unit from the denominator does not, in fact, help; worse, it makes things like sqrt(1/(1-x)) very difficult to 'preserve', since most operations will switch it to sqrt(-1/(x-1)), which does not have the same branch cut! One can thus see this 'unit normal' action as akin to a remnant of the square-root bug (where Maple used to automatically simplify (x^2)^(1/2) to x, no matter what x was). Anyway, this particular example is an actual source of bugs in int (via limit and series), which are pretty much unfixable without resorting to fragile hacks.
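To make the lcoeff point concrete, here is a toy sketch (in Python, not Maple's actual implementation) of why the "leading coefficient" of a multivariate polynomial is only well-defined relative to a variable ordering — the names and representation are purely illustrative:

```python
# Toy model: a polynomial is a dict mapping monomials (tuples of
# (variable, exponent) pairs) to coefficients.  The leading monomial is
# picked lexicographically by exponent vector under a *given* ordering.

def lcoeff(poly, var_order):
    """Leading coefficient of `poly` under the variable ordering `var_order`."""
    def key(monomial):
        exps = dict(monomial)
        return tuple(exps.get(v, 0) for v in var_order)
    leading = max(poly, key=key)
    return poly[leading]

# p = 2*x^2*y + 5*x*y^2
p = {(("x", 2), ("y", 1)): 2,
     (("x", 1), ("y", 2)): 5}

print(lcoeff(p, ["x", "y"]))  # 2 -- x dominates under this ordering
print(lcoeff(p, ["y", "x"]))  # 5 -- y dominates under this ordering
```

Since the answer flips with the ordering, a system that silently picks an ordering from memory addresses will give session-dependent results.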
What you have missed is that it is possible to build a hybrid system. In fact, Axiom is such a system: its top-level is Maple-like, while its innards are most definitely Magma-like (but the Axiom people would claim precedence ;-) ).
I completely agree with you that for a general-purpose CAS, user entry has to be free-form, like Maple's. Otherwise it is much too difficult to 'experiment' (as any time spent with a highly statically typed system shows). The downside is that you have no confidence whatsoever in 'correctness', because you have no control over the interpretation that the system makes of your input. For an example, Maple still has a big problem choosing between interpreting symbols as complex or as real: when the default was real, that caused all sorts of bugs; now that the default is complex, the system computes fewer answers (with fewer bugs). And I could go on with many explicit examples.
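The real-versus-complex default is exactly the interpretation problem behind the old square-root bug mentioned earlier. A small Python sketch (illustrative only) of why the rewrite (x^2)^(1/2) -> x is safe under one interpretation and wrong under the other:

```python
# The rewrite (x**2)**(1/2) -> x is valid only assuming x >= 0.  Whether a
# bare symbol "defaults" to that assumption decides correctness.
import cmath

def sqrt_of_square(x):
    """Principal square root of x**2 (complex-safe)."""
    return cmath.sqrt(x * x)

print(sqrt_of_square(3.0))     # (3+0j)  -- rewriting to x would be fine here
print(sqrt_of_square(-3.0))    # (3+0j)  -- rewriting to x gives -3: wrong
print(sqrt_of_square(-1 + 1j)) # rewriting to x is also wrong in general
```

A default of "real and positive" lets more rewrites fire (more answers, more bugs); a default of "complex" blocks them (fewer answers, fewer bugs) — the same trade-off described above.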
I believe that there is a bright future for a system built on top of a Magma-like engine, but with a Maple-like interface. This would mean that serious programmers would have to learn 2 languages: the top-level scripting language, and the lower-level base programming language. But, to me, that is a small price to pay to maintain usability and yet increase reliability and efficiency.
If you want proof, look at the speed gains for (numerical) linear algebra, both over floats and Z_p. The 'low level' API for those facilities is highly typed (and fast); there is also a very Maple-friendly dynamically typed top-level syntax (slower, but convenient).
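The two-layer design described here can be sketched in miniature (in Python; the names are illustrative, not Maple's actual API): a strict low-level kernel that demands a concrete typed representation, plus a forgiving top-level wrapper that coerces and validates before dispatching.

```python
# Low level: "typed" kernel, no coercion, assumes a concrete representation.
# Top level: friendly wrapper, accepts anything list-like, pays a conversion
# cost for the convenience.
from array import array

def dot_kernel(u, v):
    """Requires typed float arrays of equal length; does no checking
    beyond the representation, so it can run tight and fast."""
    return sum(u[i] * v[i] for i in range(len(u)))

def dot(u, v):
    """Accepts any sequences of numbers, validates, coerces to the typed
    representation, then calls the kernel."""
    if len(u) != len(v):
        raise ValueError("vectors must have equal length")
    return dot_kernel(array('d', u), array('d', v))

print(dot([1, 2, 3], [4, 5, 6]))  # 32.0
```

The point of the split is exactly the one made above: serious code talks to the kernel directly, while interactive users get the convenient (slower) entry point.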
is that this was ``fixed'' for Vista (as Tim points out, it had to be), but whoever fixed it was thinking single-user and not network when they did it. It just feels like an oversight rather than an explicit design. 11.02?
We have not heard from DJ Clayworth in a while, he could probably enlighten us on this issue.
It seems to me that if print works and printf doesn't, that shows that there is a bug remaining to be fixed. The behaviour of these two commands should not be drastically different.
Nice to know the workaround exists though (other than using worksheet mode that is).
plotting has mostly evolved organically: whenever people needed new functionality, it was added piecemeal. Once in a while, some rationalization was done (look in the What's New over the years, and you'll see traces of that every 3 releases or so). But really concerted requirements analysis does not seem to have been done [or at least, the results of which are not apparent to the users].
Generally, this is an easy task, as it does not need to be done ab initio anymore: a quick perusal of the plotting capabilities of MathCAD, SPSS and R are more than sufficient to give a solid set of necessary plot types (stats packages tend to be several years ahead of other mathematical products as far as plotting diversity is concerned, but they also tend to be much less concerned with interactivity [last time I looked]). Taking a look at Excel as well is a good idea.
What is puzzling to Schivnorr (and to me!) is that literal strings seem to have different interpretations depending on which mode they have been entered in. Since strings are usually understood as being 'atomic', it is rather unexpected that the method by which they are entered matters!
Basically, this reinforces the issue that the Standard interface has 2 very incompatible modes of entry. What you learn in one mode about 'maple' is not portable to the other mode (and vice-versa). So the user really has to learn 2 related but subtly different languages: GUI Maple and the traditional Maple language.
For someone who wants to use Maple as a powerful glorified calculator, learning GUI Maple is great (some people have really raved about it here already). But if you think you will ever want to program Maple, then you have the conundrum of learning 2 languages, whose subtle differences are not documented anywhere.
The GUI was really new in 9.5, and the number of bugs in it was ghastly, to the point that it was a real shame they released it at all.
Maple 11 is much much better. Not bug free, but neither is it 'dangerous to use' (as far as getting some work done).
If you are on Windows or Linux, I suggest using Classic instead. If you are on a Mac, your only choice is to upgrade.
Is not nearly as full of details as one would like, is it? It seems to indicate that all values of the parameters (_Z* and _B*) should be solutions. So in this case, 0 is indeed included. Which is a bug when x<>0 is explicitly included in the system!
What you say is true... but not relevant to the original poster's problem :-(
When solve outputs variables with underscores, they are numbered: _Z1, _NN2, _B4, etc.
Here the 'culprit', as it were, is RootOf. A RootOf represents a 'root of' an equation. So RootOf(_Z^15-12) represents a root of x^15-12 (or y^15-12 or tt^15-12). The point is that the 'variable' used in the RootOf does not matter, so the convention is to reserve _Z for this 'dummy' variable.
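One way to picture that convention is this toy Python sketch (naive string replacement, purely illustrative, not Maple's code): by renaming whatever variable the user wrote to the reserved dummy _Z, structurally equal roots compare equal regardless of the variable name.

```python
# Canonicalize a RootOf by renaming the bound variable to the reserved
# dummy _Z, so RootOf(x^15-12), RootOf(y^15-12), RootOf(tt^15-12) all
# collapse to the same object.  (Naive replace; a real implementation
# would rename at the expression-tree level.)

def canonical_rootof(poly_str, var):
    return "RootOf(" + poly_str.replace(var, "_Z") + ")"

a = canonical_rootof("x^15-12", "x")
b = canonical_rootof("y^15-12", "y")
c = canonical_rootof("tt^15-12", "tt")
print(a, a == b == c)  # RootOf(_Z^15-12) True
```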
For _Z1=0, _B1=0, you get x=0. So something is a little weird -- I guess I need to go re-read the help page on solve to figure out what it really means with those _B's!
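The complaint can be checked mechanically: if every value of the parameters is supposed to yield a solution, then enumerating small parameter values and testing the side conditions should turn up no violations. Below is a generic sketch in Python; the solution family x = _Z1*_B1 is hypothetical (the original system isn't reproduced here), with _Z1 ranging over integers and _B1 over {0, 1}, mimicking solve's conventions.

```python
# Report parameter choices whose "solution" breaks a side condition.
from itertools import product

def violations(family, constraint, z_range, b_values=(0, 1)):
    """Parameter pairs (z, b) for which family(z, b) fails the constraint."""
    return [(z, b) for z, b in product(z_range, b_values)
            if not constraint(family(z, b))]

family = lambda z1, b1: z1 * b1   # hypothetical family: x = _Z1*_B1
constraint = lambda x: x != 0     # the system explicitly demanded x <> 0

print(violations(family, constraint, range(-3, 4)))
# every pair with _Z1 = 0 or _B1 = 0 yields x = 0, violating x <> 0
```

Any non-empty result means the parametrized answer is over-broad, which is exactly the bug being described.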