JacquesC

Prof. Jacques Carette

2401 Reputation

17 Badges

20 years, 83 days
McMaster University
Professor or university staff
Hamilton, Ontario, Canada

From a Maple perspective: I first started using it in 1985 (it was Maple 4.0, but I still have a Maple 3.3 manual!). Worked as a Maple tutor in 1987. Joined the company in 1991 as the sole GUI developer and wrote the first Windows version of Maple (for Windows 3.0). Founded the Math group in 1992. Worked remotely from France (still in Math, hosted by the ALGO project) from fall 1993 to summer 1996, where I did my PhD in complex dynamics in Orsay.

Soon after I returned to Ontario, I became the Manager of the Math Group, which I grew from 2 people to 12 in 2.5 years. Got "promoted" into project management (for Maple 6, the last of the releases which allowed a lot of backward incompatibilities, aka the last time that design mistakes from the past were allowed to be fixed), and then moved on to an ill-fated web project (it was 1999, after all). After that, worked on coordinating the output from the (many!) research labs Maplesoft then worked with, along with some Maple design and coding (inert form, the box model for Maplets, some aspects of MathML, context menus, a prototype compiler, and more), as well as some of the initial work on MapleNet.

In 2002, an opportunity came up for a faculty position, which I took. After many years of being confronted with Maple's weaknesses, I had accumulated a number of ideas about how I would go about 'doing better' -- but these ideas required a radical change of architecture, which I could not do within Maplesoft. I have been working on producing a 'better' system ever since.

MaplePrimes Activity


These are replies submitted by JacquesC

There is a fundamental misunderstanding about the design of RealRange (which is not particularly helped by the documentation).  RealRange is not a set, or even a representation of a set.  RealRange is a property -- which can represent many things; at its simplest, it is just set membership.  So the simplifications that RealRange does above are exactly correct (5 is interpreted as the 'singleton' property, which corresponds to equality to 5).
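A small illustration of the property view (a sketch; exact behaviours may vary across Maple versions):

RealRange(5, 5);              # simplifies to 5: the singleton property "= 5"
assume(x, RealRange(0, 1));   # assert the property "x lies in [0, 1]"
is(x >= 0);                   # true
coulditbe(x = 2);             # false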

The real problem, which I have talked about before (see here, and better, here), is that the distinction between types and properties is rather subtle (and not always properly taught, even to Maple insiders, who sometimes get it wrong).

There is clearly a missing simplification (since x=2 implies x<>1) in solve's operation.  As you well know, in your example that may not seem like a big deal, but for larger problems, this can definitely cause "blow up" of expression sizes for no good reason.  If one worked hard enough, it might even be possible to spin such inconsistencies into bugs.
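To make the point concrete, here is a minimal sketch of the kind of post-processing that would remove such implied inequations (dropRedundant is a hypothetical helper of mine, not a Maple command; since evalb decides `<>` syntactically, the sketch only discards an inequation once the equations have made it fully numeric):

dropRedundant := proc(s::set)
    local eqs, ineqs, implied;
    eqs := select(type, s, `=`);
    ineqs := select(type, s, `<>`);
    # an inequation is implied if substituting the equations leaves no
    # free names and the resulting numeric relation holds
    implied := r -> not has(eval(r, eqs), indets(r, name)) and evalb(eval(r, eqs));
    (s minus ineqs) union remove(implied, ineqs);
end proc:

dropRedundant({x = 2, x <> 1});   # {x = 2}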

Too bad Robert did not also post the answer here.  But in any case, my method is general in its applicability, although fairly specialized in where it wins -- I don't think it would give answers that int cannot get itself for problems where the underlying LODE is of order < 4.

I wonder if there is a way to classify all such problems (i.e. definite-integral problems, for integrands with special functions, where the underlying LODE is of order 4 but splits into 2 solvable order-2 LODEs).  Might be a rather fun exercise.

An additional problem with these magic numbers is that their sizes were set a very very very long time ago.  Computers have changed a lot since, but these numbers have not been revisited.  Whatever criteria were used for setting them have likely been forgotten, so I guess no one even knows how to go about adapting them to modern hardware capabilities.

The kernel code that figures out which routine to call and the kernel code that determines the 'name' of the routine are both quite large, and quite separate.  I am not surprised that they do not quite agree, but this is the first time I have seen an actual case where the difference is clearly visible.  That it happens with indexed names and a builtin function is not surprising, as the semantics there is mightily complicated (read: lots of weird special cases).

Fixing this bug without breaking something else which relied on this weird behaviour might be tricky.

I would not call it 'standard', but it is definitely known amongst the experts.  It is the basis of a lot of recent work in computer algebra (see some of the papers by Joris van der Hoeven as well as work by James Davenport).

It is definitely one of my favourite techniques.  Plus, it tends to show differences between PDEtools[dpolyform] and gfun[holexprtodiffeq], which often lead to improvements by their respective authors.
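For those who want to try the comparison themselves, a small sketch (assuming gfun recognizes the chosen special function, as it does for the classical holonomic ones; output forms vary by version):

with(gfun):
holexprtodiffeq(BesselJ(0, x)^2, y(x));               # LODE satisfied by J_0(x)^2
PDEtools[dpolyform](y(x) = BesselJ(0, x)^2, no_Fn);   # same question, other route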

primeCol := proc(m::posint, n::posint)
    # return a column Vector of the primes in the range m .. n
    Vector(select(isprime, [seq(i, i = m .. n)]));
end proc:
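For example:

primeCol(10, 30);   # Vector([11, 13, 17, 19, 23, 29])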

The answer is 'valid' in the sense that it has all the right limits.  Very often, for definite integrals with singularities, Maple's answer should be interpreted as being generically correct, and specializations should always be done via limit rather than pointwise evaluation.  Furthermore, the resulting closed-form answer also needs to be interpreted as the analytic continuation of the original integral, so that the answer is often defined over a much larger domain than the original problem.

Now, whether this is the best design, or what users expect, is an entirely different matter.  But it is something that should be made much more prominent in the documentation, namely that pointwise evaluation is not the 'best' way to interpret the answers from int.
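A minimal sketch of the principle (I write out the generic value by hand, since the exact form int returns varies by version):

J := (2^(n+1) - 1)/(n+1):   # generic value of int(x^n, x = 1 .. 2)
eval(J, n = -1);            # pointwise: numeric exception, division by zero
limit(J, n = -1);           # ln(2), agreeing with int(1/x, x = 1 .. 2)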

First, what you posted has various syntax errors.  If corrected, we get that the line

Generate(list(rational(range=-1..1)^2+rational(range=-1..1)^2,k))

is the problem, essentially because of Maple's automatic simplifications. The issue is that rational(range=-1..1)^2 + rational(range=-1..1)^2 consists of two syntactically identical summands, so it automatically simplifies to 2*rational(range=-1..1)^2, which clearly means something entirely different! It is simplest to generate 2 lists and then add their squares, rather than to try to fool Maple into not applying auto-simplification.
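A minimal sketch of that workaround (k, a and b are my names):

with(RandomTools):
k := 10:
a := Generate(list(rational(range = -1 .. 1), k)):
b := Generate(list(rational(range = -1 .. 1), k)):
zip((u, v) -> u^2 + v^2, a, b);   # the k sums of two squared rationals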

I would definitely report this directly to Tech Support (and file a bug report).  I was about to switch to AVG 8 too, and now I am going to hold off.  This might affect a number of Maple users.

Worse, it might affect Maplesoft in some US states in non-trivial ways.  [The issue is that, AFAIK, some US states have laws about the accessibility of software in use at state universities; most of the 'viewers' for the visually impaired work fine with Classic, but there were difficulties with Standard.  So if Classic stops working because of AVG 8, that might mean that, at some schools, Maple is effectively no longer 'legal software'.  So, unless the problems with viewers + Standard have been fixed, this could be a giant headache.]

The problem with Java is not that it is strongly typed, it is that it is syntactically cluttered.  [This is why IDEs like Eclipse are so useful, because the syntactic clutter of Java is so incredibly predictable that a program can fill it in for you]. 

Maple is definitely better than Java in that regard.  You would probably like Python as well.

But if you want to see sheer elegance of syntax and yet have even stronger types than Java, take a look at Haskell.  Be careful though - Haskell is a life-changing experience, and most programmers who experience it properly find that they can't go back to their 'old' general-purpose languages.  The main reason I still use Maple is that it is NOT a general-purpose language, but rather a very special-purpose language that suits its task rather well.

Apparently my notion of "recently" has been evolving over the years!  I guess I knew it was not so recent, and I knew that I disliked this change intensely, but never had anything but my own opinion to offer for my dislike.  So when I saw a UI expert clearly articulate a principle that was violated by the 'new' CM design, I finally thought that I had objective grounds on which to complain.

I did talk to the author of the new CM; the basic conclusion was that he was the programmer, not really the designer, and that several of these design decisions were handed to him as constraints.

And of course it slowed down!  The design of the previous CM was extremely careful in this regard [even though a couple of bugs slipped in].  We knew, from the UI literature, that anything above 200 msec (i.e. 0.2 sec) was too slow for a menu; in fact, around 100 msec is considered 'acceptable'.  And this is a flat number: it does not depend on the 'size' of the underlying object.

In Maple, the only way to get things to be that fast is to do structural analyses of objects [via op] and eschew all commands that do expensive traversals.  The only compromise was the use of 'indets', which is where all the bad slowdowns were [many of which were fixed by making indets smarter, which benefited the rest of Maple too -- in my mind always the 'right' kind of fix].  Any kind of analysis that requires multiple traversals or, worse, tree traversals instead of DAG traversals, was doomed to fail.  So the old CM code was designed around the best structural analyses we could do that would return 'useful' information.  As a result, 7 years ago, on machines many times slower than today's, CM came up essentially instantaneously.  Now, on fast machines, it is sluggish.  [All of this is about the library-side code.]
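To illustrate the style of analysis I mean (a sketch, not the actual CM code):

e := sin(x) + x^2*y:
whattype(e);         # `+`    -- top-level structure, no traversal
nops(e), op(1, e);   # 2, sin(x)
indets(e, name);     # {x, y} -- the one full traversal we allowed ourselves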

Another part of the design was to have maximally useful items show up on CM.  We knew, from the same literature, that finding an item on a menu takes time linear in its distance from the root of the menu; basically, if a menu is too long, the things at the bottom become more aggravating to reach than the overall benefit of the CM.  So items were (subjectively!) ordered from "most often useful" to "of use, but less often", and everything that was only 'potentially' useful was kept off the menu.  It was not the speed of 'building the menu' but the cognitive efficiency for humans that was the driving concern.

As far as CM items that are "wrong", that is a much more difficult problem.  Some of them are "very rarely useful", so they are more clutter than anything else.  Some of them seem to be there purely for marketing reasons.  Others are downright frightening: giant words that are difficult to parse are used where simple words would have done better; '3D plot' is much superior, from a recognition and cognitive-load perspective, to 'visualization'.  While I am sure I could find 1 or 2 items which are "wrong", the real problem is much deeper.  If there is agreement that the current design suffers from deeper problems (i.e. it is too slow because it uses analysis methods that are too expensive, and it disobeys very standard UI guidelines on cognitive load), then I can help with some concrete suggestions.
