JacquesC

Prof. Jacques Carette

2401 Reputation

17 Badges

20 years, 83 days
McMaster University
Professor or university staff
Hamilton, Ontario, Canada


From a Maple perspective: I first started using it in 1985 (it was Maple 4.0, but I still have a Maple 3.3 manual!). Worked as a Maple tutor in 1987. Joined the company in 1991 as the sole GUI developer and wrote the first Windows version of Maple (for Windows 3.0). Founded the Math group in 1992. Worked remotely from France (still in Math, hosted by the ALGO project) from fall 1993 to summer 1996, where I did my PhD in complex dynamics in Orsay.

Soon after I returned to Ontario, I became the Manager of the Math Group, which I grew from 2 people to 12 in 2.5 years. Got "promoted" into project management (for Maple 6, the last of the releases which allowed a lot of backward incompatibilities, aka the last time that design mistakes from the past were allowed to be fixed), and then moved on to an ill-fated web project (it was 1999, after all). After that, I worked on coordinating the output from the (many!) research labs Maplesoft then worked with, as well as some Maple design and coding (inert form, the box model for Maplets, some aspects of MathML, context menus, a prototype compiler, and more), as well as some of the initial work on MapleNet.

In 2002, an opportunity came up for a faculty position, which I took. After many years of being confronted with Maple's weaknesses, I had accumulated a number of ideas about how I would go about 'doing better' -- but these ideas required a radical change of architecture, which I could not do within Maplesoft. I have been working on producing a 'better' system ever since.

MaplePrimes Activity


These are replies submitted by JacquesC

One of the under-appreciated aspects of patmatch is its capability for conditional matching, and thus conditional rewriting with applyrule.  So, taking Joe's basic idea, one can encode this as

seq(applyrule(conditional(exp((n::imaginary(fraction))*Pi), _type(5*n/I, integer)) = alpha^(5/I*n), exp(i/5*I*Pi)), i = 1..4);

which also works.  But one can still see that Maple's internal representation for objects peeks through [there are fewer than 25(?) people in the world who would have been able to figure out the need for imaginary(fraction) here].  And note the use of the underscore in _type!  match suffers less from this particular disease, but there is no convenient wrapper like applyrule that goes along with it. Of course, match heavily relies on solve, so your mileage may vary.
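As a more minimal illustration of the same conditional mechanism, here is a sketch with an invented rule (rewrite x^n to 0 only when n is a negative integer):

applyrule(conditional(x^(n::integer), _type(n, negint)) = 0, x^(-2) + x^3);
# returns x^3: the x^(-2) term satisfies the condition and is rewritten; x^3 does not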

Some people agreed with you -- hence 'match', 'define' and 'patmatch', three different attempts at features for rule-based programming, with different strengths and weaknesses.  And no follow-up.

The problem is that "pattern-matching" in Computer Algebra has, legitimately, acquired a bad name as an implementation strategy when you don't know what you are doing.  Rather the same as when people use simulated annealing or neural networks to "solve" a problem when they have no idea what to do.  Solutions to CA problems using pattern-matching tend to be very fragile.

The fallacy in this reasoning is the thought that the problem is pattern-matching itself.  Pattern-matching can be used very successfully when you have predictable, complete matching -- see OCaml and Haskell for great uses of pattern-matching as convenient syntactic sugar for what, in Maple, usually requires some ugly op'ing around.
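To make the Maple half of that comparison concrete, here is a small sketch (the expression and all names are invented):

e := f(a, g(b, c)):
# op-based extraction: position-dependent, and silently wrong on any other shape
op(2, op(2, e));                                       # c
# match-based extraction: states the expected shape and fails cleanly otherwise
typematch(e, f('x'::name, g('y'::name, 'z'::name)));   # true, and assigns x, y, z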

The problem is that attempting to solve hard problems like integration or differential equations via pattern-matching essentially amounts to "giving up" on an intelligent solution and applying a hack that patches over the problem.  It gets worse when you have giant tables (1000s of entries, like in Mathematica or in Maple's own inttrans package).  The maintenance of these is hideous.  Plus, these tables often hide bugs in downstream code, or worse, have a bug in them where downstream algorithmic (but often slower) code would have gotten the right answer!

 

This is something I have noticed many times now: large structured systems are often solvable, but only when you use a solution method that is well-tuned to that structure.  Well, phrased that way, it sounds obvious, doesn't it?

But that observation is non-trivial from a software construction point of view.  This is exactly the way dsolve (with a d, i.e. for ODEs) works: analyse the input carefully, then from the structure decide which solver(s) have the best chance at solving the problem.  dsolve goes one further: it even tries to (heuristically) order the resulting set of solvers.  solve does do some pre-analysis too, but compared to dsolve, it is rather naive.  The interesting part about dsolve is how the pre-analysis phase is decoupled from the solving phase.  Other pieces, like the modern 'int' in IntegrationTools, use an eager approach: there is a globally ordered list of (guard, algorithm) pairs, and the pairs are tried in order.  If a guard is true, then its algorithm is tried immediately; if it works, the answer is returned, otherwise the next pair is tried.  simplify sometimes goes one step further: it applies a bunch of different heuristics separately, checks after the fact which one was most successful, and returns that answer.  That implies that more work than strictly necessary is done, but it also means that the results are much better.
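A toy version of that eager scheme might look as follows -- every name here is invented, and this is a sketch of the dispatch pattern, not Maple's actual internals:

# Globally ordered list of (guard, algorithm) pairs
methods := [
    [e -> type(e, polynom(rational, x)), e -> "handled by the polynomial routine"],
    [e -> has(e, exp),                   e -> "handled by the exp routine"]
]:
TryInOrder := proc(e)
    local m, r;
    for m in methods do
        if m[1](e) then          # guard holds: try its algorithm immediately
            r := m[2](e);
            if r <> FAIL then return r end if
        end if
    end do;
    FAIL                         # no (guard, algorithm) pair succeeded
end proc:
TryInOrder(x^2 + 1);             # dispatches to the polynomial "routine"

dsolve's decoupled variant would instead run all the guards first, producing an ordered list of candidate solvers, and only then start trying them.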

One could very easily write a whole book on the software engineering challenges (and current solutions) of a Computer Algebra system.  There are already excellent books out there on the underlying algorithms, which is the part the computer algebra community seems to care most about, but essentially nothing about the engineering.  It is very sad, because 'modern' systems often get written with very naive architectures, since it takes years of experience to understand (and re-invent!) what the long-time developers of Maple (and Macsyma and Mathematica and Axiom and ...) have had to learn the hard way.

 


It is extremely rare to find a corporation which has the confidence to be honest with its customers.  So admitting that a feature is no longer supported is rare indeed, especially if the feature is just one release old!  What you can read from your exchange with TechSupport is what happens when people (i.e. the tech support staff at Maplesoft) are honestly trying to help while at the same time trying not to make the company look bad.  This is an impossible-to-resolve conflict when they are trying to help you with a feature that is clearly broken.  So you get surreal answers.

The most difficult part here is that the poor people in technical support really are trying to help you.  They are often just as frustrated as you are, because they are forced to explain away behaviour in the product which is clearly stupid.  But they have no say at all in these things, and so they feed you a company line.  And sometimes the company line is "it works as designed" because that is 100% true!  The failure, as you noticed, is that "as designed" and "in a way that is useful" can be completely disjoint.

Note that this is not something special about Maplesoft.  All consumer software producers have exactly the same problem.  At least for Maple there is less quicksand: when an integral is wrong, it is wrong.  But there are many areas where developers can simply claim that "it works as designed" even though the feature turns out to be rather useless.  Maybe some other day I'll write about why companies ever get themselves in the situation of producing huge bloated software with many useless (but as designed) features [Microsoft Windows being the leader of the pack].


  1. specop, anyop, typefunc, patfunc, specindex, typeindex, anyindex, patindex, patlist.
  2. Still broken.  For example, typematch([tt[x,y]], ['ff'::anyindex(name,name)]) works, but typematch([tt[x,y]], ['ff'::anyindex(a1::name, a2::name)]) doesn't.

You could use patmatch and applyrule, but I don't.  As far as I know, they have some long-standing bugs still unfixed.  The problem is, I could be wrong and they could have been fixed.  But since there is no way to find out except to exhaustively test them, that's too much trouble, so I avoid them.  This is why various ideas for independent test suites and independent bug repositories would be so useful!

The other thing is that I don't think that patmatch was ever taught about the full range of possibilities for indexing [which is where typematch fails], so that it may in fact not work at all for this purpose.  patmatch, unfortunately, is an 'orphaned' feature.

This is the type of bug which I most loved to hunt down and fix.  It is so full of fun and beautiful mathematics, which one must do by hand (to double-check the same done in an automated fashion).  And it is often the case that the 'right' fix is really non-trivial.  All too often the place where things are obviously wrong is 'too late', as something subtle has already gone awry.  It never really felt like 'work' to fix such bugs!  [Not to say that they were easy, in fact quite the opposite, but the challenge of fixing them, and doing it right, was highly rewarding.]

I got bit by this in Maple 9.5 or 10.  It's possible it has been fixed.  It was not documented then either [I just checked].  I discovered "the hard way" that some structured types (documented in ?type,structured) are actually library functions!  This was 'new', since almost all the structured types are in the kernel.
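One heuristic way to check which side of the fence a given type lives on, assuming it follows the usual `type/name` library convention (the specific type queried here is just a guess):

# Library-implemented types are ordinary `type/...` procedures;
# kernel built-ins have no such procedure behind them.
eval(`type/patfunc`);    # evaluates to a procedure => implemented in the library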

I have found countless bugs by trying to copy-paste from one worksheet to another.  The variations are endless.  Copy all sorts of objects (individually and on their own) and try to paste them somewhere quite specific in a target worksheet.  The end-results range from exactly right to spectacularly wrong.  And the wrong ones can be rather entertaining too, as trying to puzzle out what has happened can be non-trivial.

Just so no one gets me wrong: I agree with Doug that there does not seem to be a big need for such a piece of functionality.  But there might be a need for a few bugs to be fixed before one can say "oh, just use copy-and-paste" with a straight face!

typematch is a seriously under-appreciated command in Maple.  Used properly, it can shorten code a lot while making its intent clearer than a similar piece of code full of nested op calls.  Plus it is more robust, in that it will fail to match when given 'bad' input, unlike a sequence of op calls which just might succeed but result in complete gibberish.
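A small sketch of both points, with invented expressions:

e := a - b:                            # internally stored as a + (-1)*b
op(2, e);                              # -b: op returns whatever happens to sit there
typematch(e, 'p'::name + 'q'::name);   # false: no silent garbage, just a clean non-match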

It unfortunately has a nasty drawback: it only works with structured types that are implemented in the kernel.  It will fail in weird and mysterious ways on library types.  And since where a type is implemented is actually not documented information...

If you trace int, you will see that it calls discont, which returns -I*ln(RootOf(22*_Z^4+_Z^8+4*_Z^6+4*_Z^2+1)) as a discontinuity.  So it's on the right track.  I am not quite sure exactly where things are going wrong, and it is taking me too long to debug this right now - that's someone else's job!
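For anyone who wants to repeat this kind of investigation, the recipe is roughly the following sketch; the original integrand is not reproduced in this reply, so `g` stands in for it:

trace(int):              # echo the calls that int makes internally
int(g, x = 0 .. 1);      # watch the trace output for the call to discont
untrace(int):
discont(tan(x), x);      # discont can also be queried directly: {Pi/2 + Pi*_Z1~} or similar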

I am a little sad that Maple misses this discontinuity - I spent a lot of personal effort in this particular area [many years ago], as a thorough investigation of bugs in Maple found (at that time) that the single highest concentration of problems was in definite integration, and of those, the highest concentration was due to missed discontinuities or incorrect limits.  I spent several months fixing things in those areas, since that seemed to be where the highest ROI was.

I would be extremely curious to know what a modern analysis of bug density would give.  My personal guess is that the GUI would come way ahead of the math engine.  Of the math bugs, I would bet that the area of "analysis" still accounts for the highest density of problems, but I wouldn't know more precisely than that where the concentration is.


[Sorry for the slow response, my week has been really busy]

Here are the steps, specialized for this case, but easily enough generalized.

# Find the inert node for the assigned local name inside f, binding its
# print name (nam) and its kernel address (ptr):
typematch(indets(ToInert(eval(f)), specfunc(anything, _Inert_ASSIGNEDLOCALNAME))[1],
          _Inert_ASSIGNEDLOCALNAME(nam::string, anything, ptr::posint));
# pointto maps the address back to the live object; assign overwrites it:
assign(pointto(ptr), 24);
f();
                                  24

With a bit more care, one can find the right lexical that corresponds to the entry to be changed and change that one.  But it does take all of Inert Form + pointto + assign to get the job done.

If the hardware-float version fails, most other top-level routines automatically fall back to calling the software version.  What you outline here [sorry, I missed that detail from your previous post] shows that this isn't being done.  That's a bug.
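The fallback pattern in question looks roughly like this sketch (SafeApply is an invented name, not a library routine):

SafeApply := proc(f, x)
    try
        evalhf(f(x))     # fast hardware-float path first
    catch:
        evalf(f(x))      # on failure, fall back to software floats
    end try
end proc:
SafeApply(sin, 1.5);     # the hardware path succeeds here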

The 7 Digits part also seems like an independent bug.  Very strange.
