JacquesC

Prof. Jacques Carette

2401 Reputation

17 Badges

20 years, 88 days
McMaster University
Professor or university staff
Hamilton, Ontario, Canada


From a Maple perspective: I first started using it in 1985 (it was Maple 4.0, but I still have a Maple 3.3 manual!). Worked as a Maple tutor in 1987. Joined the company in 1991 as the sole GUI developer and wrote the first Windows version of Maple (for Windows 3.0). Founded the Math group in 1992.

Worked remotely from France (still in Math, hosted by the ALGO project) from fall 1993 to summer 1996, where I did my PhD in complex dynamics in Orsay. Soon after I returned to Ontario, I became the Manager of the Math Group, which I grew from 2 people to 12 in 2.5 years. Got "promoted" into project management (for Maple 6, the last of the releases which allowed a lot of backward incompatibilities, aka the last time that design mistakes from the past were allowed to be fixed), and then moved on to an ill-fated web project (it was 1999 after all). After that, worked on coordinating the output from the (many!) research labs Maplesoft then worked with, as well as some Maple design and coding (inert form, the box model for Maplets, some aspects of MathML, context menus, a prototype compiler, and more), as well as some of the initial work on MapleNet.

In 2002, an opportunity came up for a faculty position, which I took. After many years of being confronted with Maple weaknesses, I got a number of ideas of how I would go about 'doing better' -- but these ideas required a radical change of architecture, which I could not do within Maplesoft. I have been working on producing a 'better' system ever since.

MaplePrimes Activity


These are replies submitted by JacquesC

It was pointed out to me that type(infinity+infinity*x, polynom(constant, x)) returns 'true' by design. I had forgotten that 'constant' means "Maple constant" rather than "numerical constant". In other words, type(true+false*x, polynom(constant, x)) is also true, by design. It is the use of that particular type check which is ill-suited to its purpose, rather than the type check being "wrong". As others have mentioned recently, this is another case of 'computer science' and 'mathematics' clashing. While in this situation what we get is clearly a case of GIGO, it is not clear that there aren't subtle cases where this causes actual bugs. While on the topic of GIGO: solve(true*x-false, x) returns false/true, as does solve(true*x-false). More fun is that solve(infinity*x-false) returns 0! GIGO indeed!
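The distinction can be seen directly at the top level; a quick sketch (the 'numeric' variant is my suggestion for a stricter check, not what the library currently uses):

```maple
# 'constant' accepts any Maple constant, including infinity and booleans:
type(infinity + infinity*x, polynom(constant, x));  # true, by design
type(true + false*x, polynom(constant, x));         # true, by design
# A stricter check, when numerical coefficients are what is actually meant
# (infinity is not of type 'numeric'):
type(infinity + infinity*x, polynom(numeric, x));   # false
```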
The point is not to go for best, but to expect that things will actually improve, instead of maintaining the status quo. It's a simple game: Maplesoft expects all of us to upgrade to the latest version of the product, we expect them to improve the product in ways that are useful to us. Part of that means to upgrade some of the older technology to use something more modern. Rather like what was done with GMP, Threads, etc. Such improvements may not hit the front page of the new shiny brochures, but they sure are a powerful incentive for long-term users to upgrade.
Indicating syntax errors in a sane way is a "solved problem", and has been for over 15 years. The problem is that Maple's syntax technology is way older (based on yacc, which dates from 1977, based on theory even older). A place to start is Error messages: the neglected area of the man/machine interface, followed by What the Compiler Should Tell the User, and then maybe a foray into Automatic generation of useful syntax error messages. For something truly modern, Syntax Error Handling in Scannerless Generalized LR Parsers is a good read.
Right - and no matter how high I go with the order, I still won't get anywhere with D(y)(6*x+4). If I assume that a0=0, then it gets interesting, as the result is O(ln(1/x)^u) with u depending on the Order of the series! However, what I really ought to have done is to compute the series at x=1 (since we know y(1)=0). When I do that, I get:
> eq3 := eval(eq, y=( z-> a1*(z-1)+a2*(z-1)^2 + a3*(z-1)^3 + O((z-1)^4))):
> series(eq3, x=1);

  /
  |5 + exp(5/4 ln(a1 - 9) signum(0, -Im(a2), 1) + 5/4 ln(a1 - 9)
  \

                    1                                      1     \
         + 5/4 ln(------) signum(0, -Im(a2), 1) - 5/4 ln(------))| +
                  a1 - 9                                 a1 - 9  /

        O(x - 1)
The code you based yourself on is ancient. A more modern version would read
`convert/sectan` := proc(fFP,x) local t, trans;
    trans := table([
      'sin' = (a -> tan(a)/sec(a)),
      'cot' = (a -> 1/tan(a)),
      'cos' = (a -> 1/sec(a)),
      'csc' = (a -> sec(a)/tan(a)),
      'sinh' = (a -> tanh(a)/sech(a)),
      'cosh' = (a -> 1/sech(a)),
      'coth' = (a -> 1/tanh(a)),
      'csch' = (a -> sech(a)/tanh(a))]);

    t := `if`(nargs = 1,'anything','dependent'(x));
    evalindets(fFP, 'specfunc'(t, {'sin','cos','cot','csc','coth','sinh','cosh','csch'}),
        (proc(f) trans[op(0,f)](op(1,f)) end));
end proc:
Note that the above also fixed a bug in your routine [in the 2-argument case, when the original expression already had sec or tan in it]. In reality, one would not want to dynamically re-evaluate that table each time, so it would probably be coded as
`convert/sectan` := module() export ModuleApply;
    local trans, to_trans;

    trans := table([
      'sin' = (a -> tan(a)/sec(a)),
      'cot' = (a -> 1/tan(a)),
      'cos' = (a -> 1/sec(a)),
      'csc' = (a -> sec(a)/tan(a)),
      'sinh' = (a -> tanh(a)/sech(a)),
      'cosh' = (a -> 1/sech(a)),
      'coth' = (a -> 1/tanh(a)),
      'csch' = (a -> sech(a)/tanh(a))]);
    to_trans := map(op, {indices(trans)});

    ModuleApply := proc(fFP,x) local t;
        t := `if`(nargs = 1,'anything','dependent'(x));
        evalindets(fFP, 'specfunc'(t, to_trans), (proc(f) trans[op(0,f)](op(1,f)) end));
    end proc;
end module;
Note how every piece of information in the above appears in the code exactly once. Repeated information is the bane of code maintenance. Also, trans and to_trans are computed at module creation time, so they have no run-time overhead. There might be something that can be done with argument processing to simplify the routine a little more, but I must admit that I have not mastered that art yet!
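For completeness, a quick usage sketch of the module above (assuming it has been read in; the 2-argument form restricts the rewrite to x-dependent calls):

```maple
# sin(x) depends on x, so it is rewritten as tan(x)/sec(x);
# cos(y) does not, so it is left untouched:
convert(sin(x) + cos(y), sectan, x);

# The 1-argument form rewrites every matching call, regardless of variable:
convert(sin(x) + cos(y), sectan);
```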
If you use

gen := proc() local n; n := 0; proc() n := n + 1; 'c'[n] end end:
f := n -> randpoly([x], dense, degree=n, coeffs=gen()):

then you get
> f(5);

              5         4         3         2
        c[1] x  + c[2] x  + c[3] x  + c[4] x  + c[5] x + c[6]

If you want to wrap that up into one piece and have the coefficients in increasing order, you can do
g := proc(n) local cof, m;
    m := n+1;
    cof := proc() m := m - 1; 'c'[m] end;
    randpoly([x], dense, degree=n, coeffs=cof);
end;
Using a,b,c,d looks pretty, but makes it more difficult to program with. It is probably easier to write a small printer for your polynomials rather than try to program with a,b,c,d,...
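As a sketch of such a printer (the names letters and pretty are mine, purely illustrative): substitute letters for the indexed c[i] names at display time only, while keeping the indexed names for programming.

```maple
# Map the generated c[i] coefficients onto letters for display purposes;
# extend 'letters' as needed for higher degrees.
letters := [a, b, c, d, e, f]:
pretty := p -> subs({seq('c'[i] = letters[i], i = 1 .. nops(letters))}, p):

pretty(c[1]*x^2 + c[2]*x + c[3]);   # displays as a*x^2 + b*x + c
```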
Call them up, or email them directly (support@maplesoft.com). This is the kind of issue that they can help resolve most quickly - they want legitimate owners of Maple to be able to access the product as smoothly as possible. Really, give them a call, I am sure they will be able to help you.
Solve is one of the most difficult pieces of Maple code to understand [in my experience]. In the past, I have repeatedly tried to make some changes to solve (i.e. fix a bug), where I invariably 1) managed to fix that bug, but 2) broke something else in the doing. I know that a huge amount of work has been done in modularizing solve [it used to be made up of just a few routines, each of which was gigantic, all of which were mutually recursive!], but unfortunately I am no longer 'current' on where solve is at now. However, tracing (using high printlevel) what solve does is still something I can do :-). In this case, the first hint that things are going to go horribly wrong is that SolveTools:-Transformers:-MakeConditions is called (good), but it adds to our equations the inequality x^3-2*x^2-x+2 <> 0! This is clearly because it treats 'infinity' as some (finite) real constant, so that solutions can only lie in those areas where we have no singularities -- oops. SolveTools:-Transformers:-RemoveTerms then proceeds to "normalize" our term to the lovely-looking x-infinity*x^3+infinity*x^2+infinity*x-infinity, which becomes the new equation to "solve". Note how things like 2*infinity have been automatically simplified to just infinity, so that most of the structure of the polynomial is forgotten. At this point, the 'modern' solve tries a few more tricks, none of which work, so it calls `solve/oldrec2` [the main solve routine used to be called `solve/rec`, where rec stands for recursive, which in turn called `solve/rec2`; the division of labour between rec and rec2 is known only to a few rare initiates, and I was never part of that rarefied club]. At this point, the system we are trying to solve is {x-infinity*x^3+infinity*x^2+infinity*x-infinity, x^3-2*x^2-x+2 <> 0} for x. As was commented on, things take an even weirder twist because type(x-infinity*x^3+infinity*x^2+infinity*x-infinity, polynom({complex(numeric), radalgfun(constant)}, {x})) actually returns true!
If things were not really weird already, they get weirder now. How so? Well, type(x-infinity*x^3+infinity*x^2+infinity*x-infinity, polynom(constant, x)) also returns true. A tiny bit of sanity creeps in, since type(x-infinity*x^3+infinity*x^2+infinity*x-infinity, polynom({rational,algnum}, x)) returns false. So the task is handed off to SolveTools:-PolynomialSystem. At this point, factor is actually called on the inequality, so that we have {1, -1, 2} marked as non-solutions. Our polynomial (with infinity coefficients) is also passed to factor - but comes back as-is. Our polynomial, being irreducible but of degree 3, will not be solved exactly, so RootOf is called. And RootOf tries to collect terms -- which ends up with the x^1 term being (infinity+1), which is "simplified" to just infinity, naturally. So now the polynomial to look at is just infinity*_Z^3-infinity*_Z^2-infinity*_Z+infinity. This RootOf is now "plugged in" to the inequality -- and unsurprisingly comes out as being true (i.e. that RootOf does not simplify to 0 when plugged in to x^3-2*x^2-x+2). So the inequality is tossed out, having done its job. Now SolveTools:-UnwindRootOfs is called, which "simplifies" the RootOf to {x=1}, {x=-1}! How? Well, first it replaces infinity with a symbol and calls factors, i.e. factors(O*_Z^3-O*_Z^2-O*_Z+O), which correctly and diligently returns [1, [[_Z-1, 2], [_Z+1, 1], [O, 1]]] (those are not zeroes, they are capital Os). And there we see our "solutions"! In fact, it is interesting that the x=1 solution is actually a double root. The number of places where things go from wrong to worse is quite astonishing. As far as I can tell, this single problem has unearthed something like 4 different, separate bugs.
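The automatic simplifications that destroy the structure of the polynomial can be seen in isolation at the top level:

```maple
# Arithmetic with infinity is automatically simplified, discarding
# the coefficients before solve ever sees them:
2*infinity;       # infinity
infinity + 1;     # infinity
```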
Is rather boring in this case. The person who was coding various plotting routines was also doing a bunch of 'obvious' 2D coordinate systems. Clearly, when the time came to do the 3D ones, more coordinate systems were needed - so some classic book was opened, and the coordinate systems found there were coded as-is. This was in 1991, and was done by someone doing a Master's in C.S., under the supervision of a C.S. prof at the University of Waterloo (this was before Maplesoft did any of its own Math development, which did not start until 1992 -- by me).
This is a very common comment made by many, many of the people on MaplePrimes who have silver/gold/red Maple leaves. A definite majority of these people find Standard "frustrating", in that it slows down their work. What this means is that while there is a very deep pool of expertise here on MaplePrimes on "Maple" as a tool for getting mathematical problems solved, the depth of the expertise on the features of the Standard interface is very shallow [i.e. spread amongst far fewer people]. My guess is that this is a symptom of something deeper. The question is, how much does it matter?