Prof. Jacques Carette

2386 Reputation

17 Badges

19 years, 65 days
McMaster University
Professor or university staff
Hamilton, Ontario, Canada


From a Maple perspective: I first started using it in 1985 (it was Maple 4.0, but I still have a Maple 3.3 manual!). Worked as a Maple tutor in 1987. Joined the company in 1991 as the sole GUI developer and wrote the first Windows version of Maple (for Windows 3.0). Founded the Math group in 1992.

Worked remotely from France (still in Math, hosted by the ALGO project) from fall 1993 to summer 1996, where I did my PhD in complex dynamics in Orsay. Soon after I returned to Ontario, I became the Manager of the Math Group, which I grew from 2 people to 12 in 2.5 years. Got "promoted" into project management (for Maple 6, the last of the releases which allowed a lot of backward incompatibilities, aka the last time that design mistakes from the past were allowed to be fixed), and then moved on to an ill-fated web project (it was 1999, after all).

After that, I worked on coordinating the output from the (many!) research labs Maplesoft then worked with, as well as some Maple design and coding (inert form, the box model for Maplets, some aspects of MathML, context menus, a prototype compiler, and more), plus some of the initial work on MapleNet.

In 2002, an opportunity came up for a faculty position, which I took. After many years of being confronted with Maple's weaknesses, I had accumulated a number of ideas about how I would go about 'doing better' -- but these ideas required a radical change of architecture, which I could not do within Maplesoft. I have been working on producing a 'better' system ever since.

MaplePrimes Activity

These are replies submitted by JacquesC

Once the obvious obfuscations are removed, one is still left with a rather weird encoding of a state machine, with nasty overloading of how the 3 parameters of f are interpreted depending on the context.  Very clever indeed.

What would be really interesting to see is the 'same' computation, but done in a transparent way.  I didn't de-obfuscate things to that point; I ran out of steam.

I completely agree with Allen on the future of DSLs, and on how the higher-level they are, the easier it is to optimize them heavily.  [Disagreeing with a Turing award winner is usually foolhardy and requires a lot of proof, else it's hardly credible.]

Anyway, I've actually been working on this quite intensively for the past few years, and I'm totally convinced it works.  The main problem is that few programmers have any real clue what 'high level' really means!  In the Maple world there is an easy example: a MapleSim diagram is sufficiently high-level; anything lower-level than that contains too many 'operational' details.  Another example would be the rubber-band model of placement for Maplets.  Properly written LaTeX would also qualify as reasonably high-level, but few people ever write 'proper' LaTeX, mostly because few people really understand Model-View-Controller well enough to separate model from view.  At least the web people eventually got it right (a proper web page uses CSS for the view aspects and XML for the model).  While the syntax of CSS is not great, it is conceptually a really good DSL for what it does.

The point about being able to do program analysis is also quite non-trivial: doing that requires a really clear and solid semantics for the language.  Unfortunately, for Maple (see my ex-student Stephen Forrest's work on the topic), it's just too bloody hard.  Maple's programming language has a very messy operational semantics, which means that it's extremely hard to analyze with any degree of precision.

I was tempted to simply reply "You're tilting at windmills", but that in itself would be a cultural reference that many on primes might not get!

It's a problem with the Internet in general, wherein an attempt to be 'cool' by using references to pop culture prevalent in one country (or even a set of countries) ends up alienating others completely.  Globalization is a really tricky issue.

Do not even 'pull from a number of existing websites'; pull from one existing one.  Getting the right balance between various features is really hard -- and it presents no value-add for Maplesoft.  Pick a site where this is a fundamental feature (which means they have had to get it right), and adopt it.  Tweak it later when you have actual experience with how it really works with the Maple community, rather than guessing like mad about what that community will like.

I know, every company likes to spend tons of its employees' time reinventing what's already been done (often better) elsewhere.  So why be totally radical (not to mention insanely efficient) and borrow instead?  There are lots of web sites out there with very well-tuned reputation systems -- start from one of those.

Hopefully there's a badge for 'best use of sarcasm'. 

The problem is that it looks like there are all sorts of critical values of S0 at which you would get many different asymptotic series.  In fact, it seems the series is independent of S0, but asympt can't tell that without actually computing it [and it does not proceed, because in general that would give you the wrong answer].

I know you want to compute it with S0 symbolic -- but my method allows you to reconstruct the symbolic expansion from sufficiently many 'samples' of exact (but numeric) S0.

1. use convert(..., exp) on your expression before you take asympt

2. Make S0 a "random" number between 0 and 1 (try several, like 1/9, 2/5, 9/11, etc.)

That worked for me, giving me the first 2 terms and an O(l) term.  The first 2 terms did not depend on S0 (although that would require some proof to be sure), but the third one did.
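To make the recipe concrete, here is a minimal Maple sketch of the sampling approach; the names `expr` (your expression) and `l` (the expansion variable) are placeholders for whatever you actually have:

```
# 'expr' and 'l' are hypothetical names for your expression and its variable
expr2 := convert(expr, exp):               # step 1: rewrite in exp/ln form
for s in [1/9, 2/5, 9/11] do               # step 2: exact "random" rationals for S0
    print(s, asympt(eval(expr2, S0 = s), l, 3));
end do:
```

Comparing the printed expansions across the sample values shows which coefficients depend on S0 and which do not.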

Assemble all the material from your blog posts and massage them into a chapter for the Advanced Programming Manual. 

And what about re-using some of your blog posts as posts on the corporate site?  Wolfram's blog posts are (on average) much more technical than Maplesoft's.  Yours could help with rebalancing that.

Right now, it's good.

Maybe I was just using the site at the same time as some backup process was going on, or some other such heavy process [considering the time/day-of-week, that would not have been unreasonable].

Right, let's take the information in Laurent's post as the baseline (doing otherwise would be a waste of everyone's time!), and see if we can come up with some constructive criticisms, which can hopefully lead to improvements.

As I said previously, having 2 input languages is not necessarily bad.  Perhaps it should be made even clearer that that is what is going on.  By this I mean: go ahead, take the newfound freedom you've just given yourselves and really use it.  Make the 2d language even better by allowing it to drift even further from Maple's 1d language.  Figure out what makes some information hard to enter in Maple, and make the 2d input better/more efficient for that than the 1d input.  Because of the structure of 2d (i.e. it does not translate 1-1 to Maple, but goes through an explicit interpretation layer), you can do that.  So take advantage of it!  The current 2d input is much too conservative an extension of 1d.

The second thing to do would be to very seriously re-examine the design of 'implicit multiplication'.  I am not saying "remove the feature" here.  However, many a new user to Maple encounters difficulties with it, at least as judged by the number of posts on MaplePrimes where that is the problem.  So I say: please re-examine all the underlying assumptions behind your design for 'implicit multiplication' parsing.  One of those assumptions is probably sub-optimal and the cause of the confusion [I have some guesses, but I'll leave that to the experts].  Then fix it.  I think that mis-parsings due to a design flaw in implicit multiplication are the source of 90% of the problems we see posted here.  It causes enough pain to your target market (new users) that it is really worth fixing.


Yes, I do believe they are first class.  First-class types are rare amongst programming languages - but reasonably natural in a mathematical setting.

The performance right now is quite sluggish - viewing pages is a bit slow, posting is very slow.

If I were to guess, I'd say you (still) have a resource leak somewhere server-side...

this is a dangerous trick.  You are (on purpose) creating 2 different x which happen to print the same.  That rapidly leads to madness.  [I learned this the hard way from debugging some processes which used 'frontend' extensively, and I ended up seeing gcd's of giant polynomials, where every single variable of the two polynomials was named O; not fun].

Take a look at typeclasses in Haskell or modules and functors in any ML variant.  Those are much closer to Axiom domains than the 'types' which only classify values.  To be a bit more precise:

domain ~ class instance ~ module implementation

category ~ class declaration ~ module type

You get a 2-level type system, where a basic type system is used to classify values, and a layer on top of that is used to classify structures-with-operations (which is what mathematics is really all about!).
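As a rough sketch of the first column of that analogy in Haskell (a hypothetical illustration; `MyMonoid` and its operations are invented names, not the standard library's class):

```haskell
-- The class declaration plays the role of an Axiom *category*:
-- it describes a structure-with-operations, not a set of values.
class MyMonoid a where
  unit :: a
  op   :: a -> a -> a

-- Each instance plays the role of an Axiom *domain*: a concrete
-- carrier type together with implementations of the operations.
instance MyMonoid [b] where
  unit = []
  op   = (++)

instance MyMonoid Integer where
  unit = 0
  op   = (+)
```

The ordinary types `[b]` and `Integer` classify values; the class `MyMonoid` classifies those types by the operations they support, which is exactly the second level of the 2-level system.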
