JacquesC

Prof. Jacques Carette

2401 Reputation

17 Badges

20 years, 84 days
McMaster University
Professor or university staff
Hamilton, Ontario, Canada

From a Maple perspective: I first started using it in 1985 (it was Maple 4.0, but I still have a Maple 3.3 manual!). Worked as a Maple tutor in 1987. Joined the company in 1991 as the sole GUI developer and wrote the first Windows version of Maple (for Windows 3.0). Founded the Math group in 1992. Worked remotely from France (still in Math, hosted by the ALGO project) from fall 1993 to summer 1996, during which I did my PhD in complex dynamics in Orsay.

Soon after I returned to Ontario, I became the Manager of the Math Group, which I grew from 2 people to 12 in 2.5 years. Got "promoted" into project management (for Maple 6, the last of the releases which allowed a lot of backward incompatibilities, i.e. the last time that design mistakes from the past were allowed to be fixed), and then moved on to an ill-fated web project (it was 1999, after all). After that, I coordinated the output from the (many!) research labs Maplesoft then worked with, did some Maple design and coding (inert form, the box model for Maplets, some aspects of MathML, context menus, a prototype compiler, and more), and contributed some of the initial work on MapleNet.

In 2002, an opportunity came up for a faculty position, which I took. After many years of being confronted with Maple's weaknesses, I had accumulated a number of ideas about how I would go about 'doing better' -- but these ideas required a radical change of architecture, which I could not do within Maplesoft. I have been working on producing a 'better' system ever since.

MaplePrimes Activity


These are replies submitted by JacquesC

Instead of using an arbitrary number (like 32 or 8) for the rank, what about looking through the resulting singular values instead for the most natural point where a 'jump' occurs?  In almost all cases, there is such a jump, sometimes more than one.  These all seem to correspond to natural breakpoints in the amount of information contained in the "rest".  A nice piece of Maple code to find those natural points would be a convenient addition to this fun and informative post.
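Just to make the suggestion concrete, here is a small sketch (my own illustration in Python, with hypothetical names -- not production code, and not the Maple version asked for above) of one way to detect the natural cutoff: take the index where the ratio between consecutive singular values is largest.

```python
# Sketch: choose a numerical rank by finding the biggest relative
# "jump" between consecutive singular values.

def rank_by_jump(sigma, tol=1e-12):
    """Return (rank, ratio): sigma[:rank] are the 'significant' singular
    values, chosen where the ratio sigma[i]/sigma[i+1] is largest.
    `sigma` must be sorted in decreasing order, as SVD routines return it."""
    candidates = [
        (sigma[i] / max(sigma[i + 1], tol), i + 1)
        for i in range(len(sigma) - 1)
    ]
    ratio, rank = max(candidates)
    return rank, ratio

# Example: a clear jump between the 3rd and 4th values suggests rank 3.
svals = [9.1, 4.7, 2.3, 1e-5, 3e-6, 8e-7]
rank, ratio = rank_by_jump(svals)
```

Handling "more than one jump" would mean returning all indices whose ratio exceeds some threshold instead of just the maximum, but the idea is the same.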

[Sorry for coming late to this discussion, but I have had 2 really crazy weeks, and I am finally surfacing again]

I am quite interested in collaborating on this project.  I have a sourceforge.net account, and I could certainly volunteer to start a project to host this.  While I am sure Git is fine, I am currently quite familiar with Subversion, which sourceforge.net supports well.  So perhaps we could go for the more standard technology for this first project?

John: the documentation Maplesoft provides to those wishing to write advanced packages (like a replacement for latex) is rather thin.  Unfortunately most people still have to learn how to do this via trial and error, and via feedback from more experienced developers.  Additionally, it is easy to underestimate the scale of writing a pretty-printer.  Sure, there are only ~35 inert forms, but there are over 200 "mathematical functions" known to Maple, plus a number of special data structures, most of which need their own rules because their conventional ways of printing are not straightforward.  Also, there are all sorts of typographical rules for rational functions, radicals, and so on which will eventually need to be implemented.
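To see why the rule count dominates the effort, here is a toy illustration (my own sketch in Python, not Maple's latex code): a pretty-printer is essentially a rule table, one entry per function with special typography, plus a generic fallback -- and the fallback is exactly what is wrong for those 200+ special cases.

```python
# Toy LaTeX pretty-printer: expressions are nested tuples
# ('name', arg1, arg2, ...) with string atoms at the leaves.

LATEX_RULES = {
    # One hand-written rule per function with special typography.
    "sin":  lambda args: r"\sin\left(%s\right)" % args[0],
    "exp":  lambda args: r"e^{%s}" % args[0],
    "sqrt": lambda args: r"\sqrt{%s}" % args[0],
    "diff": lambda args: r"\frac{d}{d%s}%s" % (args[1], args[0]),
}

def to_latex(expr):
    if isinstance(expr, str):
        return expr
    name, *args = expr
    rendered = [to_latex(a) for a in args]
    rule = LATEX_RULES.get(name)
    if rule is not None:
        return rule(rendered)
    # Generic fallback: f(x, y) -- acceptable for unknown functions,
    # wrong for every special case that lacks a rule above.
    return r"\mathrm{%s}\left(%s\right)" % (name, ", ".join(rendered))

s = to_latex(("diff", ("sin", "x"), "x"))
```

Every "mathematical function" Maple knows that prints unlike plain `f(x)` needs its own entry in that table, which is where the real volume of work hides.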

If we work together, I do think that we can get something rather good together with a modicum of effort.  If this works, I am sure we can handle a few more!

My approach with the inverse correlation matrix was indeed too simplistic.  And I probably had another mental error where I had it in mind that correlation matrices were unitary [which would have preserved more properties], but that is not so!  So indeed we really need more details from the OP about the specification of the problem.  Of course, that might not help, as it may exceed the actual knowledge of statistics that many of us have!

I have read the code as well as the book on which the code is based.  They both definitely ignore some of the side-conditions in FTOC!  This is a well-known flaw of basing definite integration on the differential algebra view of anti-derivatives, one that has been amply documented in the literature.  There are current research projects ongoing which seek to remedy this flaw in the theory, but they have only provided partial answers.  The current integrator tries to back-patch this issue by reverse-engineering the new discontinuities introduced in the integral and computing limits to patch over the jumps, but that is often a much harder problem than the original integral was -- and so predictably one encounters bugs.  Note that the theory underlying the computation of limits has holes in it which are just as large as those in integration [the problems are that the theory relies on a theory of series which is too weak, as well as the fact that the theory requires a decision procedure for zero recognition, which is a well-known undecidable problem].
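The textbook illustration of the FTOC side-condition problem is worth writing out. The standard antiderivative produced by the Weierstrass substitution is
$$\int \frac{dx}{2+\cos x} \;=\; \frac{2}{\sqrt{3}}\,\arctan\!\left(\frac{\tan(x/2)}{\sqrt{3}}\right) + C \;=:\; F(x),$$
but this $F$ jumps by $-2\pi/\sqrt{3}$ at $x=\pi$ (it tends to $\pi/\sqrt{3}$ from the left and $-\pi/\sqrt{3}$ from the right), so the naive evaluation
$$F(2\pi)-F(0) \;=\; 0$$
misses the true value
$$\int_0^{2\pi} \frac{dx}{2+\cos x} \;=\; \frac{2\pi}{\sqrt{3}}.$$
Detecting that spurious discontinuity and adding back the jump is precisely the limit-based back-patching described above, and computing those limits can be harder than the original integral.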

The typical trick is to generate from a uniform[0,1] distribution and then use the inverse of the CDF to get that mapped to any other distribution.  Here you'll want to generate random vectors of length 20, correlate them using a square root (e.g. the Cholesky factor) of that 20x20 correlation matrix, and then apply the inverse CDF.  Or at least, something like that should work.

Of course, one can expect such a method to be extremely badly behaved numerically, so that many guard Digits would be needed.  A more direct method could avoid this, but I could not find a decent reference in the time I had.
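In case it helps the OP, here is a sketch of the usual Gaussian-copula version of this recipe (my own Python illustration with made-up names, untested against the actual problem): correlate standard normals with a Cholesky factor of the correlation matrix, map back to uniforms via the normal CDF, then push through the inverse CDF of the target distribution.

```python
# Gaussian-copula sampling sketch, stdlib only.
import math
import random

def cholesky(A):
    """Lower-triangular L with L*L^T = A, for a small SPD matrix."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def correlated_sample(L, inv_cdf, rng):
    """One vector with the target marginals and (roughly) the
    correlation structure A = L*L^T."""
    z = [rng.gauss(0.0, 1.0) for _ in L]                # independent normals
    zc = [sum(L[i][k] * z[k] for k in range(i + 1))     # correlated normals
          for i in range(len(L))]
    return [inv_cdf(norm_cdf(v)) for v in zc]           # target marginals

# Toy 2x2 correlation matrix and exponential(1) marginals, u -> -ln(1-u);
# the OP's case would use the 20x20 matrix and the appropriate inverse CDF.
A = [[1.0, 0.8], [0.8, 1.0]]
L = cholesky(A)
rng = random.Random(42)
sample = correlated_sample(L, lambda u: -math.log(1.0 - u), rng)
```

Note the caveat that the nonlinear CDF maps distort the correlations somewhat, so the target matrix is only matched approximately -- which is part of why a more direct method would be preferable if a reference can be found.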

The paper is very much about dissecting Maple's library...

I apparently had your name stored in equivalence-class format in my brain instead of the more correct unique-spelling format!  [That is meant to be funny, but the apology is heartfelt].

Nobody?  You should know better... ;-)

My student Stephen Forrest and I explored the issue of type inference for Maple, resulting in the publication Mining Maple Code for Contracts, a progress report given at Calculemus 2007, which subsequently became his Master's thesis, Property Inference for Maple: An Application of Abstract Interpretation (I should ask him to put a copy online!).

He has code that can reconstruct a proper Maple type from a lot of different pieces of Maple code, and not just "mathematical" functions.
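To give a flavour of what such inference does (a toy of my own, entirely unrelated to the actual thesis code): an abstract interpreter evaluates a program over an abstract domain of "types" instead of values, propagating them bottom-up and joining at each operation.

```python
# Toy abstract interpreter inferring a crude "type contract"
# for a tiny expression language.

# Abstract domain, ordered: "integer" <= "numeric" <= "anything".
ORDER = {"integer": 0, "numeric": 1, "anything": 2}

def join(a, b):
    """Least upper bound of two abstract types."""
    return a if ORDER[a] >= ORDER[b] else b

def infer(expr, env):
    """expr: int literal, variable name, or ('+'|'*'|'/', lhs, rhs)."""
    if isinstance(expr, int):
        return "integer"
    if isinstance(expr, str):
        return env.get(expr, "anything")
    op, lhs, rhs = expr
    t = join(infer(lhs, env), infer(rhs, env))
    if op == "/":
        return join(t, "numeric")   # division leaves the integers
    return t

# f := (x, y) -> x*y + x/2, assuming integer arguments:
contract = infer(("+", ("*", "x", "y"), ("/", "x", 2)),
                 {"x": "integer", "y": "integer"})
```

The real system works over Maple's far richer type language and actual library code, of course, but the shape of the computation is the same.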

Its intent is indeed as Alessandro states: it is for finding out about particular special functions.  Other routines (like discont, iscont, etc.) are where the algorithms lie.  In due time, the best software structure would be for the algorithmic routines to query a central database for all this information.  Right now, information about properties of special functions is scattered throughout the library [unless this has been fixed in the last few years, but I would have expected to read something in the What's New if that were the case].
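A minimal sketch of that design, in Python with invented names and invented facts-tables (nothing here is from the actual library): one central property table, with the algorithmic routines reduced to lookups against it.

```python
# One central database of special-function properties; routines like a
# hypothetical iscont query it instead of hard-coding their own facts.

PROPERTIES = {
    # function name -> known properties on the real line
    "sin":   {"continuous": True,  "singularities": []},
    "ln":    {"continuous": False, "singularities": ["x <= 0"]},
    "tan":   {"continuous": False, "singularities": ["x = Pi/2 + n*Pi"]},
    "GAMMA": {"continuous": False, "singularities": ["x = -n, n >= 0"]},
}

def query(fname, prop):
    """Central lookup used by all algorithmic routines."""
    facts = PROPERTIES.get(fname)
    if facts is None or prop not in facts:
        return None          # unknown: caller must fall back gracefully
    return facts[prop]

def iscont(fname):
    """A routine that delegates to the database rather than owning facts."""
    return query(fname, "continuous")
```

The point of the structure is that adding a new special function means adding one table entry, and every querying routine picks it up for free, instead of the knowledge being duplicated and scattered.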
