JacquesC

Prof. Jacques Carette

2401 Reputation

17 Badges

20 years, 84 days
McMaster University
Professor or university staff
Hamilton, Ontario, Canada


From a Maple perspective: I first started using it in 1985 (it was Maple 4.0, but I still have a Maple 3.3 manual!). I worked as a Maple tutor in 1987, joined the company in 1991 as the sole GUI developer, and wrote the first Windows version of Maple (for Windows 3.0). I founded the Math group in 1992. From fall 1993 to summer 1996 I worked remotely from France (still in Math, hosted by the ALGO project) while doing my PhD in complex dynamics in Orsay.

Soon after I returned to Ontario, I became the Manager of the Math Group, which I grew from 2 people to 12 in 2.5 years. I got "promoted" into project management (for Maple 6, the last release that allowed a lot of backward incompatibilities, aka the last time that design mistakes from the past were allowed to be fixed), and then moved on to an ill-fated web project (it was 1999, after all). After that, I coordinated the output from the (many!) research labs Maplesoft then worked with, did some Maple design and coding (the inert form, the box model for Maplets, some aspects of MathML, context menus, a prototype compiler, and more), and did some of the initial work on MapleNet.

In 2002, an opportunity came up for a faculty position, which I took. After many years of being confronted with Maple's weaknesses, I had accumulated a number of ideas on how I would go about 'doing better' -- but these ideas required a radical change of architecture, which I could not do within Maplesoft. I have been working on producing a 'better' system ever since.

MaplePrimes Activity


These are replies submitted by JacquesC

All I did was code improvements; you went and did algorithmic improvements! Cheating. Some people, sheesh. (Insert smiley here -- but how do I do that?)
I use the <pre> tag instead of the <code> tag, for one. It preserves my spacing better; otherwise it seems I have to fool around with lots of &nbsp;, which is tiresome. Also, I use Classic, not Standard, since Classic's cut and paste of code is a bit better (though still not great). And in fact, most of the time I use vim to edit my code as text, and then copy-paste always works flawlessly!
It helps to state all you know; in other words, assume(P>0, w>0). The results from Maple are correct, just in a different form than you expect. It uses the 2-argument form of arctan, which has better branch behaviour than the 1-argument form. It also reports all solutions, which reflect the fact that sin and cos are 2*Pi-periodic.
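To make that concrete, here is a hypothetical sketch -- the equation from the original question is not shown in this thread, so P*sin(w*t) = P/2 is a stand-in of my own invention:

# Hypothetical stand-in equation; assume() supplies what Maple cannot guess.
assume(P > 0, w > 0);

# With _EnvAllSolutions set, solve reports the whole 2*Pi-periodic family
# of solutions (parametrized by _Z integer names), not just one branch.
_EnvAllSolutions := true;
solve(P*sin(w*t) = P/2, t);

# The 2-argument arctan tracks the quadrant of its two arguments, so it
# has better branch behaviour than the 1-argument form:
arctan(1, -1);   # 3*Pi/4, whereas arctan(1/(-1)) = arctan(-1) = -Pi/4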
That is a very, very serious bug in CodeGeneration. It means not only that CodeGeneration is, in practice, limited to small programs, but also that the fundamental algorithms it uses for some of its translations are O(n^2) instead of O(n), where n is related to the length of the code being translated. Let's hope this has already been fixed in Maple 11!
Fundamentally I want to agree with you. Rationally, what you say really ought to be true. And certainly it was true: this is what made Maple so good in the early days, and what made it the product of choice over its main predecessor, Macsyma. Macsyma was a huge beast, slow and lumbering; Maple was fast and tight. Once upon a time, it could run, comfortably, on a 1 Meg Mac!! Now even cmaple (plus mkernel) takes up 6.7 Megs of memory to start up (on Windows XP) [fairly constant for 9, 9.5 and 10]. Even when Mathematica came out with its snazzy interface, Maple could still run circles around Mathematica computationally.

What about now? Take a look, for example, at the results of the Many Digits competition. For one thing, Maplesoft did not participate [I entered Maple at the request of the organizers; Maplesoft helped me by providing a temporary license of Maple], but Wolfram did. Interestingly, the Wolfram Research team only did the Basic Set of problems -- but totally cleaned up on that set. Maple's overall efficiency was acceptable, but just. Weirdly, the newest of the mainstream CASes, MuPAD, has somehow managed to be very Maple-like, yet slower overall. I still don't quite get this.

If you go a bit more niche, then Magma is a serious competitor. Slowly, one sees academic research in efficient algorithms drifting away from Maple (Mathematica was never really a serious contender there) towards Magma. Or take the LinBox project (where a lot of the names are familiar Maple names), and notice that while there is a Maple connection, the software is all built in C++.

You are not alone in believing that speed matters, and that it might even sell product -- take a good look at the What's New in Mathematica 5.2 and you'll see that their marketing department believes it. The amusing thing is that Maple has had 64-bit computing since 1992. Yep, 15 years. I know, I did the port of Maple V Release 2 to the DEC Alpha myself.
In fact, I had that box in my living room for a couple of months. But apparently, now that's a really hot thing!

In any case, comparing the What's New pages for Mathematica (5.2, 5.1, 5.0) with Maple's for the last few releases makes for a very interesting contrast. It is very clear that these two companies are headed in very different directions. Interestingly, Maple started out as the more technically competent of the two, but with a dismal interface. Mathematica's interface started out quite good, improved a lot, but has been quite stable for a few years. They have been working extremely hard on their technical competence, and it shows -- on a lot of fronts, a head-to-head competition might not be so good for Maple anymore. Maple sure is 'innovative' with its new interface... So I guess time will tell.

In my subject line, I started with two questions: where should there be (more) speed, and why? The basic problem: there are hundreds of algorithms and thousands of pieces of functionality in Maple. While one can make them all faster by making the base system faster, that is a lot of effort without clear gain. If that gains the whole system 5% but took a couple of person-years to achieve, is that really a wise use of resources? If a few algorithms can be sped up 4 orders of magnitude, like what Joe Riel and the MaplePrimes community achieved over at Generating an Array of Random Floats, which ones should those be? And why those?

Actually, a long time ago I had started a "hierarchy diagram" of all Maple functionality -- in other words, a directed graph of which functionality relied on which other functionality. To no computer algebraist's surprise, finite field arithmetic showed up as 'core' -- but how many Maple users would have guessed that? Right now, I don't think that exact linear algebra would show up as 'core' -- rightly or wrongly. Could you make a real case for that? Maybe I should go back to that project; I stopped because no one seemed to be all that interested.
Actually, I suspect that some people were not so interested because it might have made it just a little too obvious how their pet project was on the periphery of Maple! Or maybe I am just a little too cynical.
If you take a look at the general programming languages community, you'll see that most are struggling to deal with this change. Some languages can deal with coarse-grained concurrency reasonably well, but generally all of today's programming languages are fundamentally sequential in nature. True parallelism is still rare, even though it has been a hot topic of research for decades. Scientific software (like Maple) has a much better chance of being able to leverage multiple cores than typical software (think of a word processor, for example). However, if too many of Maplesoft's customers emulate you, the company may not be around by the time a truly multi-core version would be ready! Another way to look at it: take a good look at the "What's New" of the past few releases. They do not scream that 'speed' is a primary concern, do they? And when you look at user complaints (on primes, the newsgroups, etc.), is 'speed' something that seems to be a big concern there?
I wonder how many people could actually puzzle out what is going on above? It involves a lot of rather complex knowledge of Maple. It would make a good question for a Maple 401 exam!
There are over a dozen books about Maple out there (to name a few authors: Andre Heck, Rob Corless, Bruno Salvy and their co-authors). And, perhaps surprisingly, some of these books are in fact much more comprehensive than Maplesoft's own manuals. Be careful of worksheets on the Application Center: some of them are brilliant, some of them are atrocious. I have seen some astonishingly bad Maple code out there, and one can learn a lot of bad habits by thinking that all worksheets on the App Center are 'good'. It is really too bad that there isn't a 'mint' for worksheets!
I already knew that the Embedded Components had a strange interaction design (you can't get them to synchronize with each other). I had a feeling that there were various other properties of Maplets that made them better than Embedded Components. Thanks for confirming this, Will. One has to wonder: why are there 2 pieces of functionality in the same product that achieve almost the same effect?
Acer's suggestion is very sweet indeed. You should probably move up to n = 5*10^6 or even higher to really show that off. That would also show how memory use makes some of the other solutions degrade even more in comparison. The difference in times between all of these is all about interpretation overhead: picks[4] is essentially all interpreter-bound (a high printlevel output makes that very clear), with the additional drawback of needing a lot of memory. Most of the improvements after that work by peeling back pieces of the interpreter until very little of it is needed. picks[0.65] shows how far (with tricks) one can go. I did not know about the map trick [i.e. that map on an empty Array was that much faster than all other methods of initializing things; this clearly points to an area where rtable can still be optimized!]. Very nice. But then one sees what happens when 100% of the computation is done in the kernel, with essentially no interpreter overhead: blinding speed! picks[4] is not an unnatural piece of code at all -- and yet it is 4 orders of magnitude slower than the 'best' solution. No wonder a lot of people out there have the impression that Maple is slow [it's not, but there are probably fewer than 20 people in the world who can make Maple go "fast enough", and most of them are employed by Maplesoft!].
proc(n::posint)
  local t;
  uses RandomTools;
  kernelopts('opaquemodules' = false);
  # The initializer ignores its indices (a, b) and draws the next random
  # float directly through the (normally hidden) Mersenne Twister kernel
  # interface, bypassing the usual RandomTools wrappers.
  t := (a, b) -> MersenneTwister:-MTKernelInterface(3);
  Array(1..n, 1..2, t, 'datatype' = float[8]);
end proc
beats your picks[1.0] (on my computer, anyway). There are two tricks here: one is to avoid the intermediate data structure (the list) altogether; the other is to do very aggressive manual inlining. I took a look at what was, in the end, actually executed, and used just that. I don't know of a way to eliminate both the intermediate data structure and the intermediate function at the same time.
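To check the comparison on your own machine, you can wrap a timing around the procedure at a large n. This is just a sketch: 'fastpicks' is a name I made up (the procedure above is anonymous), its body repeats that procedure verbatim so the snippet is self-contained, and the timings will of course vary by machine.

# 'fastpicks' is an assumed name for the anonymous procedure above.
fastpicks := proc(n::posint)
  local t;
  uses RandomTools;
  kernelopts('opaquemodules' = false);
  t := (a, b) -> MersenneTwister:-MTKernelInterface(3);
  Array(1..n, 1..2, t, 'datatype' = float[8]);
end proc:

st := time():
A := fastpicks(5*10^6):
time() - st;   # elapsed CPU seconds; compare against picks[1.0] at the same n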
You should certainly read the help page ?RootOf. Basically, a RootOf represents an (algebraic) inversion. For example, RootOf(_Z^3-2) represents a cube root of 2. But it can really represent the inversion of anything.
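A minimal illustration of what the ?RootOf page describes:

r := RootOf(_Z^3 - 2);   # an exact representation of a cube root of 2
evalf(r);                # numerical value of the principal root, ~1.259921
allvalues(r);            # all three roots, including the complex pair
simplify(r^3);           # Maple uses the defining polynomial to return 2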