JacquesC

Prof. Jacques Carette

2401 Reputation

17 Badges

20 years, 85 days
McMaster University
Professor or university staff
Hamilton, Ontario, Canada


From a Maple perspective: I first started using it in 1985 (it was Maple 4.0, but I still have a Maple 3.3 manual!). Worked as a Maple tutor in 1987. Joined the company in 1991 as the sole GUI developer and wrote the first Windows version of Maple (for Windows 3.0). Founded the Math group in 1992. Worked remotely from France (still in Math, hosted by the ALGO project) from fall 1993 to summer 1996 where I did my PhD in complex dynamics in Orsay. Soon after I returned to Ontario, I became the Manager of the Math Group, which I grew from 2 people to 12 in 2.5 years. Got "promoted" into project management (for Maple 6, the last of the releases which allowed a lot of backward incompatibilities, aka the last time that design mistakes from the past were allowed to be fixed), and then moved on to an ill-fated web project (it was 1999 after all). After that, worked on coordinating the output from the (many!) research labs Maplesoft then worked with, as well as some Maple design and coding (inert form, the box model for Maplets, some aspects of MathML, context menus, a prototype compiler, and more), as well as some of the initial work on MapleNet. In 2002, an opportunity came up for a faculty position, which I took. After many years of being confronted with Maple weaknesses, I got a number of ideas of how I would go about 'doing better' -- but these ideas required a radical change of architecture, which I could not do within Maplesoft. I have been working on producing a 'better' system ever since.

MaplePrimes Activity


These are replies submitted by JacquesC

Thanks. And that probably means I am spending too much time on here instead of doing something else (like writing more computer algebra papers!).
The changes start on the vgln line.
> with(VectorCalculus):
> F:=VectorField(<-3*y^2-2*z,2*z-3*x^2,3*x^2+3*y^2>,cartesian[x,y,z]):
> A:=VectorField(<Ax(x,y,z),Ay(x,y,z),Az(x,y,z)>,cartesian[x,y,z]):
> vgln:=F-Curl(A);
          /    2         /d             \   /d             \\ _
  vgln := |-3 y  - 2 z - |-- Az(x, y, z)| + |-- Ay(x, y, z)|| e  +
          \              \dy            /   \dz            //  x

        /         2   /d             \   /d             \\ _
        |2 z - 3 x  - |-- Ax(x, y, z)| + |-- Az(x, y, z)|| e  +
        \             \dz            /   \dx            //  y

        /   2      2   /d             \   /d             \\ _
        |3 x  + 3 y  - |-- Ay(x, y, z)| + |-- Ax(x, y, z)|| e
        \              \dx            /   \dy            //  z

> vgln2 := convert(vgln, 'list');

                2         /d             \   /d             \
  vgln2 := [-3 y  - 2 z - |-- Az(x, y, z)| + |-- Ay(x, y, z)|,
                          \dy            /   \dz            /

                 2   /d             \   /d             \
        2 z - 3 x  - |-- Ax(x, y, z)| + |-- Az(x, y, z)|,
                     \dz            /   \dx            /

           2      2   /d             \   /d             \
        3 x  + 3 y  - |-- Ay(x, y, z)| + |-- Ax(x, y, z)|]
                      \dx            /   \dy            /

> pdsolve(vgln2);

                   /
                  |        /d             \         2
  {Ay(x, y, z) =  |  2 z + |-- Az(x, y, z)| dz + 3 y  z + _F1(x, y),
                  |        \dy            /
                 /

        Az(x, y, z) = Az(x, y, z), Ax(x, y, z) =

          /                              /
         |  /d           \      2       |        /d             \
         |  |-- _F1(x, y)| - 3 y  dy +  |  2 z + |-- Az(x, y, z)| dz
         |  \dx          /              |        \dx            /
        /                              /

                                  2
         + _F2(x) + (-3 z - 3 y) x }

At its very core, Maple uses hashing to make all objects 'unique'. This means that, over time, objects get distributed fairly uniformly over all of memory, which wreaks havoc with the CPU's caching mechanisms. In other words, because Maple naturally has low memory locality (in long symbolic computations; numerical linear algebra doesn't count), almost every access to an object goes to actual memory instead of cache. Since a memory access is 30-100 times slower than a cache access, this is very noticeable.
In practice, as you show, this is not done. The problem is that not all piecewise functions are univariate over the reals, and there is only one representation. So one would have to mark those (say by using an attribute) so that evaluation can be fast. This is what I meant by "could be tagged".
You are looking for a symmetric tri-diagonal matrix which is conjugate to

M1 := LinearAlgebra[CompanionMatrix](expand(ChebyshevT(5,x)/16), x, 'compact');

So your matrix is

M2 := LinearAlgebra[BandMatrix]([[f,g,h,i],[a,b,c,d,e]],1,5,5,outputoptions=[shape=symmetric]);

You can then "match up" the two LU decompositions in a clever way. I am sure there are other less brutal ways of solving this!
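The idea rests on the companion matrix of ChebyshevT(5,x)/16 having the Chebyshev roots cos((2k-1)*Pi/10) as its eigenvalues. A quick numeric check in pure Python (a sketch with the coefficients in the last row, one common layout; Maple's 'compact' form may differ), using the standard fact that each root r gives the eigenvector (1, r, r^2, r^3, r^4):

```python
import math

# Companion matrix of p(x) = x^5 - (5/4)x^3 + (5/16)x = ChebyshevT(5,x)/16,
# with the (negated) coefficients of p in the last row.
a = [0.0, 5/16, 0.0, -5/4, 0.0]          # p = x^5 + a[4]x^4 + ... + a[0]
C = [[1.0 if j == i + 1 else 0.0 for j in range(5)] for i in range(4)]
C.append([-c for c in a])

# Roots of ChebyshevT(5,x): cos((2k-1)*pi/10), k = 1..5.
roots = [math.cos((2*k - 1)*math.pi/10) for k in range(1, 6)]
for r in roots:
    v = [r**i for i in range(5)]          # eigenvector (1, r, r^2, r^3, r^4)
    Cv = [sum(C[i][j]*v[j] for j in range(5)) for i in range(5)]
    assert all(abs(Cv[i] - r*v[i]) < 1e-12 for i in range(5))
```

Any symmetric tri-diagonal matrix conjugate to C must therefore have exactly these eigenvalues, which is what constrains the unknowns a..i in M2.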
Since Maple 7, the underlying theory (and implementation) of univariate piecewise functions over the reals has been changed to accommodate this. So a piecewise could be 'tagged' when it is linear, and binary search used automatically internally. For the curious, the theory is in my paper (recently accepted at ISSAC) A canonical form for some piecewise defined functions. What it shows is how the 'normalization' step of piecewise can be done in O(# of breaks) instead of exponential in the # of breaks (as the previous algorithm was).
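The binary-search evaluation that such tagging would enable can be sketched in a few lines of Python (an illustration of the idea, not Maple's implementation): store the breakpoints sorted, keep one branch per interval, and locate the right branch with bisection instead of scanning the conditions linearly.

```python
import bisect

def make_piecewise(breaks, branches):
    """breaks: sorted break points; branches: len(breaks)+1 functions,
    branches[i] applying on the i-th interval, left to right."""
    assert len(branches) == len(breaks) + 1
    def f(x):
        # O(log #breaks) lookup instead of testing each condition in turn
        return branches[bisect.bisect_right(breaks, x)](x)
    return f

# |x| as a two-piece function with a single break at 0
absval = make_piecewise([0.0], [lambda x: -x, lambda x: x])
assert absval(-3.0) == 3.0 and absval(2.0) == 2.0
```

This is only valid for a univariate piecewise over the reals whose conditions carve the line into intervals, which is exactly why such functions would need to be marked (tagged) as having that shape.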
It really is one complicated beast. However, until you have gotten to the bottom of C++ templates (see the STL and Boost), I don't think you have really seen how weird C++ is -- it certainly rivals Maple.

Yes, with verboseproc=3, what Maple prints for the entries in the remember table looks like comments, but they're not. The problem is that there used to be no reliable syntax to pre-load entries into a function's remember table (there is sort-of one now, so apparently the verboseproc output has yet to be updated -- someone should file a bug report!). Really it means BesselJ(0,0) := 1;

There are various extension mechanisms for Maple -- see in particular the help pages ?evalf,details and ?extension. These document how to extend various Maple routines (old style and new style). There are 30-40 Maple routines that are extensible this way, but I was actually not able to find a help page detailing this! In any case, evalf is extensible in exactly this way. To evaluate a function FOO, evalf looks up the name `evalf/FOO` -- if it exists, then it is called. So evalf('BesselJ'(args)) makes sure that BesselJ will not evaluate (i.e. the uneval quotes are used), and then calls evalf, which looks up evalf/BesselJ, which does the actual computation. So you are correct, this does not constitute a recursive call to BesselJ, but it does indicate a call related to the same concept! (This is called 'intensional analysis' in some circles.)

As to what you get from the Maple worksheet, that is really weird indeed -- feels more like a bug. What mode were you using?
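The `evalf/FOO` lookup convention can be sketched in Python (an illustration of the dispatch scheme, not Maple's actual code; the handler below is a hypothetical stand-in): the driver never calls FOO itself, it looks up a handler named after FOO and calls that, so no recursive call to FOO ever occurs.

```python
# Name-based dispatch sketch: handlers stands in for the global name space
# in which Maple would find `evalf/BesselJ`.
handlers = {}

def register(name, fn):
    handlers['evalf/' + name] = fn

def evalf(fname, *args):
    h = handlers.get('evalf/' + fname)
    if h is None:
        raise LookupError('no evalf extension for ' + fname)
    return h(*args)   # the handler, not fname itself, does the work

# Hypothetical handler knowing only the special value BesselJ(0,0) = 1
register('BesselJ', lambda n, x: 1.0 if (n, x) == (0, 0) else None)
assert evalf('BesselJ', 0, 0) == 1.0
```

The same registry pattern models the other `\`f/FOO\`` extension hooks the help pages describe: extending a routine means defining one more appropriately named handler.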
If you filter out the protected names from what anames() returns, you'll (mostly) get user routines, which should make the save more efficient. There is a type 'protected' for exactly this.
Search this site for a post on 'save' by roman_pearce from a few days ago; he explains it well.