Paul

Paul DeMarco is the Director of Development at Maplesoft, a position that has him involved with technical planning and development of Maple and the various core technologies that use Maple as a computation engine. He joined Maplesoft in September 1996 while studying at the University of Waterloo in the CS/EEE program -- a track that combines core math and computer science courses with electrical engineering electives. Paul's development work in the Math and Kernel Groups over the years touches a wide variety of areas, including algorithms, data structures, and connectivity with other products. He is also involved with core Maple as well as Maple T.A.

MaplePrimes Activity


These are replies submitted by Paul

The initial value of currentdir() is taken from the "Start In" property of your desktop shortcut.  You can see or change this property by right-clicking the Maple icon on your Windows desktop and selecting "Properties".  For single-user installs, it does make sense that a better default would be kernelopts(homedir).

Note that the value of currentdir() depends on how you started Maple.  If you double-click a .mw file in Windows Explorer, Maple will start with that document open as the default, and currentdir() will be set to the path that contains that document.  If you start maplew.exe from a command prompt, Maple will inherit the current directory from that environment.
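For anyone who wants to check or override this from within a session, here is a minimal sketch (the directory path shown is purely illustrative):

```
# Query the current working directory
currentdir();

# Set it explicitly; currentdir returns the previous directory
olddir := currentdir("C:\\Users\\me\\projects");

# Or default to the user's home directory
currentdir(kernelopts(homedir));
```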

There are a couple ways to get Maple 12 to use both of your processors on a dual core machine (aside from the fact that the gui and kernel are separate processes and will make use of a core each).

1.  Set the environment variable OMP_NUM_THREADS equal to the number of cpus on your machine.  This will cause certain routines to operate in parallel (primarily LinearAlgebra routines that call out to blas and clapack libraries).

2.  Make use of the Threads package to split your own algorithms across both cores.  See ?Multithreaded for more info.

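To make the second option concrete, here is a minimal sketch using the Threads package (the squaring workload is just an illustration; see ?Threads for the documented commands):

```
# Parallel map across available cores
Threads:-Map(x -> x^2, [seq(i, i = 1 .. 10)]);

# Parallel equivalent of seq
Threads:-Seq(i^2, i = 1 .. 10);
```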

Thanks Alec, indeed String[] works perfectly -- C# does all the right conversions behind the scenes.  I've updated the file link with the new code, which passes in the argument -A2 at startup (just to show it can be done).  Verify this works in the tool by entering the command kernelopts(assertlevel); it will return 2, as set by the -A2 flag.  The default is 0.

traperror was not meant to be able to catch user interrupts.  The fact that this works at all is a bug, which ought to be fixed, so this code may not work in future versions.  As is, it isn't safe, and I wouldn't recommend using it.  Maplesoft is considering adding a function to the external API to do this in a safe way.

Indeed the change made to $MATLAB/toolbox/maple/subs.m should also be made to $MATLAB/toolbox/maple/@sym/subs.m . Click on these links to download the latest modification. -PD
What was happening here is that the cell array, {'a','b'}, was being converted to an array of strings in Maple. You can see this by executing the following Matlab command:
>> sym({'a','b'})

ans =

                                  ["a", "b"]
While the above behaviour is fine in general for converting cell arrays, it is not the intent for subs. One workaround is to directly create syms instead of strings in the call to subs:
>> subs('cos(a)+sin(b)',{sym('a'),sym('b')},{sym('alpha'),2})
Or, download subs.m, name it "subs.m" and put it in the $MATLAB/toolbox/maple directory. The "maple compaton" option does include all the partial options. The partial options are independent of each other and can be turned on individually in any combination. "maple compaton" is a short-cut for turning all the partial options on at once. -PD
Here is a modified version of sym.m for Maple/MTM 10 36_sym10b.txt. As before, rename it sym.m and put it in the $MATLAB/toolbox/maple/@sym directory. This works for the sym('[sqrt(x1); sin(x2)+cos(x3)/exp(x4)]') example as well as the ones before. It uses Matlab's exist() function to detect things that look like symbols but are not. As a note, no update is necessary for the Maple/MTM 11 patch presented earlier, as it uses a different and better mechanism. As for falling back to SMT, I don't think that's technically possible. In particular, because SMT uses an older version of the Maple kernel, there is no way to even switch back and forth in the same session -- the dlls would conflict. There have been surprisingly few reported compatibility issues. I don't expect this will be the last, but so far there hasn't been anything that couldn't be fixed fairly easily. Please continue to post when you find an issue. -PD
In Matlab look at the "maple" command help page ("help maple" at a command prompt). There are a number of documented compatibility switches including the following:
MAPLE FINDSYMON; MAPLE FINDSYMOFF toggles the type of result returned by the FINDSYM command. By default the result is an array of syms. With FINDSYMON, FINDSYM will return a string.
So, to get the SMT style result for findsym in MTM run the command "maple findsymon".
Ok, try this as a patch for MTM-10. Same deal, download 36_sym10.txt and save it as $MATLAB/toolbox/maple/@sym/sym.m.
Try this link for the sym.m file. For some reason I had to rename it as .txt instead of .m to post it here. Unfortunately the patch uses some Maple 11 features that weren't available in Maple 10/MTM-10, so it would be more complicated to implement. Is upgrading to 11 a possibility? -PD
Thanks for pointing this out. Please let me know if you find any more incompatibilities when using the Maple Toolbox for MATLAB. It is our intention that no one will have to reprogram m-files to make their old code work. Maple 11/Maple Toolbox for MATLAB 11 users can grab the following sym.m
file and save it as $MATLAB/toolbox/maple/@sym/sym.m The symbolic math toolbox tried to allow users to construct Maple expressions using MATLAB syntax. This is difficult as there are situations where Maple and MATLAB overlap. The patch above does not try to guess at Maple syntax -- it only invokes the Matlab array parser when the original string resulted in a syntax error. Note that this is not perfect. Consider the following:
>> sym('[a ; b]')
ans =
[a]
[ ]
[b]
>> maple('whattype',ans)
ans =
Vector[column]
>> sym('[a , b]')
ans =
[a, b]
>> maple('whattype',ans)
ans =
list
What should [a,b] be? In native Maple, it is a list. In native Matlab, it is a row vector. A better way to construct this inside Matlab is to declare a and b as syms and then just use regular Matlab syntax.
>> syms a b
>> [a; b]
ans =
[a]
[ ]
[b]
>> maple('whattype',ans)
ans =
Vector[column]
>> [a,b]
ans =
[a, b]
>> maple('whattype',ans)
ans =
Vector[row]
-PD
You can provide a fill value other than zero by setting rts->fill. There is no way to skip the step of initializing the data block, as uninitialized DAG pointers will cause problems with the garbage collector. A data block with rts->foreign = 1 will behave like a normal Maple object in the sense that its members will be garbage collected if not referenced, etc., except that the block itself will be left alone: Maple doesn't know how you allocated it, so it can't possibly free it. To get automatic notification of when nothing is referencing the rtable, so you can free it yourself, try stuffing a MaplePointer into an attribute of the rtable (rts->attributes). See ?MaplePointerSetDisposeFunction.
My example's array parameter was actually assigned to a local (y), but only in the outer procedure. I suppose you are saying that, in any deeper nesting of procedure calls, the table parameter would again have to be assigned to a local? That is, assigned to a local, in each inner procedure in which 1-level eval was wanted? That seems onerous.
Yes, whatever level deep, if the table is a parameter, it will get 2-level eval. Assigning it to a local to get 1-level eval only needs to be done in the proc that indexes into the table.
I still don't see why this level of evaluation for the elements of a table is deemed correct. Running an example with a list, instead of a table, produces exp(0) even from inside the inner procedure. I would claim that the level of evaluation of the entry of the table -- over and above what is needed to accommodate last-name-eval -- is wrong. It would be right were it to produce the same result as occurs when accessing the entry in the list case. Even if one accepts the rationale (which I don't, sorry), it still seems like an overly expensive hack to get around the fact that last-name-eval tables don't get their contents fully evaluated when first passed in from the top level. It means that extra evaluation is done upon each and every subsequent access of each table entry, instead of just once per entry up front.
The primary rationale for this behaviour at this point is to maintain backwards compatibility.
Wouldn't it be better to allow the programmer to choose whether to evaluate the table entries fully (just the once up front, or...)? I can see that it's tricky, of course. Suppose one wants to definitely not fully evaluate all the entries, and that one also wants somehow to get the level of evaluation that you described of some particular table entry. A mechanism for that is desirable. Having such a mechanism always take place is less desirable, although that is the current state of affairs -- excluding hacks to get around hacks.
Note that rtable-based Arrays don't have the same last-name evaluation rules that table-based arrays do. If possible use the newer data structure. Failing that, as a programmer you do have the choice of how it evaluates via assigning to a local variable, or leaving it as a parameter, or calling eval() explicitly.
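A quick way to see the difference in a session -- this is a minimal sketch of the evaluation behaviour described above:

```
# table-based arrays have last-name evaluation:
t := array([x, y]):
t;        # displays just the name t
eval(t);  # displays the contents

# rtable-based Arrays do not:
A := Array([x, y]):
A;        # displays the contents directly
```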
The many test failures that would occur when changing the evaluation rule for table parameters presumably arise because code was written to work around the current behaviour. Such test failures can't be much of a justification in and of themselves. But the behaviour still seems hackish, and it makes Maple's evaluation rules more complicated. I wouldn't know where to find them in the help pages, other than ?updates,v40.
Given that most workarounds would attempt to prevent evaluation, probably by simply using a local instead of a param, I don't see how the test failures would be caused by code gymnastics. Instead, I think the code is relying on the current behaviour. Even if we were to fix our own code, it wouldn't be nice to force our users to also dig deep into theirs.
For some (somewhat relevant) background, I found this interesting little excerpt from ?updates,v40:
New evaluation rules for Locals. The evaluation rules for local variables have changed to "one-level" evaluation instead of full evaluation. This means that evaluation of local variables is the same as evaluation of formal parameters, but different than that of global variables. For example, suppose the input to a Maple session is

a := b;
b := 1;
a;

Then what does the last statement yield? In previous versions of Maple, a; evaluated to 1. In this version, if these statements are entered in an interactive session (a and b are global variables) then a; still evaluates to 1. However, if a and b are local variables and these statements appear inside a Maple procedure, then a; evaluates to b. Users should not notice any differences in normal usage of Maple. If, however, it is desired to have full evaluation or one-level evaluation explicitly, the eval function (see below) provides this functionality.
The reason 1-level evaluation inside procedures works is that the initial call from the top level causes a full evaluation on the arguments. Subsequent inside-proc calls don't need to fully evaluate their parameters since that was already done once. Objects with last-name evaluation, like tables, don't get the full-eval going from the top level, so references into a table parameter get double-evaluation. Note that table locals get 1-level eval, so we can easily compare its effects:

> a := array([x,y,z]):
> x := 5:
> proc(a) a[1]*a[2]; end(a);
                                 5 y
> proc(a) local b; b := a; b[1]*b[2]; end(a);
                                 x y

When 'a' is a parameter, a[1] gets 2-level evaluation, so you can see the value of x in the result; x is stored in the table as a symbol. 1-level eval, as in the second example, evaluates b[1] to x. The second level of eval is needed to go from x to 5. Changing 2-level eval of tables back to 1-level results in hundreds of test failures over a wide variety of functions, not just linalg, so this isn't something we can easily change. Note that the above trick of assigning the parameter to a local can be used as a workaround if you really don't want 2-level eval.