acer

32348 Reputation

29 Badges

19 years, 329 days
Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are replies submitted by acer

Exporting `i` might just be super defensive -- an attempt to reduce interference between scripted Maple code to test stuff, and the stuff itself.

There are a few bits of interest when one looks around the names on the ?UndocumentedNames help-page. Take the lines that contain just 1;, 2;, and 3; in :-SaveSession. Is that done for some purpose related to %, %%, and %%%?

acer


I sometimes wonder to what degree the apparent ubiquity of bugs may relate to the strength of typing. (See python#Typing.)

I also wonder why there is so little mention in the Maple manuals of techniques and facilities for testing one's code. How should one write unit tests for one's own programs and packages, for example? I'd like to put something here if I can find time, possibly using what is found in TestTools.
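A minimal sketch of what such a unit-test file might look like. TestTools is itself largely undocumented, so the calling convention shown here -- Try(id, computed, expected) -- is an assumption based on how Maple's own test files appear to use it:

```
# hypothetical test file, run in batch with: maple -q mytests.tst
with(TestTools):                      # undocumented package; interface may vary

# Try(id, computed, expected) reports a failure when the values differ
Try(100, int(x^2, x), x^3/3);
Try(200, add(i, i = 1 .. 10), 55);
```

The numeric ids are arbitrary labels; a passing Try is silent, which makes such a file suitable for running in batch mode.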

acer


You have ITS90 defined as a table(), with its entries being the procedures. You've essentially set it up as an old "table-based" package. The modern approach is to set such things up as a "module-based" package.

So, set ITS90 up as a module(). Declare Matrices A, B, C, and Q as module locals, with their explicit definitions inside the module body. Those Matrices only need to be assigned once, in order to be accessible in the procedure exports of the module (package). See my short example using module m and Matrix A above as a template for this. If you do that, then it should work, and entries of the Matrices should resolve when you call the package's procedures in your restarted sessions. For this recommended approach you should savelib to a .mla archive file.

What you are trying to do is not unusual. It's the typical sort of task for which modules were designed.

Alternatively, if you insist on using the older out-of-vogue table-based "package" set-up, then you would need to save both the table ITS90 as well as the Matrices A, B, C and Q to the .mla file. (A .m file might also work, but I would not recommend it.)

See the help-page on writing packages.
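As a rough template of that recommended setup (the export name Temp and the Matrix data below are placeholders, not your actual definitions):

```
ITS90 := module()
    option package;
    export Temp;                       # your procedures become exports
    local A, B, C, Q;                  # the Matrices as module locals

    # assigned once, in the module body, so every export can see them
    A := Matrix(2, 2, [[1.0, 2.0], [3.0, 4.0]]);   # placeholder data

    Temp := proc(x)
        A[1, 1] * x;                   # an export resolves local A directly
    end proc;
end module:

# create the archive once, then save the whole module into it
LibraryTools:-Create("its90.mla");
LibraryTools:-Save('ITS90', "its90.mla");
```

After a restart, libname needs to include the directory holding its90.mla so that with(ITS90) can find the package.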

acer


I wouldn't be surprised if it needed 32-bit glibc and the runtime linker ld-linux at least, because the mfsd binary under Maple's bin.X86_64_LINUX directory is dynamically linked against them. I.e.,

%uname -i
x86_64

%file bin.X86_64_LINUX/mfsd
bin.X86_64_LINUX/mfsd: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), for GNU/Linux 2.0.0, dynamically linked (uses shared libs), not stripped

%ldd bin.X86_64_LINUX/mfsd
        linux-gate.so.1 =>  (0xffffe000)
        libc.so.6 => /lib/libc.so.6 (0xf7daf000)
        /lib/ld-linux.so.2 (0xf7efa000)

So which package provides glibc? Is it glibc-2.10.1-2.i586.rpm? (Check carefully before you install anything that might clobber your 64-bit runtime linker.)

As far as "not being able to locate something about the network" goes, wouldn't it help to state explicitly what the something was?

acer

See Law 1.

acer


What would you want a Trig package to do that cannot already be done in stock Maple?

acer

It can help to either provide judiciously chosen finite ranges or initial points. If (semi-)infinite or very large ranges are provided then it can be difficult to generate automatically an initial point for Newton's method that will converge. For large ranges it might happen that all the randomly (internally) chosen starting points do not succeed. (In your example, even the end-point p=0,q=0 does not necessarily succeed for every i=1..5.)
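As an illustration with a simple hypothetical system (these equations are mine, not the poster's), supplying finite ranges or explicit starting values steers fsolve's internal choice of initial point:

```
eqs := { p^2 + q - 7 = 0, p + q^2 - 11 = 0 }:   # hypothetical system

# finite ranges restrict where starting points are generated
fsolve(eqs, {p, q}, p = 0 .. 5, q = 0 .. 5);

# or give explicit starting values instead of ranges
fsolve(eqs, {p = 2, q = 3});
```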

For example (this just happens to work too),

> print(i,fsolve({eq1,eq2},{p=100,q=0},p=0..infinity,q=0..infinity)) od:
                     1, {p = 3.270541946, q = 9.213310232}

                     2, {p = 12.08067575, q = 33.62875327}

                     3, {p = 27.22370650, q = 75.59475740}

                     4, {q = 131.5749898, p = 47.38042228}

                     5, {p = 69.94663338, q = 194.5364460}

acer


With discont=true, the plot of myPDF has the spike I expected to see at x=-10, and the plot of myCDF has the jump there that I expected.

acer


Firstly, you'll need outputoptions=[datatype=float[8]] on the RandomMatrix calls, or else Maple may spend a long time generating software-float Matrices (which would just end up being converted to float[8] before being sent to MKL).

Next, be careful that it is the MatrixMatrixMultiply operation that you are timing. That will call the BLAS function dgemm (which is in ATLAS or MKL).

Also, you can compare the results of time() against those using time[real](). In Maple, the time() command returns the sum of the CPU-time spent in all threads, I believe. And time[real]() reports wall-clock time. On a machine not running any other application, the time[real]() result is close to how much CPU-time is used. Hence time() usually shows the same general result even when external code (eg, MKL) runs threaded, while time[real]() can show the speedup.

You won't see any speedup or multicore use during the RandomMatrix call in Maple 13. I'm not sure about MatrixInverse (it can be done two ways, using LAPACK's inverse routine or its linear-solver routine for full rectangular storage, and both might not be implemented in every ATLAS/MKL version, so it would depend on which was used). The Matrix addition operation might hit something threaded in MKL, but it's only O(n^2), so the speedup would be less noticeable than in the other O(n^3) operations of multiplication and LU (for linear-solving/inversion).
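A sketch of the kind of comparison described above (the size n is arbitrary; adjust it so the multiplication takes a measurable time on your machine):

```
with(LinearAlgebra):
n := 2000:
A := RandomMatrix(n, n, outputoptions = [datatype = float[8]]):
B := RandomMatrix(n, n, outputoptions = [datatype = float[8]]):

st := time():         # total CPU time, summed across threads
rt := time[real]():   # wall-clock time

MatrixMatrixMultiply(A, B):           # dispatches to BLAS dgemm

printf("cpu: %.2f s,  real: %.2f s\n", time() - st, time[real]() - rt);
# with a threaded BLAS, the cpu figure can be several times the real figure
```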

acer
