acer

32333 Reputation

29 Badges

19 years, 323 days
Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are replies submitted by acer

See Law 1.

acer


What would you want a Trig package to do that cannot already be done in stock Maple?

acer

It can help to provide either judiciously chosen finite ranges or initial points. If (semi-)infinite or very large ranges are supplied, then it can be difficult to automatically generate an initial point from which Newton's method will converge. For large ranges it can happen that none of the (internally chosen) random starting points succeeds. (In your example, even the end-point p=0, q=0 does not necessarily succeed for every i=1..5.)

For example (this just happens to work too),

> for i from 1 to 5 do
>   print(i,fsolve({eq1,eq2},{p=100,q=0},p=0..infinity,q=0..infinity));
> od:
                     1, {p = 3.270541946, q = 9.213310232}

                     2, {p = 12.08067575, q = 33.62875327}

                     3, {p = 27.22370650, q = 75.59475740}

                     4, {q = 131.5749898, p = 47.38042228}

                     5, {p = 69.94663338, q = 194.5364460}

acer


With discont=true, the plot of myPDF had the spike I expected to see at x=-10, and the plot of myCDF had the jump there that I expected.
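A minimal sketch of the idea, using a hypothetical CDF (not the OP's actual myCDF) with a jump at x=-10:

> myCDF := piecewise(x < -10, 0, 1 - (1/2)*exp(-(x+10)/5)):
> plot(myCDF, x = -20 .. 20, discont = true);

Without discont=true, the plotter would join the two branches with a spurious near-vertical line segment at x=-10.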

acer


Firstly, you'll need outputoptions=[datatype=float[8]] on the RandomMatrix calls, or else Maple may spend a long time generating software-float Matrices (which would just end up being converted to float[8] before being sent to MKL).

Next, make sure that it is the MatrixMatrixMultiply operation you are timing. That calls the BLAS function dgemm (which is in ATLAS or MKL).

Also, you can compare the results of time() against those of time[real](). In Maple, the time() command returns the sum of the CPU time spent in all threads, I believe, while time[real]() reports wall-clock time. On a machine not running any other application, the time[real]() result is close to the elapsed time. Hence time() usually shows the same general result even when external code (e.g., MKL) runs threaded, while time[real]() can show the speedup.

You won't see any speedup or multicore use during the RandomMatrix call in Maple 13. I'm not sure about MatrixInverse: it can be done two ways, using LAPACK's inverse routine or its linear-solver routine for full rectangular storage, and both might not be implemented in every ATLAS/MKL version, so it would depend on which was used. The Matrix addition operation might hit something threaded in MKL, but it is only O(n^2), so the speedup would be less noticeable than in the other O(n^3) operations, multiplication and LU (for linear solving/inversion).
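A sketch of that kind of timing comparison (the size 2000 is just illustrative):

> with(LinearAlgebra):
> A := RandomMatrix(2000, 2000, 'outputoptions' = ['datatype' = float[8]]):
> B := RandomMatrix(2000, 2000, 'outputoptions' = ['datatype' = float[8]]):
> st, str := time(), time[real]():
> C := MatrixMatrixMultiply(A, B):   # calls BLAS dgemm in ATLAS/MKL
> time() - st, time[real]() - str;   # total CPU time vs wall-clock time

With a threaded MKL, the first number should stay roughly constant while the second drops as more cores get used.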

acer


In 2D Math entry mode, open the "Operators" palette. The fourth and fifth items from the left in the third-to-last row are double vertical bars. Placed around a Matrix or Vector, these appear to cause LinearAlgebra:-Norm to be called. They also seem to allow subscripting the right double bar as a means of specifying which norm.
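For comparison, the typed 1-D equivalent of those palette items would be something like:

> with(LinearAlgebra):
> V := <3, 4>:
> Norm(V, 2);          # Euclidean norm, i.e. ||V|| with subscript 2
                                   5

Note that for Vectors the default for Norm is the infinity norm, so the explicit 2 is needed to get the Euclidean norm.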

That was in Maple 13. I didn't check Maple 12.

I'm not sure whether there is any difference between those two palette items, the Verbar and the DoubleVerticalBar. They also both appear in the Relational palette, and the DoubleVerticalBar also appears in the Fenced palette.

Unfortunately, neither appears when I tried command-completion on a single typed bar (pipe) in 2D Math mode. I will submit an SCR.

acer

What's the difficulty? Is it that you didn't expect `Norm` to invoke Student:-LinearAlgebra:-Norm? Or is it that it issues an error instead of returning unevaluated when passed an unassigned name? Why do you consider it wrong behaviour?

acer

Sure, although in your particular example the subspace spanned by the set of eigenvectors associated with that close pair (cluster) of eigenvalues is the same. I don't know whether the Original Poster's Matrix is floating-point, even though the description of its large size might hint at that. I was trying to suggest that in the floating-point scenario it would be up to the OP to state how close eigenvalues are to be treated.
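As a hypothetical illustration (not the OP's Matrix): for a tight cluster the individual computed eigenvectors can vary with roundoff, but together they span essentially the same subspace,

> with(LinearAlgebra):
> A := Matrix([[1.0,   1e-10, 0.0],
>              [1e-10, 1.0,   0.0],
>              [0.0,   0.0,   2.0]], 'datatype' = float[8]):
> evals, evecs := Eigenvectors(A):
> evals;   # two eigenvalues clustered near 1.0, one at 2.0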

acer


I've suggested this elsewhere: allow the users to rate the content. If that were done, then the frontpage could have a section on top-rated content.

I have a feeling that things like the finance-ticker-stock-quote sockets code in alex_01's and Samir's posts would be the sort of fascinating content to float to the top. Also, Alec's and Robert's frequent tours de force would likely bubble to the top too.

acer

This has been improved in Maple 13.

In Maple 12.02,

> with(LinearAlgebra):
> N := 500:
> A := RandomMatrix(N,'density'=0.1,
>                   'outputoptions'=['storage'='sparse',
>                                    'datatype'=float[8]]):

> st,ba,bu := time(),kernelopts(bytesalloc),kernelopts(bytesused):
> Matrix(A,'storage'='rectangular','datatype'=float[8]):
> time()-st,kernelopts(bytesalloc)-ba,kernelopts(bytesused)-bu;
                            12.392, 2424388, 626765

In Maple 13.01,

> with(LinearAlgebra):
> N := 500:
> A := RandomMatrix(N,'density'=0.1,
>                   'outputoptions'=['storage'='sparse',
>                                    'datatype'=float[8]]):

> st,ba,bu := time(),kernelopts(bytesalloc),kernelopts(bytesused):
> Matrix(A,'storage'='rectangular','datatype'=float[8]):
> time()-st,kernelopts(bytesalloc)-ba,kernelopts(bytesused)-bu;
                            0.004, 2031244, 504051

The improvement is more significant as N grows. For N=1000, in Maple 12.02,

> with(LinearAlgebra):
> N := 1000:
> A := RandomMatrix(N,'density'=0.1,
>                   'outputoptions'=['storage'='sparse',
>                                    'datatype'=float[8]]):

> st,ba,bu := time(),kernelopts(bytesalloc),kernelopts(bytesused):
> Matrix(A,'storage'='rectangular','datatype'=float[8]):
> time()-st,kernelopts(bytesalloc)-ba,kernelopts(bytesused)-bu;
                           202.032, 9828600, 2482992

and in Maple 13.01,

> with(LinearAlgebra):
> N := 1000:
> A := RandomMatrix(N,'density'=0.1,
>                   'outputoptions'=['storage'='sparse',
>                                    'datatype'=float[8]]):

> st,ba,bu := time(),kernelopts(bytesalloc),kernelopts(bytesused):
> Matrix(A,'storage'='rectangular','datatype'=float[8]):
> time()-st,kernelopts(bytesalloc)-ba,kernelopts(bytesused)-bu;
                            0.008, 8059452, 2004043

acer
