MaplePrimes Activity

These are replies submitted by acer

I forgot about that. Brilliant. Thanks, Robert.

It does pretty much exactly what my code above does. (Like my code, it too overwrites the original Matrix with the superimposed L and U factors.) And so the determinant of a 6000x6000 nonsparse float[8] Matrix takes 280MB -- pretty much just the memory needed to hold the Matrix itself.

So the original poster could just ensure that the Matrix is created with datatype=float[8], and not with storage=sparse, and then do your LUDecomposition call above.
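
For illustration, a rough sketch of that setup (the RandomMatrix call here is just a stand-in for however the Matrix is actually built, and Robert's exact LUDecomposition call is not repeated):

> with(LinearAlgebra):
> # dense (rectangular) storage with hardware double entries; no storage=sparse
> M := RandomMatrix(6000, 6000, 'outputoptions' = ['datatype' = float[8]]):
> Determinant(M);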

acer

I do not know why there may be a memory restriction on your machine. What operating system is it?

acer

I wish that all Maple manuals could be written as well as this was.

acer

The code fragment like,

try
  # stuff
catch:
  error;
end try;

will rethrow the error. And so, yes, unless that fragment is within the scope of some other try..catch or traperror (possibly at a higher level), the computation will stop (as usual) and the error message will be emitted.

Sometimes code is written that way so that errors do not appear to come from mysterious inner routines deep in the bowels of Maple. I wasn't sure whether that was what you might have been after.

I'm sorry but I don't know why the error form is different when emitted by parameter-processing.

acer

At first it seemed that you were just interested in knowing how to manipulate the last "error" results, and perhaps control how they got reprinted.

But if all you need to do is catch something, insert a command, and then rethrow the very same error, then that too can be done.

> F:=proc(LL::{set(list(integer)), list(list(integer))})
>     LL;
>  end proc:
>
> for B in [{[1,2]},{[1,1/2]}] do
>    try
>       print(F(B));
>    catch:
>       V := 9:
>       error;
>    end try:
> od:
                                   {[1, 2]}
 
Error, invalid input: F expects its 1st argument, LL, to be of type
{list(list(integer)), set(list(integer))}, but received {[1, 1/2]}
> V;
                                       9

And just for fun, instead of printing you could use WARNING(),

> WARNING(StringTools:-FormatMessage(lastexception[2..-1]));
Warning, invalid input: F expects its 1st argument, LL, to be of type
{list(list(integer)), set(list(integer))}, but received {[1, 1/2]}

acer

In Maple 11 the help-page ?lasterror (or ?traperror) says that the functionality is obsolete, and that try..catch should be used instead.

In Maple 12 the help-page ?lasterror (or ?traperror) shows that it has gone from "obsolete" to "deprecated".

There is an Example on the ?try help-page which illustrates the use of lastexception. And lastexception is mentioned on the ?error help-page. But lastexception should be described in the main body of the ?try help-page, instead of just occurring in an Example on that page.

Another way to look at it is that the replacement of traperror by try..catch mirrors the replacement of lasterror by lastexception. The pair of replacements could be documented together more clearly.

The help-page ?deprecated could mention it too. It has a double-column list that shows try..catch as replacing traperror but has no line indicating that lastexception replaces lasterror.

acer

Use lastexception instead.
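
A small sketch, just for illustration (the error message here is made up):

> try
>    error "an example error with %1", 42;
> catch:
> end try:
> # lastexception holds the operands of the most recently raised exception
> lastexception;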

acer

Are you using procedures to split up the subtasks? When you write of periodically "outputting" a matrix, what precisely do you mean? Is it writing results to a file? Are none of those results needed later on? The question about procedures may be relevant -- Maple's memory management and garbage collection rely on objects no longer being referenced, and that in turn is linked to the concept of levels (of procedure calls).
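
Purely as an illustrative sketch (the procedure and its contents here are hypothetical), wrapping each subtask in a procedure means its local objects become unreferenced, and hence collectable, once the call returns:

> doSubtask := proc(n::posint)
>    local temp;
>    temp := LinearAlgebra:-RandomMatrix(n, n, 'outputoptions' = ['datatype' = float[8]]);
>    # temp stops being referenced when this procedure returns,
>    # so the garbage collector is free to reclaim it
>    return LinearAlgebra:-Trace(temp);
> end proc:
> for i to 10 do results[i] := doSubtask(2000) end do: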

It'd be so much easier to help if you could upload your source here (using the file manager buttons).

acer

To create Matrices and Vectors with double precision real (C double) entries, stored in contiguous memory arrays, use the option datatype=float[8] when calling the Matrix and Vector constructors. And similarly use complex[8] for paired hardware doubles representing complex entries.

To ensure that LinearAlgebra operations on these float[8] Matrices and Vectors are always done using the hardware double precision external libraries (and not the arbitrary precision software float external libraries), do one of the following:

  • set UseHardwareFloats := true;
  • or keep Digits less than evalhf(Digits)

You might also try setting infolevel[LinearAlgebra]:=2 since that will allow LinearAlgebra commands to print additional information about which external routine is being used, and about whether copying from hardware to software datatypes is taking place. (The external function hw_f06ecf is hardware BLAS function daxpy, for example, while sw_f06ecf is the software float equivalent. The hw_ or sw_ prefix which can appear in this printed information is thus a key.)
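
A small sketch of that kind of setup (the sizes and the MatrixVectorMultiply call are just for illustration):

> UseHardwareFloats := true:
> infolevel[LinearAlgebra] := 2:
> A := Matrix(1000, 1000, (i,j) -> evalf(1/(i+j)), 'datatype' = float[8]):
> v := Vector(1000, i -> evalf(sin(i)), 'datatype' = float[8]):
> # the userinfo printed here should mention hw_ prefixed external routines,
> # showing that the hardware double precision path is in use
> w := LinearAlgebra:-MatrixVectorMultiply(A, v):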

The option hfloat switch for a procedure does not accomplish the items above. That is to say, it does not enable float[8] Matrix and Vector construction without the explicit datatype options being provided, it does not enforce use of hardware external libraries, and it does not prevent internal software float Matrix/Vector copying.

Instead, the hfloat option of a procedure affects scalars. It toggles the automatic creation of scalar floats as HFloat objects, and it toggles the retrieval of scalar entries from float[8] Matrices and Vectors also to be HFloat objects.
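
For instance, a quick (hypothetical) illustration of that scalar effect:

> p := proc(x) local s; option hfloat; s := 1.5; s + x end proc:
> # with option hfloat the float arithmetic inside p is done on HFloat objects,
> # so lprint of the result should show an HFloat(...) form
> lprint(p(2.5));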

acer

This sounds similar to what I had suggested here (last paragraph). I actually implemented a rough version of this, which allowed me to extract the representation being used by Maple internally to store the GMP result of an evalf[n](Pi) call. And then a quick conversion to base 2 was easy. But I didn't bother to fix it up nicely for general use. I used an external call to DAXPY to copy the memory to an appropriate rtable, using an offset to the address of the DAG as the copying source. Presumably one could do the same thing using assembler, again using the address of the DAG of the stored GMP number.

The essence is just that Maple already has a nice internal representation in some 2^m base of the evalf[n](Pi) result, so there's not much more to do ideally than to examine just that.

Of course, as Jacques mentioned, there may be even better techniques that work by directly generating a result in base 2^m, avoiding the radix-10 high precision computation of the number as the initial step. That wouldn't be done in Maple proper, I guess.

acer

I would think that this can now be done even faster in Maple 12 using the new Bits package.

(edited: By "this" I mean conversion to base 2, not the original stated end goal of generating long bit strings with certain statistical properties.)
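
For example, a rough sketch (scaling the decimal approximation to an integer first, which is not the most refined approach):

> x := evalf[30](Pi):
> m := trunc(x*10^29):
> # Bits:-Split breaks the integer m into its base-2 digits
> b := Bits:-Split(m):
> nops(b);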

acer

Hi Joe, is BytesPerWord the same as kernelopts(wordsize)/8 ?

You might prefer to make that integer datatype's width dynamic in your code above, rather than hardcoded at preprocessor (i.e. read) time with a $define.
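
For instance, a hypothetical sketch of computing the width at run time rather than via $define:

> bpw := kernelopts('wordsize')/8:
> # the index of integer[] gets evaluated, so this creates an integer[4] or
> # integer[8] Vector depending on the platform's word size
> V := Vector(10, 'datatype' = integer[bpw]):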

acer

The problem appears to be line 30 (Maple 12) of the routine ArrayTools:-AddAlongDimension2D.

> restart:
> kernelopts(opaquemodules=false):
> showstat(ArrayTools:-AddAlongDimension2D);

The line (30) which creates the object to contain the result looks like,

  x := Vector[row](nrows,('datatype') = Datatypey);

It should probably instead be,

  x := Vector[row](ncols,('datatype') = Datatypey);

That Vector is acted on in-place by NAG f06ecf, which is really just the BLAS function daxpy. Line 33 shows that it will try to add ncols entries from y to whatever is already in ncols entries of x (since incx, the x stride, is 1). So x had better be of length at least ncols (and not just nrows, which in this example is smaller).
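
An example of the kind of input that presumably trips this up (untested here) would be a Matrix with more columns than rows:

> A := Matrix(2, 3, (i,j) -> evalf(i+j), 'datatype' = float[8]):
> # nrows=2 is less than ncols=3, so the too-short result Vector would get
> # written past its end by the daxpy call
> ArrayTools:-AddAlongDimension(A, 1);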

acer
