Dave L

MaplePrimes Activity


These are replies submitted by Dave L

It may be of interest to the OP that the methodology Joe has just described is how the vast majority of Maplesoft's Library code for Maple's packages is handled.

Thank you for your report.

If anyone else running Maple 2022.0 on Ubuntu 20.04 is unable to reproduce this, then I'd be interested in knowing what hardware CPU is involved.

Naturally, I'd also be interested in hearing from anyone who can reproduce this.
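For anyone on Linux who wants to include the CPU details with a report, one quick way to get the model from within a Maple session is below (a sketch only; it assumes ssystem is permitted in your environment and that /proc/cpuinfo exists):

ssystem("grep -m1 'model name' /proc/cpuinfo")[2];

The result of ssystem is a list whose second entry is the command's output.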

Thanks,
Dave Linder, Math Group, Maplesoft

I deleted a comment that contained the body of this copyrighted Library procedure. See also the results from print(`evala/toprof`).

op(3,eval(`evala/toprof`));

remember, Copyright (c) 1991 by the University of Waterloo. All rights reserved.

I'll see about getting an updated help archive for the package onto the Application Center, for use with Maple 2016.0.

I'll also submit a ticket against the reported .hdb/.help conversion problem with Maple 2016.0.

In the meantime, please don't post copies of someone else's material here without permission, even if you're running it through a help-format conversion mechanism. It breaches the terms of use of this site, and I'll delete such responses.

Thanks to all for your consideration,
Dave Linder
Mathematical Software, Maplesoft

@Markiyan Hirnyk 

op(3,eval(`evalf/int`));


                      Copyright (c) 1997 Waterloo Maple Inc. All rights reserved.

Don't post the entire body of copyrighted procedures for which you do not own the copyright. See the Conduct section of the Terms of Use of this site.

@David Mazziotti 

It looks like MKL is looking for libmkl_avx.so or libmkl_avx2.so. I have no reports of such problems on hardware that doesn't support AVX/AVX2 instruction sets.

On 64bit Linux, Maple 18.00 uses ATLAS (like Maple 16 and 17) rather than MKL. The MKL version in use in Maple 18.01 on that platform is 2013.1.117 (or version 11.0.1.117), which you may be able to find with the Intel compiler/composer suite.

The MKL offers several different linkage models. In the model Maple is using, some shared libraries (MKL redistributables) must be present but are not actually linked to the application (Maple) at build time. So you may be able to copy in the right files, provided the versions agree.

You may be able to enable the MKL (nonfunctioning on your particular AVX/AVX2 Xeon machine) by copying, if you have them (see the note above about the Intel compiler/composer suite),

  composer_xe_2013.1.117/mkl/lib/intel64/libmkl_avx.so

  composer_xe_2013.1.117/mkl/lib/intel64/libmkl_avx2.so

to your bin.X86_64_LINUX directory. Augmenting the library path might also suffice. Your path editing may well have fixed that aspect for you, if the versions agree.
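If it helps to locate that target directory, Maple can report it from within a session (the actual path will vary with your installation):

kernelopts(bindir);

On 64bit Linux that returns the full path ending in bin.X86_64_LINUX.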

You are quite right that MKL 11.x does indeed make much of Maple's double-precision datatype=float[8] LinearAlgebra faster. See this post.

We'll investigate further, to see about such additional binaries that might be required for hardware such as yours. Thanks very much for the report!

It may turn out that this is not the cause of the failure in Maple 18 of the line-search implementation in your code. You mentioned "optimization", which may include linear regression. The LAPACK routine used by LinearAlgebra:-SingularValues changed in 18.00 to DGESDD. It's difficult for me to tell, without more specifics of the code, whether that is pertinent here.
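If you want to probe that aspect in isolation, a quick check might look like the following (a sketch only; the size is a placeholder):

M := LinearAlgebra:-RandomMatrix(500, 500, outputoptions=[datatype=float[8]]):
LinearAlgebra:-SingularValues(M);  # returns the Vector of singular values

Comparing the timing and results of that call across 17.02 and 18.00 could help indicate whether the LAPACK change is involved.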

Please contact me through this site, if you feel that you can privately disclose any relevant portion of the code.

Dave Linder 
Mathematical Software, Maplesoft

Here are some details of platform-specific performance improvements.

On Linux® and OS X®, the performance of many commands related to hardware double-precision linear algebra is improved in the 64bit versions of Maple 18.01.

The data in the plots below represent wall-clock timing (versus Matrix size) of the average of 20 repetitions of calls to a sample of LinearAlgebra package commands on random datatype=float[8] Matrices.

For each plot all computations were performed in the same session, using defaults for garbage collection settings. The multi-core hosts on which the computations were performed had no other significant load.
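As a rough illustration of that methodology, here is a sketch of the kind of timing harness used (the Matrix size and the command timed are placeholders, not the actual benchmark script):

restart:
N := 2000:
A := LinearAlgebra:-RandomMatrix(N, N, outputoptions=[datatype=float[8]]):
B := LinearAlgebra:-RandomMatrix(N, N, outputoptions=[datatype=float[8]]):
reps := 20:
st := time[real]():
for i to reps do C := A . B: end do:
avgtime := (time[real]() - st)/reps;  # average wall-clock seconds per call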

Here are results for 64bit Maple for Linux®, run on a quad-core 2.80GHz Intel Core® i5. The data includes results for Maple 16.02, Maple 17.02, and 18.01.

LA1801Linux64.mw

Here are results for 64bit Maple for OS X®, run on a quad-core 2.63GHz Intel Xeon®. The data includes results for Maple 17.02 and 18.01.

LA1801OSX.mw

 

Dave Linder

Mathematical Software, Maplesoft

 

Intel®, Intel Core®, and Xeon® are trademarks of the Intel Corporation. OS X® is a trademark of Apple Inc. Linux® is the registered trademark of Linus Torvalds in the U.S. and other countries. All other brands and trademarks are the property of their respective owners.

@Thomas Richard Thanks to all. I shall enter this as an SCR in our database and assign it to the appropriate developer.

Dave L, Maplesoft

There have been quite a few questions on Mapleprimes about numerically deriving additional information from the solutions returned by dsolve(..,numeric). That includes higher derivatives, integrals, inverting for particular function values, etc.

As it happens, I was having coffee last week with the principal developer of dsolve,numeric, and I asked him pointedly about all this. The answer I got back was pretty unequivocal: dsolve itself is the best thing to use in these situations, in general, because it will try to ensure that the requested accuracy (whether specified by default or by explicit tolerance options) is attained for the requested quantities.

Consider fdiff (or evalf@D, which has a hook to fdiff). It will not compute as carefully as dsolve,numeric would. If dsolve,numeric detects difficulties near a requested point then it can automatically adjust stepsizes, etc. But that won't happen when using fdiff.

Let's compare the two methods given so far, pushed out only as far as the modest value of t=2.0. Fortunately we can compare against the exact result, which can be evalf'd at high working precision.

As we can see below, the result from dsolve,numeric itself is pretty much accurate up to its own default tolerances (which are incorporated into the procedures it returns). The nested fdiff results, on the other hand, go wildly wrong as Digits changes.
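For reference, here is a minimal reconstruction of the setup (the ODE is inferred from the exact solution below; the thread's actual definitions of b and D2X may differ in detail):

sys := {diff(x(t),t) = t^3*x(t)}: IC := {x(0) = 1}:
dsol := dsolve(sys union IC, numeric, output=listprocedure):
X := eval(x(t), dsol):
# second derivative through the ODE itself: x'' = (3*t^2 + t^6)*x
D2X := t -> (3*t^2 + t^6)*X(t):
# repeated numerical differentiation of the returned procedure, via fdiff
b := t -> fdiff(X, [1,1], [t]):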

solexact:=rhs(dsolve(sys union IC));

                              /1  4\
                           exp|- t |
                              \4   /

d2xexact:=diff(solexact,t,t):

evalf[500](eval(d2xexact,t=2)): evalf[15](%);

                        4149.45940251896

Digits:=10:

b(2.0);

                          4131.100000

D2X(2.0);

                        4149.459397643907

Digits:=16:

b(2.0);

                                         8
                     1.990000000000000 10 

D2X(2.0);

                        4149.459397643907

Digits:=20:

b(2.0);

                               0.

D2X(2.0);

                       4149.459397643907

Certainly there are easier ways for lots of possible examples. But that is relatively unimportant in terms of the general question.

Dave Linder
Mathematical Software, Maplesoft

There is also ArrayTools:-BlockCopy, which might be useful here. It's not completely clear whether it applies, because the submitter did not state whether S1,S2,S3,... must necessarily exist as Matrices in their own right. Perhaps they needn't be formed separately from M at all, which would bring additional savings in memory.

ArrayTools:-Alias can save the need to form full copies when converting to 1 dimension, but it cannot always be used even earlier to remove the need to explicitly form wholly interior blocks S[i] of M as separate Matrices. That's because Alias needs a contiguous section of memory, while a subblock generally occupies memory as separated segments.
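For instance, a whole Matrix can be viewed in 1 dimension without any copying, since its full data block is contiguous (a small sketch):

A := LinearAlgebra:-RandomMatrix(3, 6, outputoptions=[datatype=float[8]]):
V := ArrayTools:-Alias(A, [18]):  # 1-D view onto the same memory, no copy

The same cannot be done for a wholly interior 3x3 subblock, precisely because its entries are not contiguous in memory.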

Using ArrayTools:-BlockCopy (or ArrayTools:-Copy for simpler cases), whole interior blocks of data from Matrix `M` can be copied directly into the right parts of Matrix `out`.

As an example:

restart:

N:=60000:
M:=LinearAlgebra:-RandomMatrix(3,3*N):

st:=time[real]():
for i from 1 to N do
   S[i]:=M[1..3,3*(i-1)+1..3*i]:
end do:
out:=ArrayTools:-Concatenate(2,
           seq(convert(S[i],Vector),i=1..N)):
time[real]()-st;
                             65.493

st:=time[real]():
Q:=Matrix(3*3,N):
for i from 1 to N do
   ArrayTools:-BlockCopy(M,(i-1)*9,3,3,3,Q,(i-1)*9,9,9,1);
end do:
time[real]()-st;
                             0.170

LinearAlgebra:-Norm(Q-out);
                               0

Dave Linder
Mathematical Software, Maplesoft

@maple fan You can do a rough test by comparing the `time` command with its variant `time[real]`. The `time` command reports something like the sum of CPU time spent in all (kernel) threads, while `time[real]` reports wall-clock time. So if the `time` result is noticeably larger than the `time[real]` result, more than one thread was doing work.

restart:
M:=Matrix(2000,2000,fill=1.0,datatype=float[8]):

time( M.M );

time[real]( M.M );

No, this does not affect all Maple commands. What is affected is dense, double-precision linear algebra arithmetic and some computations which rely heavily upon that. See the first paragraph of the parent post.

@PatrickT I'm not sure how you meant your comment, exactly, but I should point out to readers that a procedure's options are not the same thing as its optional arguments. An option is part of the procedure's definition, not something that gets passed in a call to the procedure.

> oldsetting:=interface(prettyprint=1):

> f:=proc(x,{method::identical(foo,bar):=bar})
> global g;
> local t;
>    t:=g[method](x);
> end proc;

            f := proc(x, {method::(identical(foo, bar)) := bar})
            local t;
            global g;
              t := g[method](x)
            end proc;

> subsop(3=remember,eval(f));

               proc(x, {method::(identical(foo, bar)) := bar})
               local t;
               option remember;
               global g;
                 t := g[method](x)
               end proc;

Also, a more careful approach is to augment any options already present. The procedures returned by `dsolve/numeric` may not already have any "options", but in general you might not want to clobber existing options when adding new ones.

f:=proc(x,{method::identical(foo,bar):=bar})
option blue;
global g;
local t;
   t:=g[method](x);
end proc;

            f := proc(x, {method::(identical(foo, bar)) := bar})
            local t;
            option blue;
            global g;
              t := g[method](x)
            end proc;

subsop(3=op({op(3,eval(f)),remember}), eval(f));

               proc(x, {method::(identical(foo, bar)) := bar})
               local t;
               option remember, blue;
               global g;
                 t := g[method](x)
               end proc;

interface(prettyprint=oldsetting):

Also, `option remember` for procedure f, by itself without other options, means that computed results of calls to f will be stored in the remember table of f (in op(4,eval(f)), in fact).

You might want those computed results stored only until the next garbage collection, so that Maple doesn't retain a possibly enormous number of results in memory. That's why you often see the pair `option system, remember` together. See ?remember, ?option, and ?proc for more details.
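For example (a small sketch, with a hypothetical procedure p):

p := proc(x) option system, remember; x^2 end proc:
p(3):
op(4, eval(p));   # the remember table, now containing 3 = 9
gc():             # with option system, such entries may be cleared here

After the garbage collection, op(4, eval(p)) may be empty again.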

Dave L

@Alec Mihailovs It seems to be a new problem with the Question->Post conversion: old Answers/Comments get veiled.

As you mentioned, the subject here is more of a Post than a Question.

Usually, Question->Post conversion and the reverse work just fine. This is a new site problem. I've flagged it with a note for the site coders. And I've reverted this back to a Question, since that seems to reinstate the old comments (technically, Answers...).

It's good to read you here again.
