acer


MaplePrimes Activity


These are replies submitted by acer

@Axel Vogt One of the profiled code snippets in that Google Groups (Usenet) thread performed "programmer indexing" of an rtable, by referring to an entry using () round brackets instead of [] square brackets when assigning computed values to entries. I wonder whether any of his code relies heavily on this to repeatedly "grow" any rtables (i.e., Vector, Matrix, or Array).

When Maple grows an rtable in this way it may have to recreate and replace it in situ. I believe that it may overallocate by some proportion, which avoids recreating the underlying structure, but only up to the amount of extra space allocated. If that kind of indexing is done heavily (or, in the worst scenario, exclusively) then Maple may be producing and disposing of many copies of the rtable (i.e., disposable garbage).

Here is an example of the "worst scenario", for comparison, performed in Maple 15 on 64-bit Linux.

> restart:
> kernelopts(printbytes=false):

> p:=proc(x::float,V::Vector(datatype=float[8]),N::integer)           
>    local i::integer[4];
>    for i from 1 to N do
>       V(i):=ln(x); # programmer indexing, which can grow V
>    end do;
>    NULL;
> end proc:

> G:=Vector(datatype=float[8]): # created with no entries stored

> CodeTools:-Usage( evalhf(p(3.5, G, 1000000)) ):  
memory used=84.64MiB, alloc change=61.36MiB, cpu time=720.00ms, real time=734.00ms

> G[100];
                               1.25276296849537


> restart:
> kernelopts(printbytes=false):

> p:=proc(x::float,V::Vector(datatype=float[8]),N::integer)           
>    local i::integer[4];
>    for i from 1 to N do
>       V[i]:=ln(x);
>    end do;
>    NULL;
> end proc:

> G:=Vector(1000000,datatype=float[8]):

> CodeTools:-Usage( evalhf(p(3.5, G, 1000000)) ):  
memory used=512 bytes, alloc change=0 bytes, cpu time=130.00ms, real time=134.00ms

> G[100];
                               1.25276296849537

It's just a guess at a possible cause of avoidable garbage collection. Better performance in such a case might even be attained by initially creating the rtable at the maximal explicit size that the algorithm might need (if that is feasible in memory).
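As a hedged sketch of that last suggestion (the names Nmax, nfinal, and W are placeholders of my own, not taken from the profiled code): preallocate the float[8] Vector at an upper bound on its length, fill it with [] square-bracket indexing, and afterwards take a copy of just the portion actually used.

Nmax := 10^6:                           # assumed upper bound on the length needed
W := Vector(Nmax, datatype=float[8]):   # preallocated once, never grown
# ... fill W[1], ..., W[nfinal] with [] indexing, for some nfinal <= Nmax ...
nfinal := 700000:                       # hypothetical count actually filled
Wused := W[1..nfinal];                  # copies out only the used portion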


@Angelos58 

Let mA mean membership in A, and mB mean membership in B.

"mA implies mB" is an analog of "A is a subset of B".

"mA or mB" is an analog of "A union B".

"mA and MB" is an analog of "A intersect B".

 

If you are stuck on accepting the first of those analogs, then consider the following. (You may take the first and second sentences below as a definition of the term subset.)

Let B be a set and let set A be a subset of B.

Every member of A is also a member of B.

Every member of A is necessarily a member of B.

Any member x of A is necessarily a member of B.

If x is a member of A then x is a member of B.

"x is a member of A" implies "x is a member of B".

Membership in a subset of set B implies membership in set B.
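Here is a tiny Maple check of those analogs, using small example sets of my own choosing (A={1,2} and B={1,2,3} are not from your question):

A := {1, 2}:  B := {1, 2, 3}:
evalb(A subset B);                                                  # true
andmap(x -> evalb(member(x, A) implies member(x, B)), A union B);   # true, i.e. mA implies mB
A union B, A intersect B;                                           # {1, 2, 3}, {1, 2}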


Why do you want your Array "protected" from garbage collection?

Are you using Maple 16? Its memory management system is new and different from previous versions.

So, are you saying that you are not programming in OpenMaple? (In which case this and this are not what you want?)

acer

@hasanhabibul10 

expr:=x^2+1:

x^2*expand(expr/x^2);

                           2 /    1 \
                          x  |1 + --|
                             |     2|
                             \    x /

x^2*frontend(expand,[expr/x^2]);

                           2 /    1 \
                          x  |1 + --|
                             |     2|
                             \    x /

expr:=x^2+cos(a+b):

x^2*expand(expr/x^2);

              2 /    cos(a) cos(b)   sin(a) sin(b)\
             x  |1 + ------------- - -------------|
                |          2               2      |
                \         x               x       /

x^2*frontend(expand,[expr/x^2]);

                       2 /    cos(a + b)\
                      x  |1 + ----------|
                         |         2    |
                         \        x     /

The wrapping call to frontend is there in case you want to prevent expansion of things other than those of type `+` and `*`; for example, if you have a coefficient like cos(a+b) that you want left unexpanded.
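For reference, frontend also takes an optional third argument, a list of two sets: the first names the types left unfrozen (the default is {`+`,`*`}) and the second names exceptions. If I recall its calling sequence correctly, the call above is equivalent to this fully explicit form:

x^2*frontend(expand, [expr/x^2], [{`+`, `*`}, {}]);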


@Kitonum That's right, Mathematica has a different model of floating-point computation, in which the working precision may be raised internally by the system so as to try and meet a requested accuracy.

What the Programming Manual is stating, in essence, is that for compound (composed) operations Maple will not guarantee an accuracy. Any promised (e.g., 0.6 ulp) errors in intermediate results can be magnified by the subsequent composed operations. As the Maple Programming Manual page mentions, the working precision is usually fixed. And in general Maple does not attempt to analyze an expression and internally raise Digits so as to attain a desired accuracy during floating-point evaluation. Common exceptions are individual arithmetic operations or calls to some special functions with numeric arguments, in which case it may promise 0.6 ulp (likely using guard digits, range reduction, or other standard techniques). The present (Maple 16) version of the Programming Manual is one of the first places I recall seeing this explained in so much detail, although it has at least been hinted at before in some published papers I've seen.
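As a tiny illustration of that fixed working precision (a toy example of my own, not from the Manual): each float operation below is individually well rounded, yet the composed result exposes the rounding error made in the very first step.

Digits := 10:
x := evalf(1/3):   # 0.3333333333, correctly rounded to 10 digits
x*3 - 1;           # roughly -1e-10 rather than 0: the first error survives the composition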

This is an important difference in the working model of floating-point computation of the two systems.

There are some other exceptions, such as the numeric solvers fsolve, evalf/Int, and dsolve/numeric, which have accuracy tolerances. But even they can be subject to the behaviour in question when individual expressions (integrands, etc.) get evaluated in floating-point at each step.

The general mechanism with which Mathematica attempts to compute expressions to a desired accuracy is a proprietary variant of interval arithmetic (see also "significance arithmetic"), the precise details of which are not public as far as I know. Prof. Richard Fateman of Berkeley has written some papers which critique this implementation. He and Wolfram developer Daniel Lichtblau have also engaged in lengthy (but polite) arguments about it in the past, in the sci.math.symbolic usenet newsgroup and its mirrors (e.g., [1] and [2], but you can find others).

I'm sometimes surprised that this topic doesn't come up more frequently.

These models of floating-point computation are embedded deeply within these systems.

A reasonable question that can arise is: how high must I set Digits in order to know that the result of a subsequent floating-point evaluation is accurate to a pre-stated number of decimal digits? It's undesirable to raise Digits much higher than is strictly necessary, since that would involve unnecessarily slow execution times. On the other hand, it's always possible to construct an example for which Digits must be set higher than any pre-stated value in order for the computation to be correct. (If your habit is to try to gain assurance by setting Digits to, say, 1000 as your test of correctness, then I can construct an example where the error is magnified enough that Digits=1050 is the minimum necessary to get the requested accuracy.) This is not a foolish question, but in general it may not currently be answerable, since it depends on the numerical conditioning of the problem and is not a solved theoretical problem in general, as far as I know.

An alternative "surface" approach is sometimes possible in Maple. I call it a "surface" approach since it sits on top of Maple and is not deeply embedded through the system. There is an interval arithmetic package in Maple named evalr, and it is accompanied by a floating-point routine shake. There are extensions of the `evalr` mechanism which understand how to handle quite a few mathematical operations and Maple commands. An alternative approach in Maple is to programatically convert floats to rational and then iteratively increase Digits and test with shake/evalr. This approach is limited to the set of operations that evalr understands. At each iteration Digits could be doubled, or scaled by the golden ratio, or what have you. I have some procedures which implement these ideas, and can dust them off if people are interested. One of them I named `feval`, and tried to make behave a little like `evalf` and 2-argument `eval` but with the described shake mechanism.

acer

 

@Markiyan Hirnyk Thanks for adding those details and the reference to Alec's post. That certainly is a terse way to compute such a thing.

The rendering and performance of the Standard GUI really aren't good for the plots:-densityplot command. If I use that command to make a modest-size 600x600 image in this way then it takes Maple 16.01 on a reasonably quick Intel i5 about 1.7 sec to generate the plot structure and then about a further 7.2 sec just to render it.

Whatever the GUI is doing (unnecessary interpolation, resampling!?) while trying to render that plot structure is unfortunate.

One can run the attached worksheet using the `!!!` from the GUI's menubar, to see those kinds of timings calculated. (It relies on separate execution blocks.)

mandel.mw

The Standard GUI's memory use also creeps up to over 600MB if I run it a few times. And it becomes very sluggish to respond to certain actions, like scrolling the output on/off screen or interacting with the plot output using the mouse and context-menus. And I get messages about exhausted Java heap space in the Linux console from which I launched the GUI.

This kind of performance by the Standard GUI was the motivation for a post about converting the COLOR rtable of the densityplot's returned structure to an Array that ImageTools can handle, which could be better viewed from a Label Component. Or the Array could be appropriately inserted into a PLOT3D structure (which is what ImageTools:-Preview does), with which the GUI performs better. Or, as in the first link of my Answer, an ImageTools Array could be crafted directly and viewed as an image file, bypassing plots:-densityplot altogether.
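As a hedged sketch of that last, direct route (the simple sin formula on a 400x400 grid is a stand-in of mine, not the Mandelbrot computation in the attached worksheet): build a float[8] Array of values, rescale it into the 0..1 range, and view it with ImageTools:-Preview instead of going through plots:-densityplot.

with(ImageTools):
m, n := 400, 400:
img := Array(1..m, 1..n, (i, j) -> evalf(sin(3.0*(i/m)*(j/n))),
             datatype=float[8], order=C_order):
img := FitIntensity(img):    # linearly rescale the values into the range 0 .. 1
Preview(img);                # renders as an image-like PLOT structure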


@Markiyan Hirnyk Another possibility, similar to A but with lower powers, is,

q*Pi^2*( ( 2*ln(1/(kf2+q)*(q+kf1)) - 1 )*q^2
         + 2*ln(1/kf2*(kf2+q))*kf2^2
         + 2*kf1^2*ln(q/(q+kf1))
         + (2*kf2-2*kf1)*q + kf1^2 );


     2 //          q~ + kf1~ \   2                              2
q~ Pi  ||-1 + 2 ln(---------)| q~  + (2 kf2~ - 2 kf1~) q~ + kf1~
       \\          kf2~ + q~ /


            kf2~ + q~      2         2       q~     \
     + 2 ln(---------) kf2~  + 2 kf1~  ln(---------)|
              kf2~                        q~ + kf1~ /

The leading coefficient of 1/2 in result B can be simplified away, leaving a similar form, with the `normal` command.

> normal(B);

     2      2     2                                        2
q~ Pi  (kf1~  - q~  + 2 q~ kf2~ - 2 q~ kf1~ + 2 ln(q~) kf1~

                      2                       2       2
     - 2 ln(kf2~) kf2~  + 2 ln(kf2~ + q~) kf2~  - 2 q~  ln(kf2~ + q~)

                           2       2
     - 2 ln(q~ + kf1~) kf1~  + 2 q~  ln(q~ + kf1~))

As you say, it's a matter of preference, at some point.

@Markiyan Hirnyk Thank you, that is a much more terse set of commands than the path I had taken. It seems to produce a slightly longer expression, without some pairs of ln's combined at the end, in my Maple 13 or Maple 16. (Perhaps I transcribed your 2D Math code incorrectly.) But the important step of converting the arctanh's is done.

I see now that there are other short ways to get something similar in length to what I had, e.g.

combine(simplify(convert(evalc(R),ln)));

I'm not sure if you are also interested in obtaining smaller representations of this expression. (If not, then sorry.)

If `R` is the original expression then (under your stated assumptions on `kf1`, `q`, and `kf2`) I suspect that `R` is equal to,

q*Pi^2*(   kf1^2*ln(q^2/(q+kf1)^2)
         + kf2^2*ln((kf2+q)^2/kf2^2)
         + ln((q+kf1)^2/(kf2+q)^2)*q^2
         - q^2 + 2*q*kf2 - 2*q*kf1 + kf1^2 );
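One quick way to check that claim (here I'm assuming, purely for illustration, that kf1, kf2, and q are all positive; the real assumptions are the ones you stated) is to assign the expression above to a name, subtract it from R, and simplify under those assumptions:

cand := q*Pi^2*(   kf1^2*ln(q^2/(q+kf1)^2)
                 + kf2^2*ln((kf2+q)^2/kf2^2)
                 + ln((q+kf1)^2/(kf2+q)^2)*q^2
                 - q^2 + 2*q*kf2 - 2*q*kf1 + kf1^2 ):
simplify(R - cand) assuming kf1 > 0, kf2 > 0, q > 0;   # 0 would confirm the equality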

acer

@Markiyan Hirnyk 

Try it with the `seq` line replaced as,

eqs:=op~({seq(seq([(D@@k)(r)(L)=(D@@k)(f)(L)],k=0..0),L=[2,5,10])});

as per the comment preceding that line.

