These are answers submitted by acer

@jimmyinhmb There is an ArrayTools:-BlockCopy command, which might help.

As you have figured out, you don't really want to be making individual Maple function-calls to copy each of the columns of the block. Maple function calls are relatively expensive (compared to compiled C). So having BlockCopy call out to a precompiled function (to do the looping over all the columns of the block) is probably better than doing that with interpreted code.

If we dig right down into the internal Library routine LinearAlgebra:-LA_External:-MatrixAdd, we can find some define_external calls which set up call_externals to something much like the dscal (NAG f06edf) and daxpy (NAG f06ecf) BLAS functions. In some scenarios you might get a noticeable improvement by using the call_external to the undocumented daxpy rather than MatrixAdd.

showstat((LinearAlgebra::LA_External)::MatrixAdd,22..24);

(LinearAlgebra::LA_External):-MatrixAdd := proc(A, B, c1, c2)
local extfns, rowsA, colsA, rowsB, colsB, localA, localB, localc1, localc2, extlib, HWCall, lengthA, DatatypeA, IndexFnA, StorageA, DatatypeB, IndexFnB, StorageB, prefStorageB, kl, ku;
       ...
  22   extfns := LinearAlgebra:-`MatrixAdd/HardwareTable`[IndexFnA,DatatypeA,StorageA,prefStorageB];
  23   HWCall[1] := ExternalCalling:-DefineExternal(extfns[1],extlib);
  24   HWCall[2] := ExternalCalling:-DefineExternal(extfns[2],extlib);
       ...
end proc

showstat((LinearAlgebra::LA_External)::MatrixAdd,57..61);

(LinearAlgebra::LA_External):-MatrixAdd := proc(A, B, c1, c2)
local extfns, rowsA, colsA, rowsB, colsB, localA, localB, localc1, localc2, extlib, HWCall, lengthA, DatatypeA, IndexFnA, StorageA, DatatypeB, IndexFnB, StorageB, prefStorageB, kl, ku;
       ...
  57     if evalf(localc1)-1. <> 0. then
  58       userinfo(1,'LinearAlgebra',"NAG",extfns[1]);
  59       HWCall[1](lengthA,evalf(localc1),localA,0,1)
         end if;
  60     userinfo(1,'LinearAlgebra',"NAG",extfns[2]);
  61     HWCall[2]('N',rowsA,colsA,localB,localA,evalf(localc2))
       ...
end proc

kernelopts(opaquemodules=false):
LinearAlgebra:-`MatrixAdd/HardwareTable`[[],float[8],rectangular,rectangular];
                     [hw_f06edf, hw_f06ecf]

So I suppose that you could use a pair of temp Matrices, T1 and T2, with steps like these:
1) BlockCopy [A21 A22] into T1, and [B21 B22] into T2
2) use the daxpy to add T2 inplace to T1
3) BlockCopy T1 back to [A21 A22]

I have doubts that ArrayTools:-Alias could work with that whole subblock, when it gets passed externally later on. The external function that performs the later addition likely wouldn't have the necessary knowledge about the offset and jumps between successive column-ends and column-starts of the subblock. Setting up Aliases for each column of the subblock, as an alternative, might be costly. But whether it is much more expensive than all the block-copying in the scheme above... I don't know. Any special offset into the data to specify the start of the subblock, or any special stride to walk the columns of the subblock, won't help with the jump between subblock columns, I think.

And then there is the Compiler. You could write a procedure which accepts the two full Matrices, and all scalar details of the dimensions and block boundaries, and then adds inplace into [A21 A22]. And then run Compiler:-Compile against that procedure. No multicore benefit, and likely less cache-tuning, and no Aliasing. But no extra block-copying.
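
For what it's worth, here is a minimal sketch of that third (Compiler) approach. It assumes float[8] Matrices; the names blockadd/cblockadd and the r1..r2, c1..c2 block bounds are just illustrative.

blockadd := proc( A::Matrix(datatype=float[8]),
                  B::Matrix(datatype=float[8]),
                  r1::integer, r2::integer,
                  c1::integer, c2::integer )
local i::integer, j::integer;
  for j from c1 to c2 do
    for i from r1 to r2 do
      # add B's subblock entry into A's, in place
      A[i,j] := A[i,j] + B[i,j];
    end do;
  end do;
  return 0;
end proc:

cblockadd := Compiler:-Compile(blockadd):

# cblockadd(A, B, r1, r2, c1, c2) then adds the subblock of B into A in place.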

Without trying out each of these three approaches I am not sure which would be fastest.

I think that this is a good question. The `ArrayTools:-BlockCopy` command was needed precisely because even with all the fanciness of `ArrayTools:-Alias`, and `ArrayTools:-Copy`, etc, it still wasn't possible to get the best speed. Maybe there should be a BlockAdd as well, which calls daxpy in a loop in a precompiled external function.

acer

Your structure SOL is a little awkward. It is a 20x3 Matrix of lists, each of length 10. Converting it to either a float[8] 20x10x3 Array (or a listlist) makes working with it a bit easier.

You should be able to get a surface (in Maple 15 or 16, say) with the commands,

Q := Array(1..20,1..10,1..3,(i,j,k)->SOL[i,k][j],datatype=float[8]):

plots:-surfdata(Q, axes = box, labels = ["Mx", "My", "Mz"]);

It should be possible to treat the surface as an inequality constraint, and to answer the question as to which side of it an arbitrary point (in the quadrant) lies. I'm not sure what's a convenient approach there, off the cuff. Maybe rotating to theta=-135, phi=0, so that the surface is a function (not one-to-many). I'd have to think about it, and sorry, I wouldn't have time soon. But I'm sure that someone else can get it pretty quickly.

acer

It is a bug in the 2D Math parser.

The first time you paste in that equation as 2D Math input it gets parsed the same as would the 1D (plaintext) Maple notation statement,

  `&+-`(2*sqrt(tan(FAZE)^2)) = `&+-`(sqrt(4*tan(FAZE)^2))

Notice that the left-hand-side is the application of the inert `&+-` operator upon the product of 2 and sqrt(tan(FAZE)^2).

Now if I paste in once more the very same 2D Math input then it gets parsed the same as would the 1D statement,

  `&+-`(2)*sqrt(tan(FAZE)^2) = `&+-`(sqrt(4*tan(FAZE)^2))

The difference is that the second time it gets parsed as sqrt(tan(FAZE)^2) multiplied by the application of the `&+-` operator to (only) 2. That difference in parsing of the very same pasted 2D Math input is in itself a bug. I believe that I see it in both Maple 12.02 and Maple 16.01 on Windows.

Now, the operator `&+-` is, by default, inert. Which means that it doesn't actually do anything. (...unless you load Tolerances, but that's not the case here.) And Maple doesn't know much, if anything, about what it means to you mathematically. In particular, it doesn't compute that `&+-`(a)*b is equal to `&+-`(a*b). Hence the first time the relation-test using `is` returns true, and the second time it returns false.
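
Re-entering the two parsings directly as 1D input should reproduce the difference just described (FAZE is simply the unassigned name from the example):

  is( `&+-`(2*sqrt(tan(FAZE)^2)) = `&+-`(sqrt(4*tan(FAZE)^2)) );

                               true

  is( `&+-`(2)*sqrt(tan(FAZE)^2) = `&+-`(sqrt(4*tan(FAZE)^2)) );

                               false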

I'll submit a bug report.

acer

That utility exists, in a slightly more general and useful form, as the command type. Another variant is verify.

Eg,

p:=evalf(Int(g(x),x=0..1)):

type(eval(p,1),numeric);

                             false

p:=evalf(Int(exp(x),x=0..1)):

type(eval(p,1),numeric);

                              true

For some discussion, see this post and parent thread.

acer

I think I see a little of what's gone wrong here. It is the same in Maple 15.01 and 16.01.

The problem is related to setting the plots:-setoptions to gridlines=true. That affects the axis settings in subsequent calls to plots:-display, and it appears to clobber the mode=log axis setting incurred for each logplot. (A bug, I suspect...)

In 15.01 or 16.01, on Windows 7, the first plot below is a successful logplot from plots:-display. The second plot below is like what the Asker has done, and is not a successful logplot from plots:-display. The third plot below is a workaround which gets a successful logplot from plots:-display, but which has explicit use of the gridlines option instead of using plots:-setoptions to control that aspect.

restart:
plots:-display(plots:-logplot([[1, 1], [2, 8], [3, 27]]));

restart:
plots[setoptions](gridlines=true):
plots:-display(plots:-logplot([[1, 1], [2, 8], [3, 27]]));

A workaround, to get the log mode on axis[2] along with gridlines, is,

restart:
plots:-display(plots:-logplot([[1, 1], [2, 8], [3, 27]]),gridlines=true);

acer

The structures you've named are indexable.

You should probably choose according to the kind of task at hand. There is a Portal page which describes some basic features of such structures -- but it is organized by structure rather than by quality.

I'll try and illustrate with some basic qualities. It's tricky, because the whole topic is complex and convoluted; the description always seems to double back to earlier references.

Mutability: an element can be changed, by indexed assignment to that element. The `table` and `rtable` have this quality. The `list` fakes it, and indexed assignment to an entry causes the Maple kernel to simply replace the whole list with the new list, which is quite inefficient.
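
A quick illustration of the difference (the names are just for the example):

A := Array([1, 2, 3]):
A[2] := 10:     # genuine in-place change to the rtable
T := table():
T[2] := 10:     # genuine in-place change to the table
L := [1, 2, 3]:
L[2] := 10:     # permitted for short lists, but the kernel builds a whole new list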

Uniqueness: each element is unique, and duplicates are removed (or ignored) at creation time. The `set` has this.
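
For example:

{2, 1, 3, 2, 1};
                            {1, 2, 3}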

Growability: the `table` structure has always had this. And for the past few releases all the `rtable` flavours (Array, Matrix, Vector) do as well, on account of so-called programmer indexing [ref. 1, 2].
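
A minimal sketch of that growth by programmer (round-bracket) indexing:

V := Vector([1, 2, 3]):
V(5) := 10:    # grows V to length 5, padding the new entries with zeros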

Indexing by non-integer values: the `table` structure allows this, where elements can be indexed by name. Eg. a `table` T can have an entry indexed as T[p] for unassigned name `p`.
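
For example:

T := table():
T[p] := 42:    # entry indexed by the unassigned name p
T[p];
                               42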

If you don't need to "grow" your structure (because you know in advance how many elements you need to provide values for), and if you don't need to reassign any of the elements after initial creation of the whole structure, then a `list` is not a bad choice. Its construction syntax is easy and terse. Maple's memory management (both creation and garbage collection) of small lists is fast.

For very large numeric data sets the hardware datatypes (eg. float[8], integer[4]) of rtable are memory efficient. For LinearAlgebra, computational Statistics, and mapping, zipping, etc, these can be relatively much more computationally efficient. They can often be used by precompiled external functions which can be passed the hardware precision data in memory (uncopied, by location).

There are other qualities, harder to explain quickly. For example, the nature of evaluation of the structures and their contents, especially as arguments or locals of procedures. Some structures, like `table`, are of type last_name_eval.
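
For instance, with a `table` assigned to T, the name alone does not evaluate through to the table:

T := table([a = 1]):
T;
                               T
eval(T);
                         table([a = 1])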

There are some other, more rare data structures used in Maple. The `Record` can have its elements indexed by non-numeric name, but is of fixed size. The unevaluated procedure call (like a `PLOT` structure) is not indexable, and people often use `op` to pick it apart. There is a new structure called `object`, which brings a few new qualities related to OOP. Most of these are not the kind of thing you are talking about, but we can keep their existence in mind because every now and then one of them might better suit a new task.

Of course it's always possible to choose a suitable structure and then abuse it with undesirable programming practices. One of the classic bad (but common!) techniques is to build up lists by repeated concatenation, such as L:=[new,op(L)] done for each new element, over and over in a loop. That is O(n^2) in memory, while a [seq(...)] construction might be O(n) complexity. And so one scales poorly as the size `n` of the problem increases. The `if` operator can quite often allow for construction of lists via `seq` even when element admittance is restricted by some complicated conditional.
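
To make that concrete, here is a small contrast (the prime-collecting condition is just an arbitrary example):

n := 10^4:

# poor: the whole list is rebuilt on every iteration, O(n^2) overall
L := []:
for i from 1 to n do
  if isprime(i) then L := [op(L), i]; end if;
end do:

# better: a single pass, using the `if` operator for the conditional admittance
L2 := [seq(`if`(isprime(i), i, NULL), i=1..n)]:

evalb(L = L2);
                              true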

Maple 16 has a new mechanism for data type coercion. This is great news for routines which can take an argument as either Vector[row] or mx1 Matrix or 1D Array, or for similar situations where the structures could be converted in-place without any need to copy the data. But I am worried that allowing this feature to be applied to all routines which accept, say, Vector and `list` interchangeably, just for ease of use, will lead to lots more inefficient usage patterns by Maple users on account of unnecessary copying.

You have made a good start with Maple, in asking these questions.

acer

See here, for some discussion of how memory addresses of the terms in a product (of symbolic terms) can affect their ordering within that product. In turn, that can lead to differences during numerical evaluation. The same is true for addition (which I keep meaning to describe in a post...).

The 18 decimal digits in your result may be the result of conversion from HFloat, in which case the differences you notice would be only a few ulps (in HFloat).

The differences you note look much smaller than what would be dictated by the default accuracy tolerance used in your pdsolve/numeric calls. If the result is only supposed to be accurate in a much more coarse way, then why is this smaller difference interesting? If the only bits that vary are ones that are not even close to being guaranteed, then what's the matter?

So, in terms of pdsolve/numeric per se this question does not seem very interesting. It may relate to more general questions about numerics and ordering issues in Maple, but that is a topic better covered with much clearer and smaller examples (arithmetic, say) so that we can see precisely what deviates and when.

acer

In the fourth DE there is a derivative of f, which looks like it should be f(y) instead of just f.

In your Question, you wrote dsolve({sys,bcs}type=numeric) which is missing a comma just before the word `type`.

Pasting plaintext code here means using 1D Maple Notation for input instead of 2D Math input. (That's actually separate from whether you use a Worksheet or a Document).

acer

There are three common ways to get around premature evaluation of the arguments.

One way is to use a procedure which returns unevaluated if the arguments do not satisfy some type requirements, eg. all numeric, say.

Sometimes creating such a procedure can be made easier, by using the `unapply` command with its 'numeric' option. You may or may not find that helpful here.

Another way is to use unevaluation quotes, as Georgios suggested. These first two ways involve the expression-form calling sequence, where the variable ranges are each given like name=range(numeric).

A third way is to use the operator-form calling sequence, for which the various ranges of integration (or fsolving, or plotting, or Optimizing) are just range(numeric) instead of being name=range(numeric). Unfortunately this does not seem to work for the Compiled multivariate proc, unless it is wrapped in yet another proc.

f:= proc(kF1, kF2, m, k, omega, q)
      kF1+kF2+m+k+omega+q;
    end proc:

# First, using the original procedure `f`.

evalf(Int('f'(q,r,s,t,u,v),[q=0..1,r=0..2,s=0..3,t=0..4,u=0..5,v=0..6])); # Georgios
                          7560.000000

evalf(Int(f,[0..1,0..2,0..3,0..4,0..5,0..6]));
                          7560.000000

evalf(Int((q,r,s,t,u,v)->f(q,r,s,t,u,v),[0..1,0..2,0..3,0..4,0..5,0..6]));
                          7560.000000

# Now, using Compiled procedure `cf`.

cf:=Compiler:-Compile(f):

evalf(Int('cf'(q,r,s,t,u,v),[q=0..1,r=0..2,s=0..3,t=0..4,u=0..5,v=0..6])); # Georgios
                          7560.000000

CF:=unapply('cf'(q,r,s,t,u,v),[q,r,s,t,u,v],numeric):
evalf(Int(CF(q,r,s,t,u,v),[q=0..1,r=0..2,s=0..3,t=0..4,u=0..5,v=0..6]));
                          7560.000000

evalf(Int(cf,[0..1,0..2,0..3,0..4,0..5,0..6])); # a pity
Error, (in evalf/int) incompatible dimensions

# The following workaround (to the "pity" above) incurs the cost of an extra
# function call (of which there may be many, for multidimensional integration!).
evalf(Int((q,r,s,t,u,v)->cf(q,r,s,t,u,v),[0..1,0..2,0..3,0..4,0..5,0..6]));
                          7560.000000

And what's life without a few kludges to make it interesting? Somehow evalf/Int is poking at the operator-form integrand and deciding that it doesn't accept the same number of parameters as the number of numeric ranges of integration supplied. We could try to force the operator form to work, but without having to wrap it in another (potentially costly) procedure call.

newcf := FromInert(subsindets(ToInert(eval(cf)),
                     specfunc(anything,_Inert_PARAMSEQ),
                     t->op(indets(ToInert(eval(f)),
                                  specfunc(anything,_Inert_PARAMSEQ))))):

evalf(Int(newcf,[0..1,0..2,0..3,0..4,0..5,0..6]));
                          7560.000000

acer

Of course you could try to compute the higher derivatives numerically, after calling dsolve/numeric. You could do such numerical approximation in several ways: `fdiff`, or finite differences, or differentiating interpolating polynomials, etc. But I doubt that such approaches are best.

It might be better to let dsolve/numeric itself handle the whole computation instead. Apart from other considerations, that way it might strive to obtain for you approximations for all your desired outputs that satisfy the same tolerances. It can be done by augmenting your system with new targets, which happen to equal the higher derivatives which you want.

A simple example,

restart:

sys,IC := {diff(x(t),t)=x(t)*t^3}, {x(0)=1}:

sol:=dsolve( sys union IC
             union {d2x(t)=diff(x(t),t,t), D(x)(0)=0,
                    d3x(t)=diff(x(t),t,t,t), (D@@2)(x)(0)=0},
             numeric, output=listprocedure ):

X:=eval(x(t),sol):
DX:=eval(diff(x(t),t),sol):
D2X:=eval(d2x(t),sol):
D3X:=eval(d3x(t),sol):

#plot([X,DX,D2X,D3X], 0..2, view=0..10, legend=[X,DX,D2X,D3X]);

# How can we deduce those initial conditions for D(x)(0) and (D@@2)(x)(0) ?

newIC := IC union convert(eval(sys, [op(IC), t=0]),D):
eq2:=map(diff,diff(x(t),t)=x(t)*t^3,t):
newIC := newIC union convert(eval({eq2},[x(0)=1,t=0]),D);
            newIC := {x(0) = 1, D(x)(0) = 0, @@(D, 2)(x)(0) = 0}
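
Having obtained the listprocedure output, the higher derivatives can then be evaluated at any point in the usual way (just the calling pattern; no output shown here):

D2X(0.5);   # numerical approximation of the second derivative of x at t = 0.5
D3X(0.5);   # and of the third derivative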

acer

It's not clear whether you want to augment with all zeroes, or with specific data.

If you just want to create a larger Matrix whose entries in rows/columns 1..4 are the original M, then it is easy to do it in several ways. Here are two:

restart:
with(LinearAlgebra):
M:=RandomMatrix(4,4);

N:=copy(M):
N(7,7):=0:
N;

restart:
with(LinearAlgebra):
M:=RandomMatrix(4,4);

N:=Matrix(7,7,M);

Of course, you could easily modify either of those so that the result was M, and not N.

acer

The ArrayTools:-Copy command can be used to assign to the diagonal of a rectangular storage Matrix efficiently.

M:=LinearAlgebra:-RandomMatrix(3,2);

                                    [ 13  92]
                                    [       ]
                               M := [-51  59]
                                    [       ]
                                    [-65  20]

V:=LinearAlgebra:-RandomVector[row](2); # or column Vector

                                V := [-48, 0]

# Acts on newM. Use M if you want M overwritten.
newM:=copy(M):

ArrayTools:-Copy(min(3,2),V,newM,0,3+1);

newM, M;

                            [-48  92]  [ 13  92]
                            [       ]  [       ]
                            [-51   0], [-51  59]
                            [       ]  [       ]
                            [-65  20]  [-65  20]

You can also create a procedure to do it. Here's one version, with boilerplate to try and make it robust for mixed datatypes and storage orders.

replacediag := proc( A::Matrix(storage=rectangular),
                     B::Vector(storage=rectangular),
                     {inplace::truefalse:=false} )
local Anew, Bnew, m, n;
  if inplace then Anew := A; else Anew := copy(A); end if;
  if rtable_options(B,':-datatype')<>rtable_options(A,':-datatype') then
    Bnew := Vector(B,':-datatype'=rtable_options(A,':-datatype'));
  else Bnew := B;
  end if;
  (m,n) := LinearAlgebra:-Dimensions(Anew);
  ArrayTools:-Copy(min(m,n), Bnew, Anew, 0,
    `if`(rtable_options(Anew,':-order')=':-Fortran_order', m+1, n+1));
  Anew;
end proc:


replacediag(M,V), M; # M is not overwritten. Result is a new Matrix.

                            [-48  92]  [ 13  92]
                            [       ]  [       ]
                            [-51   0], [-51  59]
                            [       ]  [       ]
                            [-65  20]  [-65  20]

replacediag(M,V,inplace), M; # M is overwritten. This saves time/memory.

                            [-48  92]  [-48  92]
                            [       ]  [       ]
                            [-51   0], [-51   0]
                            [       ]  [       ]
                            [-65  20]  [-65  20]

acer

I did something like that here, embedding "plot" axes onto an image.

The basic idea was to produce an empty plot containing just the axes (or in your case, a textplot without axes perhaps), export that as an image, read in that image using ImageTools, and finally overlay it onto your other (background) image. The overlaying is done just by overwriting the Array entries -- copying the overlay image's Array's nonzero entries onto the background image's Array.

There was some trickiness with adjusting for whitespace border around the exported plot. The referenced link might help with that.
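
In rough outline, the overlay step looks like the following sketch. It assumes grayscale (single-layer) images of identical dimensions; the file names are illustrative, and the exact test for the overlay's "ink" entries depends on how the bare-axes plot was exported.

with(ImageTools):
bg := Read("background.jpg"):   # the background image
ov := Read("axes.jpg"):         # the exported bare-axes plot
combined := copy(bg):
for i to Height(bg) do
  for j to Width(bg) do
    # overwrite with the overlay's nonzero (ink) entries
    if ov[i, j] <> 0 then combined[i, j] := ov[i, j]; end if;
  end do;
end do:
Write("combined.jpg", combined):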

acer

I changed val_M0 to vec_M0 (which you probably had in your original; a transcription error, perhaps).

That allowed the posted code to produce the nonreal result,

fsolve({mat_M_V[2,1]=0,mat_M_V[1,2]=0,Im(mat_M_V[1,1])=0,Im(mat_M_V[2,2])=0},
       {alpha11=0,alpha22=0,alpha12=0,theta=0},'fulldigits');

           {theta = -1.666216272 + 0.4818505195 I, alpha11 = -1.570796327, 

            alpha12 = -9.424777961, alpha22 = 3.141592654}

So then I specified a real range for theta, and got (with or without option 'fulldigits'),

fsolve({mat_M_V[2,1]=0,mat_M_V[1,2]=0,Im(mat_M_V[1,1])=0,Im(mat_M_V[2,2])=0},
       {alpha11=0,alpha12=0,alpha22=0,theta=0},theta=-infinity..infinity);

             {theta = 3.997794044, alpha11 = -1.570796327, 

              alpha12 = 1.570796327, alpha22 = -1.570796327}

Interestingly, 1.570796327 = Pi/2, which makes me wonder whether some exact approach is also possible (before assigning float values to parameters).

acer

This is an important question. It's actually one of the most basic instances of a more general set of computational tasks, for which dsolve/numeric has been augmented with a whole slew of facilities. The functionality is known as Events.

The basic notion is like this: a rootfinder (like fsolve, etc) operating on the result from a call to dsolve/numeric is acting a bit in the dark. It needs to be told special information about the dependent functions (like x(t) or y(t), say) in order to get the best chance of finding the desired root. The separate rootfinder doesn't even know that the solution exists (which can be helpful knowledge). It doesn't know the range, or the stiffness, etc, that characterizes the curve. But dsolve/numeric itself is computing the solved approximations to the dependent functions, so it is in the best position to be able to invert that computation.

In practice, using dsolve/numeric with "events" can get such inversion results using less resources (memory and time), and quite often more accurately.

The accuracy issue is subtle. The accuracy of the dsolve/numeric solution is (generally) only as good as what is specified by its abserr and relerr tolerances. Suppose that you obtain a procedure from dsolve/numeric, which computes y(t) for any given value of `t`. Bumping up Digits, so as to try the separate rootfinding (eg. fsolve) to obtain a high accuracy value of the independent variable `t` that satisfies y(t)=K for a given numeric value K, is not going to get you a more accurate answer. The rootfinding results are only as good as the interpolating spline that dsolve/numeric uses to compute y(t) in the first place.

So (I feel that) there is some risk in using a separate rootfinder like fsolve, of being beguiled into believing that one has control over the accuracy of the computed roots.

It's a little more work to set up, for involved examples, but it can pay off for larger and more difficult or computationally intensive projects.

Here is an illustration, using Markiyan's example. I've also used output=listprocedure, because that makes subsequent computation of x(t) and y(t) easier (without needing `op`).

restart:

sys := (D(x))(t) = y(t)-x(t), (D(y))(t) = x(t)*(D(x))(t)-2*y(t):
ic := x(0) = 1, y(0) = -1:

# Default for nonstiff rkf45 is abserr=1e-7, relerr=1e-6
sol := dsolve({ic, sys}, numeric
              , parameters=[Py]
              , events=[[y(t)-Py,halt]]
              , output=listprocedure):

solt:=eval(t,sol):
solx:=eval(x(t),sol):
soly:=eval(y(t),sol):

interface(warnlevel=0):

solt(parameters=[-0.5]):
ans := CodeTools:-Usage( solt(1000) );

memory used=213.85KiB, alloc change=0 bytes, cpu time=15.00ms, real time=12.00ms
                          ans := 0.623905007259310

soly(ans);

                             -0.500000000000000

restart:

sys := (D(x))(t) = y(t)-x(t), (D(y))(t) = x(t)*(D(x))(t)-2*y(t):
ic := x(0) = 1, y(0) = -1:

# Default for nonstiff rkf45 is abserr=1e-7, relerr=1e-6
sol := dsolve({ic, sys}, numeric
              , parameters=[Px,Py]
              , events=[[x(t)-Px,halt],[y(t)-Py,halt]]
              , output=listprocedure):

solt:=eval(t,sol):
solx:=eval(x(t),sol):
soly:=eval(y(t),sol):

interface(warnlevel=0):

solt(parameters=[1e10, -0.5]):

ans := CodeTools:-Usage( solt(1000) );

memory used=216.88KiB, alloc change=127.98KiB, cpu time=0ns, real time=12.00ms
                          ans := 0.623905007259310

soly(ans);

                             -0.500000000000000

solt(parameters=[0.95, 1e10]):
ans := CodeTools:-Usage( solt(1000) );

memory used=31.67KiB, alloc change=0 bytes, cpu time=0ns, real time=2.00ms
                          ans := 0.0253259108029954

solx(ans);

                              0.950000000000000

Here's a worksheet, comparing using Events and fsolve for this example. Note both memory allocation as well as speed differences. This is just expository and, being based on a single example, is not supposed to prove anything. For one's own serious projects, it'd be best to try out the alternatives.

dsolveinv0.mw

acer
