MaplePrimes Activity


These are answers submitted by acer

The exponent is the second operand of a call to `^`.

> op((x+y)^n);

                                   x + y, n

> op(2,(x+y)^n);

                                       n

> expr:=(x+y)^(-p)+4*x^n*y^m+(x+y)^n+x^3:        

> seq([U,op(2,U)],U in indets(expr,`^`));

            n        m              n              (-p)         3
          [x , n], [y , m], [(x + y) , n], [(x + y)    , -p], [x , 3]

> map2(op,2,indets(expr,identical(x+y)^anything));

                                    {n, -p}

acer

The CoefficientVector or CoefficientList commands will be more efficient than a sequence of calls to `coeff`, for picking off all the coefficients of a dense polynomial of non-small degree.

The reason is that CoefficientVector should walk the polynomial structure just once, while each individual coeff call in the sequence must walk the (possibly unordered) structure again.
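
For instance, a rough comparison sketch (the degree and the CodeTools:-Usage wrapper here are my own additions; timings will vary):

q := randpoly(z, degree=5000, dense):
CodeTools:-Usage( PolynomialTools:-CoefficientVector(q, z) ):
CodeTools:-Usage( [seq(coeff(q, z, i), i=0..degree(q, z))] ):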

By creating `a` as a Vector you can use round-brackets for accessing its entries (which you seem to want to do...).

g := taylor(exp(z), z = 0, 100):

a := PolynomialTools:-CoefficientVector(convert(g,polynom),z):

af := evalf(a):

a(3), a(7);

                             1   1 
                             -, ---
                             2  720

evalf(a(3));

                          0.5000000000

af(7);

                         0.001388888889

If you want to access all the entries as floating-point numbers, then you may as well create the Vector `af` up front, rather than have to wrap a(k) in `evalf` every time you want an entry.

acer

Yes, to your question, "... so you have to install the 32 bit version instead?"

No, to your question, "Is it possible to move the finance.dll file from Maple 16 (32-bit) to Maple 16 (64-bit)?", if you expect it to work with the 64-bit Maple.

acer

You can use fdiff to get numeric partial derivatives of B (which acts like a black box function).

And there is an alternate syntax for getting that: evalf@D will call fdiff. (See also the help-page for D).

D[1](B)(17.5,26.0);

                      D[1](B)(17.5, 26.0)

evalf(%);  # hooks into fdiff !

                         0.01950091280

evalf(D[2](B)(5.0,2.0));

                         -5.217525923
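
For comparison, fdiff can also be called directly. A sketch, assuming B is a black-box procedure taking two numeric arguments (see the fdiff help-page for the full calling sequences):

fdiff( B, [1], [17.5, 26.0] );  # numeric partial derivative w.r.t. the first argument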

Is that good enough for your purposes?

acer

One easy way (amongst several) is,

WM:=LinearAlgebra:-RandomMatrix(2,outputoptions=[shape=diagonal]);

                                     [27   0]
                               WM := [      ]
                                     [ 0  57]


Matrix(4,6,[WM,WM],scan=diagonal);

                           [27   0   0   0  0  0]
                           [                    ]
                           [ 0  57   0   0  0  0]
                           [                    ]
                           [ 0   0  27   0  0  0]
                           [                    ]
                           [ 0   0   0  57  0  0]
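
Another way, as a sketch: LinearAlgebra:-DiagonalMatrix accepts a list of Matrix blocks and builds the block-diagonal result directly.

LinearAlgebra:-DiagonalMatrix([WM, WM]);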

acer

@jimmyinhmb There is an ArrayTools:-BlockCopy command, which might help.

As you have figured out, you don't really want to be making individual Maple function-calls to copy each of the columns of the block. Maple function calls are relatively expensive (compared to compiled C). So having BlockCopy call out to a precompiled function (to do the looping over all the columns of the block) is probably better than doing that with interpreted code.

If we dig right down into the internal Library routine LinearAlgebra:-LA_External:-MatrixAdd we can find some define_external calls which set up call_externals to something much like the dscal (NAG f06edf) and daxpy (NAG f06ecf) BLAS functions. In some scenarios you might get a noticeable improvement by using the call_external to the undocumented daxpy rather than MatrixAdd.

showstat((LinearAlgebra::LA_External)::MatrixAdd,22..24);

(LinearAlgebra::LA_External):-MatrixAdd := proc(A, B, c1, c2)
local extfns, rowsA, colsA, rowsB, colsB, localA, localB, localc1, localc2, extlib, HWCall, lengthA, DatatypeA, IndexFnA, StorageA, DatatypeB, IndexFnB, StorageB, prefStorageB, kl, ku;
       ...
  22   extfns := LinearAlgebra:-`MatrixAdd/HardwareTable`[IndexFnA,DatatypeA,StorageA,prefStorageB];
  23   HWCall[1] := ExternalCalling:-DefineExternal(extfns[1],extlib);
  24   HWCall[2] := ExternalCalling:-DefineExternal(extfns[2],extlib);
       ...
end proc

showstat((LinearAlgebra::LA_External)::MatrixAdd,57..61);

(LinearAlgebra::LA_External):-MatrixAdd := proc(A, B, c1, c2)
local extfns, rowsA, colsA, rowsB, colsB, localA, localB, localc1, localc2, extlib, HWCall, lengthA, DatatypeA, IndexFnA, StorageA, DatatypeB, IndexFnB, StorageB, prefStorageB, kl, ku;
       ...
  57     if evalf(localc1)-1. <> 0. then
  58       userinfo(1,'LinearAlgebra',"NAG",extfns[1]);
  59       HWCall[1](lengthA,evalf(localc1),localA,0,1)
         end if;
  60     userinfo(1,'LinearAlgebra',"NAG",extfns[2]);
  61     HWCall[2]('N',rowsA,colsA,localB,localA,evalf(localc2))
       ...
end proc

kernelopts(opaquemodules=false):
LinearAlgebra:-`MatrixAdd/HardwareTable`[[],float[8],rectangular,rectangular];
                     [hw_f06edf, hw_f06ecf]

So I suppose that you could use a pair of temporary Matrices, T1 and T2, with steps like these:
1) BlockCopy [A21 A22] into T1, and [B21 B22] into T2
2) use the daxpy to add T2 inplace to T1 (see the sketch below)
3) BlockCopy T1 back to [A21 A22]
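
Step 2 might then look something like the following sketch, reusing the external function named in the HardwareTable above. (The "linalg" external library name and the inplace-add semantics are my assumptions, read off the showstat listing, so test carefully.)

kernelopts(opaquemodules=false):
extlib := ExternalCalling:-ExternalLibraryName("linalg"):
daxpy := ExternalCalling:-DefineExternal('hw_f06ecf', extlib):
# T1 := T1 + 1.0*T2, acting inplace on the float[8] data of T1
daxpy('N', LinearAlgebra:-RowDimension(T1),
      LinearAlgebra:-ColumnDimension(T1), T2, T1, 1.0):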

I have doubts that ArrayTools:-Alias could work with that whole subblock, when it gets passed externally later on. The external function that performs the later addition likely wouldn't have the necessary knowledge about the offset and the jumps between successive column-ends and column-starts of the subblock. Setting up an Alias for each column of the subblock, as an alternative, might be costly. But whether it is much more expensive than all the block-copying in the scheme above... I don't know. Neither a special offset into the data (to mark the start of the subblock) nor a special stride (to walk down a column) will help with the jump between successive subblock columns, I think.

And then there is the Compiler. You could write a procedure which accepts the two full Matrices, and all scalar details of the dimensions and block boundaries, and then adds inplace into [A21 A22]. And then run Compiler:-Compile against that procedure. No multicore benefit, and likely less cache-tuning, and no Aliasing. But no extra block-copying.
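
A minimal sketch of that compiled approach (the name blockadd is mine, as is the assumption of float[8] Matrices; r1..r2 and c1..c2 delimit the subblock):

blockadd := proc( A::Matrix(datatype=float[8]),
                  B::Matrix(datatype=float[8]),
                  r1::integer, r2::integer,
                  c1::integer, c2::integer )
  local i::integer, j::integer;
  # run down each column in the inner loop, to match the default
  # Fortran_order (column-major) storage
  for j from c1 to c2 do
    for i from r1 to r2 do
      A[i,j] := A[i,j] + B[i,j];
    end do;
  end do;
  NULL;
end proc:

cblockadd := Compiler:-Compile(blockadd):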

Without trying out each of these three approaches I am not sure which would be fastest.

I think that this is a good question. The `ArrayTools:-BlockCopy` command was needed precisely because even with all the fanciness of `ArrayTools:-Alias`, and `ArrayTools:-Copy`, etc, it still wasn't possible to get the best speed. Maybe there should be a BlockAdd as well, which calls daxpy in a loop in a precompiled external function.

acer

Your structure SOL is a little awkward: it is a 20x3 Matrix of lists, each list of length 10. Converting it to either a float[8] 20x10x3 Array (or a listlist) makes working with it a bit easier.

You should be able to get a surface (in Maple 15 or 16, say) with the commands,

Q := Array(1..20,1..10,1..3,(i,j,k)->SOL[i,k][j],datatype=float[8]):

plots:-surfdata(Q, axes = box, labels = ["Mx", "My", "Mz"]);

It should be possible to treat the surface as an inequality constraint, and to answer the question as to which side of it an arbitrary point (in the quadrant) lies on. I'm not sure what's a convenient approach there, off the cuff. Maybe rotating by theta=-135, phi=0, so that the surface is a function (not one->many). I'd have to think about it, and sorry, I wouldn't have time soon. But I'm sure that someone else can get it pretty quickly.

acer

It is a bug in the 2D Math parser.

The first time you paste in that equation as 2D Math input it gets parsed the same as would the 1D (plaintext) Maple notation statement,

  `&+-`(2*sqrt(tan(FAZE)^2)) = `&+-`(sqrt(4*tan(FAZE)^2))

Notice that the left-hand-side is the application of the inert `&+-` operator upon the product of 2 and sqrt(tan(FAZE)^2).

Now if I paste in once more the very same 2D Math input then it gets parsed the same as would the 1D statement,

  `&+-`(2)*sqrt(tan(FAZE)^2) = `&+-`(sqrt(4*tan(FAZE)^2))

The difference is that the second time it gets parsed as sqrt(tan(FAZE)^2) multiplied by the application of the `&+-` operator to (only) 2. That difference in parsing of the very same pasted 2D Math input is in itself a bug. I believe that I see it in both Maple 12.02 and Maple 16.01 on Windows.

Now, the operator `&+-` is, by default, inert. Which means that it doesn't actually do anything. (...unless you load Tolerances, but that's not the case here.) And Maple doesn't know much, if anything, about what it means to you mathematically. In particular, it doesn't deduce that `&+-`(a)*b is equal to `&+-`(a*b). Hence the first time the relation-test using `is` returns true, and the second time it returns false.
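
The difference can be seen outside of the 2D parser, by entering the two parsings directly as 1D notation (the results below are as described above):

is( `&+-`(2*sqrt(tan(FAZE)^2)) = `&+-`(sqrt(4*tan(FAZE)^2)) );

                               true

is( `&+-`(2)*sqrt(tan(FAZE)^2) = `&+-`(sqrt(4*tan(FAZE)^2)) );

                               false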

I'll submit a bug report.

acer

That utility exists, in a slightly more general and useful form, as the command type. Another variant is verify.

Eg,

p:=evalf(Int(g(x),x=0..1)):

type(eval(p,1),numeric);

                             false

p:=evalf(Int(exp(x),x=0..1)):

type(eval(p,1),numeric);

                              true

For some discussion, see this post and parent thread.

acer

I think I see a little of what's gone wrong here. It is the same in Maple 15.01 and 16.01.

The problem is related to setting the plots:-setoptions to gridlines=true. That affects the axis settings in subsequent calls to plots:-display, and it appears to clobber the mode=log axis setting incurred for each logplot. (A bug, I suspect...)

In 15.01 or 16.01, on Windows 7, the first plot below is a successful logplot from plots:-display. The second plot below is like what the Asker has done, and is not a successful logplot from plots:-display. The third plot below is a workaround which gets a successful logplot from plots:-display, but which has explicit use of the gridlines option instead of using plots:-setoptions to control that aspect.

restart:
plots:-display(plots:-logplot([[1, 1], [2, 8], [3, 27]]));

restart:
plots[setoptions](gridlines=true):
plots:-display(plots:-logplot([[1, 1], [2, 8], [3, 27]]));

A workaround, to get the log mode on axis[2] along with gridlines, is,

restart:
plots:-display(plots:-logplot([[1, 1], [2, 8], [3, 27]]),gridlines=true);

acer

The structures you've named are indexable.

You should probably choose according to the kind of task at hand. There is a Portal page which describes some basic features of such structures -- but it is organized by structure rather than by quality.

I'll try and illustrate with some basic qualities. It's tricky, because the whole topic is complex and convoluted; the description always seems to double back to earlier references.

Mutability: an element can be changed, by indexed assignment to that element. The `table` and `rtable` have this quality. The `list` fakes it, and indexed assignment to an entry causes the Maple kernel to simply replace the whole list with the new list, which is quite inefficient.

Uniqueness: each element is unique, and duplicates are removed (or ignored) at creation time. The `set` has this.

Growability: the `table` structure has always had this. And for the past few releases all the `rtable` flavours (Array, Matrix, Vector) do as well, on account of so-called programmer indexing [ref. 1, 2].

Indexing by non-integer values: the `table` structure allows this, where elements can be indexed by name. Eg. a `table` T can have an entry indexed as T[p] for unassigned name `p`.
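
A quick sketch of those last two qualities, ie. growth by so-called programmer (round-bracket) indexing, and table entries indexed by an unassigned name:

V := Vector([1, 2, 3]):
V(5) := 99:          # V now has 5 entries; the new slots are filled with 0
T := table():
T[p] := "some entry":   # indexed by the unassigned name p
V, T[p];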

If you don't need to "grow" your structure (because you know in advance how many elements you need to provide values for), and if you don't need to reassign any of the elements after initial creation of the whole structure, then a `list` is not a bad choice. Its construction syntax is easy and terse. Maple's memory management (both creation and garbage collection) of small lists is fast.

For very large numeric data sets the hardware datatypes (eg. float[8], integer[4]) of the rtable are memory efficient. For LinearAlgebra, computational Statistics, mapping, zipping, etc, these can be much more computationally efficient. They can often be used by precompiled external functions, to which the hardware precision data can be passed in memory (uncopied, by location).
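
For example, a float[8] Vector whose (million, say) entries are stored contiguously in hardware double precision, and which fast external routines can act upon without copying:

V := Vector(10^6, i -> evalf(sin(i)), datatype=float[8]):
LinearAlgebra:-Norm(V, 2):  # dispatched to fast precompiled external code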

There are other qualities, harder to explain quickly. For example, the nature of evaluation of the structures and their contents, especially as arguments or locals of procedures. Some structures, like `table`, are of type last_name_eval.

There are some other, more rare data structures used in Maple. The `Record` can have its elements indexed by non-numeric name, but is of fixed size. The unevaluated procedure call (like a `PLOT` structure) is not indexable, and people often use `op` to pick it apart. There is a new structure called `object`, which brings a few new qualities related to OOP. Most of these are not the kind of thing you are talking about, but we can keep their existence in mind because every now and then one of them might better suit a new task.

Of course it's always possible to choose a suitable structure and then abuse it with undesirable programming practices. One of the classic bad (but common!) techniques is to build up lists by repeated concatenation, such as L:=[new,op(L)] done for each new element, over and over in a loop. That is O(n^2) in memory, while a [seq(...)] construction might be O(n) complexity. And so the former scales poorly as the size `n` of the problem increases. The `if` operator can quite often allow for construction of lists via `seq`, even when element admittance is restricted by some complicated conditional, as sketched below.
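
For example, a sketch of conditional one-pass list-building:

# O(n): admit only elements which satisfy the condition, with no
# repeated list concatenation
L := [seq(`if`(isprime(k), k, NULL), k=1..50)];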

Maple 16 has a new mechanism for data type coercion. This is great news for routines which can take an argument as either a Vector[row], an mx1 Matrix, or a 1D Array, or for similar situations where the structures can be converted in-place without any need to copy the data. But I am worried that allowing this feature to be applied to all routines which accept, say, Vector and `list` interchangeably, just for ease of use, will lead to lots more inefficient usage patterns by Maple users, on account of unnecessary copying.

You have made a good start with Maple, in asking these questions.

acer

See here, for some discussion of how memory addresses of the terms in a product (of symbolic terms) can affect their ordering within that product. In turn, that can lead to differences during numerical evaluation. The same is true for addition (which I keep meaning to describe in a post...).

The 18 decimal digits in your result may be the result of conversion from HFloat, in which case the differences you notice would be only a few ulps (in HFloat).

The differences you note look much smaller than what would be dictated by the default accuracy tolerances used in your pdsolve/numeric calls. If the result is only supposed to be accurate in a much coarser sense, then why is this smaller difference interesting? If the only bits that vary are ones that are not even close to being guaranteed, then what's the matter?

So, in terms of pdsolve/numeric per se, this question does not seem very interesting. It may relate to more general questions about numerics and ordering issues in Maple, but that is a topic better covered with much clearer and smaller examples (arithmetic, say) so that we can see precisely what deviates, and when.

acer

In the fourth DE there is a derivative of f, which looks like it should be f(y) instead of just f.

In your Question, you wrote dsolve({sys,bcs}type=numeric) which is missing a comma just before the word `type`.
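
Presumably what was intended is something like,

dsolve({sys, bcs}, type=numeric);  # note the comma before `type`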

Pasting plaintext code here means using 1D Maple Notation for input instead of 2D Math input. (That's actually separate from whether you use a Worksheet or a Document).

acer

There are three common ways to get around premature evaluation of the arguments.

One way is to use a procedure which returns unevaluated if the arguments do not satisfy some type requirements, eg. all numeric, say.

Sometimes creating such a procedure can be made easier, by using the `unapply` command with its 'numeric' option. You may or may not find that helpful here.

Another way is to use unevaluation quotes, as Georgios suggested. These first two ways involve the expression-form calling sequence, where the variable ranges are each given like name=range(numeric).

A third way is to use the operator-form calling sequence, for which the various ranges of integration (or fsolving, or plotting, or Optimizing) are just range(numeric) instead of being name=range(numeric). Unfortunately this does not seem to work for the Compiled multivariate proc, unless it is wrapped in yet another proc.

f:= proc(kF1, kF2, m, k, omega, q)
      kF1+kF2+m+k+omega+q;
    end proc:

# First, using the original procedure `f`.

evalf(Int('f'(q,r,s,t,u,v),[q=0..1,r=0..2,s=0..3,t=0..4,u=0..5,v=0..6])); # Georgios
                          7560.000000

evalf(Int(f,[0..1,0..2,0..3,0..4,0..5,0..6]));
                          7560.000000

evalf(Int((q,r,s,t,u,v)->f(q,r,s,t,u,v),[0..1,0..2,0..3,0..4,0..5,0..6]));
                          7560.000000

# Now, using Compiled procedure `cf`.

cf:=Compiler:-Compile(f):

evalf(Int('cf'(q,r,s,t,u,v),[q=0..1,r=0..2,s=0..3,t=0..4,u=0..5,v=0..6])); # Georgios
                          7560.000000

CF:=unapply('cf'(q,r,s,t,u,v),[q,r,s,t,u,v],numeric):
evalf(Int(CF(q,r,s,t,u,v),[q=0..1,r=0..2,s=0..3,t=0..4,u=0..5,v=0..6]));
                          7560.000000

evalf(Int(cf,[0..1,0..2,0..3,0..4,0..5,0..6])); # a pity
Error, (in evalf/int) incompatible dimensions

# The following workaround (to the "pity" above) incurs the cost of an extra
# function call (of which there may be many, for multidimensional integration!).
evalf(Int((q,r,s,t,u,v)->cf(q,r,s,t,u,v),[0..1,0..2,0..3,0..4,0..5,0..6]));
                          7560.000000

And what's life without a few kludges to make it interesting? Somehow evalf/Int is poking at the operator-form integrand and deciding that it doesn't accept the same number of parameters as the number of numeric ranges of integration supplied. We can try to force the operator form to work, without having to wrap it in another (potentially costly) procedure call.

newcf := FromInert(subsindets(ToInert(eval(cf)),
                     specfunc(anything,_Inert_PARAMSEQ),
                     t->op(indets(ToInert(eval(f)),
                                  specfunc(anything,_Inert_PARAMSEQ))))):

evalf(Int(newcf,[0..1,0..2,0..3,0..4,0..5,0..6]));
                          7560.000000

acer

Of course you could try to compute the higher derivatives numerically, after calling dsolve/numeric. You could do such numerical approximation in several ways: `fdiff`, finite differences, differentiating interpolant polynomials, etc. But I doubt that such approaches are best.

It might be better to let dsolve/numeric itself handle the whole computation instead. Apart from other considerations, that way it can strive to obtain approximations, for all your desired outputs, which satisfy the same tolerances. It can be done by augmenting your system with new targets which happen to equal the higher derivatives that you want.

A simple example,

restart:

sys,IC := {diff(x(t),t)=x(t)*t^3}, {x(0)=1}:

sol:=dsolve( sys union IC
             union {d2x(t)=diff(x(t),t,t), D(x)(0)=0,
                    d3x(t)=diff(x(t),t,t,t), (D@@2)(x)(0)=0},
             numeric, output=listprocedure ):

X:=eval(x(t),sol):
DX:=eval(diff(x(t),t),sol):
D2X:=eval(d2x(t),sol):
D3X:=eval(d3x(t),sol):

#plot([X,DX,D2X,D3X], 0..2, view=0..10, legend=[X,DX,D2X,D3X]);

# How can we deduce those initial conditions for D(x)(0) and (D@@2)(x)(0) ?

newIC := IC union convert(eval(sys, [op(IC), t=0]),D):
eq2:=map(diff,diff(x(t),t)=x(t)*t^3,t):
newIC := newIC union convert(eval({eq2},[x(0)=1,t=0]),D);
            newIC := {x(0) = 1, D(x)(0) = 0, @@(D, 2)(x)(0) = 0}

acer
