These are answers submitted by acer

The two-argument way of using the `eval` command, along with the `seq` command, can be useful for this kind of manipulation.

For example,

sol:=[fsolve(x^12-x-4,{x})];

            [{x = -1.093000628}, {x = 1.146281083}]

form:=sin(x)-x;

                           sin(x) - x

seq( eval(form,S), S in sol );

                  0.2049899606, -0.2350425820

seq( eval([x,form],S), S in sol );

   [-1.093000628, 0.2049899606], [1.146281083, -0.2350425820]

seq( eval(x,S), S in sol );

                   -1.093000628, 1.146281083

sort(sol, (a,b)->rhs(a[1])>rhs(b[1]) );

            [{x = 1.146281083}, {x = -1.093000628}]

I'll also offer the opinion that using `assign` to bind the name `x` to a particular value from an `fsolve` (or `solve`) solution is generally a more awkward approach.

acer

I don't see why you would create N0 and N1 as operators. But either way, whether as operators or as expressions, it is easy to accomplish using the `map` command and `.` to form the outer product.

restart:

N0:=zeta->Matrix([[-2*(1-zeta)*(zeta-1/2)],[4*zeta*(1-zeta)],[(2*zeta)*(zeta-1/2)]])^%T:
N1:=zeta->N0(zeta)^%T:
map(int,N1(zeta).N0(zeta),zeta);

restart:

N0:=Matrix([[-2*(1-zeta)*(zeta-1/2)],[4*zeta*(1-zeta)],[(2*zeta)*(zeta-1/2)]])^%T:
N1:=N0^%T:
map(int,N1.N0,zeta);

acer

This routine by Joe seems to fix it.

Simply run that in a worksheet, calling it with the full path of your problem file (as a string) as the procedure's argument. It should write out a fixed version, with _fixed added to the filename.

acer

My first choice for this kind of thing would be to store the Matrix as a member of a module, and have all the working procedures also be exports of that module. You could have the routine which constructs the Matrix be one of the module exports and, if you want, store the Matrix as a local of that module (thus still ensuring that it is accessible to all its members).
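
For example, a minimal sketch of that arrangement (the module and procedure names here are made up for illustration):

MyData := module()
   export BuildMatrix, RowSums;
   local M;  # the Matrix lives here, accessible to all module members

   BuildMatrix := proc(n::posint)
      M := Matrix(n, n, (i,j) -> 1.0/(i+j), datatype=float[8]);
      NULL;
   end proc;

   RowSums := proc()
      local i, j;
      Vector([seq(add(M[i,j], j=1..LinearAlgebra:-ColumnDimension(M)),
                  i=1..LinearAlgebra:-RowDimension(M))]);
   end proc;
end module:

MyData:-BuildMatrix(4):
MyData:-RowSums();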

You have a lot of questions, implicit and explicit. The answers all interrelate, so I hope the following addresses most of it.

Note that Maple does not make a copy of an rtable (Matrix, Array, or Vector) when passing it as an argument to a procedure. The same is true for several if not most of Maple's mutable data structures (of which rtables are one kind). It would be a catastrophe for performance if it did. One consequence of not doing a full copy/evaluation is that in-place operations are possible on rtable arguments. In my experience using Maple heavily for floating-point linear algebra since Maple 6, the ability to do in-place operations on hardware datatype=float[8] rtables is one of the single most important factors in getting high-performance behaviour.
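
A quick illustration (the procedure name is made up): the procedure receives the caller's Matrix itself, not a copy, so its in-place change is visible afterwards.

tweak := proc(M::Matrix)
   M[1,1] := 99.0;   # modifies the caller's Matrix in-place
   NULL;
end proc:

A := Matrix([[1.0, 2.0], [3.0, 4.0]], datatype=float[8]):
tweak(A):
A[1,1];

                                  99.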

Maple does not yet have a data structure which represents an rtable whose data is stored (permanently or semi-permanently) in the GPU's memory. It is still the case, when CUDA is enabled in Maple, that main-memory float[8] hardware double precision rtable data is passed to the GPU's memory with each call to a (the) CUDA function. This involves a very high data transfer cost. Each time you call it, Maple has to push the data of two float[8] Matrices up to the card, and then pull the data of the result float[8] Matrix back down from the card. This is probably so expensive that only an O(n^3) operation like matrix-matrix multiplication would be worthwhile using this behavioural model. It might even be hitting a law of diminishing returns. (See this very recent review, by someone who is usually pretty aware of how to wring the most out of big-M math software.) Also, using `.` or LinearAlgebra:-MatrixMatrixMultiply to invoke the CUDA version of the underlying matrix-matrix function will produce a new container for the result. Doing so repeatedly will mean that the old results get memory-managed and garbage-collected by Maple, which is additional overhead.
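
A rough way to see that transfer overhead for yourself, assuming the CUDA package interface (its Enable command) as I recall it from Maple 14, is to time the same product with acceleration off and then on; for modest sizes the host-to-card transfers can dominate.

N := 2000:
A := LinearAlgebra:-RandomMatrix(N, generator=0.0..1.0,
                                 outputoptions=[datatype=float[8]]):
B := LinearAlgebra:-RandomMatrix(N, generator=0.0..1.0,
                                 outputoptions=[datatype=float[8]]):

CUDA:-Enable(false):
st := time[real](): A.B: time[real]() - st;  # host dgemm

CUDA:-Enable(true):
st := time[real](): A.B: time[real]() - st;  # GPU dgemm, including both transfers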

You do not mention your operating system. It matters. It sounds as if you are going to be using the BLAS function dgemm, repeatedly, via an entry point from Maple. A datatype=float[8] rtable is a Maple data structure whose data portion is a contiguous block of main memory. It is a rectangular table (hence, rtable). A Matrix is one flavour of rtable. Maple simply passes the address(es) of the data portions of the relevant float[8] rtables as the relevant arguments of a call to a dgemm function in some shared library. If using CUDA, it will be the dgemm in the CUDA runtime libraries. On MS-Windows, it will be the dgemm in the MKL runtime bundled with Maple. On Linux, it will be the dgemm in a shared ATLAS library. And on OSX it will be a generic dgemm. On Windows, the MKL will automatically detect how many cores and cpus are available, and use all of them by default. That scales pretty well. On Linux, depending on cpu vendor, Maple will only use up to 2 cores or so, but you can build an ATLAS tuned for a host with a greater number of cores and then drop it in as a replacement.

If Maple is thus already using all available cores, then there is no need to try to break up the matrix-matrix multiplication yourself (to do Strassen, say) using Maple's Threads/Task facilities. In any event, note that the external calls to the shared libraries (with the dgemm) are not being made with the THREAD_SAFE option, which means that Maple only allows one concurrent instance of dl-opening those external libs to run. So only one Maple thread would ever invoke the external dgemm at a given time, anyway.

The external BLAS routines do not put a mutex lock on the data of the rtables, and those functions are themselves threaded.

Note that Maple's time() routine reports something like the sum of all cycles used by all cores. Hence it may spuriously appear that a float[8] Matrix operation doesn't get sped up with the number of cores, even when it does! An alternative is to measure with the variant time[real]() and so see how long it actually takes in wall-clock time. Of course, doing this effectively will entail ensuring that your machine load is otherwise as low as possible.
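
For example (the size here is just illustrative):

A := LinearAlgebra:-RandomMatrix(1500, generator=0.0..1.0,
                                 outputoptions=[datatype=float[8]]):
st := time(): rt := time[real]():
A . A:
time() - st;        # cpu-time, summed across all cores used
time[real]() - rt;  # elapsed wall-clock time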

Ok, where am I? All these topics double back on each other.

Right, let's talk about dgemm. That is the Basic Linear Algebra Subprogram which does double precision general matrix-matrix multiplication. It works like this: given Matrices A, B, C, and scalars alpha and beta, then it performs this,

            alpha*A*B + beta*C -> C

There are several things to admire about that. The first is that the Matrix C, which will contain the result, is acted upon in-place. You could re-use the same C (possibly another module local) as the result container for multiple calls (and incur no memory management, as it wouldn't be garbage-collected after each use). If beta=0 then the previous contents of C do not get re-added, but of course you'll still want to fill C with zeroes before calling, if you'd used it before. You do not have to scale A or B before calling dgemm, as you can just use alpha to effect that. Also, there are additional arguments to dgemm which denote whether the Matrices are to be considered as transposed! So you would never waste time transposing the Matrices beforehand. (In fact there is no transposition function in all of BLAS or LAPACK, because nobody should ever waste time doing that; accommodating it needs only changes to the indexing used inside the functions.)
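
Just to pin down that contract, here is the same operation written with ordinary top-level commands. This only mimics what one dgemm call computes; it is not the low-level entry point itself.

alpha, beta := 2.0, 1.0:
A := Matrix([[1., 2.], [3., 4.]], datatype=float[8]):
B := Matrix([[0., 1.], [1., 0.]], datatype=float[8]):
C := Matrix([[5., 5.], [5., 5.]], datatype=float[8]):

C := alpha*(A . B) + beta*C;  # what dgemm would leave stored in C

                                   [ 9.     7.]
                              C := [          ]
                                   [13.    11.]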

Yes, there is a way to access dgemm from Maple so as to make use of such optional subtlety. It's not directly available, but I can post an example if anyone is interested.

In Maple, memory management of large rtables -- even ones relatively lean like the float[8] datatype -- is expensive. In contrast, a routine like ArrayTools:-Fill can zero out a large float[8] rtable in a very short time. This all contributes to why in-place operations can often win the day for numerical linear algebra in Maple.

My general advice would include things like these points:

- use only datatype=float[8] (or complex[8]) rtables (or Matrices, Vectors, Arrays)

- try very hard to have the code act inplace on a reusable container

- if transposing often, either do so with the Transpose(...,inplace) command, or use the low-level dgemm entry point to get rid of all such explicit transpositions

- try to make all scalar operations in the code run in evalhf mode, or be Compilable. (You cannot do external calls in either mode, so make the procedures which do the matrix-matrix multiply and linear algebra bits separate routines from any other hopefully-evalhf'able computations.)

- never make a computational routine create a Matrix. Instead, if possible have it accept the Matrix (into which results will be put) as an additional argument. You can create a separate parent procedure which both creates any needed rtables and then invokes the computational procedures (see the sketch after this list). If you always write your routines this way, then beautiful in-place style savings may fall from the sky

- don't focus on using CUDA, for now

- forget about using Maple's Thread/Task facilities for the matrix-matrix multiplication here, since all your cores are likely already being used in each individual dgemm call. If you want to use Maple's threading for other parts of the whole job, then split that off from the matrix-matrix multiplication (since dlopens of the relevant external libs are blocking -- only one at a time).

- if you want to Thread other parts of the whole job, do so using evalhf'able procedures and calls. For embarrassingly parallelizable jobs I've been able to get an optimal linear speedup with the number of cores. Maple does not put a lock on all the Matrix entries at once -- the mutex seems to give finer-grained access, and that works well. Of course, the fortunate algorithm will involve no mutex lock at all, but we are not always so lucky in our work. But I have had consistent experience showing that single (serial use of) Compiled procedures acting in-place and optimally on float[8] rtables perform about eight times as fast as Task/Threaded parallelized procedures also acting in-place and optimally under evalhf. So the Compiler can often beat the Task model of threading in Maple, up to about 4-8 cores.
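
Here is the container-passing sketch mentioned in the list above (the procedure names are made up; the squaring step merely stands in for your real computation):

compute := proc(X::Matrix, result::Matrix)
   local i, j;
   # fill `result` in-place; create nothing in here
   for i to LinearAlgebra:-RowDimension(result) do
      for j to LinearAlgebra:-ColumnDimension(result) do
         result[i,j] := X[i,j]^2;
      end do;
   end do;
   NULL;
end proc:

driver := proc(X::Matrix)
   local R;
   # the parent routine owns and creates the container(s)
   R := Matrix(LinearAlgebra:-RowDimension(X),
               LinearAlgebra:-ColumnDimension(X), datatype=float[8]);
   compute(X, R);
   R;
end proc: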

acer

Could you try making your module a named module? (There seemed to be a hint, in the debugger, at the juncture where it went wrong.)

restart:
with(CodeTools):

M := module m() option package; export f; f := proc() end proc; end module;

         module m () export f; option package; end module

EncodeName(M:-f);

_Inert_ASSIGNEDLOCALNAME("f", "PROC", 0, _Inert_ATTRIBUTE(

  _Inert_EXPSEQ(_Inert_EQUATION(_Inert_NAME("modulename"), 

  _Inert_ASSIGNEDNAME("m", "MODULE", _Inert_ATTRIBUTE(_Inert_NAME(

  "protected", _Inert_ATTRIBUTE(_Inert_NAME("protected")))))), 

  _Inert_NAME("protected", 

  _Inert_ATTRIBUTE(_Inert_NAME("protected"))))))

DecodeName(%);

                              m:-f

I'll have to test, but I wonder whether it would also be OK if the module were being read in from a .mla archive.

acer

Curiously, something very similar was asked just a day or so ago.

> restart:

> b:=Vector([lambda-2, lambda+23]):

> assume(lambda, real);

> subs(lambda=1, b);

                                [lambda~ - 2 ]
                                [            ]
                                [lambda~ + 23]

> eval(b, lambda=1);   
                                [lambda~ - 2 ]
                                [            ]
                                [lambda~ + 23]

> subs(lambda=1, rtable_eval(b));

                                     [-1]
                                     [  ]
                                     [24]

> eval(rtable_eval(b), lambda=1);
                                     [-1]
                                     [  ]
                                     [24]

> rtable_eval(b,inplace): # only need this once

> subs(lambda=1, b);

                                     [-1]
                                     [  ]
                                     [24]

> eval(b, lambda=1);             
                                     [-1]
                                     [  ]
                                     [24]

acer

If you had a graph for the Delaunay triangulation, would that help? (I.e., here.)

acer

It can currently be done for the current worksheet in a way similar to how I did those resized plots. And it can already be done, fully programmatically, if the Component is inserted into a new (also programmatically opened) worksheet.

The method consists of emitting the XML, and then "launching" it in some way.

I hope that in future there will be a 1-step way to insert into the current worksheet. With such functionality, there are all sorts of other exciting possibilities.

I have some code for emitting the XML of GUI (nested, layout) tables and sliders. Maybe I will find time to share it. I am in crunch mode at work, however.

acer

As a general rule, never use a name like `N` without any subscript if you are also using it as an indexed name (which gets printed with a subscript).

So here we can use the names N0 and N1 instead of the indexed N[0] and N[1] (which display with subscripts).

restart:

N0:=s->1-s;

                           s -> 1 - s

N1:=s->s;

                             s -> s

N:=s->[N0(s), N1(s)];

                      s -> [N0(s), N1(s)]

N(s);
                           [1 - s, s]

N(4);
                            [-3, 4]

 

If you don't like this workaround, and if you want to get confused as a new user, then search for "atomic identifiers" as another (subscripted) solution.

By the way, the square brackets are giving you lists, which are not the same as Matrices in Maple.
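
For example, you can check the distinction with `type` (building the Matrix version with the angle-bracket constructor):

type([1-s, s], 'Matrix'), type(<<1-s | s>>, 'Matrix');

                              false, true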

acer

How's this?

It's no better than the quality of the dictionary being used. And there are some words missing from the builtin dictionary. One can, however, read in one's own set of words and assign it to the variable `words`, which the routine `findall` below accesses as a global variable.
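
If you do want to supply your own word list instead of the builtin dictionary used below, one plausible way (the filename is hypothetical; one word per line) is:

words := remove(w -> evalb(w = ""),
                StringTools:-Split(
                   FileTools:-Text:-ReadFile("mywords.txt"), "\n")):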

restart:

with(StringTools):
with(PatternDictionary):

bid:= Create(`builtin`):
words:= [seq](Get(bid,i),i=1..Size(bid)-1):

unwith(PatternDictionary):
unwith(StringTools):

findall:=proc(input::string,char::string,lo::posint,hi::posint)
global words;
local i, x, all, pos, final;
uses StringTools;
   if length(char)<>1 then
      error "expecting single character for 2nd argument";
   end if;
   if hi>length(input) then
      error "expecting 4th argument to be posint at most the length of 1st argument";
   end if;
   all:=Explode(input);
   # remove one occurrence of `char` from the pool of letters, since it
   # gets re-appended below before each Anagrams call
   pos:=Search(char,input);
   if pos>0 then
      all := [all[1..pos-1][],all[pos+1..-1][]];
   end if;
   for i from lo to hi do
      # all words of length i which use `char` plus i-1 of the remaining letters
      final[i]:=seq(Anagrams(Implode([x[],char]),words),
                    x in {combinat:-choose(all,i-1)[]});
   end do;
   eval(final);
end proc:

findall("speak", "k", 2, 4);

              table([2 = (), 3 = "ask", 4 = ("peak", "sake")])

findall("collapsing", "p", 4, 9);

              table([4 = ("pica", "clap", "capo", "pang", "gasp", "pail", "pain", "pall",
                          "plan", "opal", "slap", "span", "snap", "soap", "clip", "ping",
                          "pong", "pill", "lisp", "slip", "pion", "snip", "spin", "ipso",
                          "poll", "plop", "slop"),
                     5 = ("panic", "clasp", "scalp", "plain", "piano", "spill", "spoil",
                          "polis"),
                     6 = ("gallop", "spinal", "poplin"),
                     7 = ("scallop", "sapling"),
                     8 = (),
                     9 = ()])

convert(%,list);

              ["pica", "clap", "capo", "pang", "gasp", "pail", "pain", "pall",

                "plan", "opal", "slap", "span", "snap", "soap", "clip", "ping",

                "pong", "pill", "lisp", "slip", "pion", "snip", "spin", "ipso",

                "poll", "plop", "slop", "panic", "clasp", "scalp", "plain",

                "piano", "spill", "spoil", "polis", "gallop", "spinal",

                "poplin", "scallop", "sapling"]

findall("iocmrelae", "r", 4, 9):

convert(%,list);

              ["race", "care", "acre", "cram", "real", "earl", "ream", "mare",

                "lair", "rail", "liar", "oral", "roam", "rice", "core", "leer",

                "reel", "mere", "mire", "rime", "lore", "role", "more", "roil",

                "clear", "cream", "carol", "coral", "macro", "realm", "moral",

                "molar", "relic", "crime", "micro", "moire", "morel", "cereal",

                "oracle", "morale", "miracle", "calorie"]

acer

Hopefully I'm now understanding your explanation of the problem.

If you are trying to find N and non-zero X such that K.X=0 holds approximately, then the Matrix K should have an eigenvalue close to zero. So you could try to find an N such that the smallest absolute value among K's eigenvalues is close to zero.

restart:
randomize():
with(LinearAlgebra):
Digits:=20:
interface(warnlevel=0):

# eps is a tolerance, how close to zero for an accepted eigenvalue
eps := 1e-10:

m := 3: # increase, while adding other "known" Matrices
A:=LinearAlgebra:-RandomMatrix(m,generator=-1.0..1.0):
B:=LinearAlgebra:-RandomMatrix(m,generator=-1.0..1.0):
C:=LinearAlgebra:-RandomMatrix(m,generator=-1.0..1.0):

# routine `z` returns the smallest absolute eigenvalue of K(N); Minimize below seeks N making it near zero
z:=N->min(abs(LinearAlgebra:-Eigenvalues(A+B*N+C*N^2))):

K:=N->A+B*N+C*N^2:

sol:=Optimization:-Minimize(z,-600..700):

success:=false:
if sol[1] > eps then
   error "no value of N found giving a near-zero eigenvalue";
else
   success:=true;
   Nsol := sol[2][1];
   Ksol := K(Nsol);
   (evals,Xtry) :=LinearAlgebra:-Eigenvectors(Ksol);
   Xsol:=Matrix([seq(`if`(eps>=abs(evals[i]),Xtry[1..-1,i],NULL),i=1..m)]);
end if:

if success then
  Nsol, Xsol;
  K(Nsol);
  K(Nsol) . Xsol;
end if;

                                   [0.27797677482440519410 + 0. I ]
                                   [                              ]
          -0.48910186887266487015, [-0.69924889690086809645 + 0. I]
                                   [                              ]
                                   [-0.65861968755962728970 + 0. I]

 [ 0.06944242311757242511   0.95596328018536999674  -0.98562630390977603059]
 [                                                                         ]
 [-0.07922533330327931080  -0.32370935082657659106   0.31024065587712641287]
 [                                                                         ]
 [ -1.0251409778044306607  -0.44163425638354885242  0.036207365656109929698]

                         [              -12       ]
                         [-1.53402370 10    + 0. I]
                         [                        ]
                         [             -12        ]
                         [3.85882681 10    + 0. I ]
                         [                        ]
                         [              -12       ]
                         [3.634613295 10    + 0. I]

If you want extra qualifications on N (e.g. that N>0, say) then change the range option in the call to `Minimize`, such as 0..infinity or 0..100, etc.

We don't know what your known Matrices are. The above will not succeed for every random A, B, and C.

acer

If I understand your question then you could put `option remember` on the routines which compute the numerical solution. See this comment (and those that follow it).
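
For example, a minimal sketch (the procedure name and the integrand are just stand-ins for your actual numeric solving step):

numsol := proc(t::numeric)
   option remember;
   # the expensive numeric step runs once per distinct t, then is reused
   evalf(Int(exp(-x^2)*sin(x), x = 0 .. t));
end proc:

numsol(1.7);  # computed
numsol(1.7);  # fetched from the remember table, no recomputation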

acer

It is harder to find because it is a Comment. And a Comment is not an Answer, Question, or Post. And there is no easy way to see a list of your Comments from your Profile page.

But here is a link to all your postings.

acer

It is easy enough to rotate around another vertical line (what the Asker calls a z-axis) by shifting the origin in the x-y plane through a simple change of variables in the original expression.

After that, the plotted axes (using the usual axes system, thus allowing for tickmarks, placement, etc) have their origin at the desired new point.

If the new origin is to be the minimum of a paraboloid, then the `Minimize` function can find that directly. Setting up derivatives and solving is not necessary.

Tickmarks which show the shifted origin are easy to construct.

restart:

P := 10.715691225750079453-.52581617253900429073*x+.50951826882694897598*x^2
     -32.921695149953028411*y+45.846724612790471380*y^2-5.9292825745917680635*x*y:

sol := Optimization:-Minimize(P);

          sol := [-0.7380131370346, [x = 4.17647809408785609, y = 0.629109433919071370]]

xsol,ysol := op([2,1,2],sol), op([2,2,2],sol);

                   xsol, ysol := 4.17647809408785609, 0.629109433919071370

xwidth,ywidth := 20, 5:
maxwidth := max(xwidth,ywidth):

newP := subs(x=x-xsol,y=y-ysol,P): # shift origin, in formula P

plotP := plot3d(newP,x=-xwidth..xwidth,y=-ywidth..ywidth):

numframes:=25:

plots:-animate(plottools:-rotate
               , [plotP,A*2*Pi/numframes,[[0,0,0],[0,0,1]]]
               , A=1..numframes
               , axes=normal, labels=[x,y,z]
               , tickmarks=[[seq(i-xsol=i,i=trunc(-maxwidth-xsol)
                                                  ..trunc(maxwidth+xsol),
                                 maxwidth/5)],
                            [seq(i-ysol=i,i=trunc(-maxwidth-ysol)
                                                  ..trunc(maxwidth+ysol),
                                 maxwidth/5)],
                            default]);

This also runs OK in Maple 13.02, Standard GUI.

acer

You show lists, but you mention Matrices.

For lists, you can concatenate using Kitonum's way.

For Matrices, it could be entered like so,

A := <<1|2>>;

                                 A := [1  2]

B := <<3|4>>;

                                 B := [3  4]

C := <A|B>;

                              C := [1  2  3  4]

acer
