acer

32313 Reputation

29 Badges

19 years, 314 days
Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are answers submitted by acer

Thanks Alec! That's better for Primes, but worse for me, as the inability-to-post problem looks to be at my end.

A related question: is there anyone for whom the 2DMath displayed in Alec's response looks good? For me, it doesn't even look as good as the 2DMath in the Online Help using the exact same browser and session. It's small, and more jagged than even a lack of antialiasing usually produces.

acer

Give it the "bug" tag, Alec.

It's an outright bug that assumptions get placed on x merely by calling `series`. It is present in 13.01 (and probably 13.02 too), which makes it present in the version most people are using at this date.

"Du musst Amboss oder Hammer sein." - Goethe

acer

Does LinearAlgebra:-SylvesterSolve do this?

You started with this,

> Z.C.X + C = D;

(Z . C . X) + C = D

So, now if you right-multiply by the inverse of X, you get,

> Z.C.X.X^(-1) + C.X^(-1) = D.X^(-1);

               (Z . C) + (C . X^(-1)) = D . X^(-1)

Now, if you read the help-page for LinearAlgebra:-SylvesterSolve, it says that it solves Matrix equations of the form A . X + isgn * X . B = scale*C.

And you seem to have that form, except that your C is their X, your Z is their A, your X^(-1) is their B, and your D.X^(-1) is their C. If that's correct, then how about calling,

LinearAlgebra:-SylvesterSolve(Z, X^(-1), D.X^(-1) );

You'd want to check that I didn't make a mistake in all that. If it doesn't work, maybe you could post or upload your actual Matrices.
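If it helps to sanity-check that reduction numerically, here is a rough sketch in Python/NumPy (not Maple; the small matrices and the Kronecker-product route are my own illustration, not what SylvesterSolve does internally) of solving A . X + X . B = C:

```python
import numpy as np

def solve_sylvester_kron(A, B, Q):
    # Solve A @ X + X @ B = Q via the Kronecker identity
    # (A kron I + I kron B^T) vec(X) = vec(Q), using row-major vec.
    m, n = Q.shape
    K = np.kron(A, np.eye(n)) + np.kron(np.eye(m), B.T)
    return np.linalg.solve(K, Q.reshape(-1)).reshape(m, n)

# Hypothetical small matrices, chosen so the solution is unique
# (no eigenvalue of A is the negative of an eigenvalue of B).
A = np.array([[2.0, 0.0], [1.0, 3.0]])
B = np.array([[1.0, 1.0], [0.0, 2.0]])
X_true = np.array([[1.0, 2.0], [3.0, 4.0]])
Q = A @ X_true + X_true @ B
X = solve_sylvester_kron(A, B, Q)   # recovers X_true
```

The Kronecker approach is only practical for small systems; a dedicated Sylvester solver (like the one in LinearAlgebra) avoids forming the mn-by-mn matrix.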

acer

Who thinks that the original student's numeric task cannot be implemented in Maple within a factor of four or better of the Excel timing? How about a factor of 2?

Nick, please upload the Maple program if you are so permitted. If it cannot be achieved, then at least people will know more about what needs improving.

acer

Good question. When you pass an rtable (i.e. any of Matrix, Array, or Vector) as an argument in a procedure call, it gets passed similarly to what you called "by reference".

The procedure gets the very same rtable object that was passed, not a copy. Hence changes made to the entries of that rtable parameter, while inside the procedure, also change the entries of the object at the higher level.
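The same distinction exists in other languages; here is a quick Python analogy (not Maple semantics, just the idea) of mutating entries versus rebinding the parameter:

```python
def scale_inplace(v, c):
    # Mutates the entries of the very object the caller passed; no copy.
    for i in range(len(v)):
        v[i] *= c

def rebind(v, c):
    # Rebinds the local name only; the caller's object is untouched.
    v = [c * x for x in v]

data = [1.0, 2.0, 3.0]
scale_inplace(data, 2.0)   # data is now [2.0, 4.0, 6.0]
rebind(data, 10.0)         # data is unchanged
```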

acer

@Nick Lists are ordered, in the sense that Maple retains their entries in the original order in which you input them or create the object. This is in contrast with sets.

Also, the syntax for the Vector constructor is with round-brackets or angle-brackets, like so,

Vector([1,2,3]);

<1,2,3>;

You have several choices about how you create and use your Array of Vectors. If you know the number of Vectors in advance, then you could simply use M:=Matrix(3,120), say, and code it so that references to the jth Vector look like M[1..3,j] or M[i,j] etc. But if you do that, then any time you have to pull out a column (i.e. a 3-Vector) you might be creating a new object, which has some performance cost if done many times. Of course, you might be able to write your code to always reference the M[i,j] entries, and so avoid "pulling out" the Vectors; it depends on whether you have to pass them separately to another routine. (I don't want to barrage you with too much info, but there is also a round-bracket mechanism to help deal with grabbing subselections of Matrices.)

You could also do exactly what you wrote. I.e. first create A:=Array(1..120) and then assign to A[i] any Vector(3) that you created. This might avoid repeated or unnecessary creation/collection of objects. In this scenario, you would access the ith component of the jth Vector as A[j][i].
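For what it's worth, the two storage choices can be sketched in Python (just an analogy; the names are made up):

```python
ncols = 120  # number of 3-vectors

# (a) one 2D structure: entry i of vector j is M[i][j]
M = [[0.0] * ncols for _ in range(3)]

# (b) a 1D container of 3-vectors: entry i of vector j is A[j][i]
A = [[0.0, 0.0, 0.0] for _ in range(ncols)]

M[0][4] = 1.5    # entry 1 of the 5th vector, layout (a)
A[4][0] = 1.5    # the same entry, layout (b)

# "Pulling out" vector 5 from layout (a) builds a fresh object each time,
# whereas layout (b) hands back the stored vector itself.
col5 = [M[i][4] for i in range(3)]
vec5 = A[4]
```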

You might also want to look at the Portal page on such objects, for clarification of their differences. Note especially that Maple lists are not really mutable structures as 1D Arrays are.

Finally, if you do use a 1D structure that contains Vector objects instead of a 2D Matrix, then you have a choice between plain ol' Vector and the version of Vector available in the VectorCalculus package. If you load that package using `with`, then you automatically get its version when you call Vector(). If you use the former, you can still do Curl and all that stuff with quite simple constructions of commands. If you use the latter, then you get such vector calculus stuff for free as commands in the VectorCalculus package. But note that not everyone finds that the coordinate-system machinery built into VectorCalculus Vectors fits their expectations.

Hope that helps some.

acer

You're not really being clear about the "repeats" and the `a` stuff, since you use `a` both inside and outside your proc. I see two ways in which you could get repeats in the answer: the first due to duplicates in the data list `a`, and the second from duplicated random values for the indices b||i. If you convert your data list `a` to a Maple set then it would be uniquified. So that part is easy. The other part, about not generating repeated values for your b||i, is handled by the alternative methodology in the code below.

There are a lot of really slow and inefficient ways to do this, most of which would build up a set or a list entry by entry by the [..op(..)] or {..,op(..)} archetypical methods.

I think that one could re-use part of some earlier efficient code intended for shuffling Array rows. The idea there was to first get a randperm of the m row numbers (i.e. a random permutation of 1..m). But one might compute just n of those instead, where 1<=n<=m.

So, basically, you first get a randcomb of n of the numbers 1..m. And then you randperm that group of n numbers. Then you simply select entries from list `a` or from convert(a,set) using that last result.

Then you notice that you didn't have to actually form a new list or Vector of the randperm of n values, you can just take the last -n..-1 entries off the first result's shuffle.

And we avoid using randcomb and randperm, for fun. And we make it Compilable, so that we can compute 100000 of them in a second or two.
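Stripped of the Compiler details, the partial tail-end shuffle can be sketched in Python (illustration only; the function name is made up):

```python
import random

def rand_sample_ordered(a, n, rng=None):
    # Partial Fisher-Yates from the tail: after the loop, the last n
    # slots of idx hold n distinct indices in uniformly random order,
    # which is exactly "randcomb then randperm" done in one pass.
    rng = rng or random.Random()
    m = len(a)
    idx = list(range(m))
    for i in range(m - 1, m - n - 1, -1):
        j = rng.randint(0, i)            # uniform draw from 0..i
        idx[i], idx[j] = idx[j], idx[i]  # swap, like V[i],V[cand] below
    return [a[idx[i]] for i in range(m - 1, m - n - 1, -1)]
```

The Maple code that follows does the same thing, but Compilable and fed from a pre-generated block of random integers.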

> randV:=proc(m::integer,n::integer,
> V::Vector(datatype=integer[4]),
> ransam::Vector(datatype=integer[4]))
> local i::integer, j::integer, count::integer,
> need::integer, cand::integer;
> count:=ransam[1]; # ransam[1] serves as a cursor into the random source
> for i from 1 to m do
> V[i]:=i;
> end do;
> for i from m to (m-n+1) by -1 do # shuffle only the last n positions
> need:=1;
> while need=1 do # rejection: draw until cand lies in 1..i
> count:=count+1;
> cand:=ransam[1+count];
> if cand<=i then
> need:=0;
> end if;
> end do;
> V[i],V[cand]:=V[cand],V[i];
> end do;
> ransam[1]:=count; # save the cursor for the next call
> NULL;
> end proc:
>
> crandV:=Compiler:-Compile(randV): # or use slow randV
>
> a:=[3,6,7,8,3,6,5,7,8,9,1,8,2,8,8,3,4]:
> m:=nops(a):
> n:=3:
>
> maxnum:=100000:
> rsrc:=LinearAlgebra:-RandomVector(maxnum*m*m,generator=1..m,
> outputoptions=[datatype=integer[4]]):
memory used=117.5MB, alloc=117.1MB, time=2.02
> gcount:=0:
>
> B:=Vector[row](m,datatype=integer[4]):
> crandV(m,n,B,rsrc);
> seq(a[B[m-i]],i=0..n-1);
8, 9, 3

>
> crandV(m,n,B,rsrc);
> seq(a[B[m-i]],i=0..n-1);
2, 3, 5

>
> crandV(m,n,B,rsrc);
> seq(a[B[m-i]],i=0..n-1);
8, 7, 8

>
> a_set:=convert(a,set):
> m:=nops(a_set):
>
> B:=Vector[row](m,datatype=integer[4]):
> crandV(m,n,B,rsrc);
> seq(a_set[B[m-i]],i=0..n-1);
1, 4, 6

>
> crandV(m,n,B,rsrc);
> seq(a_set[B[m-i]],i=0..n-1);
1, 7, 3

>
> crandV(m,n,B,rsrc);
> seq(a_set[B[m-i]],i=0..n-1);
9, 5, 8

>
# now time it
> m:=10: n:=6:
> st:=time():
> B:=Vector[row](m,datatype=integer[4]):
> gcount:=0:
> for k from 1 to maxnum do
> crandV(m,n,B,rsrc);
> end do:
> time()-st; # time to do maxnum such picks
0.495

By the way, it is quite inefficient to do b||i:=rand(1..m)() inside a loop. The rand(1..m) generator should first be assigned to something like f, outside the loop, and then f() called repeatedly inside the loop. And an indexed name like b[i] should be used instead of creating all those concatenated names b||i.
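In Python terms (just an analogy), that advice amounts to constructing the generator once and indexing a container, rather than rebuilding the generator and gluing names together on every pass:

```python
import random

m = 17
rng = random.Random(2024)    # build the generator once, outside the loop
b = {}                       # indexed container, instead of names like b||i
for i in range(10):
    b[i] = rng.randint(1, m) # cheap call per iteration; no re-construction
```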

acer

I guess that this is finally possible, in Maple 14, with the new interface(helpbrowser) setting.

acer

The most efficient way is to do it inplace on a re-used result, since you only need one shuffle at a time. That means exploiting the mutability of Matrices. I have to go out for the day; I will show how later, if nobody else does before then. You can also use simple loops for this, if Compiling.

acer

Note that it is not possible to get "truly" random fixed-precision floating-point samples of a continuous distribution on a digital computer because the number of possible floats is finite. Also, in the absence of a connection to an external physical mechanism, the code tends to be wholly deterministic.

So what you get on a digital computer are pseudorandom numbers.

Now, Hammersley (I think it was) once wrote in the introduction to a book on Monte Carlo that there are infinitely many statistical tests of a random sample, and any given finite sample must necessarily fail an infinite subset of those tests (no matter how many it passes). So keep in mind that, in this point of view, there is only one test that one's sampling scheme really must pass: it should provide an unbiased estimator of the quantity which you want to compute.

At first glance, the above sounds facile. But it is actually quite important to remind ourselves that "truly" random numbers are not necessary. Sometimes a specialized Random Number Generator can provide better samples for a particular task. For example, for the purpose of numerical quadrature it has been shown that certain families of linear congruential generators, which produce samples poor for general purposes, can make the error term go from 1/sqrt(N) to more like 1/N.

And that's something to keep in mind. Even for "1D" Monte Carlo the error term often goes down only like 1/sqrt(N). That means one might require 100 times as many elements in the sample just to get 10 times the accuracy (i.e. to pick up just one more decimal digit). An error that goes like 1/N needs only ten times as many points to gain one more decimal digit, and so is more desirable. Of course things can get much worse for multidimensional problems, where the dimension d enters as a power.
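A tiny Python sketch of that 1/sqrt(N) behaviour (illustrative only, estimating the known integral of x^2 over [0,1]):

```python
import random

def mc_mean(f, n, rng):
    # Plain Monte Carlo estimate of the integral of f over [0,1]:
    # the sample mean of f at n uniform points.  The standard error
    # of this estimate shrinks like 1/sqrt(n).
    return sum(f(rng.random()) for _ in range(n)) / n

rng = random.Random(12345)
est = mc_mean(lambda x: x * x, 10_000, rng)  # true value is 1/3
```

At N = 10000 the estimate is typically within a few thousandths of 1/3; getting one more digit would take roughly 100 times as many points.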

So, how about telling us what you are trying to do? Are you trying to produce and test random deviates/variates? (If you are approximating some integral of a monotonic function, then there are tricks to reduce the variance in the Monte Carlo result and gain accuracy. That's just an example. Whether there are tricks to aid your task depends on what it is.)

While I am rambling: if one wants to produce random deviates/variates by the so-called inversion method, using the CDF formula, then the arithmetic can often be put entirely into Matrix operations. That is to say, one might produce all the variates at once, instead of sequentially. This can be super-efficient because it can use hardware float[8] Matrices and parallelized external BLAS. I remember doing this to get Normal samples very quickly, long before (Maple 6, 7) the "new" Statistics package arrived. I just felt like mentioning it...
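As a sketch of the inversion method (in Python rather than Maple, and for the exponential distribution, where the inverse CDF has a closed form):

```python
import math
import random

def exp_variates(lam, n, rng):
    # Inversion method: if U ~ Uniform(0,1) then -ln(1-U)/lam has
    # CDF 1 - exp(-lam*x), i.e. it is Exponential(lam).  The whole
    # batch is produced at once, mirroring the all-at-once Matrix idea.
    return [-math.log(1.0 - rng.random()) / lam for _ in range(n)]

rng = random.Random(99)
xs = exp_variates(2.0, 20_000, rng)  # sample mean should be near 1/2
```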

acer

Did you intend,

X||E1 := X||E1+E1:

instead of,

X||E1 := X||E1+1:

(There are also some efficiency concerns in how it is written. However, it lacks code-comments, so its exact purpose must be surmised. Do you really need to concatenate names each time in a loop, and can you think of no other way? I apologize if that sounds harsh -- I mean it as helpful only.)

There's a lot to say about Maple and using random numbers for Monte Carlo, but I wanted to ask that quick question first.

acer

In Maple 13.01, it can be done more quickly, with less memory resources and garbage collection, as follows:

PP := proc(y) evalf(Int(x->x^2/(exp(x)-1),y..infinity)); end proc:

plot(PP,2..10);

One could also adjust that by precomputing the tail just once, similar to the earlier reply. But using an operator form of the integrand seems to bypass some otherwise expensive discont checks.

acer

You can accomplish something like this by updating a Plot component.

Suppose that you have inserted a Plot component (%Plot0 say) using the Components palette. The following code will update it with new plots, every few seconds, each with one more point added.

f:=proc()
local i,j,k,glist;
for j from 1 to 20 do
  forget(evalf);
  for i to 10000 do sin(1.0*j*i); end do;
  glist[j]:=plots:-pointplot([[j,sin(j/17.0)]]);
  DocumentTools:-SetProperty(Plot0,'value',
       plots:-display(seq(glist[k],k=1..j)),'refresh'=true);
end do;
NULL;
end proc:

That `i` loop and `forget` call are there just to give it a brief delay, so the effect can be seen clearly. You can of course remove them both. And, presumably, instead of my sin(j/17.0) you would have some successive calculation that takes a while.

It also looks nice if the option view=[0..20,0..1] is added to that display call (after the seq). Adjust to taste.


acer

If you are using the commandline Maple 14 interface, then you can set the 'helpbrowser' interface option to 'text', to get the ascii help.

interface(helpbrowser=text):

In the Standard GUI, that command generates an error (ie, protected and not allowed). If you are in the Standard GUI, and you really, really want plaintext based help, let me know and I can offer a devious hack.

Note that in the commandline interface of Maple 14, with helpbrowser=text, some help pages come up wonky. Ironically, the ?interface help-page is one that displays with bad formatting (for me).

acer

At about 6-8 sec per objective evaluation, allowing NLPSolve to use its default methods (which utilize derivatives), some sort of result was returned after about 30 min. It took longer when hard-coding the quadrature method to _Gquad rather than _CCquad.

h := (c0, c1, c2, R, r) -> sqrt(R^2-r^2)*(c0+c1*r^2/R^2+c2*r^4/R^4):

K := (c0, c1, c2, R, r) -> ((diff(h(c0, c1, c2, R, r), r))*
   (1+(diff(h(c0, c1, c2, R, r), r))^2)+
   r*(diff(h(c0, c1, c2, R, r), r, r)))/
   (r*(1+(diff(h(c0, c1, c2, R, r), r))^2)^(3/2)):

f := (c0, c1, c2, R, r) -> r*sqrt(1+(diff(h(c0, c1, c2, R, r), r))^2)*
   K(c0, c1, c2, R, r)^2:

F:=proc(c0,c1,c2,R)
local res;
if not type(c0,numeric) then
  return 'procname'(args);
else
  res:=evalf(Int(f(c0,c1,c2,R,x),x=0..R,method=_CCquad));
  if not type(res,numeric) then
    res:=evalf(Int(f(c0,c1,c2,R,x),
                   x=0..R,epsilon=0.00001));
    if not type(res,numeric) then
      res:=1000.0;
    end if;
  end if;
end if;
res;
end proc:

E := proc (c0, c1, c2) F(c0, c1, c2, 3.91) end proc;

Optimization:-NLPSolve(E(c0, c1, c2),
       variables=[c0,c1,c2],
       initialpoint=[c0=.1035805,c1=1.001279,c2=-.561381]);

That returned,

[4.00000000007146994, [c0 = 1.00000053433322145,
    c1 = -0.328526474190508903e-5, c2 = 0.514721350878141067e-5]]

Note that it might only be a local optimum.

And the smoothness of f might have been compromised in the above code, with that "fake" 1000.0 return value for individual failing objective value computations. Also, some of the individual quadrature results might only be accurate to the fallback tolerance of epsilon=0.00001 so that too may be affecting the result's accuracy (but not always).

Starting from initial point c0=1.0, c1=0.0, c2=0.0 took ten mins to get,

[4.00000000005466028, [c0 = 1.00000000001311972,
    c1 = 0.165449448974536908e-8, c2 = 0.154581310067891980e-6]]

Without raising Digits and adjusting the fallback epsilon (i.e. a smaller accuracy tolerance), it wasn't clear to me whether 1.0, 0.0, 0.0 was the actual extreme point. I didn't think about it analytically.

acer
