acer

32313 Reputation

29 Badges

19 years, 314 days
Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are answers submitted by acer

You are overlooking the effects of round-off error during the floating-point computation (i.e. during the evaluation of diff(sCARA4,mu) at floating-point values). Remember that the Digits environment variable controls the working precision; it is not a target accuracy tolerance.
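As a tiny side illustration (separate from your expression) of how Digits sets working precision rather than accuracy, consider subtracting nearly equal quantities:

> Digits := 10:
> (1.0 + 1e-15) - 1.0;  # the 1e-15 term is lost entirely to round-off
0.

> Digits := 20:
> (1.0 + 1e-15) - 1.0;  # with more working digits the small term survives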

> restart:

> sCARA4 := -ln(-(mu/sigma^2)^(mu^2/(mu-sigma^2))*(sigma^2/mu)^(mu^2/(mu-sigma^2))
> +(sigma^2/mu)^(mu^2/(mu-sigma^2))*((exp(phi)*sigma^2+mu-sigma^2)
> *exp(-phi)/sigma^2)^(mu^2/(mu-sigma^2))+1)/phi:

> dsCARA4:=diff(sCARA4,mu):

> eval(dsCARA4,[mu=29.38,phi=1.0,sigma=1.0]);
-2.499130625

> evalf[20](eval(dsCARA4,[mu=29.38,phi=1.0,sigma=1.0]));
0.99919104807402453122

Note also that, even though simplification (using `simplify`, with or without assumptions) seems to help below, there is no guarantee in general that it will generate a form of a symbolic expression with better conditioning during floating-point evaluation.

> eval(dsCARA4,[mu=29.38,phi=1.0,sigma=1.0]);
-2.499130625
> eval(simplify(dsCARA4),[mu=29.38,phi=1.0,sigma=1.0]);
1.204768329
> eval(simplify(dsCARA4),[mu=29.38,phi=1.0,sigma=1.0]) assuming positive;
0.9991907685

You can also look at the help page ?evalr
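A rough sketch of that approach (shake produces an interval enclosure whose width depends on the current Digits, and evalr evaluates using interval arithmetic):

> shake(sqrt(2));       # an INTERVAL(...) range enclosing sqrt(2)
> evalr(sin(shake(1))); # interval arithmetic propagates the enclosure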

You are also overlooking the role of analysis (via assumptions) during the simplification of sCARA4. You might compare the output from these,

simplify(sCARA4);

simplify(sCARA4) assuming positive;

acer

The two results are "the same" in the sense that one is merely a scaled version of the other. See below, where the final versions differ by just a small amount (reflecting round-off error during either the solving or the subsequent manipulation).

> p[R1] := (214-.7067494572*q[O]-.3533747286*q[R1]+.1766873643*q[S]^2-.1766873643*q[S]*q[R1]+5.879032258*q[S])/(1+0.7258064513e-1*q[S]):

> a1:=rhs(isolate(p[R1],q[R1])):

> a2:=solve(p[R1],q[R1]):

> fnormal(a2); # or expand(numer(a2))/denom(a2)
                                                     2                   
    1211.178857 - 4.000000000 q[O] + 1.000000000 q[S]  + 33.27364286 q[S]
    ---------------------------------------------------------------------
                                  2. + q[S]                              

> expand((numer(a1)/0.176687364))/expand(denom(a1)/0.176687364);
                                                     2                   
    1211.178859 - 4.000000007 q[O] + 1.000000002 q[S]  + 33.27364292 q[S]
    ---------------------------------------------------------------------
                       2.000000003 + 1.000000002 q[S]                    

But they are likely different in how they evaluate at a particular point. If one evaluates at numeric values for the unknowns, the results might differ. That's because numeric computation error may build up in different ways, in each.

Which is more numerically stable? In general, I doubt either method would always be "better" than the other. The lesson here should instead be that it's not always a great idea to mix floats and symbolics and expect every popular top-level routine to handle them exactly.

You could also consider converting p[R1] to rational. See ?convert,rational

note: The calling-sequences section of the help page ?convert,rational is wrong. The first argument can be an expression, in which case the conversion maps over it. So, in this example, convert(p[R1],rational) is a possibility.
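For instance, the conversion maps over a whole expression, turning each float into an exact rational (a small sketch, unrelated to your p[R1]):

> convert(0.125*x + 0.5, rational);  # returns x/8 + 1/2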

acer

You asked,
"if I hold the list as a private module variable and export a module procedure to return the weights, is Maple going to make a copy of the list every time a task invokes my module's "GetWeights" procedure?"

I believe that, if I understand your question properly, the answer is no. Maple only has one unique copy of any given list at any given time. If you assign into a list (or use subsop to try to cheat) then Maple creates a new list because (as Roman mentioned) lists are actually immutable. They may just appear to be mutable, by a Maple kernel sleight-of-hand.

Consider the following example, and ask why the higher-level list L is not changed. The answer is not that Maple passes a full copy of L into procedure p. It is that the later act of altering an entry of p's local T (which initially has the same ID as p's formal parameter LL) creates a new list. Note that the initial address of local T is the same as that of L, prior to assignment into an entry of T. The copying of L occurred when T[3] was altered, not when L was passed into `p`.

Moral: if you don't try to alter LL (or T), you should be OK as far as non-copying goes for performance. So avoiding the rtable locks, by converting the weights to lists (after all weights are initially computed, of course), looks fine.

> L:=[1,3,5,7]:
> addressof(L);
181177616
> p:=proc(LL)
> local T;
> T:=LL;
> printf("initial address of T: %a\n",addressof(T));
> T[3]:=z;
> printf("subsequent address of T: %a\n",addressof(T));
> T;
> end proc:

> p(L);
initial address of T: 181177616
subsequent address of T: 193667904
[1, 3, z, 7]

> L;
addressof(L);
[1, 3, 5, 7]
181177616

Joke: If Maple didn't behave like this then its list performance would be trash, and we'd all be passing addressof(L) into procs which initially set up list locals with pointto().

acer

What the replies of hirnyk and John mean is that Maple is not changing the variable's identity on you. It is a question of how the name `xi` gets printed and displayed in output, not of its identity.

I don't see any convenient way to suppress that 2D display of all given symbols. And I didn't see `xi` in the Typesetting:-RuleAssistant() Maplet.

It might be that Library routines inside the Typesetting package produce something like `#mi(xi)` or whatever, in the final expression sent to the GUI for display. And it might even be that one could hack the Typesetting package to adjust that behaviour. But it could be a lot of effort, that might not even work.

Perhaps you could submit a Software Change Request, about suppressing the 2D Math display of chosen names?

acer

There are four subtypes of rtable: Array, Matrix, Vector[row], and Vector[column].

The indices of an Array can start from an arbitrary integer (including 0, or negative values) while the Matrix and Vector indices start from 1.

One basic idea is that a Vector represents a member of a vector space, while a Matrix represents a linear mapping between vector spaces. Hence these objects are understood by the LinearAlgebra package, and the noncommutative `.` operator uses the traditional linear algebra multiplication for them (because it leads to useful places). A Vector is a 1D rtable, and a Matrix is a 2D rtable, and both have a mathematical meaning.

In contrast, an Array can have any number of dimensions (distinct indices) up to 63. And it doesn't have a specific mathematical meaning (i.e. it is flexible). Hence `.` acts elementwise when "multiplying" such objects.

So, if you want to use rtables for higher-dimensional indexed computation (e.g. tensor work) then Arrays are a viable choice.

You can build all of these rtables using the rtable() constructor. You can pick them apart with sundry utility functions such as rtable_options(), etc.

A 1D Array whose index starts from 1 can be converted into a Vector, in-place, using the rtable_options command to change the 'subtype'. Similarly for 1-based 2D Arrays, to and from Matrix.

> A:=Array(1..2,1..2,[[a,b],[c,d]]):

> A.A;
                [a^2    b^2]
                [          ]
                [c^2    d^2]

> rtable_options(A,'subtype');
                                    Array

> rtable_options(A,'subtype'='Matrix');

> A.A;
                [ a^2+b c      a b+b d]
                [                     ]
                [ c a+d c      b c+d^2]

> rtable_options(A,'subtype');
                                   Matrix

What else? The Vector() and Matrix() constructors have angle-bracket shortcut syntax (which used to be slower due to overhead, but is not anymore). The shortcut syntax doesn't allow you to specify options such as datatype, shape, etc.

The Matrix() and Vector() constructors both tend to flatten arguments. By which I mean, if you pass them a collection of other smaller Matrix/Vector objects then they will try to flatten out the result as one larger object. But on the other hand there's nothing to stop you from assigning anything, unflattened, into individual entries of a pre-existing Vector, Matrix, or Array.
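A small sketch of that distinction:

> V1:=Vector([1,2]): V2:=Vector([3,4]):
> Vector([V1,V2]);  # flattened into a single 4-element Vector
> A:=Array(1..2):
> A[1]:=V1:         # but assigning into an entry stores V1 unflattened
> A[1];             # still the 2-element Vector itself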

acer

Interesting question.

Smart money says that someone could improve the following,

> PE := A.X+B.U+L.(Y-C.X-D.U);
            A . X + B . U + L . (Y - C . X - D . U)

> rx:=proc(x,z)
  local ox,sx,i;
  if type(x,`*`) then
    ox,sx:=selectremove(t->type(t,`.`),[op(x)]);
    ox:=op(ox);
  else
    ox,sx:=x,1;
  end if;
  if type(ox,`.`) and op(nops(ox),ox)=z then
      [`*`(op(sx),`.`(seq(op(i,ox),i=1..nops(ox)-1))),z];
  else x;
  end if;
end proc:

> rightcollect:=proc(x,z)
local spl,t,locx;
  locx:=subs(`&*`=`.`,expand(subs(`.`=`&*`,x)));
  if type(x,`+`) then
    spl:=selectremove(t->type(t,list),map(rx,[op(locx)],z));
    `+`(op(spl[2]))+`+`(seq(op(1,t),t in spl[1])).z;
  else x;
  end if;
end proc:

> rightcollect(PE,X);
          B . U + L . Y - L . D . U + (A - L . C) . X

> PredictorEstimator := diff(X(t), t) = A.X(t)+B.U+L.(Y-C.X(t)-D.U);
   d                                                          
  --- X(t) = A . (X(t)) + B . U + L . (Y - C . (X(t)) - D . U)
   dt                                                         

> map(rightcollect,PredictorEstimator,X(t));
   d                                                         
  --- X(t) = B . U + L . Y - L . D . U + (A - L . C) . (X(t))
   dt                                                        

A related item: is it just an oversight that `expand` can deal with `&*` but not with `.`? Does anyone else have trouble writing an `expand/.` routine to do x->subs(`&*`=`.`,expand(subs(`.`=`&*`,x)))? Is it anything to do with `expandon`?

acer

inner_proc:=proc(M::integer,
                 A::Vector(datatype=float[8]),
                 B::Vector(datatype=float[8]))
local count::integer, i::integer;
count:=0;
for i from 1 to M do
  if sqrt(A[i]^2+B[i]^2) < 0.5 then
    count:=count+1;
  end if;
end do;
return 4.0*count/M;
end proc:

cinner_proc:=Compiler:-Compile(inner_proc):
p:=proc(N) # N number of points
local F,X,Y,n,inner_proc,P;
  n:=N;
  F:=Statistics:-RandomVariable(Uniform(-1/2, 1/2));
  X:=Statistics:-Sample(F,n);
  Y:=Statistics:-Sample(F,n);
  cinner_proc(n,X,Y);
end proc:

st:=time(): p(2*10^5); time()-st;
                      3.14218000000000020
                             0.063

I tried to resist the strong temptation to change how it works, and to be faithful to the given method.

acer

inner_proc:=proc(M::integer,
                 A::Vector(datatype=float[8]),
                 B::Vector(datatype=float[8]))
local count::integer, i::integer;
count:=0.0;
for i from 1 to M do
  if sqrt(A[i]^2+B[i]^2) < 0.5 then
    count:=count+1.0;
  end if;
end do;
return 4.0*count/M;
end proc:

cinner_proc:=Compiler:-Compile(inner_proc):
p:=proc(N) # N number of points
local F,X,Y,n,inner_proc,P;
  n:=N;
  F:=Statistics:-RandomVariable(Uniform(-1/2, 1/2));
  X:=Statistics:-Sample(F,n);
  Y:=Statistics:-Sample(F,n);
  cinner_proc(n,X,Y);
end proc:

st:=time(): p(2*10^5); time()-st;
                      3.14250000000000007
                             0.078

acer

What you are doing now is a bit like this (simplified to just 1 loop),

> restart:

> lc:=1:
> for ii from 1 to 3 do
> T[ii]:=i->lc;
> lc:=lc+1;
> end do:

> T[1](), T[2](), T[3]();
4, 4, 4

> lc:=17:
> T[1](), T[2](), T[3]();
17, 17, 17

But it seems like you want behaviour more like this,

> restart:

> lc:=1:
> for ii from 1 to 3 do
> T[ii]:=unapply(lc,i);
> lc:=lc+1;
> end do:

> T[1](), T[2](), T[3]();
1, 2, 3

> lc:=17:
> T[1](), T[2](), T[3]();
1, 2, 3

Is that closer to what you are after?

Notice also that there is no need for the T indices to be the same as the formal parameters of the operators (i.e. i, j, k). Having them coincide may just be additionally confusing.

acer

Sorry, but 100*cos(20) is an exact quantity. (Recall, Maple is a computer algebra system, in which that distinction is important, since algebraic manipulation of exact quantities is useful.) You can apply evalf or evalhf to it, to get a floating-point quantity.

Or, you could ensure that the argument to cos is itself a float, so that it returns a float. This is probably a better solution for you than using explicit evalf/evalhf calls, if you intend to use Compile on the procedure.

Maple will not automatically cast an exact numeric quantity to a float when assigning into a float[8] Matrix, except when the exact quantity is an integer or a rational.
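A quick sketch of that casting behaviour:

> M:=Matrix(1,1,datatype=float[8]):
> M[1,1]:=1/2:            # a rational is cast automatically, stored as 0.5
> M[1,1]:=cos(20);        # an exact non-rational cannot be stored: this errors
> M[1,1]:=evalf(cos(20)): # apply evalf first, and the assignment succeeds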

acer

> with(LinearAlgebra):
>
> A:=Matrix([[A11,A12],[A21,A22]]):
>
> InvA := Matrix([[(A11-A12&*A22^(-1)&*A21)^(-1),
>           -A11^(-1)&*A12&*(A22-A21&*A11^(-1)&*A12)^(-1)],
>         [-A22^(-1)&*A21&*(A11-A12&*A22^(-1)&*A21)^(-1),
>          (A22-A21&*A11^(-1)&*A12)^(-1)]]);
InvA :=
 
    [             1
    [--------------------------- ,
    [      //        1 \       \
    [A11 - ||A12 &* ---| &* A21|
    [      \\       A22/       /
 
     // 1        \                 1             \]
    -||--- &* A12| &* ---------------------------|]
     |\A11       /          //        1 \       \|]
     |                A22 - ||A21 &* ---| &* A12||]
     \                      \\       A11/       //]
 
    [ // 1        \                 1             \
    [-||--- &* A21| &* ---------------------------| ,
    [ |\A22       /          //        1 \       \|
    [ |                A11 - ||A12 &* ---| &* A21||
    [ \                      \\       A22/       //
 
                 1             ]
    ---------------------------]
          //        1 \       \]
    A22 - ||A21 &* ---| &* A12|]
          \\       A11/       /]
 
>
> map(normal, A . subs(`&*`=`*`,InvA) );
                                   [1    0]
                                   [      ]
                                   [0    1]
 
>
> data:={A11=RandomMatrix(2,2), A12=RandomMatrix(2,2),
>        A21=RandomMatrix(2,2), A22=RandomMatrix(2,2)}:
>
> A:=Matrix(eval([[A11,A12],[A21,A22]],data)):
>
> InvA:=Matrix(eval(subs(`&*`=`.`,
>           [[(A11-A12&*A22^(-1)&*A21)^(-1),
>             -A11^(-1)&*A12&*(A22-A21&*A11^(-1)&*A12)^(-1)],
>            [-A22^(-1)&*A21&*(A11-A12&*A22^(-1)&*A21)^(-1),
>             (A22-A21&*A11^(-1)&*A12)^(-1)]] ),data) ):
>
> A . InvA;
                              [1    0    0    0]
                              [                ]
                              [0    1    0    0]
                              [                ]
                              [0    0    1    0]
                              [                ]
                              [0    0    0    1]

acer

@Hamidreza Is there a timing difference if you compute

add(add(evalf(f[i, j](i, j, x, y)), i = 1 .. 50), j = 1 .. 10);

instead of

evalf(add(add(f[i, j](i, j, x, y), i = 1 .. 50), j = 1 .. 10));

Also, could you use evalhf instead of evalf, in the former of those two?

And what is D1()? Is it always the same, or does it change for each i and j (using globals)? If unchanging, then can you pull it out of the summation as a multiplicative constant, or evalf(D1()) up front? That too could make a difference.

And, taking another tack, how about first finding Q:=sum(...,i=1..N) to get a formula, finding Q50:=eval(Q,N=50), and then computing add(evalf(Q50),j=1..100) or similar?

If you raise Digits, and then recompute, you can get some evidence that those relatively small imaginary components are merely artifacts of doing the computation with floating-point numbers (i.e. as an approximation).

For this example, you might be able to convince yourself of this with an assignment of only Digits:=16.

acer

Your procedure `inertia` expects to find the global xyzc to be an indexable object with three scalar values. That's how  `inertia` uses xyzc. And it works fine, the first time you call it.

But then you assign to xyzc (at the top level) something which is a list of three equations. This occurs when you compute Equate(cog(...),[xc,yc,zc]), the result of which you assign to global xyzc. So then xyzc is no longer in a form for which some of the commands internal to `inertia` are valid or permissible. The three entries of xyzc are now equations rather than scalar values. So the next invocation to `inertia` fails with an error, going wrong when it tries to square the entries of xyzc.

For example,

> (4.99=xc)^2;
Error, (in simpl/reloprod) invalid terms in product

acer

Must a local optimization (least squares) method find the global optimum for any nonlinear fit?

You could try a few things, such as setting it up yourself as a least squares problem, and then using different starting points or using a global optimizer.

I'd like to see Maple automatically detect the presence of GlobalOptimization, and use it within NonlinearFit if found. (Using yet another global optimizer might be possible but could get tricky, as it would have to accept the same syntax as Optimization and also somehow be registered for detection.)

acer
