MaplePrimes Activity


These are answers submitted by acer

In Maple 11.02,

> restart:
> kernelopts(version);
             Maple 11.02, X86 64 LINUX, Nov 9 2007 Build ID 330022

> Statistics:-Sample(Statistics:-RandomVariable(Poisson(10^7)),1);
Error, (in Statistics:-Sample) NE_REAL_ARG_GT:
  On entry, rlamda must not be greater than 1.0e6: rlamda = 1e+07.

> UseHardwareFloats:=false:
> Statistics:-Sample(Statistics:-RandomVariable(Poisson(10^7)),1);
                                [            7]
                                [0.9999330 10 ]

> restart:
> Digits:=trunc(evalhf(Digits))+1:

> Statistics:-Sample(Statistics:-RandomVariable(Poisson(10^7)),1);
                                [            7]
                                [0.9999993 10 ]
> rtable_options(%,'datatype');
                                   anything

> Statistics:-Sample(Statistics:-RandomVariable(Poisson(1.0*10^7)),1);
                                [            7]
                                [0.9998593 10 ]
 
> rtable_options(%,'datatype');
                                   anything
 

I'm a little surprised that the results don't have more than seven decimal digits of information.

acer

Do you really want a matrix "square root", or do you perhaps want a Cholesky decomposition?

By that I mean, do you want Matrix S such that S.S=M for Matrix M? Or do you perhaps want Matrix L such that L.Transpose(L)=M for symmetric positive-definite Matrix M? I ask simply because the latter is more common, in general practice, and we don't know your original motivating problem.

As for the difficulty of computing the general symbolic "square root", compare the eigenvalues of,

Matrix(3,3,[[a,b,c],[d,e,f],[0,0,i]]);

with those of,

Matrix(3,3,[[a,b,c],[d,e,f],[0,h,i]]);
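If it genuinely is a matrix square root that you want, and your Matrix has computable eigenvalues, then one route (a sketch only, for a small example Matrix of my own choosing; this is not the only way) is LinearAlgebra:-MatrixFunction,

> with(LinearAlgebra):
> M := Matrix(2,2,[[5,4],[4,5]]):   # a symmetric positive-definite example
> S := MatrixFunction(M,sqrt(v),v);
> simplify( S . S - M );            # should be the zero Matrix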

On the chance that a Cholesky decomposition of a symmetric Matrix would do, and assuming also that your data will be purely real,

> restart:
> with(LinearAlgebra):
> M := Matrix(3,3,[[a,b,c],[b,e,f],[c,f,i]]);
                                   [a    b    c]
                                   [           ]
                              M := [b    e    f]
                                   [           ]
                                   [c    f    i]
 
> L := LUDecomposition(M,'method'='Cholesky','conjugate'=false);
        [ 1/2                                                              ]
        [a    ,                            0 ,                            0]
        [                                                                  ]
        [                            /       2\1/2                         ]
        [ b                          |e a - b |                            ]
        [---- ,                      |--------|    ,                      0]
        [ 1/2                        \   a    /                            ]
   L := [a                                                                 ]
        [                                                                  ]
        [                         /           2    2      2            \1/2]
        [ c        f a - b c      |i a e - i b  - c  e - f  a + 2 f b c|   ]
        [---- , --------------- , |------------------------------------|   ]
        [ 1/2     /       2\1/2   |                     2              |   ]
        [a        |e a - b |      \              e a - b               /   ]
        [       a |--------|                                               ]
        [         \   a    /                                               ]
 
> simplify( L.Transpose(L) - M );
                                 [0    0    0]
                                 [           ]
                                 [0    0    0]
                                 [           ]
                                 [0    0    0]

acer

You entered,

solve({100-m*qj-qI-qj-v = 0, 100-(m-1)*qj-3*qI-ld = 0}*{qj, qI});

instead of,

solve({100-m*qj-qI-qj-v = 0, 100-(m-1)*qj-3*qI-ld = 0},{qj, qI});

acer

Instead of left-clicking, right-click to get the context-sensitive menu, and select "Browse" from the menu that appears.

acer

It is possible to create a Vector which uses a built-in indexing function that accomplishes this. An indexing function is a smart-access mechanism for Vectors and Matrices, as well as a control on what may be assigned into the object.

This is what LinearAlgebra:-UnitVector() can produce, which Joe illustrated. The code below shows how the Vector() constructor can also produce the same objects.

> for i to 4 do
> e[i] := Vector(4,'shape'='unit'[i]);
> end do:

> e[3];
                                      [0]
                                      [ ]
                                      [0]
                                      [ ]
                                      [1]
                                      [ ]
                                      [0]

> lprint(e[2]);
Vector[column](4,{},datatype = anything,storage = empty,
order = Fortran_order, shape = [unit[2]])

On the one hand, this can be useful if the dimension is very large (>>4, say) and one wishes to keep memory allocation down. That's because such Vectors have "empty" storage, yet produce the right values whenever any element of them is accessed or used. On the other hand, accessing them many times may incur a slight but measurable time cost, due to the overhead of calling the indexing function on each access. That's the typical storage versus speed tradeoff.

Calling LinearAlgebra:-UnitVector() with the 'compact'=false option will produce these objects without any indexing function, and with full explicit storage.

> for i to 4 do
> e[i] := LinearAlgebra:-UnitVector(i,4,'compact'=false);
> end do:

> e[3];
                                      [0]
                                      [ ]
                                      [0]
                                      [ ]
                                      [1]
                                      [ ]
                                      [0]
 
> lprint(e[2]);
Vector[column](4,{(2) = 1},datatype = anything,
storage = rectangular,order = Fortran_order,shape = [])

That takes more memory (not really relevant for dimension 4, naturally), but access will be faster. Also, objects created in this way do not have restrictions on what might be subsequently assigned into their entries, which may or may not be important to one's tasks. I like this method.

One can also create row Vectors.

> LinearAlgebra:-UnitVector['row'](2,4,'compact'=false);
                                 [0, 1, 0, 0]

And it's also possible to create these objects with full non-empty storage as well as an indexing function. Eg,

> V := Vector(4,'shape'='unit'[3],'storage'='rectangular'):

> lprint(V);
Vector[column](4,{(3) = 1},datatype = anything,
storage = rectangular,order = Fortran_order,
shape = [unit[3]])

This is the least useful, since it takes more memory while still disallowing any assignment that disobeys the indexing function. There's not much point to having both those aspects hold for the same object.

> V[4]:=17;
Error, unit vector can only have one non-zero entry

That error message isn't as clear as it could be.
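By contrast, the full-storage Vectors created with 'compact'=false accept such an assignment without complaint. A quick sketch,

> W := LinearAlgebra:-UnitVector(3,4,'compact'=false):
> W[4] := 17:   # allowed, since there is no indexing function restricting entries
> W;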

So, all in all there is a lot of flexibility. I find that the LinearAlgebra package's UnitVector provides the most straightforward way to get all the desirable functionality.

acer

You might also look at OrthogonalSeries, if you're interested in such things. I don't think that there's a cross-reference from ?orthopoly to ?OrthogonalSeries .

acer

What are your thoughts about the following example?

> restart:

> limit(1/((x-2)*(x-1)),x=1,right);
                                   -infinity
 
> limit(1/((x-2)*(x-1)),x=1,left);
                                   infinity
 
> eval(1/((x-2)*(x-1)),x=1);
Error, numeric exception: division by zero

> MyHandler := proc(operator,operands,defVal)
> WARNING("division by zero in %1 with args %2",
>         operator,operands);
> defVal
> end proc:

> NumericEventHandler(division_by_zero=MyHandler):

> eval(1/((x-2)*(x-1)),x=1);
Warning, division by zero in ^ with args
[0, -1]
                                   -infinity

acer

Could you use the Maple GUI's embedded components for this, instead of Maplets?

acer

Isn't saclib a sort of (older, unmaintained?) symbolic algebra library? It's not clear whether you want fast hardware numerics, or faster exact symbolics than you get programming at Maple's "Library level". If it's the latter, then you might consider looking at the Maple help-page for ?OpenMaple.

acer

You wish to figure out how to do Int(y^2*exp(1)^(y^2),y) by hand?

How about a change of variables first,

> student[changevar](y=I*x,I1,x);
                             /                2
                            |      2       (-x )
                            |  -I x  exp(1)      dx
                            |
                           /

Now, how about pulling the -I out front, and then an integration by parts?
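If you want to check where you should end up, Maple can produce a closed form for the original antiderivative (it involves erf with an imaginary argument), and differentiating that recovers the integrand. A sketch,

> F := int(y^2*exp(y^2),y):
> simplify( diff(F,y) - y^2*exp(y^2) );   # should return 0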

acer

So, you are using MS-Word, then? Does it allow for MathML import? (You could always try it, by pasting some in, or by using an import facility.)

To convert to presentation MathML in Maple 9.5.1, try these Maple commands,

Limit(f,x=0);

MathML['ExportPresentation'](Limit(f,x=0));

You may need to strip the double-quotes off the beginning and end of what gets printed by the second command above, to get just the MathML.

acer

In Maple 11.02,

eq:=a*(x/sqrt(x^2-1)+2*x/(Pi*sqrt(x^2-1))-1-2/(Pi*x))=1;
_EnvExplicit:=true;
sol:=[solve(eq,x)];
nops(sol);

acer

Start off with the definition of an eigenvalue. If scalar lambda[i] is an eigenvalue then there exists a nontrivial (not all-zero) vector x such that,

    A . x = lambda[i] * x   

Now, supposing that A^-1 (the inverse of A) exists, multiply (from the left) both sides of that equation by A^-1. This should result in,

    x = A^-1 . lambda[i] . x

Now, multiply both sides of that equation on the left by scalar 1/lambda[i]. Realize that on the right-hand-side of the equation the scalar lambda[i] can be pulled out to the front (using linearity properties) allowing it to cancel with the 1/lambda[i] multiplicative factor. This should result in,

   1/lambda[i] * x = A^-1 . x

Interpret that according to the definition of an eigenvalue. Add a note about whether lambda[i] can be zero.
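A quick numeric sanity check of that conclusion, on a small example Matrix of my own choosing, might look like this (the two sets of eigenvalues should be reciprocals of each other, up to ordering),

> with(LinearAlgebra):
> A := Matrix(2,2,[[2.,1.],[1.,3.]]):
> Eigenvalues(A);
> Eigenvalues(MatrixInverse(A));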

acer

I'm pretty content using the syntax highlighting of Maple source in vim. See here for a blog entry by Jacques on it.

acer

> M:=Matrix(3, 3, {(1, 1) = 1/sin(Theta)^2,\
> (1, 2) = -cos(Theta), (1, 3) = 0, \
> (2, 1) = -cos(Theta), (2, 2) = 1/sin(Theta)^2,\
> (2, 3) = 0, (3, 1) = 0, (3, 2) = 0, (3, 3) = 1});
                         [     1                         ]
                         [-----------    -cos(Theta)    0]
                         [          2                    ]
                         [sin(Theta)                     ]
                         [                               ]
                    M := [                    1          ]
                         [-cos(Theta)    -----------    0]
                         [                         2     ]
                         [               sin(Theta)      ]
                         [                               ]
                         [     0              0         1]
 

> evals:=Vector([solve(\
> LinearAlgebra[CharacteristicPolynomial](M,lambda),lambda)]):

> Normalizer:=t->simplify(t,trig):

> evecs:=Matrix(3,3):

> evecs[1..3,1]:=LinearAlgebra[LinearSolve](simplify(LinearAlgebra[Characteris\
> ticMatrix](M,evals[1])),Vector(3),method=LU):

> evecs[1..3,2]:=LinearAlgebra[LinearSolve](simplify(LinearAlgebra[Characteris\
> ticMatrix](M,evals[2])),Vector(3),method=LU):

> evecs[1..3,3]:=LinearAlgebra[LinearSolve](simplify(LinearAlgebra[Characteris\
> ticMatrix](M,evals[3])),Vector(3),method=LU):

> evecs;
     [                             2                               2    ]
     [  cos(Theta) _t[1] sin(Theta)    cos(Theta) _t0[1] sin(Theta)     ]
     [- ---------------------------- , ----------------------------- , 0]
     [             2           4 1/2              2           4 1/2     ]
     [  (cos(Theta)  sin(Theta) )      (cos(Theta)  sin(Theta) )        ]
     [                                                                  ]
     [_t[1] ,                         _t0[1] ,                         0]
     [                                                                  ]
     [0 ,                           0 ,                           _t1[1]]


> simplify( M.evecs - evecs.LinearAlgebra[DiagonalMatrix](evals) );
                                 [0    0    0]
                                 [           ]
                                 [0    0    0]
                                 [           ]
                                 [0    0    0]

It should also be possible to combine all three eigenvector steps into a single Matrix call, i.e. something like this,

> evecs:=Matrix([seq(LinearAlgebra[Linea\
> rSolve](simplify(LinearAlgebra[Characteris\
> ticMatrix](M,evals[i])),Vector(3),method=LU),i=1..3)]);

         [                              2                               2    ]
         [  cos(Theta) _t5[1] sin(Theta)    cos(Theta) _t6[1] sin(Theta)     ]
         [- ----------------------------- , ----------------------------- , 0]
         [             2           4 1/2               2           4 1/2     ]
evecs := [  (cos(Theta)  sin(Theta) )       (cos(Theta)  sin(Theta) )        ]
         [                                                                   ]
         [_t5[1] ,                         _t6[1] ,                         0]
         [                                                                   ]
         [0 ,                            0 ,                           _t7[1]]

These Vectors are each meant to represent (or span) a nullspace. You could instantiate them at values for the parameters _t5[1], etc, which do not result in the trivial all-zero Vector. That is to say, you could assign them like _t5[1]:=1, and so on. Or you could instantiate with 2-argument eval(), like eval(evecs,{_t5[1]=1,...}). If the spanning representative Vector had more than a single parameter then its associated eigenspace would have dimension greater than 1. And you could split those into a basis prior to instantiating at values of the parameters.

I am not sure that LinearAlgebra[NullSpace], which conveniently returns the basis as a set of Vectors and not as a parametrized single representative Vector (like in the above), allows as much control over Normalizer. For this class of problem it's necessary in general to set Normalizer to something strong enough to correctly identify "hidden zeros" when selecting pivots at the linear system solving stage. An example of badness might be a pivot chosen that contained 1-sin(x)^2-cos(x)^2 as a factor of its numerator or denominator. The approach performed above avoids this danger by setting Normalizer to t->simplify(t,trig).

Note that during LU decomposition (as the initial step in exact or symbolic linear system solving) the pivot selection checks for nonzero pivots by using Testzero, where,

> eval(Testzero);
                   proc(O) evalb(Normalizer(O) = 0) end proc

Of course, one could also set one's own Testzero to whatever stronger or more reliable zero-testing check that is desired, and leave Normalizer as its default of `normal`(), but that's a story for another day.
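For instance, one illustrative choice (a sketch, not a recommendation for all problems) might be,

> Testzero := t -> evalb( simplify(t,trig) = 0 ):
> Testzero( 1 - sin(x)^2 - cos(x)^2 );
                                     true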

acer
