acer

32470 Reputation

29 Badges

20 years, 5 days
Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are answers submitted by acer

Could you use the Maple GUI's embedded components for this, instead of Maplets?

acer

Isn't saclib a sort of (older, unmaintained?) symbolic algebra library? It's not clear whether you want fast hardware numerics, or faster exact symbolics than you get programming at Maple's "Library level". If it's the latter, then you might consider looking at the Maple help-page for ?OpenMaple.

acer

You wish to figure out how to do Int(y^2*exp(1)^(y^2),y) by hand?

How about a change of variables first,

> student[changevar](y=I*x,I1,x);
                             /                2
                            |      2       (-x )
                            |  -I x  exp(1)      dx
                            |
                           /

Now, how about pulling the -I out front, and then a step of integration by parts?
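Continuing in Maple, the by-parts step might look something like this (the name I2 is just illustrative; student[intparts] is told to differentiate the factor x):

> I2 := student[intparts](Int(x^2*exp(-x^2), x), x);

Differentiating u=x and integrating dv = x*exp(-x^2) dx should leave behind an integral of exp(-x^2), i.e. something expressible in terms of erf.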

acer

So, you are using MS-Word, then? Does it allow for MathML import? (You could always try it, by pasting some in, or by using some import facility.)

To convert to presentation MathML in Maple 9.5.1, try these Maple commands,

Limit(f,x=0);

MathML['ExportPresentation'](Limit(f,x=0));

You may need to strip the double-quotes off the beginning and end of what gets printed by the second command above, to get just the MathML.

acer

In Maple 11.02,

eq:=a*(x/sqrt(x^2-1)+2*x/(Pi*sqrt(x^2-1))-1-2/(Pi*x))=1;
_EnvExplicit:=true;
sol:=[solve(eq,x)];
nops(sol);

acer

Start off with the definition of an eigenvalue. If the scalar lambda[i] is an eigenvalue of A then there exists a nontrivial (not all-zero) vector x such that,

    A . x = lambda[i] * x   

Now, supposing that A^-1 (the inverse of A) exists, multiply (from the left) both sides of that equation by A^-1. This should result in,

    x = A^-1 . lambda[i] . x

Now, multiply both sides of that equation on the left by scalar 1/lambda[i]. Realize that on the right-hand-side of the equation the scalar lambda[i] can be pulled out to the front (using linearity properties) allowing it to cancel with the 1/lambda[i] multiplicative factor. This should result in,

   1/lambda[i] * x = A^-1 . x

Interpret that according to the definition of an eigenvalue. Add a note about whether lambda[i] can be zero.
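A quick sanity check in Maple, on a small example Matrix (the particular Matrix here is just for illustration):

> with(LinearAlgebra):
> A := Matrix([[2,1],[1,3]]):
> Eigenvalues(A);
> map(x->1/x, Eigenvalues(MatrixInverse(A)));

The two results should agree (possibly up to ordering), illustrating that the eigenvalues of A^-1 are the reciprocals 1/lambda[i].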

acer

I'm pretty content using the syntax highlighting of Maple source in vim. See here for a blog entry by Jacques on it.

acer

> M:=Matrix(3, 3, {(1, 1) = 1/sin(Theta)^2,\
> (1, 2) = -cos(Theta), (1, 3) = 0, \
> (2, 1) = -cos(Theta), (2, 2) = 1/sin(Theta)^2,\
> (2, 3) = 0, (3, 1) = 0, (3, 2) = 0, (3, 3) = 1});
                         [     1                         ]
                         [-----------    -cos(Theta)    0]
                         [          2                    ]
                         [sin(Theta)                     ]
                         [                               ]
                    M := [                    1          ]
                         [-cos(Theta)    -----------    0]
                         [                         2     ]
                         [               sin(Theta)      ]
                         [                               ]
                         [     0              0         1]
 

> evals:=Vector([solve(\
> LinearAlgebra[CharacteristicPolynomial](M,lambda),lambda)]):

> Normalizer:=t->simplify(t,trig):

> evecs:=Matrix(3,3):

> evecs[1..3,1]:=LinearAlgebra[LinearSolve](simplify(LinearAlgebra[CharacteristicMatrix](M,evals[1])),Vector(3),method=LU):

> evecs[1..3,2]:=LinearAlgebra[LinearSolve](simplify(LinearAlgebra[CharacteristicMatrix](M,evals[2])),Vector(3),method=LU):

> evecs[1..3,3]:=LinearAlgebra[LinearSolve](simplify(LinearAlgebra[CharacteristicMatrix](M,evals[3])),Vector(3),method=LU):

> evecs;
     [                             2                               2    ]
     [  cos(Theta) _t[1] sin(Theta)    cos(Theta) _t0[1] sin(Theta)     ]
     [- ---------------------------- , ----------------------------- , 0]
     [             2           4 1/2              2           4 1/2     ]
     [  (cos(Theta)  sin(Theta) )      (cos(Theta)  sin(Theta) )        ]
     [                                                                  ]
     [_t[1] ,                         _t0[1] ,                         0]
     [                                                                  ]
     [0 ,                           0 ,                           _t1[1]]


> simplify( M.evecs - evecs.LinearAlgebra[DiagonalMatrix](evals) );
                                 [0    0    0]
                                 [           ]
                                 [0    0    0]
                                 [           ]
                                 [0    0    0]

It should also be possible to combine all three eigenvector steps into a single Matrix call, ie. something like this,

> evecs:=Matrix([seq(LinearAlgebra[LinearSolve](simplify(LinearAlgebra[CharacteristicMatrix](M,evals[i])),Vector(3),method=LU),i=1..3)]);

         [                              2                               2    ]
         [  cos(Theta) _t5[1] sin(Theta)    cos(Theta) _t6[1] sin(Theta)     ]
         [- ----------------------------- , ----------------------------- , 0]
         [             2           4 1/2               2           4 1/2     ]
evecs := [  (cos(Theta)  sin(Theta) )       (cos(Theta)  sin(Theta) )        ]
         [                                                                   ]
         [_t5[1] ,                         _t6[1] ,                         0]
         [                                                                   ]
         [0 ,                            0 ,                           _t7[1]]

These Vectors are each meant to represent (or span) a nullspace. You could instantiate them at values for the parameters _t5[1], etc, which do not result in the trivial all-zero Vector. That is to say, you could assign them like _t5[1]:=1 , and so on. Or you could instantiate with 2-argument eval(), like eval(evecs,{_t5[1]=1,...}). If a spanning representative Vector had more than a single parameter then its associated eigenspace would have dimension greater than 1. In that case you could split it into a basis prior to instantiating at values of the parameters.

I am not sure that LinearAlgebra[NullSpace], which conveniently returns the basis as a set of Vectors rather than as a parametrized single representative Vector (as above), allows as much control over Normalizer. For this class of problem it is necessary in general to set Normalizer to something strong enough to correctly identify "hidden zeros" when selecting pivots at the linear-system-solving stage. An example of badness might be a pivot chosen that contained 1-sin(x)^2-cos(x)^2 as a factor of its numerator or denominator. The approach above avoids this danger by setting Normalizer to t->simplify(t,trig).

Note that during LU decomposition (as the initial step in exact or symbolic linear system solving) the pivot selection checks for nonzero pivots by using Testzero, where,

> eval(Testzero);
                   proc(O) evalb(Normalizer(O) = 0) end proc

Of course, one could also set one's own Testzero to whatever stronger or more reliable zero-testing check is desired, and leave Normalizer at its default of normal(), but that's a story for another day.

acer

See the help-page ?interface and scroll down to the item rtablesize. That specifies the largest size Matrix/Vector which displays in full.

For example, you could do,

interface(rtablesize=25):

This is also described in the second item in the Description in the ?Matrix help-page.

acer

> expr:=a^(3/2)*(1/a)^(3/2);
                                    (3/2)      (3/2)
                           expr := a      (1/a)
 
> simplify(expr) assuming a>0;
                                       1
 
> simplify(expr) assuming a<0;
                                      -1

> simplify(expr,symbolic);
                                       1

See the 5th item in the help-page for `^` (type in ?^).

> (-1)^(3/2);
                                      -I
> exp((3/2)*ln(-1));
                                      -I

acer

I get something not quite as you expected, for the integral from 0..infinity, under certain assumptions about k, T, and m.

I assume that you meant T instead of t in the expected result.

> expr:=(4*v*(m/(2*Pi*k*T))^(3/2)*Pi*v^2)*exp(-m*v^2/(2*k*T)):

> sol:=int(expr,v=0..infinity):
> sol := simplify(sol) assuming m/(Pi*k*T)>0;

                                        1/2 / m \1/2
                                 2 k T 2    |---|
                                            \k T/
                          sol := -------------------
                                           1/2
                                       m Pi
 
> expected:=sqrt(8*k*T/(Pi*m));
                                        1/2 /k T \1/2
                         expected := 2 2    |----|
                                            \Pi m/
 
> simplify(sol-expected) assuming T>0, k>0, m>0;
                                       0

acer

> p:=t^6+t^3+1:
> sol:=convert(simplify([solve(p)]),expln);

                                                   1/2
sol := [exp(2/9 I Pi), -1/2 exp(2/9 I Pi) + 1/2 I 3    exp(2/9 I Pi),
 
                                1/2
    -1/2 exp(2/9 I Pi) - 1/2 I 3    exp(2/9 I Pi), exp(-2/9 I Pi),
 
                                 1/2
    -1/2 exp(-2/9 I Pi) - 1/2 I 3    exp(-2/9 I Pi),
 
                                 1/2
    -1/2 exp(-2/9 I Pi) + 1/2 I 3    exp(-2/9 I Pi)]

So, now `sol` is a list of the solutions, in your form where possible. You can check them by evaluating polynomial `p` at t=x, for each member x of the solution list.

> seq(simplify(eval(p,t=x)),x in sol);
                               0, 0, 0, 0, 0, 0

acer

The very first link in the results of a web search for "jacobi iterative" was an intro (with Matlab code, which could be translated to Maple). But there were many good hits, since it's a very simple and standard technique.

Are you having difficulty understanding the method, or in writing Maple code to implement it? If the latter, then post what you've got so far.
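As a rough sketch of how the method might be written in Maple (the procedure name, stopping tolerance, and choice of infinity-norm here are all my own, not from any particular reference):

> jacobi := proc(A::Matrix, b::Vector, x0::Vector, tol, maxiter)
>   local n, x, xnew, i, j, k, s;
>   n := LinearAlgebra[RowDimension](A);
>   x := x0;
>   for k to maxiter do
>     xnew := Vector(n);
>     for i to n do
>       # sum of the off-diagonal terms A[i,j]*x[j]
>       s := add(`if`(j<>i, A[i,j]*x[j], 0), j=1..n);
>       xnew[i] := (b[i] - s)/A[i,i];
>     end do;
>     if LinearAlgebra[Norm](xnew-x, infinity) < tol then return xnew end if;
>     x := xnew;
>   end do;
>   xnew;
> end proc:

For a strictly diagonally dominant A this should converge, e.g. jacobi(Matrix([[4.,1.],[1.,3.]]), Vector([1.,2.]), Vector([0.,0.]), 1e-8, 100).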

acer

If you use contours=[1,0.83,0.81] then does it appear to you as if the contours are shrinking to the point indicated by the contour at height 101/128?

Try plot3d(f,x=-2..2,y=-2..2,axes=boxed) for another visual of it. It should appear as if you are asking for the contour about the local minimum (3/4,-7/16). It's possible that contourplot is not showing the single black dot at (3/4,-7/16), which you could test by display(v) alone.

acer

Sorry if this was a homework question.

acer
