## type checking...

I checked my k(x) in Maple 8, Maple 10, and Maple 11, and in each case both one-sided limits were correctly evaluated. The type checking seems to be the key, along with using "is(x<1)" instead of just "x<1". Note that in your original proc you wrote if type(x,numeric) then... As a consequence, you would not get a numerical value for k(sqrt(2)), since sqrt(2) is not of type numeric. You might use "realcons" instead of "numeric", but that still chokes the one-sided limit. My guess about why "is" seems to be necessary is that it forces Maple to try harder to decide whether x<1, which is a critical step in evaluating the one-sided limit. Or maybe it is just magic.

## finding limit...

I think this does what you want:

    k := proc(x)
      if is(x < 1) then -1
      elif is(x > 1) then 1
      elif x = 1 then undefined
      else 'k(x)'
      end if
    end proc;

    limit('k(x)', x = 1, right);
    limit('k(x)', x = 1, left);
    plot('k(x)', x = -2 .. 2, discont = true);

## range of coefficients for MatrixExponent...

One thing to watch out for is this: in the paper you reference above, the author uses a "physics convention" in which the lambda matrices are Hermitian! Mathematicians think of the Lie algebra su(N) as consisting of skew-Hermitian matrices. A real linear combination of skew-Hermitian matrices is skew-Hermitian, and the matrix exponential of a (trace-zero) skew-Hermitian matrix is (special) unitary. So if you apply LinearAlgebra[MatrixExponential] directly to your lambda matrices, you will not get something in SU(N)! I think you need to first fix your lambda matrices by multiplying them by "I" or "-I" to get in sync with Maple's MatrixExponential. Or you could exponentiate linear combinations sum(alpha[i]*lambda[i], i=1..N^2-1) where the alpha's must be pure imaginary. This seems awkward to me, because I like to think of su(N) as a vector space over R, and because exp(lambda) will not be unitary. But the physics convention works, even if it is not completely in sync with MatrixExponential. Since SU(N) is compact, the exponential map from su(N) to SU(N) is surjective: it will "wrap" su(N) around and around SU(N).
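The skew-Hermitian point is easy to check numerically. The sketch below is plain Python rather than Maple (an illustration only), with a small Taylor-series matrix exponential standing in for MatrixExponential, and the Pauli matrix sigma_y standing in for one of the lambda matrices (any trace-zero Hermitian matrix behaves the same way):

```python
# Plain-Python illustration (not Maple): exp of a Hermitian matrix is not
# unitary, but exp of I*(Hermitian) is.

def matmul(A, B):
    """2x2 complex matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm(A, terms=40):
    """Taylor-series matrix exponential: sum of A^n / n!."""
    result = [[1, 0], [0, 1]]
    power = [[1, 0], [0, 1]]
    fact = 1.0
    for n in range(1, terms):
        power = matmul(power, A)
        fact *= n
        result = [[result[i][j] + power[i][j] / fact for j in range(2)]
                  for i in range(2)]
    return result

def is_unitary(A, tol=1e-9):
    """Check that A times its conjugate transpose is the identity."""
    Ad = [[A[j][i].conjugate() for j in range(2)] for i in range(2)]
    P = matmul(A, Ad)
    return all(abs(P[i][j] - (1 if i == j else 0)) < tol
               for i in range(2) for j in range(2))

sigma_y = [[0, -1j], [1j, 0]]                         # Hermitian, trace zero
U = expm([[1j * x for x in row] for row in sigma_y])  # exp(I*lambda): unitary
H = expm(sigma_y)                                     # exp(lambda): not unitary
print(is_unitary(U), is_unitary(H))                   # True False
```

Multiplying by I before exponentiating is exactly the "fix" described above.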

## I will eat crow...

Indeed, my statement that exp(A+B) = exp(A)exp(B) cannot be correct: it would imply that matrix multiplication is commutative. If we replace the product of matrix exponentials with a single matrix exponential of the corresponding linear combination of matrices in the Lie algebra su(N), we do get something in SU(N). The exponential map does take the Lie algebra to the corresponding Lie group. But in general we will not get the matrix product of the corresponding exponentiated lambdas. I would like my crow with salsa.
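The correction can be seen numerically with any pair of non-commuting matrices. This is a plain-Python sketch (not Maple), using a small Taylor-series exponential and the standard 2x2 nilpotent matrices, chosen purely for illustration:

```python
# Plain-Python check (not Maple): for non-commuting A and B,
# exp(A)*exp(B) differs from exp(A+B).

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm(A, terms=40):
    # Taylor series: exp(A) = sum of A^n / n!
    result = [[1.0, 0.0], [0.0, 1.0]]
    power = [[1.0, 0.0], [0.0, 1.0]]
    fact = 1.0
    for n in range(1, terms):
        power = matmul(power, A)
        fact *= n
        result = [[result[i][j] + power[i][j] / fact for j in range(2)]
                  for i in range(2)]
    return result

A = [[0.0, 1.0], [0.0, 0.0]]   # nilpotent, so exp(A) = I + A exactly
B = [[0.0, 0.0], [1.0, 0.0]]   # A and B do not commute
lhs = matmul(expm(A), expm(B))          # [[2.0, 1.0], [1.0, 1.0]]
rhs = expm([[A[i][j] + B[i][j] for j in range(2)] for i in range(2)])
print(lhs[0][0], rhs[0][0])             # 2.0 vs cosh(1), about 1.543
```

The two results agree only when A and B commute, which is the point of the retraction above.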

## exp(A)exp(B)=exp(A+B)...

Since exp(A)exp(B)=exp(A+B) for matrices, you should be able to form a linear combination of the lambda matrices with coefficients alpha[i]. The linear combination is in the Lie algebra, and thus you simply apply MatrixExponential only once, to the linear combination.

## more curry...

In your example, it would be easier simply to have h := f-g; instead of h := curry(f-g); so I still do not see the advantage of using "curry". If I want to fix the first argument, I could use either h := unapply((f-g)(a,v,w,x,y), v,w,x,y); or h := curry(f-g, a); I guess the latter does save me a few keystrokes. What I am wondering is whether curry is internally different from, and somehow more efficient than, proc or unapply, or whether it is just a "keystroke" efficiency.

## curry...

This "curry" command is new to me and seems bizarre, so maybe I can learn something here.

    f := (x, y) -> x^2*y;
    g := curry(f, 2);
    g(z);

It seems I could more intuitively define g by g := y -> f(2,y); or g := unapply(f(2,y), y); Does "curry" offer something more than these two approaches, which I find more intuitive? Using "curry" just makes me hungry for Indian food.
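For comparison, currying shows up in other languages too. This Python sketch (functools.partial, offered only as an analogy, not Maple) does the same job as curry(f, 2), and, as with unapply, an explicit definition gives the same result:

```python
# Python analogy (not Maple): functools.partial plays the role of curry.
from functools import partial

f = lambda x, y: x**2 * y
g1 = partial(f, 2)        # curry-style: fix the first argument at 2
g2 = lambda y: f(2, y)    # explicit definition, same result
print(g1(5), g2(5))       # 20 20
```

Either way the payoff is mostly notational, which matches the "keystroke efficiency" suspicion above.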

## Maple should not have psychic pretension...

A colleague who works with physicists likes to make some of the above points about context with the following example:

    T(x,y) = k*(x^2+y^2)
    T(r,theta) = ???

A true mathematician, and Maple (at least for now), will respond that T(r,theta) = k*(r^2+theta^2). A physicist or psychic, who infers meaningful context, will respond that T(r,theta) = k*r^2. It is fine with me if the physicists and psychics do this, but I do not want Maple to infer syntactic context without my explicit permission, invoked with something like with(psychic);

----------

On a related side note: try this sequence of symbols (issued from a keyboard, of course, not with carpal-tunnel point-and-click!): x/y/z, (a) when the input is in Maple notation and (b) when the input is 2D. In case (a) you will get x/(y*z), and in case (b) you get (x*z)/y. Disgusting.

I had a polite discussion with my daughter's elementary school teacher about the keystrokes x/y*z (they were using numbers instead of variables, paper and pencil rather than a keyboard, and the division sign, \div in LaTeX, instead of fraction bars). She marked my daughter wrong for parsing it as (x/y)*z. The teacher's reasoning amounted to assuming that multiplication takes precedence over division. My reasoning was that multiplication and division have equal precedence, just as + and - do, so you break ties from left to right. I did not confuse the polite discussion by telling her that I learned linear algebra out of Herstein, who writes xT instead of T(x).
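For what it is worth, the left-to-right tie-breaking rule argued for here is what conventional programming-language parsers implement. A quick check in Python (used only as a stand-in for "an ordinary parser", not Maple):

```python
# Division and multiplication have equal precedence and associate left
# to right, so x/y*z parses as (x/y)*z and x/y/z as (x/y)/z.
x, y, z = 12.0, 3.0, 2.0
print(x / y * z)   # 8.0  -> (x/y)*z, not x/(y*z)
print(x / y / z)   # 2.0  -> (x/y)/z
```

So the daughter's parse is the one every mainstream language agrees with.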

## composition...

In Maple 10, is(f@(g@g) = (f@g)@g) comes back false, as you report, but in Maple 11 it comes back true. So I guess it was a bug that has since been fixed.

## Partial Credit MapleTA question...

Pages 4 and 5 of this link http://staff.science.uva.nl/~heck/instaptoetsen/ahlvg2.pdf show an example of a MapleTA question that gives partial credit; see page 6 for some discussion. So it is not altogether true that MapleTA answers can only be right or wrong. Sorry I cannot help you with the chaining question.