epostma

1579 Reputation

19 Badges

17 years, 49 days
Maplesoft

I am the manager of the Mathematical Software Group, working mostly on the Maple library. I have been working at Maplesoft since 2007, mostly on the Statistics and Units packages and on contract work with industry users. My background is in abstract algebra, in which I completed a PhD at Eindhoven University of Technology. During my studies I was always searching for interesting questions at the crossroads of math and computer science. When I came to Canada in 2007, I had nothing but a work permit, some contacts at Maplesoft (whom I had met at a computer algebra conference a year earlier), and a plan to travel around beautiful Canada for a few months. Since Maplesoft is a company that solves more interesting math and computer science related questions than just about anywhere else, I was quite eager to join them, and after about three months, I could start.

MaplePrimes Activity


These are replies submitted by epostma

I just talked to one of our control theory experts. He says: "In fact, since his AR model is linear, the Kalman filter gives the best possible solution among all estimators as long as the random term is Gaussian. (The Kalman filter is essentially applied to estimation problems with Gaussian noise.) If the noise is not Gaussian, the Kalman filter is still usable, but it is suboptimal."

If you happen to have access to the Control Design Toolbox, you could use the ?ControlDesign,Kalman procedure; that would take care of any implementation difficulties.
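For readers without the toolbox, the core idea is small enough to sketch directly. Below is a minimal scalar Kalman filter in Python (not Maple code, and not the ControlDesign implementation; all names and parameter values are made up for illustration):

```python
import random

def kalman_1d(measurements, a=0.9, q=0.01, r=0.25, x0=0.0, p0=1.0):
    """Scalar Kalman filter for x[k+1] = a*x[k] + w, y[k] = x[k] + v,
    with process-noise variance q and measurement-noise variance r."""
    x, p = x0, p0
    estimates = []
    for y in measurements:
        # Predict step: propagate the state and its variance.
        x = a * x
        p = a * p * a + q
        # Update step: blend prediction and measurement via the Kalman gain.
        k = p / (p + r)
        x = x + k * (y - x)
        p = (1 - k) * p
        estimates.append(x)
    return estimates

random.seed(1)
# Simulate an AR(1) process with Gaussian noise and noisy observations.
truth, x = [], 1.0
for _ in range(200):
    x = 0.9 * x + random.gauss(0, 0.1)
    truth.append(x)
ys = [t + random.gauss(0, 0.5) for t in truth]

est = kalman_1d(ys)
err_raw = sum((y - t) ** 2 for y, t in zip(ys, truth)) / len(truth)
err_kf = sum((e - t) ** 2 for e, t in zip(est, truth)) / len(truth)
# With Gaussian noise and correct model parameters, the filtered estimates
# should have substantially smaller mean squared error than the raw data.
```

This also illustrates the expert's point above: the optimality guarantee relies on the noise being Gaussian and the model linear.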

Hope this helps,

Erik Postma
Maplesoft.

Right. I entered the coefficients that I saw in that figure into Maple just to see what happens. First of all, I suspect the coefficients have more significant digits than the images show, because the fit with the displayed values was not very good. (The expression I ended up with was (0.841e-1+25.8927*x+3.5311*x^2-1.4071*x^3-.2573*x^4+0.42419e-3*x^5+0.15e-2*x^6+0.33739e-4*x^7)/(1+.1668*x-0.517e-1*x^2-0.117e-1*x^3-0.1981e-3*x^4+0.66445e-4*x^5+0.2534e-5*x^6).)

However, irrespective of these issues, the interpolant that MathCad finds has two asymptotes in the range that you're interested in: one at about 4.15 and one at about 13.0. This is clearly visible if you plot the interpolant over that range.

I would think this is a more serious issue than the size of the residuals: you may have a good fit at the data points you specify, but in between those points the interpolant goes all the way to infinity and back!

So I would still suggest using an interpolant whose denominator has only one root: possibly just (x - a). To get a smaller residual, you can use some of the coefficients that were "freed up" by not using them in the denominator to increase the degree of the numerator. For example, you could use a 12th-degree polynomial divided by (x - a). This gives a standard deviation of the residual of 0.12.

(Note that reducing the residual to these sizes is only useful if your data is actually that accurate - but I think it is indeed quite accurate, right?)
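To make the suggestion concrete outside of Maple, here is a sketch in Python of fitting p(x)/(x - a): for any fixed pole a, the numerator coefficients follow from ordinary linear least squares on y*(x - a), so one can simply scan candidate poles and keep the best. The helper names (lstsq, fit_rational) and the synthetic data are my own, for illustration only:

```python
def lstsq(A, b):
    """Solve the normal equations (A^T A) c = A^T b by Gaussian elimination."""
    n = len(A[0])
    M = [[sum(A[i][j] * A[i][k] for i in range(len(A))) for k in range(n)]
         for j in range(n)]
    v = [sum(A[i][j] * b[i] for i in range(len(A))) for j in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n):
                M[r][k] -= f * M[col][k]
            v[r] -= f * v[col]
    c = [0.0] * n
    for r in range(n - 1, -1, -1):
        c[r] = (v[r] - sum(M[r][k] * c[k] for k in range(r + 1, n))) / M[r][r]
    return c

def fit_rational(xs, ys, degree, poles):
    """Fit y ~ p(x)/(x - a): for each candidate pole a, the numerator
    coefficients follow from linear least squares on y*(x - a); keep the
    pole giving the smallest residual."""
    best = None
    for a in poles:
        A = [[x ** j for j in range(degree + 1)] for x in xs]
        b = [y * (x - a) for x, y in zip(xs, ys)]
        c = lstsq(A, b)
        resid = sum((sum(cj * x ** j for j, cj in enumerate(c)) / (x - a) - y) ** 2
                    for x, y in zip(xs, ys))
        if best is None or resid < best[0]:
            best = (resid, a, c)
    return best

# Synthetic data from a known rational function with a pole at x = -2;
# the scan recovers that pole from the candidate grid.
xs = [i / 2 for i in range(1, 21)]
ys = [(1 + 3 * x + 0.5 * x * x) / (x + 2) for x in xs]
resid, a, c = fit_rational(xs, ys, 2, [-3 + i / 10 for i in range(20)])
```

The key design point is that fixing the pole makes the problem linear, which is exactly why a one-root denominator is so much easier to fit robustly than the full rational interpolant.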

Hope this helps,

Erik Postma
Maplesoft.

I'm not an expert on this, but the system that you posted looks like it might well be tackled with a tool from control engineering called a Kalman filter. That technique works online (that is, you get a useful estimate after a few data points, and it improves as you continue), but it works just as well offline (where you only need the best possible estimate given all data points at once). Moreover, it is optimal in a certain sense.

It only works if the "update function" is linear, though. (There are similar approaches for nonlinear systems, such as the extended Kalman filter, but you lose the guarantee of optimality and, I believe, even of convergence.)

Hope this helps,

Erik Postma
Maplesoft.

PS: On an unrelated note, an optimization of the code you showed above. You only need to call Fit once:

w1, w2 := op(Fit(B2*x, XX, YY, x, output = [parametervector, standarderrors])):

will do the trick.
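The same one-call pattern (parameter estimate and standard error returned together) can be sketched in plain Python; fit_through_origin is a hypothetical helper, not an existing API:

```python
import math

def fit_through_origin(xs, ys):
    """Least-squares fit of y = w*x, returning both the estimate of w and
    its standard error in a single call (analogous to asking Maple's Fit
    for [parametervector, standarderrors] at once)."""
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    w = sxy / sxx
    n = len(xs)
    # Residual variance with n - 1 degrees of freedom (one fitted parameter).
    s2 = sum((y - w * x) ** 2 for x, y in zip(xs, ys)) / (n - 1)
    return w, math.sqrt(s2 / sxx)

w, se = fit_through_origin([1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8])
```

Computing both quantities in one pass avoids redoing the fit, which is the same point as calling Fit only once above.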

Hi HerClau,

I'm not sure why, but your MathCad images do not seem to display here. I looked at your worksheet, and although the fit looks reasonable, the interpolants you have found have a number of asymptotes in a region where the data does not suggest there should be any. So I think you can do better than the rational function interpolant that you have chosen.

I thought about this for a bit; the data really suggests at most one asymptote, so my first thought was to divide a polynomial by (x - a)^b, where a and b are fittable parameters. (I decided to use the Statistics:-Fit command, not CurveFitting.) This leads to complex numbers, however, so I changed that to dividing by abs(x - a)^b. This led to a reasonable fit, but with b almost equal to one, so I fixed b to be 1 (meaning my interpolant was a rational function again). One thing to keep in mind is that Maple uses a local optimization routine, so the result depends strongly on the initial values; it's always a good idea to supply those.

So to be concrete, this is what I did:

top_order := 5: # numerator degree; adjust to taste
interpolant := add(c || i * k^i, i = 0 .. top_order)/(k - c || (top_order + 1));
expr := Statistics:-Fit(interpolant, X, Y, k, initialvalues = [c0 = 0, c1 = 500, seq(c || i = 0, i = 2 .. top_order), c || (top_order + 1) = -6.5]);
# The denominator of expr is now approx. k + 6.39, so the only asymptote is at k = -6.39. That seems to fit the data well.
fittedFunction := unapply(expr, k);
YC := fittedFunction~(X);
Statistics:-StandardDeviation(Y - YC); # about 0.44
max(abs~(Y - YC)); # about 0.95

So this is a somewhat better fit than you had before. Is this more or less what you were looking for?

Hope this helps,

Erik Postma
Maplesoft.

If you want to monitor the progress, then change the last line to

hvalues := map(proc(dTv)
   print(dTv);
   return evalf[6](eval(h, [d = dTv[1], T = dTv[2]]));
end proc, dTvalues);
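For comparison, the same monitor-while-mapping pattern looks like this in Python (names are illustrative):

```python
def monitored_map(f, values):
    """Apply f to each value, printing each input as we go so that
    long-running computations show their progress."""
    results = []
    for v in values:
        print("processing:", v)
        results.append(f(v))
    return results

squares = monitored_map(lambda v: v * v, [1, 2, 3])
```

As in the Maple version, the trick is simply to wrap the mapped function so it reports its argument before computing the result.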

Hope this helps,

Erik Postma
Maplesoft.

If so, and you know a basis (maybe the generators that you specified form a basis?), then you could definitely use the LieAlgebras package. In that case you would first decide on a numbering of your basis elements (for example, ordering first by the roots in the root system, then by the subscript), then define mappings that go back and forth between the "more natural" representation (e.g. the Q-with-sub-and-superscripts one) and the basis element numbers, plus rules for expressing the product of two "natural" basis elements in terms of other "natural" basis elements. Then you can construct the multiplication table for the numbered basis elements and feed that into the LieAlgebras package. This has the advantage that you could use its functionality, such as Centralizer to obtain the centralizer of some elements, etc.

If not, then the LieAlgebras package will not work. In this case I would use define, as the post that you're referring to was trying to do as well. The trick here is that the thing you want to "define" is the multiplication operator (i.e., the bracket). You can denote this with any legal Maple name, but it's maybe nicest to use a so-called neutral operator (see ?neutral ). I'll suggest &* in the following.

First, one note. Maple does not support putting a vector in an exponent; I didn't realize this and first tried to implement the code below using vectors in exponents, but Maple doesn't accept that. Additionally, the pattern-matching code does not work well with indices (subscripts). So we'll need a different notation within Maple for the Qsx (which would be Q[s]^x, internally). Maybe Q(s, x)? That's what I'll do from here on.

You would then proceed by defining the multiplication rule:

define(`&*`, multilinear, Q(s :: integer, x :: Vector) &* Q(t :: integer, y :: Vector) = Q(s + t, x + y));

In fact, it might be best to define two operations: one of "formal" multiplication, and one of "normalization", where you enforce that certain generators are mapped to zero or maybe to linear combinations of other generators. So you could do this:

undefine(`&*`); # to undo the previous definition (or just restart;)
define(normalize, linear, normalize(Q(s :: And(integer, Range(1, infinity)), x :: Vector)) = 0, normalize(Q(s :: integer, x :: Vector)) = Q(s, x));
define(`&*`, multilinear, Q(s :: integer, x :: Vector) &* Q(t :: integer, y :: Vector) = normalize(Q(s + t, x + y)));

Now you can do this sort of stuff:

normalize(2 * Q(2, <1, -2>) - Q(0, <1, 2>));
Q(1, <1, 2>) &* Q(0, <0, -1>);
Q(1, <1, 0>) &* (Q(1, <0, 1>) &* Q(-1, <-1, 0>));
(Q(1, <1, 0>) &* Q(1, <0, 1>)) &* Q(-1, <-1, 0>);
(Q(1, <1,0>) + 2 * Q(0, <0,1>)) &* (Q(-2, <1,1>) + Q(1, <0,-1>));

which will give you the answers

-Q(0, <1, 2>)
Q(1, <1, 1>)
Q(1, <0, 1>)
0
Q(-1, <2,1>) + 2 * Q(-2, <1,2>) + 2 * Q(1, <0, 0>)

Note this is not perfect: for example, if you enter a quantity that would be normalized to zero, it will not necessarily work correctly:

Q(2, <1, 0>) &* Q(-1, <0, 2>); # should return 0 but will return Q(1, <1, 2>).

If you're concerned about this then normalize every quantity that you enter.
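If it helps to see the mechanics outside of Maple's define, here is a small Python sketch of the same setup: elements are formal linear combinations (dicts from basis keys to coefficients), the product is bilinear on basis elements via Q(s, x) * Q(t, y) = Q(s + t, x + y), and a normalization step drops the generators that map to zero. The cutoff used in normalize is just a stand-in for your actual rule:

```python
from collections import defaultdict

def normalize(elem, keep=lambda s, x: s < 2):
    """Drop basis terms that the (illustrative) rule maps to zero,
    as well as terms whose coefficient has cancelled out."""
    return {k: c for k, c in elem.items() if keep(*k) and c != 0}

def bracket(u, v):
    """Bilinear product on formal linear combinations of basis elements
    Q(s, x): Q(s, x) * Q(t, y) = Q(s + t, x + y), then normalized."""
    out = defaultdict(int)
    for (s, x), a in u.items():
        for (t, y), b in v.items():
            key = (s + t, tuple(xi + yi for xi, yi in zip(x, y)))
            out[key] += a * b
    return normalize(dict(out))

def Q(s, x, coeff=1):
    """A single (scaled) basis element as a one-term linear combination."""
    return {(s, tuple(x)): coeff}

# Mirrors the Maple example: Q(1, <1, 2>) &* Q(0, <0, -1>) = Q(1, <1, 1>).
product = bracket(Q(1, (1, 2)), Q(0, (0, -1)))
```

The dict-of-coefficients representation makes multilinearity automatic, which is exactly what the multilinear option to define provides in Maple.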

By the way: it's interesting that you call this a Lie algebra. The definition of a Lie algebra that I am familiar with requires the bracket to be anticommutative, that is, [x, x] = 0 for all algebra elements, which implies [x, y] = -[y, x].

Hope this helps,

Erik Postma
Maplesoft.

The solve command also comes up frequently in questions here: one common issue is exact solving vs. fsolve; another is the meaning of the warning "solutions may have been lost".

Erik.

Good idea. I added links to the FGA post, but did not write anything yet.

Erik Postma
Maplesoft.
