acer

33188 Reputation

29 Badges

20 years, 204 days
Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are replies submitted by acer

"MMA", sometimes given as "Mma", is a popular abbreviation for Mathematica.

acer

Are N.T.'s papers on these topics purely for numeric (and not symbolic) computation? Maple doesn't seem too slow for that. It seems to be the symbolic computation and interpolation (even though here using very quickly computed float eigenvalues) that gets bogged down.

acer

Note that you are computing a symbolic result, not a pure floating-point result. Actually, it's a mixed float-symbolic problem, which often is a special kind of animal. And the result will be very lengthy. Note, too, that pure floating-point Matrix exponentiation should be pretty fast.

So perhaps you should ask yourself: what do you expect to do with such a result? Often, such lengthy symbolic results don't give much insight on their own. One often ends up numerically instantiating (possibly many) values for the symbol (here, the `t`) in some other calculation or plot.

By my rough estimation, one can do about 1500 such 10x10 complex[8] pure floating-point Matrix exponentials in the same time that one can do the given symbolic example.

So rather than try for the general univariate, symbolic result (which might just end up being evaluated at many individual t values) perhaps it would be more efficient to alter one's methodology and compute the many individual purely floating-point Matrix exponentials directly.
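As a rough sketch of the pure floating-point alternative (the Matrix below is just a stand-in for the actual problem, and timings are machine-dependent):

```
# Pure complex[8] Matrix exponential: fast, no symbolic t involved.
M := LinearAlgebra:-RandomMatrix(10, 10, datatype = complex[8]):
st := time():
LinearAlgebra:-MatrixExponential(M):
time() - st;

# The symbolic form exp(M*t), by contrast, returns a very lengthy result:
#   LinearAlgebra:-MatrixExponential(Msym, t);
```

Repeating the float call for each individual t value of interest is what the rough 1500:1 estimate above is comparing against.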

acer

Try passing `plot` the operator F1, instead of the expression that is returned by F1(t). Ie,

plot(F1, -2..2);

I also saw quick success (even using the expression form) after simplifying the integrand (with `simplify`) following a conversion of the .2 to the exact rational 2/10, or after injecting a lower tolerance option into the evalf@Int call. In each case the speedup over your original call was roughly the same, about a factor of 5.

So, that can make plotting F1 faster, and fill those gaps. But it's still not very fast to plot F1 over a range of x. That's because computing F1 at each value of x involves a separate infinite oscillatory numerical integral, and there isn't a good general scheme for such integrals in Maple yet.
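A minimal sketch of the two variants mentioned above (F1 and the integrand g stand in for the original poster's definitions):

```
# Operator form: plot calls F1 numerically at each sample point,
# instead of first evaluating F1(t) symbolically.
plot(F1, -2 .. 2);

# Loosening the tolerance on the numeric integral, via the epsilon option:
F2 := x -> evalf(Int(g(x, t), t = 0 .. infinity, epsilon = 1.0e-6)):
plot(F2, -2 .. 2);
```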

acer

So, first you are doing this?

a:=readdata(`testdata.txt`,string,16):

If you've done that, then you could extract the contents of each list in `a` by replacing your lengthy command,

seq(op(i,op(1,a)),i=1..nops(op(1,a)));

with simply this,

op(a[1]);

As an alternative to using StringTools to remove the W characters, perhaps try something like this,

a:=seq(map(t->`if`(t="W",NULL,parse(t)),T),
       T in readdata(`testdata.txt`,string,16)):

That gives you a sequence of lists. Remember: you can produce a sequence from a list by simply applying op to it. Ie, at that point,

op(a[5]); # sequence, from 5th line/list in `a`

But I can't see why you need to convert the lists to sequences right away. A list is often just as easy to manipulate. I understand that you intend to somehow plot the data.
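For instance, keeping the rows as lists, one could plot a row directly. A sketch, assuming the rows are purely numeric once the W entries are removed:

```
A := [a]:               # the whole sequence of lists, as a list of lists
plots:-listplot(A[5]);  # plot the values from the 5th line of the file
```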


acer

This is a much more interesting example. After looking at the plot, I was tempted to brush it off as another case where the pathological spike was being missed with the default evaluationlimit and nodelimit options. But, as you showed, it doesn't help to crank up those option values. And I didn't have success with increasing Digits simultaneously with that (did Axel?!).

But a tiny shift in the range had success. So I am guessing that the internal routines of the univariate b&b solver have gone wrong when estimating lower bounds for a cell, when using 0.0 or less as the left-endpoint. Maybe something like that. (Or perhaps the interpolating polynomial is not right. I looked with infolevel[Optimization] at 6, but didn't debug it.)

> restart:

> Optimization[NLPSolve](x^2 +1- exp(-1/(110*(x-1))^2), x= -1.39 .. 1.4, method=branchandbound, maximize);

              [2.00611111585267299, [x = 1.00326066115002321]]

> Optimization[NLPSolve](x^2 +1- exp(-1/(110*(x-1))^2), x= -1.4 .. 1.41,   method=branchandbound, maximize);

              [2.00611111585267166, [x = 1.00326066227561483]]

> Optimization[NLPSolve](x^2 +1- exp(-1/(110*(x-1))^2), x= -1.4 .. 1.4,   method=branchandbound, maximize);

              [1.96001434792277918, [x = -1.39999999999999990]]

acer

It's not just a comparison between `subs` and `algsubs` here, since the posted examples supplied them with different inputs. In the `subs` case, the results came following isolation via `solve`.

acer

The defaults for evaluationlimit and nodelimit could be set higher for the univariate branch-and-bound solver. Very often the objective function evaluates quickly, and so there is no overlong delay when using more resources with higher default limits.

One can remove the "almost" in the statement in the reply above, and its principal clause will still be correct. Any numerical method can be beaten by a bad enough function.

I would not call the OP's example a case of an outright bug, as there is no set of default values for such options that satisfies everyone all the time. But the relevant default values for the nodelimit and evaluationlimit options might be raised (doubled?), to catch more problems by default without adding any great delay.

> restart: kernelopts(printbytes=false):
> st:=time():
> Optimization[NLPSolve](x^2/10 + sin(1000*x), x= -5 .. 5,
>    method=branchandbound, maximize, evaluationlimit=12000, nodelimit=2000);
               [3.49984521055628761, [x = -4.99984570315593846]]
 
> time()-st;
                                     5.583

Obviously the option values used for the above example are not suitable as defaults. They would make the computation take too long and work too hard in general. But one can see that the options can be adjusted so as to get a better (than default) result even for this objective.

acer

Maybe this bug could be alleviated by changing some instance of a call to `normal` inside the internal routines used by JordanForm to instead be a call to `Normalizer`. (I didn't debug in order to figure out for which lines, if any, this might be true.)

> restart:

> unprotect(normal):
> _orig_normal:=eval(normal):
> normal:=x->Algebraic:-Normal(x):

> A:=Matrix([[1,sqrt(2),-sqrt(6)],[sqrt(2),2,sqrt(3)],[-sqrt(6),sqrt(3),0]]):

> LinearAlgebra:-JordanForm(A);
                                [-3    0    0]
                                [            ]
                                [ 0    3    0]
                                [            ]
                                [ 0    0    3]
 
> Q1:=LinearAlgebra:-JordanForm(A, output='Q'):
> Q2,J2:=LinearAlgebra:-JordanForm(A, output=['Q','J']):

> normal:=eval(_orig_normal):
> protect(normal):

> simplify(Q1);
                 [             1/2  1/2             1/2  1/2]
                 [            2    3               2    3   ]
                 [ 1/3      - --------- + 2/3    - ---------]
                 [                2                    2    ]
                 [                                          ]
                 [   1/2           1/2                      ]
                 [  2             2                         ]
                 [- ----          ----                0     ]
                 [   6             6                        ]
                 [                                          ]
                 [  1/2              1/2                    ]
                 [ 6                6                       ]
                 [ ----         1 - ----              1     ]
                 [  6                6                      ]
 
> simplify(Q2);
                 [             1/2  1/2             1/2  1/2]
                 [            2    3               2    3   ]
                 [ 1/3      - --------- + 2/3    - ---------]
                 [                2                    2    ]
                 [                                          ]
                 [   1/2           1/2                      ]
                 [  2             2                         ]
                 [- ----          ----                0     ]
                 [   6             6                        ]
                 [                                          ]
                 [  1/2              1/2                    ]
                 [ 6                6                       ]
                 [ ----         1 - ----              1     ]
                 [  6                6                      ]
 
> simplify(Q2^(-1) . A . Q2 - J2);
                                 [0    0    0]
                                 [           ]
                                 [0    0    0]
                                 [           ]
                                 [0    0    0]

The idea is that, for a Matrix with radicals present, one might first assign `Normalizer` to be a routine capable of handling radicals (e.g. `radnormal`, Algebraic:-Normal, or `simplify`).
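A sketch of that approach, less invasive than rebinding `normal` as above; it only helps if the internal routines actually consult `Normalizer`, which is the suggested change:

```
restart:
Normalizer := radnormal:   # or x -> Algebraic:-Normal(x), or simplify
A := Matrix([[1, sqrt(2), -sqrt(6)],
             [sqrt(2), 2, sqrt(3)],
             [-sqrt(6), sqrt(3), 0]]):
Q, J := LinearAlgebra:-JordanForm(A, output = ['Q', 'J']);
```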

acer
