acer

32348 Reputation

29 Badges

19 years, 329 days
Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are replies submitted by acer

Note that you are computing a symbolic result, not a pure floating-point result. Actually, it's a mixed float-symbolic problem, which often is a special kind of animal. And the result will be very lengthy. Note, too, that pure floating-point Matrix exponentiation should be pretty fast.

So perhaps you should ask yourself: what do you expect to do with such a result? Often, such lengthy symbolic results don't give much insight on their own. One often ends up numerically instantiating (possibly many) values for the symbol (here, the `t`) in some other calculation or plot.

By my rough estimation, one can do about 1500 such 10x10 complex[8] pure floating-point Matrix exponentials in the same time that one can do the given symbolic example.

So rather than try for the general univariate symbolic result (which might just end up being evaluated at many individual t values), perhaps it would be more efficient to alter one's methodology and compute the many individual purely floating-point Matrix exponentials directly.
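As a rough sketch of that float approach (the Matrix entries below are just made-up sample data, not the original poster's Matrix; MatrixExponential accepts a second argument for the scalar factor):

> with(LinearAlgebra):
> # A 10x10 hardware-datatype Matrix with arbitrary complex float entries.
> A := Matrix(10, 10, (i,j) -> evalf(sin(i*j) + cos(i+j)*I), datatype=complex[8]):
> # Compute exp(t*A) at each individual t value, purely in floats.
> for t in [0.1, 0.2, 0.3] do
>     MatrixExponential(A, t):
> end do:

Each such call dispatches to fast compiled external routines, which is the source of the roughly 1500-to-1 speed ratio estimated above.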

acer

Try passing `plot` the operator F1 instead of the expression returned by F1(t). I.e.,

plot(F1, -2..2);

I also saw quick success (even using the expression form) after simplifying the integrand (with `simplify`) following a conversion of the .2 to the exact rational 2/10, or after injecting a looser tolerance option into the evalf@Int call. In each of those cases the timing was about the same: roughly a factor of 5 faster than your original call.

So, that can make plotting F1 faster, and fill those gaps. But it's still not very fast to plot F1 over a range of x. That's because computing F1 at each value of x involves a separate infinite oscillatory numerical integral. There isn't a good general scheme for such integrals in Maple yet.
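For reference, the tolerance tweak mentioned above can look something like the following. The integrand here is a stand-in, not your actual F1 definition; the epsilon option to evalf/Int loosens the requested accuracy of the numeric quadrature:

> # Hypothetical integrand, for illustration only.
> F1 := x -> evalf(Int(t -> sin(x*t)/(1+t^2), 0..infinity, epsilon=1.0e-5)):
> plot(F1, -2..2);

Using the operator form of Int (and of F1, for plot) also avoids premature symbolic evaluation at each sample point.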

acer

So, first you are doing this?

a:=readdata(`testdata.txt`,string,16):

If you've done that, then you could extract the contents of each list in `a` by replacing your lengthy command,

seq(op(i,op(1,a)),i=1..nops(op(1,a)));

with simply this,

op(a[1]);

As an alternative to using StringTools to remove the W characters, perhaps try something like this,

a:=seq(map(t->`if`(t="W",NULL,parse(t)),T),
       T in readdata(`testdata.txt`,string,16)):

That gives you a sequence of lists. Remember: you can produce a sequence from a list by simply applying op to it. Ie, at that point,

op(a[5]); # sequence, from 5th line/list in `a`

But I can't see why you need to convert the lists to sequences right away. A list is often just as easy to manipulate. I understand that you intend to plot the data somehow.


acer


This is a much more interesting example. After looking at the plot, I was tempted to brush it off as being another case where the pathological spike was being missed with the default evaluationlimit and nodelimit options. But, as you showed, it doesn't help to crank up those option values. And I didn't have success with increasing Digits simultaneously with that (did Axel?!).

But a tiny shift in the range had success. So I am guessing that the internal routines of the univariate b&b solver have gone wrong when estimating lower bounds for a cell, when using 0.0 or less as the left-endpoint. Maybe something like that. (Or perhaps the interpolating polynomial is not right. I looked with infolevel[Optimization] at 6, but didn't debug it.)

> restart:

> Optimization[NLPSolve](x^2 +1- exp(-1/(110*(x-1))^2), x= -1.39 .. 1.4, method=branchandbound, maximize);

              [2.00611111585267299, [x = 1.00326066115002321]]

> Optimization[NLPSolve](x^2 +1- exp(-1/(110*(x-1))^2), x= -1.4 .. 1.41,   method=branchandbound, maximize);

              [2.00611111585267166, [x = 1.00326066227561483]]

> Optimization[NLPSolve](x^2 +1- exp(-1/(110*(x-1))^2), x= -1.4 .. 1.4,   method=branchandbound, maximize);

              [1.96001434792277918, [x = -1.39999999999999990]]

acer


It's not just a comparison between `subs` and `algsubs` here, since the posted examples supplied them with different inputs. In the `subs` case, the results came following isolation via `solve`.

acer


The defaults for evaluationlimit and nodelimit could be set higher for the univariate branch-and-bound solver. Very often the objective function evaluates quickly, and so there is no overlong delay when using more resources with higher default limits.

One can remove the "almost" in the statement in the reply above, and its principal clause will still be correct. Any numerical method can be beaten by a bad enough function.

I would not call the OP's example a case of an outright bug, as there is no set of default values for such options which satisfy everyone all the time. But the relevant default values for the nodelimit and evaluationlimit options might be raised (doubled?), to catch more problems by default yet without adding any great delay.

> restart: kernelopts(printbytes=false):
> st:=time():
> Optimization[NLPSolve](x^2/10 + sin(1000*x), x= -5 .. 5,
>    method=branchandbound, maximize, evaluationlimit=12000, nodelimit=2000);
               [3.49984521055628761, [x = -4.99984570315593846]]
 
> time()-st;
                                     5.583

Obviously the option values used for the above example are not suitable as defaults. They would make the computation take too long and work too hard in general. But one can see that the options can be adjusted so as to get a better (than default) result even for this objective.

acer


Maybe this bug could be alleviated by changing some instance of a call to `normal` inside the internal routines used by JordanForm to instead be a call to `Normalizer`. (I didn't debug in order to figure out for which lines, if any, this might be true.)

> restart:

> unprotect(normal):
> _orig_normal:=eval(normal):
> normal:=x->Algebraic:-Normal(x):

> A:=Matrix([[1,sqrt(2),-sqrt(6)],[sqrt(2),2,sqrt(3)],[-sqrt(6),sqrt(3),0]]):

> LinearAlgebra:-JordanForm(A);
                                [-3    0    0]
                                [            ]
                                [ 0    3    0]
                                [            ]
                                [ 0    0    3]
 
> Q1:=LinearAlgebra:-JordanForm(A, output='Q'):
> Q2,J2:=LinearAlgebra:-JordanForm(A, output=['Q','J']):

> normal:=eval(_orig_normal):
> protect(normal):

> simplify(Q1);
                 [             1/2  1/2             1/2  1/2]
                 [            2    3               2    3   ]
                 [ 1/3      - --------- + 2/3    - ---------]
                 [                2                    2    ]
                 [                                          ]
                 [   1/2           1/2                      ]
                 [  2             2                         ]
                 [- ----          ----                0     ]
                 [   6             6                        ]
                 [                                          ]
                 [  1/2              1/2                    ]
                 [ 6                6                       ]
                 [ ----         1 - ----              1     ]
                 [  6                6                      ]
 
> simplify(Q2);
                 [             1/2  1/2             1/2  1/2]
                 [            2    3               2    3   ]
                 [ 1/3      - --------- + 2/3    - ---------]
                 [                2                    2    ]
                 [                                          ]
                 [   1/2           1/2                      ]
                 [  2             2                         ]
                 [- ----          ----                0     ]
                 [   6             6                        ]
                 [                                          ]
                 [  1/2              1/2                    ]
                 [ 6                6                       ]
                 [ ----         1 - ----              1     ]
                 [  6                6                      ]
 
> simplify(Q2^(-1) . A . Q2 - J2);
                                 [0    0    0]
                                 [           ]
                                 [0    0    0]
                                 [           ]
                                 [0    0    0]

The idea is that, for a Matrix with radicals present, one might first assign `Normalizer` to be a routine capable of handling radicals (e.g. `radnormal`, Algebraic:-Normal, or `simplify`).
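That less invasive usage would look like the sketch below. Note the hedge: this only helps if JordanForm's internals are actually changed to call Normalizer, as suggested above; with the current internals the assignment has no effect, which is why the demonstration earlier had to overwrite `normal` itself.

> restart:
> # User-level workaround, contingent on a patched JordanForm:
> Normalizer := radnormal:
> A := Matrix([[1,sqrt(2),-sqrt(6)],[sqrt(2),2,sqrt(3)],[-sqrt(6),sqrt(3),0]]):
> Q, J := LinearAlgebra:-JordanForm(A, output=['Q','J']):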

acer


It is mostly explained above. If the Matrix has the symmetric or hermitian indexing function, and if the data is all of type complex(numeric) with some float present (or complex[8] or float[8] datatype, etc), then the underlying method used by the Eigenvectors command will sort the eigenvalues in the returned Vector.

And the eigenvectors are columns in the returned Matrix which correspond to entries in that returned Vector of sorted eigenvalues, so the columns are thus sorted for these cases.

This is all due to which external clapack routine is used.

The key thing is that the Eigenvectors and Eigenvalues commands do not try to automatically detect symmetry -- the symmetric or hermitian indexing function must be specified on the data Matrix.

I think that is acceptable behaviour. Since there is no nonsymmetric indexing function, there would otherwise be no way to force use of the nonsymmetric method in the case that float data only appeared close to hermitian. An automatic test for symmetry would likely involve a tolerance in the float case.
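For illustration, here is a small made-up example of supplying the indexing function and hardware datatype so that the symmetric external routine (with its sorted eigenvalues) gets used:

> with(LinearAlgebra):
> # shape=symmetric plus float[8] data triggers the symmetric LAPACK path.
> A := Matrix([[2.0, 1.0], [1.0, 3.0]], shape=symmetric, datatype=float[8]):
> Eigenvalues(A);

Without the shape=symmetric option, the same float data would instead go through the general nonsymmetric method, and no eigenvalue ordering would be guaranteed.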

acer

It was bothering me that I got the wrong result when I first tried to use Quantile for this. (This is what Quantile is for, in general, so it's a problem if it is weak at this.) Here's a bit of analysis.

> restart:with(Statistics):
> X:=RandomVariable('Normal'(0,1)):

> # This next goes wrong because `evalf/RootOf` doesn't tell fsolve to use a real range!
> evalf[50](Quantile(X,1-10^(-18)));
              -3.9377187255341233799589551305177585550470808666755

            - 4.5754487927888755526460423197296824251197610676463 I

> # Observe that for exact arguments Quantile returns a RootOf that lacks "realness" info.
> # Really, that is a bug of Quantile (certainly not fsolve's fault).
> forget(evalf):
> T:=Quantile(X,1-10^(-18));
                                                                (1/2)
       RootOf(500000000000000000 erf(_Z) - 499999999999999999) 2     

> # Sometimes this next call (which gets to `fsolve`) will "get lucky"
> # and find the real root. But often not!
> evalf(%);
                                 8.757290342

> # Let's fix that, by injecting a real-range into the RootOf
> forget(evalf):
> subsindets(T,specfunc(anything,RootOf),t->RootOf(op(t),1..20));evalf[50](%);
                                                                     (1/2)
   RootOf(500000000000000000 erf(_Z) - 499999999999999999, 1 .. 20) 2     
             8.7572903487823150638811286221420828183378493880373

Now for some easier general purpose workarounds (read: better use of Statistics).

> # Gosh, silly me. Statistics can do its own real root-finding,
> # but we have to let it know that it's a float problem up front.
> forget(evalf):
> Quantile(X,1.0-10^(-18)); # notice 1.0 instead of exact 1
                                 7.391904314

> # And at default precision that answer is poor.
> # But it seems reasonable to expect to have to raise Digits above
> # 18 (at least), if we throw terms like 1.0-10^(-18) around.
> Digits:=20:
> Quantile(X,1.0-10^(-18));
                            8.7572903487832079711

> # Similarly, the 'numeric' option also tells Quantile to use floats.
> forget(evalf):
> Digits:=20:
> Quantile(X,1-10^(-18),numeric);
                            8.7572903487832079711

I ought to have come up with that last pair before.

acer

