acer

32348 Reputation

29 Badges

19 years, 331 days
Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are replies submitted by acer

@raffamaiden The lprint command is for line (1D, linear) printing, as its help page explains. That's lprint's purpose, not 2D formatted printing. What you described is 2D formatted printing.

A few more comments might be added, if I may. The next two paragraphs are other suggestions to support the idea that symbolic inversion is not a good idea here.

The purely numerical floating-point computation (of the inverse, or of solving a linear system), whether done in Maple or Matlab, will be careful about selection of pivots. That is, the floating-point implementation uses the specific numeric values during pivot selection. Symbolically inverting the Matrix removes this care, regardless of whether the symbolic pivots are chosen according to symbolic length or by no scheme at all. Only a scheme of symbolic pivot selection that could qualify the relative magnitudes of the pivot candidates under assumptions on the unknowns would have a hope of regaining it. But that could be difficult, might require a case-split solution, and it would add to the symbolic processing overhead. And the symbolic LU (or inversion) doesn't allow for a custom pivot-selection strategy.
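
As a concrete illustration of the kind of trouble meant (a hedged sketch with a made-up 2x2 Matrix, not anything from the original post; nothing rejects the symbol `a` as a pivot, since it is not identically zero):

> M := Matrix([[a, 1], [1, 1]]):
> P, L, U := LinearAlgebra:-LUDecomposition(M):
> # L acquires the entry 1/a, and U the entry 1 - 1/a; fine symbolically,
> # but substituting a tiny float for `a` loses the 1 to roundoff:
> eval(U[2,2], a = 1e-20);   # the 1 is swamped: about -1.0*10^20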

Even if you keep some form of Testzero, it's still only checking whether the pivots are symbolically identical to zero. It provides no safety against a pivot attaining the value zero for particular numeric values of the unknowns. If that happens then the whole approach breaks down with a division by zero.
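
A hedged sketch of that failure mode, continuing with an invented Matrix: the pivot a - 1 is not identically zero, so Testzero lets it through, yet the computed factors blow up at a = 1.

> M2 := Matrix([[a - 1, 1], [1, 1]]):
> P2, L2, U2 := LinearAlgebra:-LUDecomposition(M2):
> eval(L2, a = 1);   # the symbolic factor contains 1/(a-1)

Error, numeric exception: division by zero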

Perhaps there is some other way to improve the performance of the Matlab code. Do you compute the inverse in order to solve linear equations like A.x=b? It wasn't clear from your post whether you might be solving, at different instances, for various RHSs `b` with the same numeric instance of Matrix `A`. If you have to solve for different RHSs `b` with the very same numeric values of the unknowns, then you could consider doing the numeric LU factorization of `A` just once for those values, and then using only forward- and backward-substitution for each distinct RHS instance. This isn't going to help if you solve for just a single RHS for any particular set of values of the variables.
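
In Maple terms that workflow might look like the following minimal sketch (the Matrix and RHS values here are invented purely for illustration):

> with(LinearAlgebra):
> A := Matrix([[2., 1.], [1., 3.]], datatype = float[8]):
> P, L, U := LUDecomposition(A):        # numeric factorization, done once
> b1 := Vector([1., 2.], datatype = float[8]):
> b2 := Vector([0., 1.], datatype = float[8]):
> # each additional RHS costs only a forward and a backward substitution
> x1 := BackwardSubstitute(U, ForwardSubstitute(L, Transpose(P) . b1)):
> x2 := BackwardSubstitute(U, ForwardSubstitute(L, Transpose(P) . b2)):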

acer


The idea that a range (ie. a restriction supplied on the domain) should not be necessary for numerical, floating-point rootfinding of a general expression or function is wrong.

There is no default range that will work for all problems. Provide a suggestion for an all-purpose range, and it can be broken by example. Provide a rule that tries to deduce a good working restricted domain by examining the function (under a time limit or operational resource limit), and it too can be broken by example.
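
For instance (a hedged sketch, with the function and the candidate default range invented for illustration): whatever default range [-R, R] one proposes, a function whose only real root lies just outside it defeats it.

> R := 10^6:
> f := x -> x - (R + 1):   # sole root at x = R + 1, just outside [-R, R]
> # searching only the proposed default range never reaches the root,
> # so this should return FAIL:
> RootFinding:-NextZero(f, -R, maxdistance = 2*R);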

Numerical analysis shouldn't be considered too arcane for younger students, if only to illustrate to them some of the difficulties. Younger people can understand not being able to see the forest for the trees, or not being able to see the trees for the forest.

acer


@erik10 For your simpler example, the way you call NextZero is not best. The documentation describes the maxdistance optional parameter, which is relevant here. The matter is not that complicated: one may need, in general, to supply a function, a starting point, and a maximal distance to be searched. I think that is a reasonable specification, in general. Thus I'd suggest that the maxdistance argument should not even be optional! And a call like this does succeed in finding the first root:

> RootFinding[NextZero](x -> x^3-3*2^x+3,-1000,maxdistance=2000);

                          -1.189979747

I think that the nastiness in this latest example is not so much due to the rapid oscillation as it is due to the problem of scale. The initial search point of -100 is quite far from the root cluster. This can cause problems like missing the roots, or overshooting the first roots (which is what now seems to happen for your example with the range supplied as -100 to 100, after my code revision).

One nice aspect of the findroots procedure (if I may be immodest) is that it allows the number of tries to be controlled. The Roots command doesn't allow for that. For example,

> [findroots(sin(1/x)-x, -10, 10, maxtries=100)]:
> nops(%);

                              100

> {findroots(sin(1/x)-x, -10, 10, maxtries=1000)}:
> nops(%);

                              1000

Posing numeric problems without adequate scaling information is the mistake, I think. Trying to find the root of any general blackbox function in a floating-point framework, without any information of the problem scale (even as a search interval width) is misguided. Should the algorithm painstakingly scrutinize the very small scale, or the very large, or what?

Your description of the technique of visual inspection of the plot is slightly misleading in that it makes it sound too foolproof. Intentionally or not, the notes suggest that this is the less error-prone method which should sit on top of one's solving attempts. But I would claim that, just like any other automated solving process, the plotting approach is at risk. Do you rely on "smartplot" bounds? If not, how do you know how wide to take the domain for plotting? How do you know that the plot (which is inherently discrete, even if it appears as a smooth curve) does not have very tight corners and roots veiled from view by the nature of the plot driver? One can defeat any plot driver and have it miss roots in the display.
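
For instance (a hedged sketch, with a function invented just to make the point): a sufficiently narrow dip can fall between the plot driver's sample points, while a numeric solver pointed at the right subinterval still finds the crossing.

> f := x -> 1 - 2*exp(-(1000*(x - 0.123456))^2):
> plot(f, 0 .. 1);           # quite likely renders as the constant 1
> fsolve(f, 0.12 .. 0.13);   # yet there is a sign change in there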

I believe that I had a (only slightly subtle) coding error in the first version. I realized it on a long drive today. The call to NextZero, inside findroots, should use maxdistance=b-start instead of maxdistance=b-`if`(...). I investigated, and went back and edited that line in the code I posted above.
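
Since the corrected findroots code lives in the post above rather than here, this rough reconstruction of the relevant loop (hypothetical; only the name and the calling sequence match the examples above) may help show where that fix lands:

> findroots := proc(expr, a, b, {maxtries::posint := 100})
>   local f, start, r, res, i;
>   f := unapply(expr, indets(expr, ':-name')[1]);  # assumes one variable
>   start := a;  res := NULL;
>   for i to maxtries do
>     # the fix: measure the remaining search distance from the
>     # current start point, all the way to b
>     r := RootFinding:-NextZero(f, start, ':-maxdistance' = b - start);
>     if r = FAIL or r > b then break; end if;
>     res := res, r;
>     start := r;
>   end do;
>   res;
> end proc: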

ps. Please don't get the wrong impression. Yours is a very interesting question. I just don't think that there is any foolproof general answer which does not require human thought and consideration of the particularities of the problem at hand. (You once asked another fascinating question, a year or so ago, about accuracy and precision. I've been thinking on it, and very slowly trying to come up with a deserving answer, since that time.)


@Alec Mihailovs 

I do hope that it comes across that I am trying to be precise, and not rude. However, you didn't "just" show how to use evalhf with `eval`. You did imply at some point that it was evalhf which was causing the behaviour to be like that of double precision.

You explained, "evalhf produces the same result if the calculations were done in the hardware floats", apparently as part of the rationale for wrapping it around `eval`. That implied that evalhf is what made it get the same result as double precision. I was correcting that erroneous implication (intended by you, or not).

In Maple 10, extracting a scalar from a float[8] rtable did not produce an HFloat scalar object, but it does in Maple 15. In Maple 10 such extraction produced a software float scalar with roughly the same length as the doubles inside a float[8] rtable. And such software floats drop accuracy under arithmetic operations at lower working precision. And so the evalhf(eval(...)) approach fails even in Maple 11, where `eval` first became able to run under evalhf, as I showed by example.

As for Maple 15, the evalhf wrapping may not be necessary under certain kinds of operation (eg. arithmetic) in order to get behaviour with higher accuracy (eg. like double precision) than software floats would give at the same current Digits setting:

> # Maple 15. evalhf is not needed, to get behaviour like doubles on the NLPSolve result.
> restart:

> A:=(x-3)^2:

> sol:=Optimization:-NLPSolve(A);

                    [0., [x = HFloat(3.0)]]

> evalf(subs(sol[2],1/3.));

                          0.3333333333

> eval(1/x,sol[2]);

                       0.3333333333333333

As for the bundled Maple libraries, yes, they have a mix, and no, it is not necessarily wrong. I did say that evalf(...,n) is not wrong, used in the right way in the right hands. It's quite OK to do evalf(expr, n) where expr is assigned some expression and n is assigned some posint. It's clear that this will not hit the corner case I mentioned, where a preformed expression sequence is passed as a single argument to evalf.
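
A minimal sketch of the safe pattern meant here (with invented values):

> expr := 1/3:  n := 7:
> evalf(expr, n);   # two actual arguments reach evalf; no ambiguity

                           0.3333333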

I earnestly hope that this has not come off to other readers as a personal attack by me. I've written on this site before that I hold Alec's abilities -- Maple, coding, and math -- in high regard. And clearly there was a little confusion about the particularities of the poster's Maple 10 version, so it seems good that various distinctions have been aired.

ps. "complete information"... is very amusing. Thanks for that.


@Alec Mihailovs One of my example's points is that your use of evalhf is not relevant to how the result is obtained. The `eval` call within the `evalhf` call that you made in Maple 15 does not inherit the Digits setting (of evalhf).

As I mentioned, a result from `eval` similar to that using hardware doubles may be due to the NLPSolve result being an HFloat (and cascading of that through the arithmetic of the `eval`). But it's not because it is called from within evalhf.
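
A small sketch of that cascading (contagion) effect, assuming a Maple 15 session at the default Digits of 10:

> 1/HFloat(3.0);   # the HFloat propagates through the arithmetic

                   HFloat(0.3333333333333333)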

For some subtle reasons (to do with a small number of corner-case examples, so subtle that I forget them over and over again, and always have to comb email archives to re-find them...) the indexed form of `evalf` is slightly superior to the two-argument calling sequence. I believe that's why the help page for evalf lists the former and not the latter in recent versions. It's understood, you have your own preference, and I for one will not criticize it.

[edited: ..had to dig, for this] Here's one such corner case. Some users, after getting in the habit of using the non-indexed form like evalf(expression, n), get confused about why the two calls below don't work the same way. It's not that evalf(..., n) is wrong, but more that people who aren't in the habit of using it won't run into this confusing example. (So, it may not affect an expert like yourself, but there's still a reasonable argument for suggesting one form over the other to less experienced users.)

> evalf(1/3, 7);

                           0.3333333

> g:=1/3, 7;

                              1   
                              -, 7
                              3   

> evalf(g);

                        0.3333333333, 7.
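
For comparison, the indexed calling sequence keeps the working precision out of the argument list entirely:

> evalf[7](1/3);

                           0.3333333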

I'm not a big fan of how votes change the order of posts here, either. Together with the mix and mix-ups of answers vs comments, it makes a mess of lots of threads.


@Alec Mihailovs I do not wish to give offense, but it looks to me as if the stated problem was with the precision of evaluation of the objective at the returned solution (as opposed to, say, the question of whether that solution was globally or locally optimal, etc).

In Maple 10, which the submitter mentioned using, NLPSolve does not return a result with HFloats. So not even double precision will be used under `eval`. Also, `eval` cannot be called from within `evalhf` in Maple 10.

In Maple 11, it still wouldn't be adequate if greater working precision than the current Digits setting (eg. the default of 10) were wanted:

> A:=(x-3)^2:

> sol:=Optimization:-NLPSolve(A);

                               [0., [x = 3.]]

> evalf(subs(sol[2],1/x));

                                0.3333333333

> evalhf(eval(1/x,sol[2]));

                            0.333333333299999979

So, for the stated issue, in earlier Maple it's probably better either to raise Digits first, or to invoke evalf with its indexed form, like evalf[n](...).
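
For instance, a sketch continuing the Maple 11 session above:

> evalf[20](eval(1/x,sol[2]));

                    0.33333333333333333333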

In Maple 15, double precision may get used, due to the presence of HFloats in the NLPSolve result.

> A:=(x-3)^2:

> sol:=Optimization:-NLPSolve(A);

                    [0., [x = HFloat(3.0)]]

> evalf(subs(sol[2],1/x));

                   HFloat(0.3333333333333333)

> evalhf(eval(1/x,sol[2]));

                      0.333333333333333315

> lprint(sol);

[0., [x = HFloat(3.)]]

> kernelopts(version);

   Maple 15.00, X86 64 WINDOWS, Mar 21 2011, Build ID 582305

But if even greater working precision is needed, then raising Digits above 15, or calling evalf[n](..) with n>15, may be required.


@Christopher2222 You do not have to have an existing (empty) worksheet saved in order to open a new, empty one.

All you have to do is INTERFACE_WORKSHEET(':-display',':-text'=...) where the ... is a certain kind of long string of XML.

I don't think this avenue helps with Doug's question.

I think that the example below shows that it is the kernel (not the GUI) which is preventing quit/done/stop from succeeding. It might be possible to bamboozle the kernel into thinking it was running in the command-line interface, but obvious attempts at that (like forcing a lie in the remember table of `IsWorksheetInterface`) don't seem to work here.

# Construct, via FromInert, a procedure whose sole statement is `stop`,
# and invoke it immediately:
FromInert(_Inert_PROC(_Inert_PARAMSEQ(), _Inert_LOCALSEQ(),
                      _Inert_OPTIONSEQ(), _Inert_EXPSEQ(),
                      _Inert_STATSEQ(_Inert_STOP()),
                      _Inert_DESCRIPTIONSEQ(), _Inert_GLOBALSEQ(),
                      _Inert_LEXICALSEQ(), _Inert_EOP(_Inert_EXPSEQ())))();

Warning, done/quit/stop disabled.  Please use File->Close

What else could be tried? Hmm. Has anyone ever experimented with external-calling to Java on the libs bundled with the product? (Is there a public/private issue with that? I'm also asking because I'd like to find the routine the GUI uses to encode embedded images -- it's not just base64.) I tried an OpenMaple help example once, and it acted as if it were the CLI. Just wild thinking.
