acer

MaplePrimes Activity


These are replies submitted by acer

You need to make sure that x[k] and x[k-1] have been assigned numeric values each time they are compared, for whatever value k holds at that moment. Stick with either a scheme of indexed names x[k], x[k-1], etc., or a scheme of plain names x, xnew, xold, etc. You were mixing plain x with indexed x[k], which won't work.

Also, you indicated that you wanted the message printed only if convergence failed for all k from 1 to N, so put that test after the loop and not inside it.

> NR2:=proc(f::mathfunc,x0::complex,N::posint,eps)
> local x,k:
>   x[0] := x0:
>   for k to N do
>     x[k] := evalf( x[k-1]-f(x[k-1])/D(f)(x[k-1]) );
>   end do;
>   if abs(x[N]-x[N-1]) >= eps then
>     printf("Convergence has not been achieved after %a iterations!\n",N);
>   else
>     return x[N];
>   end if;
> end proc:
>
> f:= x-> x^5-1:
>
> NR2(f,0.6+I*0.6,10,0.00001);
Convergence has not been achieved after 10 iterations!
> NR2(f,0.2+I*0.6,10,0.00001);
                         0.3090169944 + 0.9510565163 I

Side tip: Maple's for-loop counter finishes with a value one increment past the last value used inside the loop. For example, a for-loop counting k from 1 to 10 will leave k with the value 11 after it terminates. This matters if you plan to refer to x[k] after the loop has finished. Notice that I referred to x[N] after the loop. I could also have referred to x[k-1] (which equals x[10]), but not to x[k] (which would be x[11], and is unassigned).
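
A throwaway loop shows this:

> for k to 10 do k^2 end do:
> k;
                                      11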

Lastly, Robert's suggestion to use evalf was so that a large (potentially huge) symbolic expression does not accumulate during the iterative process. Using evalf can cure that, but only if it's done prior to assigning to x or x[k]. You had done it only as a separate task afterwards; I put it right in the iterative step.
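
To see the difference, compare iterating from an exact rational starting point with and without evalf in the assignment. (This uses a made-up cubic, purely for illustration.)

> g := x -> x^3-2:
> y[0] := 1/2:
> for k to 4 do y[k] := y[k-1]-g(y[k-1])/D(g)(y[k-1]) end do:
> length( y[4] );   # size of the exact rational iterate; it grows quickly
> y[0] := 1/2:
> for k to 4 do y[k] := evalf( y[k-1]-g(y[k-1])/D(g)(y[k-1]) ) end do:
> y[4];             # remains a compact float at every step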

acer

You could throw in the option below to your DEplot call. I used the layout palette to obtain the typesetting incantation for x-dot.

labels=[t,typeset(`#mover(mi("x"),mrow(mo("⁢"),mo(".")))`)]
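
For instance, with a made-up logistic ODE (only the labels option matters here):

> with(DEtools):
> DEplot( diff(x(t),t) = x(t)*(1-x(t)), x(t), t=0..5, [[x(0)=0.1]],
>         labels=[t,typeset(`#mover(mi("x"),mrow(mo("⁢"),mo(".")))`)] );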

acer

I find problems like this can be tough to do with Maple.

Ferr := -10*(.7845815999*u2+3.141592654)*sinh(10*(.7845815999*u2+3.141592654)/((100+fout)^(1/2)*(.1998118316*u2+1)))*cos(10*(.7845815999*u2+3.141592654)/((100+fout)^(1/2)*(.1998118316*u2+1)))*u2/((100+fout)^(1/2)*(.1998118316*u2+1))
  + 200*(.7845815999*u2+3.141592654)^2*sinh(10*(.7845815999*u2+3.141592654)/((100+fout)^(1/2)*(.1998118316*u2+1)))*sin(10*(.7845815999*u2+3.141592654)/((100+fout)^(1/2)*(.1998118316*u2+1)))/((100+fout)*(.1998118316*u2+1)^2)
  + 10*(.7845815999*u2+3.141592654)*cosh(10*(.7845815999*u2+3.141592654)/((100+fout)^(1/2)*(.1998118316*u2+1)))*u2*sin(10*(.7845815999*u2+3.141592654)/((100+fout)^(1/2)*(.1998118316*u2+1)))/((100+fout)^(1/2)*(.1998118316*u2+1)):
plots:-implicitplot(Ferr,u2=0..50,fout=-1..1,numpoints=30000, gridlines=true);

Judging from the graph, these results for the maximum and minimum points look right:

> Optimization:-Maximize(fout,{Ferr=0},
>        initialpoint=[u2=0,fout=0],u2=5..50,fout=-1..1);

[0.00826786487008719304,
    [u2 = 13.6048493803282895, fout = 0.00826786487008719304]]

> Optimization:-Minimize(fout,{Ferr=0},
>        initialpoint=[u2=0,fout=0],u2=0..10,fout=-1..1);

[-0.0594716927922686461,
    [u2 = 1.19113556326925552, fout = -0.0594716927922686461]]

Maple seemed to need a (feasible?) initial point in order to proceed above.

acer

The term least squares refers to a method for solving various different problems. Roughly, it means minimizing a sum of squares (usually of differences).
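
In the line-fitting case, for data points (t[i],y[i]), i=1..n, the quantity minimized over the parameters p and q is the sum of squared residuals, which in Maple notation (with hypothetical data names) is

    sum( (p*t[i]+q - y[i])^2, i=1..n )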

In this case, you indicated that you wanted to use it as a method for finding a line of best fit. The two routines that I showed can both serve this purpose of fitting a line to data. The results they returned are both equations of a line, i.e. p*t+q, which is the form you requested. (I couldn't make it p*x+q because you had already assigned to the name x.)

But there is also, for example, least squares as a means of solving an overdetermined system of linear equations. Indeed, behind the scenes, this can be how the above fitting computation gets done. If you really wanted to, you could figure out how to use your data to construct such an overdetermined linear system, call Optimization:-LSSolve on it, and then re-interpret the result to get the equation of the line. I guessed that you'd prefer having one of those two fitting routines do all that bookkeeping for you.
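
Here's a rough sketch of that by-hand route, using the algebraic form of Optimization:-LSSolve, in which each residual p*T[i]+q-Y[i] plays the role of one row of the overdetermined system. The data below is made up, purely for illustration:

> T := [1., 2., 3., 4.]:  Y := [1.9, 4.1, 5.8, 8.2]:
> sol := Optimization:-LSSolve( [seq( p*T[i]+q - Y[i], i=1..4 )] ):
> eval( p*t+q, sol[2] );   # re-interpret the solution as a line's equation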

acer

As already mentioned above, the lowess option of ScatterPlot does a form of weighted least squares. And a Vector of weights may be provided to NonlinearFit. It may be useful to think about the differences of these two approaches. An interesting issue is the possible availability of the fitted function and all its computed parameter values.

The way to supply weights to NonlinearFit is clear from its help-page, which describes the weights option for this. I don't quite understand how those weights are then used, since weights don't seem to be an option for Optimization:-LSSolve. I understand that in weighted least squares problems with data errors it is usual for such weights to be derived from the variance of the data. But I don't know exactly how the Maple solver works here. What I suspect is that the xerrors and yerrors optional parameters of ScatterPlot may be used to compute weights which are then passed on to NonlinearFit. I haven't confirmed this.
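
Mechanically, supplying such a weights Vector looks like the following. (The data, model, and weights here are all invented, purely to show where the option goes.)

> X := Vector([1., 2., 3., 4., 5.]):
> Y := Vector([0.9, 4.2, 8.8, 16.1, 24.9]):
> W := Vector([1., 1., 1., 1., 4.]):   # trust the last data point more
> Statistics:-NonlinearFit( a*x^b, X, Y, x, weights=W );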

It's not clear from the ScatterPlot help-page exactly how the weights for lowess smoothing are chosen. Its three options related to lowess smoothing are degree, robust, and lowess. It's also not clear from that help-page in what way (if any) the xerrors or yerrors options tie into the weighting; I suspect that they don't relate at all. And then there is the question of whether a formulaic fitting result is wanted, since the lowess method will not make one available. The lowess method uses a series of weighted least squares fits at different points, where the weights modify the influence of nearby neighboring points (rather than correcting for measurement uncertainty directly). I now believe that this is not what the original poster wants.

So here's a question. When xerrors and yerrors data are passed to ScatterPlot along with the fit option, is the estimated variance of that extra data used to produce the weights which then get passed along to NonlinearFit? Tracing the Maple computation in the debugger might show whether this is true. If it is, then it may be possible to extract the method for doing it "by hand", and in that way obtain the parameter values that result from the nonlinear fit.

I know that, when calling ScatterPlot with the fit option, Statistics:-NonlinearFit is called, and that Optimization:-LSSolve is also called. It remains to figure out exactly how xerrors and yerrors are used, and whether they modify the above to produce weights for NonlinearFit.
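
One way to peek at that is with trace (a sketch only; the internal calling sequence may differ from what this reveals at the top level):

> trace(Statistics:-NonlinearFit):
> trace(Optimization:-LSSolve):
> # ...now call ScatterPlot with the fit, xerrors, and yerrors options,
> # and inspect which weights (if any) get passed along.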

acer

I see what you are after, now. As far as I know the x- and y-errors are not used in the fitting calculation, even when using the lowess (weighted least squares) smoothing. But it now seems to me that you are after a statistical (or stochastic) model, and not the sort of deterministic formulaic model that NonlinearFit gives.

The sort of regression analysis of time series data that you describe (and which was hinted at in the image URL you posted) isn't implemented directly in Maple as far as I know. If you have access to a numeric library like NAG then you might be able to get what you are after using a GARCH process or similar from their g13 routines.

Do you have a URL for that Origin software? I am curious about what they might document, for any routine of theirs which does what you describe.

acer

No. evalf(3/4) will give as many zeros as makes sense at the current Digits setting.

I suspect that your fundamental difficulty lies in thinking that 0.75 is somehow the best (exact, judging by your followup) floating-point representation of the exact rational 3/4. What I tried to explain earlier is that 0.75 is merely one of many possible representations of an approximation to an exact value. It is not, in itself, exact.

What I tried to argue was that, in some sense, the number of trailing zeros is an indicator of how accurately the system knows the floating-point value. I'm not actually saying that this is why Maple behaves this way. (It isn't, really. That's why the explanation breaks down for your first example, 3/4. To get such careful accuracy and error handling one would have to go to a special package such as the two I mentioned above.) But this behaviour for conversion (approximation) of exact rationals via evalf can be somewhat useful, because it has a somewhat natural interpretation in terms of accuracy.

The idea is that when you write 0.75000 you are claiming something about the accuracy of the approximation, namely that it is accurate to within 0.000005 (or half an ulp). Similarly, writing 0.7500000000 makes an even stronger claim about the accuracy. So, if you start off with the exact value 3/4, how many zeros should its floating-point approximation get? There's not much sense in giving it more zeros than is justified by the current working precision, and so Maple gives a number of trailing zeros that reflects the current value of Digits (depending on how many nonzero leading digits precede them, of course).
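
For example:

> Digits := 5: evalf(3/4);
                                    0.75000
> Digits := 10: evalf(3/4);
                                  0.7500000000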

acer

The comments so far are very welcome.

I'll add one or two myself.

I was originally thinking just of a quick data sheet that could assist people who are acquiring systems for running Maple (some individual machines, some for Maple computer labs, etc). Comments by Roman and Jacques bring home the point that a good performance suite would also allow tracking of Maple's performance across releases. That could be useful information.

A few more subtleties of a benchmark suite: some parts could have tasks split by restart (or, maybe better, by wholly fresh session), to minimize interference amongst tasks as memory allocation grows and memory management costs take effect.

But some other parts might deliberately involve lots of tasks together, because that might get closer to typical usage.

There is also the question of Maple performance over very long continuous durations, possibly with large memory allocation. There's an active thread in comp.soft-sys.math.maple related to this.

Speaking of memory, there is also the question of memory fragmentation. Maple seems not to do its best when contiguous-memory hardware rtables are allocated and then unreferenced (i.e. when they become collectible as garbage). In current Maple, the collected memory is not always made available again in large free blocks, due to fragmentation of the freed space. I have heard proposals that garbage collection in Maple might be altered so as to also move memory chunks that are still in use. Such memory packing might release larger free contiguous blocks. In its final effect (if not its means), such memory consolidation would bear some similarity to Matlab's `pack` command.

The more I think about it, the more I see that the two purposes would be better served by very different and completely distinct sources: one simple set of codes to show the relative performance of Maple across OS/hardware combinations, and another more sophisticated suite for long-term measurement of Maple as it develops.

acer

Hi Bill,

Estimated relative performance of Maple across different operating systems and architectures is one of the ideas behind this post about a possible Maple benchmark. The question that I mostly had in mind when posting was: on which platform does Maple perform best?

Others noted that there could be other good benefits too. It might illustrate how different releases of Maple performed on the same configuration. That could lead to insight about what's improved, what's deteriorated, and where subsequent improvement efforts for Maple itself would be well spent.

So maybe it would help you a bit, if we could summarize some of the differences in Maple on various platforms.

trunc(evalhf(Digits)) is 14 on MS-Windows, and 15 elsewhere. That's the cut-off value of the Digits environment variable, above which quite a bit of modern Maple will use software floats. Below that threshold those parts of Maple (LinearAlgebra, Optimization, Statistics, evalf/Int) can use hardware double-precision floats and compute much faster (and without producing so much software garbage, which must be managed). It's also the working precision at which Maple's faster floating-point evalhf interpreter operates. So, on MS-Windows, Maple's cut-off for these things is 1 decimal digit of precision less. This cut-off value doesn't really affect exact symbolic computations, though.
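
You can check this on your own machine:

> evalhf(Digits);
                                      15.

(That's from 64bit Linux; on MS-Windows it returns 14. instead.)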

I have heard reports, but never seen hard figures, that MS-Windows is by far the leading platform for sales of Maple. This makes sense to me, especially with respect to my subjective experiences of slightly superior "look & feel" of Maple on Windows.

Some high-difficulty (non-officially supported) tweaking of Maple is easier on Linux. See here, and here.

This is an appropriate moment to mention that Maple's Classic graphical user interface is not available on OSX for Maple 10 & 11. It is not officially supported on 64bit Linux either, but here is a post that shows how it can be done.

You might also be interested in this post, which briefly discusses some performance differences between 32bit and 64bit Maple running on the same Linux machine and OS. That also arose briefly in this thread, and is something else into which a good Maple benchmark suite might provide insight.

I've noticed that running multiple invocations of Maple (all of Maple, separately and concurrently, not just multiple worksheets or Documents) is handled much better by Linux than by Windows. Also, I have seen Windows and OSX machines suffer more when hit hard by highly resource-intensive Maple calculations. For example, when running on a network, the Windows and OSX machines seem much more likely to lose remote drive mounts and network services. Those are operating system distinctions, and not aspects of the Maple implementation. They may, or may not, matter to you.

I'll state my subjective preference: a 64bit Linux distribution that is supported by Maplesoft, and on which one can also install optional 32bit runtime OS components. For Maple 11, the 64bit SuSE 10 distribution might be OK, though I have not used it. On the hardware side, I'd go for a machine that could run such an OS, with either a multi-core Athlon64 (X2) or an Intel Core2 Duo.

acer
