MaplePrimes Activity


These are replies submitted by acer

That is not too hard to set up with numeric integration. You can even make a "black box" procedure which accepts a numeric value for c and returns the float approximation of the integral. That could be plotted as a function of c, etc.

(If taking a numeric approach then I'd recommend trying either the Monte Carlo or the _CubaCuhre method for evalf(Int(...)), since the integrand will be discontinuous at the boundary, being zero outside the region. Most other methods rely on smoothness, and the cost of splitting at the implicit boundary will not be nice, especially if nesting a 1-D integrator.)
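For instance, here is a minimal sketch of such a black-box procedure. The region x^2+y^2 <= c and the integrand 1/(1+x^2+y^2) are made up purely for illustration; the actual example would differ.

F := proc(c)
  # Return unevaluated for nonnumeric input, so that plot(F, ...) works.
  if not type(c, numeric) then
    return 'procname'(c);
  end if;
  # The integrand is forced to zero outside the implicit region, hence
  # the choice of the _CubaCuhre method.
  evalf(Int((x, y) -> `if`(x^2 + y^2 <= c, 1/(1 + x^2 + y^2), 0),
            [-2 .. 2, -2 .. 2], 'method' = _CubaCuhre, 'epsilon' = 1.0e-4));
end proc:

plot(F, 0 .. 4);   # the float approximation of the integral, as a function of c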

Or are you really hoping for an explicit formula? Whether that can be accomplished will depend on the example. In general it won't be possible.

@Mac Dude Yes, UseHardwareFloats=true will cause some computations to be done faster. But forcing it, and thereby preventing many instances of software float computation, will break far more computations.

The UseHardwareFloats environment variable was introduced in Maple 6, and for several major releases almost the only effect it had was on whether datatype=float acted like datatype=sfloat or datatype=float[8] for rtables.

A few years later the scalar HFloat was devised, and a few people sought out ways to make fast scalar floating-point computation easier to accomplish. (There will always be some people whose wish for a silver bullet defies cold logic. HFloats are not immediate and still need memory management, and do not bring the same degree of performance benefit as evalhf, let alone the Compiler.) The option hfloat for procedures arose around the same time, and allowed more flexibility than evalhf even if not as much performance benefit. Then UseHardwareFloats was used to also control default HFloat creation upon extraction of scalars from float[8] rtables, and in modern Maple it can play a role similar to option hfloat, but at the top level. Alas, the UseHardwareFloats documentation is thin.

The reason I'm describing some history is that it's important to realize that a very large portion of the Maple Library (many 100s of thousands of lines of code) was designed and existed for decades under the scheme that increasing Digits would normally allow more accurate floating-point computation. This aspect is still relied upon in many places in the Library.

But if you set UseHardwareFloats to true then in modern Maple that will strictly prevent higher software precision computation and thus also more accurate results from being attained in quite a few routines, some of them key.

And there are additional, important nuances, aside from just high working precision. The hardware float environment has a restricted range (magnitudes from roughly 10^(-308) up to about 10^308, as I reckon you know). But with software floats much larger or smaller exponents can be used, even with the default Digits=10. There are Library commands which rely on that in order to function as designed.

Consider the expression exp(750.1 - x) where x is in the range 760 .. 765. This produces values which are not implausible for an underlying physical setting or model. But if one happens to expand that symbolically, then under forced hardware floating-point the result becomes Float(infinity)/exp(x), which will bring no joy for x in the stated range. So, here, with UseHardwareFloats=true a reasonable problem has suddenly become intractable and requires considerably more care and effort to handle. That is just one example; many more problematic cases can arise. Float computations can be problematic under all settings, but this hardware float setting introduces a lot of issues which Maple's software floating-point arena handles nicely.
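A minimal illustration of that expand behavior (deduced is the default setting):

restart;
UseHardwareFloats := true:
expand(exp(750.1 - x));
    # Float(infinity)/exp(x), since exp(750.1) overflows the hardware range
UseHardwareFloats := 'deduced':
expand(exp(750.1 - x));
    # roughly 5.9*10^325/exp(x); software floats handle the large exponent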

You wrote, "If it isn't working or useful, then why is the option even there?"  Now, I most certainly did not say or imply that UseHardwareFloats=true "isn't working or useful"! I don't know how you managed to make that non sequitur. My opinion that UseHardwareFloats=true is not a good top-level, initialized setting does not at all imply that it is never useful.

Just as you can set option hfloat on a procedure of your own devising, you can also set UseHardwareFloats=true. Within a procedure, or for a limited kind of top-level calculation (pure float linear-algebra, say), it can indeed work and be useful. As Joe mentioned, as an environment variable its value is inherited by child procedure calls, but setting it does not affect the parent.
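For instance, a minimal sketch (the procedure and data here are made up, purely for illustration):

fastnorm := proc(V::Vector)
  # UseHardwareFloats is an environment variable: this setting affects
  # only the present call and its children, not the caller.
  UseHardwareFloats := true;
  LinearAlgebra:-Norm(V, 2);
end proc:

fastnorm(LinearAlgebra:-RandomVector(1000, 'generator' = 0. .. 1.));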

So it can be useful, in a targeted, specific computational subprocess like a custom procedure for some task you might have. But setting it at the top-level, as a blanket setting, is going to break stuff.

When a procedure is generated by Localize, it affects numeric output from the original Sol returned by dsolve.

If the procedure generated by Localize is ever used to change/set its own parameter then it affects Sol in different ways, according to whether the new parameter value was previously utilized in a call to Localize or not.

A call to query the current parameters, made to the result from Localize, contains the global rather than local names.

In other words, idiosyncratic (but not outright wrong or unexplainable) behaviour follows if the Localize results are ever used to change their parameter value, or if the original solution from dsolve is utilized. And the remember table of Localize needs clearing if memory is to be recovered, even following unassignment of the Localization procedures and gc. All in all, I think that this is not manageable by the common man. And the current behavior might change in the future.

foobar.mw

@bliengme You used = instead of := and so did not actually assign the result from solve.
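For example (with a made-up equation):

sol = solve(x + 2 = 5, x);    # merely an equation; the name sol stays unassigned
sol := solve(x + 2 = 5, x);   # an assignment; sol now holds the result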

Are you really using Maple version 16, or is it perhaps version 2016?

@Carl Love I'd like to address some of the points in your last Comment.

[edit: I now suspect that in the paragraph below I may have been mistakenly responding to statements by Carl about code by Preben, and not by me. Upon re-reading, I now suspect that by "your code" Carl did not mean something that I wrote. But I'll let the paragraph stand...]
Yes, the whole point of my post was that, using a naive/obvious approach with the procedure returned by dsolve (numeric), one should never call plot3d like plot3d(Y(A,X), X= -2..2, A= -2..2, ...) where A is the parameter and X the independent variable (of the ODE). So my approach never "gets that wrong" since I am advocating never doing it. If one insists on doing it that way then it's not my approach. (I know that, by "your code", you are referring to the first argument passed to plot3d, the procedure. But I feel that the wording was imprecise enough that I can justify the preceding clarification for the sake of other readers.)

[edit: The rest of this Comment I'll leave, as I was planning on writing it anyway.]

When I wrote this post I was aware that, with effort, I could get a speedup with use of either remember tables or the instantiation being discussed. (My earlier response to you was about a particular way in which I thought you wanted to get your hands on that. I now realize that I may have given the wrong impression. However, I had not ever considered the sscanf@sprintf approach, which is very ingenious.)

And, yes, I have been interpreting "instantiation" as used in Comments above to mean it in the particular way that you last described it, where one (if not the) key feature is that the instantiated procedure will not be affected by changes to the parent (or by the spawning of other instantiations). The instantiation is done so as to obtain a solving procedure for a given value of the parameter, where that procedure persists independently.

But now I would like to state my position about such approaches of instantiation, or remember table use. I consider them ingenious but inferior to the two main methods I've given, which lay down the solution one parameter value per slice. I'll list three considerations that I consider important about such approaches:

  1. They are much more complicated, to lay down in code and for most people to understand.
  2. They create allocations which would eventually have to be cleaned up, to avoid the effect of a memory leak. While not terrible in itself, that is one more aspect of complication and possible burden during use.
  3. There can be integration schemes for the IVP solver where, for a given parameter, it is possible to lay down the solution very efficiently (as independent variable/time t increases). That might not always be leveraged by use of plot3d, but it could be beneficial in use by plots:-odeplot (for a 3D plot or 2D animation).

Over the years my viewpoint has shifted somewhat, against implementations which require memory management (either manual/custom, or by stock kernel functionality). I find that I now have a marked preference for structural efficiency, when other considerations are mostly the same. That's just a general comment about point 2) above.

So it is my belief that exploiting the way that plot3d runs over the GRID points is important and germane.

I completely understand, if anyone wishes to disagree.

What I would really like to see, in the future, is for the plots:-odeplot command to get new and additional functionality that provides high efficiency for constructing 2D animations or 3D plots in which the parameters of an IVP system are utilized as one of the independently changing values. I think that numeric ODE solving is important and common enough to warrant the effort.

I recall discussion about improving DEtools[DEplot] and/or plots:-odeplot with respect to using IVP parameters way back in 2011. There was also side talk about remember table use there, and even if it was not as beneficial or sophisticated as your and Preben's comments above, it's worth noting that it illustrated how manual management of memoization techniques can be a struggle for even strong non-expert Maple users. I think that any sophisticated approach to this whole issue, whether structural or memory oriented, is better positioned within a convenient interface of a stock Library command, for the majority of users.

 

@Adam Ledger I don't think that you are going to be able to get that to work satisfactorily. As mentioned, Maple does not ship with debug versions of its binaries (executables and shared/dynamic libraries).

@Carl Love I don't know a clean and easy way to localize the dsolve solution procedures, when instantiated at particular parameter values.

Preben, perhaps I ought to have phrased it more like a description of the behaviour seen. That is to say, even when the parameter value is not being changed, the mere act of setting it seems to degrade the performance of the mixed pattern of setting the parameter and computing at a specific point of the variable. For all I know, that aspect might be a bug. But it would still be natural and expected to have to generate the output values by walking the variable outermost and the parameter innermost.

Here is the shortest code I have to obtain the two possible choices for the parameter & variable, as first or second independent axis. It's not something I consider easy to find.

# WV is assumed to be the solution procedure from dsolve(..., numeric)
# with parameters=['U0']. Query the current parameter value (converting
# any local names in the result to globals), and only reset the
# parameter when it actually differs, since merely setting it again
# degrades performance.
VofU0check := proc(par, var)
  if eval(U0, convert(WV('parameters'), `global`)) <> par then
    WV('parameters' = ['U0' = par]);
  end if;
  WV(var);   # evaluate the solution at the given point of the variable
end proc:

# Parametric calling sequence swaps the display axes: t on the first axis.
plot3d([(x,y)->y, (x,y)->x, VofU0check], U0lo..U0hi, tlo..thi,
       labels=["t","U0","V(t)"]);

# Direct form: U0 on the first axis, t on the second.
plot3d(VofU0check, U0lo..U0hi, tlo..thi,
       labels=["U0","t","V(t)"]);

I am not aware of a way to call plots:-odeplot to force the much better performance here. If someone could show a way then I'd be delighted. But I suspect that plots:-odeplot has not been taught, yet, how to work best with these issues of instantiating parameters.

Upload an example Worksheet/Document that illustrates the problem.

Your description is far too vague. An actual worksheet would be better.

@tomleslie That's why I asked whether the initial conditions are right. I figured he didn't want the zero-solution.

I get some nice plots of V(t) (3D, against U0 and t) with a few tweaks to the initial conditions (e.g. derivatives at zero being small and positive, etc). That's with numeric solving.

But the OP seems to be expecting an explicit result for V(t) "in terms of U0". Who knows what else he'd do with such a monster (given slightly different initial conditions and a non-trivial solution, of course), other than plot it or compute it at numeric points. Could be tricky to find exactly, with parameters, float coefficients, etc.

Do you mean that you want to plot V(t) in terms of both U0 and t, as a 3D plot? If not, and you somehow want a 2D plot of V(t) in terms of U0, then what would be the value for t?

What is the value of A, which appears in your equations?

Are you 100% sure that the initial conditions are correct? Would you accept, say, D(r1)(0)=10^(-6) or greater?

@Kitonum Aha. I was paying attention to the posted C code, and you to the title. It's interesting that his C code doesn't do what the title asks, for all negative real r.

@Kitonum Is that going to act the same as the C cast, for negative r?

As a side note, you should stop constructing expressions that involve both a symbol such as RK as well as the indexed name RK[1]. (Or K alongside K[1], say.) It will lead to problems down the road.
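A small illustration of why (using a fresh name, purely for demonstration):

expr := RK + RK[1]:   # mixes the symbol RK with the indexed name RK[1]
RK[1] := 0.5:         # this silently creates a table, assigned to the name RK
eval(RK);             # RK is no longer a free symbol; it now holds that table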

The OP uses a table Rr with a custom indexing function `index/xxx`. The only purpose of this seems to be to allow the typeset 2D Math to show (subscripted) indexed-name references rather than function calls to RrProc.

That un-memoized use of the custom-indexing function inhibits performance considerably, which is magnified by having the references be done inside a six-fold nested add.

Let's assume that the procedure RrProc is to be kept. (It is inefficient. In the Comment/Reply above, I replaced it with a simple formula, for even better performance. But let's imagine that it really is needed, and that no improvement such as changing the add ranges to i=m+1..II is allowed, even though RrProc just returns zero when i<=m.)

I use the randomize command to produce the same random data for each worksheet. Sorting the polynomial result shows that the results are all effectively the same. It's just performance which differs. I also bumped up the parameters II=8, JJ=8, and M=6 to make the example take longer.

I passed the PI-bar to UP1 instead of PI. I don't know what the OP intends. Otherwise the PI-bar is unused in his worksheet. The results differ from using PI, but the performance concerns are basically the same.

The best performance here entails:
- Using a plain (non-custom-indexed) table when calling UP1, created up front (outside all the add calls) from the custom-indexed table. This is important.
- Passing that plain table to the procedure UP1 which does the main job.
- Using Grid:-Set to tell the kernel to pass the plain table along to the nodes, regardless.
This is the first attachment below, and a rough sketch of the idea follows this list.
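The sketch below assumes, purely for illustration, that the table Rr is indexed by a pair of integers; the attached worksheets have the actual details.

# Flatten the custom-indexed table into a plain table, up front, so that
# the nested add calls never trigger the custom indexing function.
RrPlain := table():
for i to II do
  for j to JJ do
    RrPlain[i,j] := Rr[i,j];   # the custom indexing fires once per entry, here
  end do;
end do:

# Ship the plain table (and the worker procedure) to all Grid nodes.
Grid:-Set('RrPlain', 'UP1'):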

Almost as good is to do as above, but without passing the plain table to the procedure UP1.

Also not bad, but not as good as the last, is to put option remember on the procedure RrProc which is called by the table custom-indexing-function.
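For instance, a minimal sketch (the body below is a hypothetical stand-in for the real RrProc):

RrProc := proc(i, m)
  option remember;   # cache each result, keyed on the arguments
  if i <= m then
    0                # the zero case mentioned above
  else
    (i - m)^2        # hypothetical stand-in for the actual formula
  end if;
end proc: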

Worst of all is to use the custom-indexing function table, without option remember on RrProc. That is like the OP's original.

The biggest improvement is from the original use of seq, at 21 sec, to the best variant with Grid:-Seq, at 1.3 sec.

soal-1_no_customindex_pass.mw

soal-1_no_customindex_no_pass.mw

soal-1_customindex_pass_remember.mw

soal-1_customindex_no_pass_remember.mw

soal-1_customindex_pass.mw

soal-1_customindex_no_pass.mw

Further editing two of the add ranges to be i=m+1..II and k=m+1..II brings the best performance here down to 0.4sec. That's a 50-times speedup over the original, even while retaining the inefficient RrProc.
