MaplePrimes Activity

These are replies submitted by acer

Could I check whether the following makes sense to you?



And have you read the Help page for the Topic ditto?

(One of the points is: are you aware that using % causes an evaluation?)
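To illustrate that point (a minimal made-up example, not your code): referencing % forces a full evaluation of the previous result, which can change a deliberately unevaluated expression.

```maple
restart;
x := 3:
'x' + 1;   # uneval quotes delay evaluation; displays x + 1
%;         # ditto re-evaluates the prior result, giving 4
```

That full re-evaluation is what the ditto Help page warns about.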

@Scot Gould Both int(...,numeric) and int(...,x=floatrange) are themselves going to call evalf(Int(...)) but with extra overhead to do the dispatch.

BTW, if your timing comparison of 10000 repeats didn't utilize forget between calls (or construct distinct examples) then memoization is likely skewing your results. There are remember tables under `evalf/int`, etc. So, in your comparison, the extra overhead of those two int approaches might be (unintentionally, artificially) relatively high.
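For example (with a made-up integrand; f here is purely illustrative), one could clear the relevant remember tables between timed repeats:

```maple
restart;
f := x -> exp(-x^2)*sin(x):
CodeTools:-Usage(evalf(Int(f(x), x = 0 .. 1)));
forget(`evalf/int`);   # clear remember tables so the next call isn't memoized
CodeTools:-Usage(evalf(Int(f(x), x = 0 .. 1)));
```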

@Carl Love Actually, the errant line in the image is,


where G gets assigned GT and not the generated plot.

But, sure, DrawGraph and a resulting plot are out of place there, regardless. He likely wants G from Graph(M1).
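In other words, something like this sketch (with a made-up adjacency Matrix M1):

```maple
with(GraphTheory):
M1 := Matrix([[0, 1, 1], [1, 0, 0], [1, 0, 0]]):
G := Graph(M1):   # assign the graph itself to G
DrawGraph(G);     # render it as a separate step
```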


@emendes Amusingly, the simplistic approach of doing a table lookup on all entries seems to perform very well, relatively speaking. That was the second approach in my Answer. But it doesn't account for the cost of programmatically constructing the lookup tables.

If the programmatic construction of the lookup tables -- from details/provisions which you have not yet shared with us -- could be done quickly then that might be a real contender.

It's also possible that the lookup details might not need to be explicitly put into new tables -- they might be present in some other form elsewhere in your parent code, which we have not yet seen. That could affect the performance of any solution to this subtask.
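As a rough sketch of that simplistic lookup idea (the table T and its entries here are illustrative, not your actual provisions):

```maple
# a hypothetical lookup table mapping indexed names to replacements
T := table([seq(A[i] = B[i], i = 1 .. 3)]):
# do a lookup on every entry, leaving unmatched entries alone
map(e -> `if`(assigned(T[e]), T[e], e), [A[1], A[3], C]);
#   -> [B[1], B[3], C]
```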

There is a recurring theme on MaplePrimes that goes like, "Thanks for the answer, but my actual problem is as follows..."

@Carl Love If the subsindets typecheck is specindex(A) then is there really a need to utilize op(0,a) in the replacement procedure? Wouldn't just literal A work, and be a little leaner?

Also, if everything within input list varA is of type specindex(A) then the outer procedure T could be removed, since it always applies. (The end result of such changes would then be almost identical to the version I'd given in my Answer, fwiw.) I realize that you, Carl, are fully aware of all this -- I state it for the benefit of the OP, because I'd mentioned that some efficiencies depend on aspects such as whether the input list is known to have all entries be indexed references to the same basename, etc. I don't know how much overhead he'll need to pare off.
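To make that concrete for the OP (a small made-up list, with an arbitrary replacement rule):

```maple
L := [A[1], A[2], A[3]]:
# the typecheck specindex(A) only ever matches indexed A, so the
# replacement can use the literal A rather than op(0, a)
subsindets(L, specindex(A), a -> A[op(a) + 10]);
#   -> [A[11], A[12], A[13]]
```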

@emendes I see. I had not been thinking of "thousands" as a very large amount. And there is the cost of building T1 and T2 programmatically for every case, the provisions of which are not yet shown here.

Who knows, your choices might include a middle ground of more easily understood code that scales modestly well.

You might find it worthwhile to benchmark and compare your choices (although ideally the cost of construction of T1,T2 should be included in that, eventually).

It is even possible that -- especially given foreknowledge that the structure of the lists is of a special kind, i.e. all entries are known names indexed by integers -- a simpler approach performs relatively well.

Fyi (though comparison using a single small example is fraught with difficulties, hence the repeats), some simpler approaches may perform comparatively well. For example, the two versions I've shown above seem to compare well against the first two in Carl's Answer. But note that they are all pretty fast, and all exclude the construction code for the lookup tables. It's getting tricky, since the cost of determining whether a table lookup should be done is competing with the cost of the simplistic approach of merely doing all entries as table lookups. It's possible that there are other subtle tweaks, e.g. something faster than using `assigned`, etc.

@Kitonum This does not solve the problem in Maple 2017. It does take a considerably longer time to compute, though.


@tomleslie Sorry, but I'm afraid that you are mistaken. The nicer edge in your plot is due to the adaptmesh functionality new in Maple 2020 (in which your worksheet was saved).

The OP's Maple 2017 would not get that same nice edge (tight to height zero at the midpoint) using your grid=[100,100] suggestion. So it does not solve the problem.

Even with grid=[200,200] the surface edge is right up along the domain boundary but is still quite far from height zero at the midpoint. And it takes considerably longer to compute. Increasing the grid option alone is not a practical solution to this problem.

As a minor point (because the scope of the assuming was discussed), one has a choice of several one-liners, including:

(combine@simplify@[solve])(1/2*m*v^2=1/2*F*x,v) assuming positive;
combine(simplify([solve(1/2*m*v^2=1/2*F*x,v)])) assuming positive;

@emendes I was already adding it, immediately after submission. (Sorry, my internet connection is very flaky these days.)

I see that Carl has given the same one-liner (the minor difference being that I also extracted the two list results so as to assign them to separate names, while Carl left it as a list of the two results).

Are you willing to accept as "solutions" some constraining relationships between the k__i, or are you only interested in solutions in which the k__i are all free?

For example, what if there were additional constraints,
   {k__2 = k__3 + k__6 - k__7,
    k__4 = k__1 - k__3 + k__7,
    k__5 = - k__1 + 2*k__3 + k__6 - k__7}
Would that be acceptable, or does your problem demand that the k__i may be arbitrary?

@boe In the original version the mapped operator (procedure) has SF declared as a local. That is done automatically by Maple 2019, because it detects code that assigns to that name.

The intention of the source code is to assign into the global SF at that last stage. However, if the mapped procedure declares SF as a local name then those assignments won't go into the global SF, and the code won't work as intended.

There are several ways to force the assignments done within the mapped procedure to be made using the global SF name rather than any local SF name. Here below are the ones suggested so far.

In the original source the mapped procedure is an arrow operator, used anonymously. I've named it P below, to make it clearer that what matters is how the various versions of procedure P happen to access the name SF.

restart; # This first version is like the original.
SF := table([]):  # empty table
P := proc(x) SF[x] := foo; end proc:  # direct assignment makes SF implicitly local
Warning, `SF` is implicitly declared local to procedure `P`
map(P, [g]):
eval(SF);  # still empty, oops

restart; # This is like Preben's.
SF := table([]):  # empty table
P := proc(x) global SF; SF[x] := foo; end proc:
map(P, [g]):
eval(SF);  # hoorah
                        TABLE([g = foo])

restart; # This is like Carl's first.
SF := table([]):  # empty table
P := proc(x) :-SF[x] := foo; end proc:
map(P, [g]):
eval(SF);  # hoorah
                        TABLE([g = foo])

restart; # This is like mine.
SF := table([]):  # empty table
P := proc(x) assign('SF'[x], foo); end proc:
map(P, [g]):
eval(SF);  # hoorah
                        TABLE([g = foo])

I don't really think that mine is best, but it's also not outright wrong. The fragility depends on what goes on in the session before the read of the source. Usually I'd read the source once and store it to a .mla archive, rather than read it each session. That's my preference, so I don't have to worry about it; the session is clean prior to the read. In that scenario my answer is just fine. If you want that final mapped assignment to be more robust then Carl's second suggestion ':-SF' is good, as is the combination of mine and Preben's. But since there are other things in the source that are also fragile (with respect to prior work in the session), I think it's relatively moot. Just make the read be first in the session that uses the package, or store it to an archive.

@weidade37211 Yes, the essential idea is to have a rational numeric exponent rather than a floating-point exponent.

It might be a hard call, to discern whether the Statistics routine ought to emit that float exponent in the first place. After all, it is doing what it was told, when the distribution is requested with an explicit float as the second parameter. As I showed in my answer, one alternative is to pass an exact rational as that parameter of the distribution and still do (successful) purely numeric integration.

As for the numeric integration routines: they are able to recognize and handle the singular end-point in the case of the exact rational exponent. But for the floating-point exponent it does not try a symbolic analysis, which could blow up in general because rational conversion could produce a huge numerator and denominator. Then, when it detects the singularity, it doesn't handle it.

See the attachment, and raise infolevel for even more details. Note: in the case of Digits<=15 it computes in evalhf mode using the _d01ajc NAG integrator, which has its own purely numeric mechanisms to handle the discontinuity. Your original used Digits:=20. The attachment shows both.
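A related workaround (a sketch, using an illustrative integrand rather than the OP's actual distribution) is to convert any float exponents to exact rationals before the numeric integration:

```maple
ee := x^(-0.25)*exp(-x):                         # float exponent at a singular end-point
ee2 := evalindets(ee, float, convert, rational); # becomes x^(-1/4)*exp(-x)
evalf(Int(ee2, x = 0 .. 1));                     # singularity now handled symbolically
```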


You are welcome.

I just added yet another way to animate it in 3D, in my previous comment.

@Anthrazit Yes, I see now that it happens even without restart. I usually start my worksheets with an explicit restart, so I hadn't noticed.

I do not like this behavior and I think that it is wrong, since it is "on" by default but there is no easy way to prevent it.

The opposite behavior ("off" by default) would be easy to alter by simply adding restart at the top of the worksheet.

If this behavior is documented on the Help page for Startup Code then I've missed it. Does anyone else see it?
