acer

32358 Reputation

29 Badges

19 years, 331 days
Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are replies submitted by acer

Fantastic, thanks.

acer

@thwle Robert's code uses a syntax designed for the command plots:-arrow

There is another command, plottools:-arrow, and when you load `plottools` after loading `plots` (using `with`) you clobber the previous binding of `arrow`.

You can force it to use the intended `arrow` by explicitly using its so-called long-form:

with(plots): with(plottools):

animate(plots:-arrow,[[0,0],[0,cos(t)]],t=0..4*Pi,view=[-1..1,-1..1]);

@Robert Israel I noticed that a single plot, as the kludge insertion, doesn't include the text portion which shows the current value of the animation parameter. I.e., a textplot with "z=...." in the plot region.

If you can discern that absence, for the kludged insertion, then you might be able to instead make the kludge a one-frame (or unchanging two-frame) animation that also computes and inserts quickly.

Could you post the complete code?

acer

@Petra Heijnen The startup code region was only introduced in Maple 12 (2008).

1) My advice in Maple on this would be to keep the floats and the exact symbolics separate. Use exact symbolics alone for whatever part of the task demands it. And if at any point you switch over to float-numerics, then try to switch over entirely for that subtask. Avoid mixing floats and exact symbolics where possible, and if it seems that you have no choice but to mix them then... re-think the code.

2) Jacques' comment isn't quite fair. It's not the same to say, "if you're doing numerics, then try to compile" and "you'd better compile". And Maple is behind Mathematica in areas like auto-compiling for plotting and numeric solving. Mathematica has been doing that to various degrees, invisibly and behind the scenes, since v.2.2 I think. Maple only does it broadly with the much slower evalhf interpreter, and only auto-compiles specifically inside dsolve/numeric. Even when the Maple user compiles the function with Compiler:-Compile, there is still a significant portion of avoidable overhead when using that to plot, fsolve, Minimize, etc. (This deserves a blog post, for plotting. And another, for external-wrapper topics.)

3) As Jacques says, documentation is key here. But not just nitty-gritty details in existing help-pages. There's a big need for more take-a-step-back-what's-your-goal-big-picture documentation.

4) The revised Maple 15 Programming Manual is a bit more coordinated than its predecessors, with respect to start-to-finish programming. I mean, the chapter on debugging and profiling and (new, heavens!) testing code fits together more. It still needs to be at least four times as long, but that's natural. It says a lot, I think, that the long debugger section of that chapter is (without mentioning it much) based on the commandline interface. Which is good, because the popup graphical debugger that launches from the Standard GUI is almost devoid of virtue. The primary reason that profiling and debugging are not commonplace is that programming itself has become a bit of a dirty word in the Maplesoft corporate mindset. This (IMHO) is a rather North American manner of marketing: to show that new item A is good, it is necessary to behave as if any alternative B is bad. So "clickable math" gets marketed by suppressing programming, even though programming is Maple's greatest strength.

5) I agree that remembering values requires understanding of the mechanisms, but disagree that it always requires expert knowledge. As member PatrickT suggested, if you remember to remember then don't forget to forget. If it were so important to not remember too much, then why (after decades) are there so few tools for finding and clearing such internal tables? Also, how much of Jacques' comment about greater slowdown with greater memory allocation is relevant more to Maple's stop-dead-to-mark-and-sweep style of memory manager?
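As a minimal sketch of that remember-then-forget pattern (the procedure name here is just for illustration):

```maple
# option remember caches each computed result in the procedure's remember table
f := proc(n::nonnegint) option remember;
    if n < 2 then n else f(n-1) + f(n-2) end if;
end proc:

f(100):      # fast, since intermediate results are cached
forget(f);   # clear f's remember table once the cached values are no longer needed
```

The `forget` command can also clear individual remembered entries, e.g. `forget(f, 50)`, rather than the whole table.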

6) Maple's own Library is mostly not thread-safe. So this restricts one to entirely user-defined + kernel builtins. The Task model is more convenient. But memory management needs work, for this to be in Maple's top-20.

7) I've always been fascinated that Mma makes so much of Reap and Sow, while in Maple it seems that you'd pretty much have to cobble together your own, and they might not be that useful to you. Jacques' take on this seems to relate to computational complexity of data-building, but Maple's top-10 should include this more generally. O(n^2) vs O(n) for list/set building, sure. But also for computation. This might be the #1 item.

Jacques writes, "..why do the designers make it so darned easy to write bad code?" One partial answer is that it is misplaced subservience to users' "wishes for convenience". People sometimes want something easier than `seq`, and are given `$`, whose differences they often don't understand, and so they get into difficulty. The same goes for `||` vs `cat`. Even `unapply` is merely a powerful convenience, which gets abused when it is treated like candy. That's why code like this gets written, and used in some deeply nested loop,

    proc(x,y) unapply(F(x,y),[a,b]); end proc

It's why Components have associated code sections without a well-considered evaluation model. It's why the triple-exclamation-mark button exists, so that people don't so easily get the performance benefits of procedures' evaluation model. It's why RunWorksheet and Retrieve exist.

Another partial answer is the misplaced notion that users cannot handle the truth, which is deemed too complicated or scary. This is why only `sum` (and not `Sum` or `add`) appears on the Expression palette. Why there is no good, honest, do-this-not-that programming practices guide. It's why help-pages don't use uneval quotes around unprotected names as optional parameter keywords. It's why help-pages use the form package[routine] instead of package:-routine even though ?colondash makes package[':-routine'] tragicomic.

In a Maple with only `sum` on the Expression palette people are far more likely to try things like,

CodeTools:-Usage( plot(sum( a^(3^i)-a^(2^i), i=1..infinity), a=0..0.1) );

instead of,

CodeTools:-Usage( plot(Sum( a^(3^i)-a^(2^i), i=1..infinity), a=0..0.1) );

8) See 7).

9) Jacques is being a little unfair to Mathematica here, I think. Mma's pattern matching is a strength: powerful and useful. It doesn't disparage that functionality to say: don't eat it like candy. The same is true of bits of Maple, e.g. `unapply`. If the rule were more like, "don't do steps that you can't justify or don't understand at all" then people might do this less often,

    ... deep in some loop ...
    f := unapply(expr,x);
    G( f(x),... );  # where `f` is used nowhere else.

10) If this were Maple's top-10 list then it'd be easier to find at least ten good suggestions. For pure numerics, I'd also suggest trying to act in-place on Matrices/Vectors/Arrays. Which brings me to another: save on memory-management time by producing less transient, collectible garbage, which is applicable to both numerics and symbolics. Time and time again those principles have brought me great savings when trying to tune code.
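A small sketch of the in-place idea, using hardware float[8] storage with LinearAlgebra's inplace option and map[inplace], so that no new Matrix is allocated for the results (the particular Matrices here are just toy examples):

```maple
A := Matrix(3, 3, (i, j) -> evalf(i + j), datatype = float[8]):
B := Matrix(3, 3, fill = 1.0, datatype = float[8]):

# overwrite A with A + B, instead of allocating a fresh Matrix for the sum
LinearAlgebra:-MatrixAdd(A, B, inplace = true):

# elementwise update, again writing the results back into A itself
map[inplace](x -> x^2, A):
```

For large float[8] Matrices in a tight loop, that avoidance of transient copies is exactly the garbage-reduction being advocated above.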

acer

@Markiyan Hirnyk There is potentially a big difference for performance between an algorithm and its implementation. Having a good (or even optimal) algorithm isn't enough to get great performance -- an efficient implementation is also needed.

As far as I know, the DirectSearch package v.1 or v.2 does not use external-calling or the Compiler in order to do fast evaluations of objective, constraints, or derivatives of same. I'm not even sure whether it uses evalhf. It would be good for all, though, if I were totally wrong about this.

Perhaps a timing comparison is in order here. The Optimization package does these QP problems with hundreds of variables in a few seconds on a fast machine. Marcus gave one with, and one without, general constraints. I wonder how DirectSearch performs on them when passed its method=quadratic option.

Ideally, performance would be measured, say, both with and without the time and memory resources needed to set up the equations, as those are real and important costs (some of which can be avoided when using Optimization, as demonstrated).

Note that, as yet, Marcus has not answered posed questions about what size problems he is aiming for.

Apart from performance, Marcus D. also had another variation, with additional constraints on the variables. Perhaps DirectSearch provides an easier way to handle that variation? Mixed-integer QP makes it harder.

Technically speaking, what Thomas has shown is an rtable initializer, rather than an indexing function.

The initializer is used to populate the Matrix/Array/Vector at creation time, and is not saved or referenced by the object after that. An indexing function is a procedure that gets used, in an ongoing way, whenever entries are accessed (read and/or write).
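The distinction can be seen directly, using the built-in symmetric indexing function as the example:

```maple
# initializer: this procedure runs once, at creation time, to fill the entries,
# and the Matrix keeps no reference to it afterwards
M := Matrix(3, 3, (i, j) -> i + j):

# indexing function: consulted on every read and write of an entry
N := Matrix(3, 3, shape = symmetric):
N[1, 2] := 5:   # the symmetric indexing function also makes N[2,1] read as 5
N[2, 1];        # returns 5
```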

ps. votes:=votes+1

acer

@Idealistic As mentioned, you can view the source of the routine `fsolve/sysnewton` using `showstat`. (Or you can use `print`.)

It tries at most 20 distinct initial points in Maple 13, and the number of iterations it allows itself (from each starting point) depends on Digits by some complicated formula. You can't really control the number of iterations it will attempt for a given starting point. Any effect you can have on the max iteration limit (by adjusting Digits) will also have other effects, like on the tolerances, which you may not want.

You can see the lines for both those aspects. In Maple 13.02, you can issue these two calls to show the relevant source lines,

showstat(`fsolve/sysnewton`,20..21); # number of different starting points

showstat(`fsolve/sysnewton`,41); # max number of iterations, formula depends on Digits

As mentioned, this implementation is quite complicated. Mostly that's because it strives not to get caught up too much on problematic examples. You could write your own implementation in a handful of lines of Maple code. The basic structure might look very much like the univariate implementation, especially if you use a shorthand notation like J^(-1) for the inverse of the Jacobian (instead of calling LinearSolve, say).
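For illustration only, such a bare-bones multivariate Newton iteration might look like the following sketch. This is not the fsolve/sysnewton code; the `Newton` name, the fixed tolerance, and the iteration count are all just choices for this example, and there is no step control or singular-Jacobian handling:

```maple
Newton := proc(F::list, vars::list, x0::Vector, maxiter::posint := 25)
    local J, x, k, Fx, Jx;
    J := VectorCalculus:-Jacobian(F, vars);
    x := evalf(x0);
    for k to maxiter do
        Fx := evalf(eval(Vector(F), vars =~ convert(x, list)));
        if LinearAlgebra:-Norm(Fx) < 1e-9 then return x end if;
        Jx := evalf(eval(J, vars =~ convert(x, list)));
        x := x - Jx^(-1) . Fx;  # the J^(-1) shorthand; LinearSolve(Jx,Fx) is cheaper
    end do;
    x;
end proc:

Newton([x^2 + y^2 - 4, x - y], [x, y], <1.0, 1.0>);
```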

You might also look for simpler implementations on the Numerical Analysis pages of the Application Center.

As for the `infolevel` setting, well, that controls which `userinfo` messages get displayed when the source code is run. There are some general guidelines on the userinfo help-page, although the various descriptions of the purposes of levels 1-6 are often not strictly adhered to.

acer
