acer

33188 Reputation

29 Badges

20 years, 204 days
Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are replies submitted by acer

I have branched this off as a separate Post. It was originally an Answer to a Question whose author had also asked a duplicate. The source is a modification of code from this earlier post. Another related Question did not yet have a satisfying answer (in the efficiency sense, which the submitter mentioned).

Patrick T. has started us off. I think that computing directly into a float[8] Array, and then writing that out directly using ImageTools, should be a way to get sharply detailed results quickly. The crucial parts might be Compiled, or possibly Threaded with evalhf, and act in-place on a given Array. But until someone codes that up, here is a modification of Patrick's tweak.

This runs in Maple 15. It also ran in Maple 12, but without the Threads:-Sleep call, whose only purpose is to incur a delay while the plot file driver writes out the first, large image file. In Maple 12, one would have to run the subsequent commands manually, after waiting a little. On a fast i7 in Windows 7, 10 seconds seems long enough for the 4096x4096 jpeg.

Bifurcation := proc(initialpoint,xexpr,ra,rb,acc)
  local p1,hr,A,L1,i,j,phi:
  global r,L2:
  hr := unapply(xexpr,x);
  A := Vector(600):
  L1 := Vector(acc*500):
  for j from 1 to acc+1 do
    r := (ra + (j-1)*(rb-ra)/acc):
    A[1] := hr(initialpoint):
    for i from 2 to 500 do
      A[i] := evalf(hr(A[i-1])):
    end do:
    for i from 1 to 400 do
      L1[i+400*(j-1)] := [r,A[i+100]]:
    end do:
  end do:
  L2 := {seq(L1[i], i = 1..acc*400)}:
  p1 := plots:-pointplot(L2, 'symbol' = solidcircle, 'symbolsize' = 2, 'color' = blue):
  unassign('r'):
  return(p1):
end proc:

P1 := Bifurcation(1/2,r*x*(1-x),2.5,4,250):

P:=plots:-display(P1, 'axes' = box, 'labels' = [r, x] ):

# A very large image is needed, to get symbolsize=2 to be seen.
plotsetup(jpeg,plotoptions="height=4096,width=4096",
          plotoutput=cat(kernelopts(homedir),"/bifu4096.jpg")):

plots:-display(P);

Threads:-Sleep(10): # or be patient, and wait

image:=ImageTools:-Read(cat(kernelopts(homedir),"/bifu4096.jpg"),
                        format=JPEG):

ImageTools:-Write(cat(kernelopts(homedir),"/bifu640.jpg"),
                  ImageTools:-Scale(image,1/6.4),format=JPEG):

I'm not altogether content with the above code. I didn't change the methodology. (It builds L2 as a set of points for `pointplot` instead of a float[8] Array for `plot` with style=point. It doesn't use evalhf or any other acceleration. I'm a little picky about globals, in the sense that I prefer code without them when possible.) It's a good start to build from, and the topic is interesting.
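As a rough, untested sketch of what I mean by the float[8] Array approach (the procedure and argument names here are just illustrative, and the Compiler/Threads details would need working out):

LogisticFill := proc(A::Array, ra::float, rb::float, m::posint, n::posint)
  local i, j, r, x;
  for j from 1 to m do
    r := ra + (j-1)*(rb-ra)/(m-1);  # j-th sampled parameter value
    x := 0.5;                       # initial point
    for i from 1 to n do
      x := r*x*(1.0-x);             # logistic-map iterate
      A[j,i] := x;
    end do;
  end do;
  NULL;
end proc:

A := Array(1..500, 1..500, 'datatype'=float[8]):
evalhf(LogisticFill(var(A), 2.5, 4.0, 500, 500)):  # var() so A is updated in-place

Each row of A would then hold the iterates for one value of r, ready to be thresholded or binned into an image Array for ImageTools:-Write, with no plot driver involved at all.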

A few other comments are in order here. I've noticed that very large exported image files are needed for small symbolsize, in order to export something which is non-empty. In fact, the same thing happens in the Standard GUI itself -- if one makes the symbolsize too small, then the plot is shown as non-empty only if the plot window has been (manually) resized as quite large. Is this really useful, helpful, and best?

acer

ps. For a day or so, my Firefox has been showing me the MaplePrimes post editor only in raw HTML form. So I don't have the nice editor, and cannot upload files. (Am I the only one with this issue?) If someone who doesn't have this issue would care to produce and upload the final 640x640 image using the above code, that would be nice, thanks.

You don't seem to have shown us exactly how you do want it formatted.

Also, you seem to want x/3 to come out like x/3. or x/.3e1 while at the same time not wanting z/10 to come out as z/.1e2, which I find hard to fathom. Would you accept .33*x and .1*z, or do you want more precision, and if so how much? Or, if you want x/3 and z/10 to be treated differently, then how so? How many nonzero trailing decimals can there be in z/K converted to KFLOATINV*z before you'd prefer it as z/KFLOAT?

acer

@herclau 

What Robert has done is construct a transformation procedure, using plottools:-transform, and then apply it to a plot structure. It is procedure application, not multiplication.

Done in steps, it might look like this,

f := x^2-1:
g := -x-1:

origplot:=plot(f-g, x=-1.5 .. 1.5,
               filled=true, color=COLOUR(RGB,.8,.8,.9)):
origplot;
H := unapply([x,y+g],x,y);

transformer := plottools:-transform(H):

transformer(origplot);
plots:-display(
  plot([f,g], x=-1.5 .. 1.5, color=black),
  %
               );

Fantastic, thanks.

acer


@thwle Robert's code uses a syntax designed for the command plots:-arrow.

There is another command, plottools:-arrow, and when you load `plottools` after loading `plots` (using `with`) then you are clobbering the previous binding of `arrow`.

You can force it to use the intended `arrow`, by explicitly using its so-called long-form,

with(plots): with(plottools):

animate(plots:-arrow,[[0,0],[0,cos(t)]],t=0..4*Pi,view=[-1..1,-1..1]);


@Robert Israel I noticed that a single plot, as the kludge insertion, doesn't include the text portion which shows the current value of the animation parameter. I.e., a textplot with "z=...." in the plot region.

If you can discern that absence, for the kludged insertion, then you might be able to instead make the kludge be a 1-frame (or unchanging 2-frame) animation that also computes and inserts quickly.


Could you post the complete code?

acer

@Petra Heijnen The startup code region was only introduced in Maple 12 (2008).


1) My advice in Maple on this would be to keep the floats and the exact symbolics separate. Use exact symbolics alone for whatever part of the task demands it. And if at any point you switch over to float-numerics, then try to switch over entirely for that subtask. Avoid mixing floats and exact symbolics where possible, and if it seems like you really have to mix them then... re-think the code.

2) Jacques' comment isn't quite fair. It's not the same to say, "if you're doing numerics, then try to compile" and "you'd better compile". And Maple is behind Mathematica in areas like auto-compiling for plotting and numeric solving. Mathematica has been doing that to various degrees, invisibly and behind the scenes, since v.2.2 I think. Maple only does it broadly with the much slower evalhf interpreter, and only auto-compiles specifically inside dsolve/numeric. Even when the Maple user does use Compiler:-Compile on the function, there is still a significant portion of avoidable overhead when using that to plot, fsolve, Minimize, etc. (This deserves a blog post, for plotting. And another, for external wrapper topics.)

3) As Jacques says, documentation is key here. But not just nitty-gritty details in existing help-pages. There's a big need for more take-a-step-back-what's-your-goal-big-picture documentation.

4) The revised Maple 15 Programming Manual is a bit more coordinated than its predecessors, with respect to start-to-finish programming. I mean, the chapter on debugging and profiling and (new, heavens!) testing code fits together more. It still needs to be at least four times as long, but that's natural. It says a lot, I think, that the long debugger section of that chapter is (without mentioning it much) based on the commandline interface. Which is good, because the popup graphical debugger that launches from the Standard GUI is almost devoid of virtue. The primary reason that profiling and debugging are not commonplace is that programming itself has become a bit of a dirty word in the Maplesoft corporate mindset. This (IMHO) is a rather North American manner of marketing: to show that new item A is good, it is necessary to behave as if any alternative B is bad. So "clickable math" gets marketed by suppressing programming, even though programming is Maple's greatest strength.

5) I agree that remembering values requires understanding of the mechanisms, but disagree that it always requires expert knowledge. As member PatrickT suggested, if you remember to remember then don't forget to forget. If it were so important to not remember too much, then why (after decades) are there so few tools for finding and clearing such internal tables? Also, how much of Jacques' comment about greater slowdown with greater memory allocation is relevant more to Maple's stop-dead-to-mark-and-sweep style of memory manager?
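To illustrate how few tools there are: `forget` is about the only one, and a procedure's remember table can at least be inspected as its fourth operand. A minimal example,

f := proc(x) option remember; x^2 end proc:
f(3):
op(4, eval(f));  # the remember table, now holding the stored result for 3
forget(f):
op(4, eval(f));  # cleared

But there's no convenient way to find and clear every such table hidden inside the Library.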

6) Maple's own Library is mostly not thread-safe. So this restricts one to entirely user-defined + kernel builtins. The Task model is more convenient. But memory management needs work, for this to be in Maple's top-20.

7) I've always been fascinated that Mma makes so much of Reap and Sow, while in Maple it seems that you'd pretty much have to cobble together your own, and they might not be that useful to you. Jacques' take on this seems to relate to computational complexity of data-building, but Maple's top-10 should include this more generally. O(n^2) vs O(n) for list/set building, sure. But also for computation. This might be the #1 item.
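To illustrate the list-building half of that: appending to a Maple list copies the whole list on every iteration, so the loop below is O(n^2), while `seq` builds the result in one pass,

n := 10^4:
L := []:
for i from 1 to n do
  L := [op(L), i^2];      # copies all of L each time: O(n^2) overall
end do:

L := [seq(i^2, i=1..n)]:  # built once: O(n)

For data that must grow inside a loop, assigning into a table (and converting to a list at the end) is the usual workaround.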

Jacques writes, "..why do the designers make it so darned easy to write bad code?" One partial answer is that it is misplaced subservience to users' "wishes for convenience". People sometimes want something easier than `seq`, and are given `$`, whose differences they often don't understand, and so get into difficulty. The same goes for `||` vs `cat`. Even `unapply` is merely a powerful convenience, which gets abused when it is treated like candy. That's why code like this gets written, and used in some deeply nested loop,

    proc(x,y) unapply(F(x,y),[a,b]); end proc

It's why Components have associated code sections without a well-considered evaluation model. It's why the triple-exclamation-mark button exists, so that people don't so easily get the performance benefits of procedures' evaluation model. It's why RunWorksheet and Retrieve exist.

Another partial answer is the misplaced notion that users cannot handle the truth, which is deemed too complicated or scary. This is why only `sum` (and not `Sum` or `add`) appears on the Expression palette. Why there is no good, honest, do-this-not-that programming practices guide. It's why help-pages don't use uneval quotes around unprotected names as optional parameter keywords. It's why help-pages use the form package[routine] instead of package:-routine even though ?colondash makes package[':-routine'] tragicomic.

In a Maple with only `sum` on the Expression palette people are far more likely to try things like,

CodeTools:-Usage( plot(sum( a^(3^i)-a^(2^i), i=1..infinity), a=0..0.1) );

instead of,

CodeTools:-Usage( plot(Sum( a^(3^i)-a^(2^i), i=1..infinity), a=0..0.1) );

8) See 7).

9) Jacques is being a little unfair to Mathematica here, I think. Mma's pattern matching is a strength: powerful and useful. It doesn't disparage that functionality to say: don't eat it like candy. The same is true of bits of Maple, eg. `unapply`. If the rule were more like, "don't do steps that you can't justify or don't understand at all", then people might do this less often,

    ... deep in some loop ...
    f := unapply(expr,x);
    G( f(x),... );  # where `f` is used nowhere else.
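When the expression itself doesn't change across iterations, the cheap fix is to construct the procedure once, outside the loop, or to skip procedure construction altogether with 2-argument eval (names here are schematic, as above),

    f := unapply(expr, x):     # once, before the loop
    ... deep in some loop ...
    G( f(a), ... );

    # or, with no procedure construction at all,
    G( eval(expr, x=a), ... );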

10) If this were Maple's top-10 list then it'd be easier to find at least ten good suggestions. For pure numerics, I'd also suggest trying to act in-place on Matrices/Vectors/Arrays. Which brings me to: save on memory-management time by producing less transient, collectible garbage, which is applicable to both numerics and symbolics. Time and again those principles have brought me great savings when tuning code.

acer

@Markiyan Hirnyk There is potentially a big difference in performance between an algorithm and its implementation. Having a good (or even optimal) algorithm isn't enough to get great performance -- an efficient implementation is also needed.

As far as I know, the DirectSearch package v.1 or v.2 does not use external-calling or the Compiler in order to do fast evaluations of the objective, the constraints, or their derivatives. I'm not even sure whether it uses evalhf. It would be good for all, though, if I were totally wrong about this.

Perhaps a timing comparison is in order here. The Optimization package does these QP problems with hundreds of variables in a few seconds on a fast machine. Marcus gave one with, and one without, general constraints. I wonder how DirectSearch performs on them when passed its method=quadratic option.

Ideally, performance would be measured, say, both with and without the time and memory resources needed to set up the equations, as those are real and important costs (some of which can be avoided when using Optimization, as demonstrated).

Note that, as yet, Marcus has not answered posed questions about what size problems he is aiming for.

Apart from performance, Marcus D. also had another variation, with additional constraints on the variables. Perhaps DirectSearch provides an easier way to handle that variation? Mixed-integer QP makes it harder.
