## 569 Reputation

12 years, 178 days

## @Thomas Richard just a comment that sel...

just a comment that selecting (highlighting) everything is often (for me) not a usable option. If one has multiple plots (in particular density plots) with many data points, selecting even a single plot can take minutes or even completely stall Maple.

## @acer thanks!Well.. the more accurate th...

@acer

thanks!

Well... the more accurate the better, but maybe results to a few (say 5?) decimal places could be reasonable - I would have to double-check what the effect on the final answer would be.

As I mentioned, part of the "extra" slowdown is that fsolve is not thread safe, so I have to resort to using Grid:-Map, and hence can't compile the rest of the code... but right now there is probably no way around that.

## @Mac Dude thanks for your post. I've pl...

Thanks for your post. I've played with setting intervals/initial values before, but had not thought of using the "previous" value... that does seem to help, and cuts the time by half or more in small-n examples... but when n gets large (and the solution is close to Pi/2) I get a similar timing to the version I posted...

Here is the code that uses the last result as a guess for the next one:

get_theta_n_array3 := proc(max_n::integer, omega_a::float, v::float, x_l::float, C_a::float, Z_0::float)
local theta_n_array, i:

theta_n_array := Array(1..max_n, datatype=float[8]):

# first root: bracket the search in -Pi/2..Pi/2
theta_n_array[1] := fsolve(subs(n=1, tan(theta_n) - C_a*Z_0*(-v^2*(Pi*n-theta_n)^2/x_l^2+omega_a^2)*x_l/(v*(Pi*n-theta_n)) = 0), theta_n = -Pi/2..Pi/2, real):

# later roots: use the previous root as the initial guess for fsolve
for i from 2 to max_n do
theta_n_array[i] := fsolve(tan(theta_n) - C_a*Z_0*(-v^2*(Pi*i-theta_n)^2/x_l^2+omega_a^2)*x_l/(v*(Pi*i-theta_n)) = 0, theta_n = theta_n_array[i-1], real):
end do:

if ArrayNumElems(theta_n_array) <> max_n then
printf("Bad Array Dimensions! Got too many or not enough solutions.");
theta_n_array := "CHECK: get_theta_n_array()": # dirty hack that will ring an alarm bell if the array is not the right size
end if;

theta_n_array;
end proc;

I wonder if anyone else has any ideas.

thanks again.

## heap size...

The first thing I do every year when I install Maple is manually change the Java heap size in the Maple startup file. For me that's:

/usr/local/maple18/bin/maple

I change this:

JAVAHEAP=512

to say this:

JAVAHEAP=3072

Note that this file is called from xmaple, so it's the only thing that needs changing.
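The edit itself can be scripted; here is a minimal sketch, performed on a stand-in file rather than the real startup script (the path and file name below are made up for illustration - substitute your actual installation path):

```shell
# Hypothetical demo: bump JAVAHEAP in a copy of the startup script.
# "maple_startup_demo" stands in for e.g. /usr/local/maple18/bin/maple.
printf 'JAVAHEAP=512\n' > maple_startup_demo

# Edit in place, keeping a .bak backup of the original.
sed -i.bak 's/^JAVAHEAP=512$/JAVAHEAP=3072/' maple_startup_demo

cat maple_startup_demo
```

Keeping the `.bak` copy makes it easy to revert if a larger heap causes problems on your machine.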

Also, I'm sure there are other, cleaner ways to do this... for example, this info can be passed to the script as an argument. Furthermore, I don't know whether doing it directly through the Maple interface has the same effect, nor whether your problem is even related to heap size... One would have to do more testing, and right now I have little time.

I basically have a few worksheets with multiple density plots that are directly affected by this setting - they are only usable with this change.

## @Carl Love that's very useful to know!...

that's very useful to know!

## @casperyc just to be clear - if you run...

Just to be clear - if you run these commands from within a GUI worksheet, you will be able to see all the standard 2D math (as you would if you were executing the commands directly without the "read" command).

Far from optimal, I'm sure, but you can probably get what you need.

## @zippo do you have a simple call to a p...

Do you have a simple call to a plot function (as in plot(...)) that shows this problem when exported?

I looked at my old code in more detail - I think I was first exporting to PostScript (so EPS files) directly from Maple, and only then could one change the properties (all the info outside of the "bounding box" is lost when one exports directly to a non-vector format such as JPEG or BMP). I can find the "utility" functions that I wrote, but can't find any code that actually called them... and a few simple calls to plot() or, say, densityplot() seem to export fine now (I'm using 18.01).

If you can provide a simple way to reproduce this, maybe I can see whether the commands I showed above help with the problem.

## same problem...

Note this was a while ago, but I'm pretty sure I had the same problem...

Through some trial and error, here is a call I would make after exporting the plot (through a function similar to what I posted):

system(sprintf("sed 's/^%%%%BoundingBox.*/%%%%BoundingBox:  -500 -500 2000 2000/g' /tmp/%s.eps > /tmp/%s_box.eps  ; ps2pdf14 -dFIXEDMEDIA -dDEVICEWIDTHPOINTS=2400 -dDEVICEHEIGHTPOINTS=2400 -dOptimize=true  -dEPSCrop -sOutputFile=/tmp/%s_box.pdf  /tmp/%s_box.eps; pdfcrop  --margins '5 20 5 20' -clip /tmp/%s_box.pdf %s/%s.pdf; sleep 2; convert %s/%s.pdf %s/%s.png" , v_fileName, v_fileName, v_fileName, v_fileName, v_fileName, v_fileDir, v_fileName, v_fileDir, v_fileName, v_fileDir, v_fileName));

This fixed the issue for me. It might (or might not!) be helpful to you... it should be easy to divide up the commands to see all the things I'm calling in a row. It's *really painful*, as you see, but it shouldn't take long to adapt this to your code... of course, the last call to convert could be to JPEG or BMP instead of PNG if that's what you need.
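To make the one-liner easier to follow, here is a sketch of the same pipeline broken into steps. The EPS content and file names below are made up for illustration; only the sed step actually runs here, since the later tools (ps2pdf14, pdfcrop, convert) require Ghostscript, a TeX distribution, and ImageMagick to be installed:

```shell
# Made-up minimal EPS standing in for a Maple export.
name=demo
printf '%%!PS-Adobe-3.0 EPSF-3.0\n%%%%BoundingBox: 0 0 400 300\n' > /tmp/$name.eps

# Step 1: widen the BoundingBox so nothing outside the original box is clipped.
sed 's/^%%BoundingBox.*/%%BoundingBox: -500 -500 2000 2000/' /tmp/$name.eps > /tmp/${name}_box.eps
grep BoundingBox /tmp/${name}_box.eps

# Step 2 (requires Ghostscript): convert to PDF on oversized media, then crop
# the result back down with small margins.
# ps2pdf14 -dFIXEDMEDIA -dDEVICEWIDTHPOINTS=2400 -dDEVICEHEIGHTPOINTS=2400 \
#          -dEPSCrop -sOutputFile=/tmp/${name}_box.pdf /tmp/${name}_box.eps
# pdfcrop --margins '5 20 5 20' --clip /tmp/${name}_box.pdf /tmp/$name.pdf

# Step 3 (requires ImageMagick): rasterize to whatever format you need.
# convert /tmp/$name.pdf /tmp/$name.png
```

Running the steps one at a time like this makes it much easier to see where a given plot goes wrong.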

## can edit by hand...

@zippo

If you're exporting to PostScript (which you seem not to be, but possibly you could do that), then you can edit those files by hand (they are usually plain text). There are fields in there (look for something like BoundingBox) which control the sizes of all elements, in particular the whitespace around the actual plot - just change those to something that works.
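For reference, the header of a typical EPS file looks something like this (the specific numbers and the Creator line are just an illustration, not what Maple necessarily emits); the %%BoundingBox line, giving lower-left and upper-right corners in points, is the one to edit:

```
%!PS-Adobe-3.0 EPSF-3.0
%%Creator: some plotting program
%%BoundingBox: 0 0 612 792
%%EndComments
```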

If you're on Linux, there is often an easy way to fix some of the plots in a more "automagical" way. You can export an EPS file from Maple and then run it through a program called ps2pdf14, like this:

ps2pdf14 -dEPSCrop -dPDFSETTINGS=/prepress  myfile.eps

That will often fix the issues (although you'll end up with a PDF file, which may or may not be good). Note, I haven't done this in a while, so your mileage may vary.

As I said elsewhere in this thread, another external tool might be better suited for this.

## @Alejandro Jakubi  yes... academic ...

yes... academic contacts must be a large part of this... it would be a fascinating statistic to see the usage/sales numbers for the big three Ms and see how they've changed over the years... but one can only speculate.

The nice part of all of this is that the open-source alternatives have (especially recently) been gaining ground and maturing. In some areas (arguably), such as linear algebra or plotting, they are already on par with - or more feature-complete than - the commercial offerings.

## try different ones...

The answer may be highly personal and will surely depend on what kind of work you're interested in doing. The best way to make this decision might be to try them for yourself!... Pick a project that corresponds to something you "usually" work on, and redo it in the various software packages you might be interested in (don't forget open-source collaborations like SAGE - which has come a long way in the last few years and, as far as I'm concerned at least, is hopefully the future of mathematical computing). This way you can see for yourself which aspects of a given language/environment you like and which ones you don't.

As a side note, the marketing machine of Wolfram (Mathematica's creator) is really unstoppable. I am not quite sure how they do it, but in my community, every time mathematical software comes up, all I hear is how Mathematica is unquestionably "the best" at almost everything it does... this sometimes even comes from people who have never used any other software, and whose entire Mathematica experience consists of, say, one or two simple projects they did as undergrads. So at least as far as marketing goes, Mathematica probably is "the best" ;)

## debugger...

You're describing a debugger. Look into that in Maple's help.

## thanks...

>>Generically, the best approach is delaying the introduction of floats to the end.

... yes, I tried variations of that approach initially, with no success.

I did not know about the "Veil" mechanism - it seems very useful. Unfortunately, here things don't work and seem to confuse Maple. I think the _U1s are leftovers of the integration that are not properly handled by the veiling. Setting these to, say, zero (which is clearly wrong if they really are dummy integration variables!) gives a nonsense answer.

I've tried variations of this (for example, not explicitly introducing the drive until after doing the inverse), but I also get incorrect results.

I also tried reformulating the problem as simply a convolution integral and "manually" integrating with the 'int' command, but that does not work either.