Mac Dude

1571 Reputation

17 Badges

13 years, 106 days

MaplePrimes Activity


These are replies submitted by Mac Dude

Hmm, interesting construct. I would have tried "../Code_principal/" first, but don't know whether this actually works, even on Mac OS X.

Mac Dude.
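For illustration, the relative-path read I had in mind would look something like this (the file name is a made-up placeholder):

```
# Hypothetical sketch: reading a file from a sibling directory via a
# relative path. "setup.mpl" is a placeholder file name.
currentdir();                          # check where Maple thinks it is
read "../Code_principal/setup.mpl";    # go up one level, then down
```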

@Carl Love Thanks for checking & testing against restart. I just submitted an SCR.

M.D.

Bad move. Apparently about a third of Mac users stick with 10.6.8. It also seems unwarranted, as presumably the push to newer OS versions comes in part from wanting to use things like OpenCL, which 10.6 does support.

It is unlikely I will get to Maple 18 anytime soon, and unlikely my friends will. My 10.6 machines work well; there is no need to dump them or upgrade what works. And no need to get into the iOS-ification that started with 10.7.

Why??

Mac Dude.

 

@Axel

Hi, I really appreciate your taking the time to look at this in more detail.

I was traveling across the US the last 3 days, & my attempt to give you a little more background failed (hotel-network crap-out). You are certainly right in pointing out the ill-conditioning of the problem. In fact, by now I am quite sure I have to do a better job of nailing down a reasonable interval for the integration. In essence all the parameters except Zo vary, i.e. 4 of them. The numbers are a bit weird, as the beams I am modelling are bunched, looking like steam-rolled cigars: long, flat and very narrow in the vertical (the Sig.. values in the sheet are the sizes & divergences). These plow through each other; the overlap gives a measure of the event rate in a detector. I use meters and radians as units, whereas the dimensions are micrometers and micro-radians; that is where the very small numbers come from, which then, by division, lead to the 10^10-like results. Maybe I have to rethink that (but I got fed up tripping over unit conversions).

Your logarithmic approach may make it possible for me to get a measure of the range I really need to integrate over. I'll need to study your sheet a little more to make sure I fully understand it, but I get the overall idea. In test cases I have had good experience with integration over +/- 0.01 in z (which makes sense given the length scale in z of the problem [a few mm]). In fact, I was able to run 500 cycles of the full problem (much more involved than the test case I posted) in a reasonable 10 min or so. Ultimately I want to let it run for a couple of thousand cycles (maybe 20 min of time of the real physical system I am modelling). Fortunately, with this running in a closed feedback loop, a failure of the integrator becomes obvious fairly quickly (run-away), as I have seen.

I did uncover another Maple "feature": since there are 4 integrations/cycle in the real application (with related and similar, but not identical, functions), I managed to parallelize the integrations using Threads:-Task:-Start. In Maple 15 that actually works (with apparently correct results) & gains maybe a 50% speed-up for two integrals/cycle. In Maple 17 it works as well, but after that the next "restart" hangs the Maple kernel. A sad, reproducible regression.
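For the record, the parallelization pattern I used is roughly the following sketch (the integrand and ranges are stand-ins, not the real application):

```
# Rough sketch of parallelizing two integrals with Threads:-Task:-Start.
# f is a stand-in integrand; the real application uses 4 related functions.
f := z -> exp(-z^2/(2*1e-6)):
oneIntegral := r -> evalf(Int(f, r)):        # task body: one integral
collectBoth := (a, b) -> [a, b]:             # continuation: gather results
Threads:-Task:-Start( collectBoth,
    Task = [oneIntegral, -0.01 .. 0.01],
    Task = [oneIntegral, -0.02 .. 0.02] );
```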

Thanks again,

M.D.

@Axel Vogt Re-casting the integrand in a different form is an idea I like & will explore. The reason I need to run thousands of cycles is that this is part of a simulation of a feedback mechanism. That mechanism maximizes the integrand by varying certain of the other parameters. On top of that, there are actually 3 such relations, each one optimizing a different parameter. The integrals represent the "plant" in the feedback system; in real life these will be colliding particle beams. I did once run it for 2000 periods (corresponding to about 10 min of time for the real process), which ended up taking a whole weekend. Not acceptable if one wants to be able to tune the whole feedback process. This is not college homework.

Anyway, thanks much for looking into this & giving me some ideas to pursue.

Mac Dude

 


Axel, you are absolutely correct. However, the length of the non-zero part along z changes with the given parameters, which in the real application can change a bit. Integration from -1..1 would be safe for all parameters I encounter, but the integral does not always get evaluated correctly over that range (by _d01akc). Integration over -0.01..0.01 seems to always work but may lead to errors for certain parameter combinations.

But what really puzzles me is the increase in running time of _Sinc. Also, it is sort of annoying that _d01amc (which should integrate over the whole axis) fails for this one.
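For completeness, the method selection I am talking about looks like this (the integrand here is a simplified stand-in for my actual one):

```
# Stand-in integrand: a very narrow Gaussian in z, like my real case.
f := z -> exp(-z^2/(2*1e-6)):
# Finite-range NAG method; works on the restricted interval:
evalf( Int(f, -0.01 .. 0.01, method = _d01akc) );
# Infinite-range NAG method; this is the one that fails for me:
evalf( Int(f, -infinity .. infinity, method = _d01amc) );
```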

Thanks,

M.D.

What exactly is your problem here? The code you posted produces a graph with 4 curves. They are disjoint because the data are disjoint. You can play around with logarithmic plotting if you want them closer together.
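As a sketch of what I mean (the curves here are arbitrary examples with very different magnitudes):

```
# Four curves of very different magnitude: a linear-scale plot leaves
# them far apart, while a logarithmic vertical axis pulls them together.
plots:-logplot( [x, 10*x, 100*x, 1000*x], x = 1 .. 10,
                legend = ["c1", "c2", "c3", "c4"] );
```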

Mac Dude

Somehow I fail to see the benefit of such comparisons. Maple and Mma have different strengths and, to a certain extent, a different structure. It is likely that a Mma expert will not know Maple that deeply and will miss features and efficient Maple constructs; and vice versa.

A case in point: I recently converted a simulation code originally written in Mma. Once I was done, the originally fairly nice-looking code looked rather ugly, although it does work. The Maple version appears to be quite a bit slower than the Mma original (by some factor).

Do I conclude from that that Mma is better or faster? Not at all! In the conversion I tried to maintain the structure as much as I could. Had I started from scratch, I most likely would have done it differently & possibly more efficiently in Maple. Also, I am still on the learning curve, so I probably do things sub-optimally as well. Clearly I would not expect Mma people to be equally proficient in Maple.

Having said this, I do feel Maplesoft could be more responsive to users and improve their QA. Maple has numerous silly but annoying bugs, and I do not see these addressed in a timely way; some should never have been shipped in the first place.

Mac Dude

I would love to have a split screen feature in the Maple GUI. Excel has it, Pages has it, most editors have it. Heck, even Emacs has it.

Mac Dude

acer, Carl,

The file is just an ASCII file with space-delimited columns of floating-point numbers, plus one column which has letters (which are identifiers). I am primarily interested in the numbers, to do some arithmetic on & plot in various combinations.

acer's suggestion works. What makes me slightly unhappy is that reading in the file as "anything" takes several times longer than reading it in as float or as string (maybe 4 secs vs. about 1 sec). Ideally I could read in as string and then do Vector(V,datatype=float); but that does not work with V being strings.

So I have a solution, but it is not the most efficient. Since the ImportMatrix command is part of a customized read command, I now use a try...catch block to read in those data files as floats where only numbers are present, and only use the more general "anything" where needed because non-numerical columns are present. This sort of works. But somehow it would be much nicer to be able to read in as string (fast) and then convert.
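The try...catch arrangement I describe looks roughly like this (the procedure name and delimiter are placeholders for what my customized read command actually uses):

```
# Try the fast float[8] read first; fall back to the slower but more
# general "anything" read when non-numeric columns make it fail.
readData := proc(fname::string)
  local M;
  try
    M := ImportMatrix(fname, source = delimited, delimiter = " ",
                      datatype = float[8]);
  catch:
    M := ImportMatrix(fname, source = delimited, delimiter = " ",
                      datatype = anything);
  end try;
  return M;
end proc:
```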

The operation I referred to that fails was to read in as "anything" and then just use the generated variables in formulae. Although that behaviour seems inconsistent: this morning Maple merrily plots columns that are "anything" which it refused to plot yesterday (and all my sheets start with the obligatory restart; & I typically eval the whole sheet). To be clear, the columns with non-numeric data are not further used at present; I just do not want to have to convert each data file to get rid of them.

Thanks for your help,

M.D.
