acer

32385 Reputation

29 Badges

19 years, 339 days
Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are replies submitted by acer

@Alejandro Jakubi Thanks, but ScientificConstants has several times that much information on isotopes. For example,

restart:
with(ScientificConstants):
select(t->evalb(op(0,t)=H),convert([GetIsotopes()],`global`));
map(GetElement,%);

I have previously found several such partial sets of data at NIST and related sites. But what is needed, I suspect, is a much more full collection which is also named (so that it can be cited and referenced for comparison at a later date).

For example, there is some mention of a 2001 published data set here, with a later update here in 2005. The 2001 data might be accessible only by subscription, and the 2005 update might have its central numbers available in the linked abstract. I'd like to hear an expert's opinion.

 

@Carl Love Hi Carl. Darin might have a good answer for you, but I'll chip in with some anecdotal evidence if that's OK.

I was using the Task model to split (halve) some of my embarrassingly parallelizable numeric escape-time fractal code. At first I imagined that I'd get optimal performance by just using numcpus to determine the best base case. I.e., the code could split if the "current" size were not less than 1/numcpus times the original total size.

But in practice I found that the OS (64bit Windows and 64bit Linux) could ramp up more quickly if I instead used a value higher than numcpus. Both Linux `top` and Windows' Task Manager showed all cores reaching a higher load more quickly if the Maple Task mechanism was instructed to split more times than just the value of numcpus. E.g., on an 8-core Intel i7 or a 4-core i5 I got a measurably better total real time for the entire computation if I made the code split until the size was, say, 1/15th to 1/20th of the original.
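In rough outline, the splitting scheme looked something like this (a minimal sketch, not the actual fractal code; doWork and the sine-sum base case are just stand-ins for the real numeric work):

```maple
# Recursive divide-and-conquer with the Task model. The base-case
# threshold is a tunable fraction of the total size, rather than
# being tied to 1/kernelopts(numcpus).
doWork := proc(lo, hi, threshold)
   if hi - lo <= threshold then
      # base case: do the actual numeric work serially
      add(evalhf(sin(i)), i = lo .. hi);
   else
      # split in half; the Task scheduler runs both halves and
      # combines the two child results with `+`
      Threads:-Task:-Continue(`+`,
         Task = [doWork, lo, floor((lo+hi)/2), threshold],
         Task = [doWork, floor((lo+hi)/2)+1, hi, threshold]);
   end if;
end proc:

N := 10^6:
# A threshold of N/16 splits more finely than N/kernelopts(numcpus)
# would on, say, a 4- or 8-core machine.
result := Threads:-Task:-Start(doWork, 1, N, floor(N/16));
```

The point of the extra splitting is just to give the scheduler more tasks than cores, so that idle cores can pick up work sooner.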

I'd be interested if anyone else had seen behaviour that was similar (or radically different).

In my experience the 2010 release of the CODATA collection of values for the fundamental physical constants was easily found on the web as a single plaintext file.

I once wrote a Maple routine which processed the CODATA 2010 .txt data file and saved the data into Maple using the ScientificConstants package. This was quite a straightforward task, given the single text file with the data.
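The heavy lifting in such a routine is just the text parsing; getting each parsed entry into the package is essentially a one-liner with ScientificConstants:-AddConstant. A simplified sketch (the constant name, symbol, value, uncertainty, and units below are made up for illustration, not actual CODATA entries):

```maple
restart:
with(ScientificConstants):

# Hypothetical entry, as if parsed from one line of the CODATA file.
AddConstant(my_test_constant,
            symbol = K[test],
            value = 1.234567*10^(-5),
            uncertainty = 1.2*10^(-11),
            units = m/s):

# The new constant is now usable like any built-in one.
K := Constant(my_test_constant):
GetValue(K), GetUnit(K);
```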

But finding the latest data for isotopes (or nuclides) in a single collection that is recognized by NIST seems more difficult. Does anyone know the location of such a data set, as plaintext or XML?

acer

@Mac Dude It seems like a bug in ScientificConstants, where it doesn't properly accommodate the new system.

This next looks ok,

restart:

with(ScientificConstants):
em:=Constant(electron_mass):

GetValue(em),GetUnit(em);

                            -31                   
              9.109381882 10   , Units:-Unit('kg')

But now, with a system with energy and action to be simplified in terms of MeV and MeV*s respectively,

restart:

Units:-AddSystem('Accelerator',Units:-GetSystem('SI'),MeV,MeV*s); 
Units:-UseSystem('Accelerator');

with(ScientificConstants):
em:=Constant(electron_mass):

GetValue(em),GetUnit(em);

                            -18                   
              5.685626500 10   , Units:-Unit('kg')

Answers about how to do this best (or perhaps just better) may depend on the particular nature of `a1`. Could you provide some details in the form of a fully functioning, explicit example?

acer

@Markiyan Hirnyk I interpreted the question as being about two things: the holes in the plot, and the long computation time.

Maple can get quirky in strange ways when Digits is set less than 5, so having it that low is not a great idea.

The holes in the plot are because the individual quadrature attempts failed (for input pairs in the plane). evalf/Int infers a value for epsilon from Digits when that option is not supplied explicitly. But if Digits is set too low then the inferred looser tolerance may not help, as there might not be enough working precision even to satisfy that looser tolerance. Hence it quite often helps to have a higher working precision (default Digits might do) while forcing a looser tolerance separately.

And evalf/Int might converge to within a looser tolerance more quickly. Hence I suggested leaving Digits at its default value (10) while supplying a looser tolerance (larger epsilon). That appears to help with both the failing values and the speed.

Having Digits as high as 8 (or 10, the default) might fix the holes. But reducing Digits from the default 10 down to 8 doesn't speed things up as much as supplying a looser tolerance does. So typing in Digits:=8 (or the like) seems unnecessary, and might also obscure what matters more. I did not test whether Digits=10 is needed; it might be, but even if not there seems little reason to reduce it from its default, since doing so does not by itself cure the speed issues.

And the same may go for other tweaks such as using the non-iterated quadrature method and reducing the plot points.
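A small, self-contained illustration of the idea (the integrand here is arbitrary, not the one from the original problem):

```maple
restart:
f := exp(-x^2)*sin(10*x):

# Default: Digits=10, and the accuracy tolerance for evalf/Int is
# inferred from Digits when epsilon is not supplied.
evalf(Int(f, x = 0 .. 5));

# Same working precision (Digits=10), but an explicitly looser
# accuracy request. This can succeed, and return faster, in cases
# where the stricter inferred default struggles.
evalf(Int(f, x = 0 .. 5, epsilon = 1.0*10^(-4)));
```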

 

I googled "Kovacic algorithm" and the first hit was this, and most of the first page of hits seemed relevant. I mention that first hit because its references indicate a preprint (1979?) by Kovacic as well as an implementation by D. Saunders presented at ACM 1981.

Another hit that stood out was this Maple help-page (even if it may differ in implementation), which cites Kovacic in its References section at the end,

Kovacic, J. "An algorithm for solving second order linear homogeneous equations". J. Symb. Comp. Vol. 2. (1986): 3-43.

acer

@Carl Love I was not claiming that it is a bug in 2-argument eval. I stated that `eval/if` relies on the behaviour of 2-argument eval.

I'm not sure that `eval/if` is quite right. There are other corners, too.

The routine `eval/if` is affected by the following behaviour of 2-argument `eval`,

> eval('sin(r)', r=0);

                                       0

> eval('sin(0)', r=0);

                                    sin(0)

Note that `seq` does not behave like that,

> seq('sin(r)', r=0); 

                                       0

> seq('sin(0)', r=0);

                                       0

acer

@Preben Alsholm Yes, thanks, that's why I included my second example. It's an oddity amongst oddities.

> restart:

> eval(`if`(r,sin(Pi),p),r=true); # hmm

                                    sin(Pi)

> f:=x->x:

> eval(`if`(r,f(r),p),r=true);

                                     true

> eval(`if`(r,f(2),p),r=true); # hmm

                                     f(2)

> seq(`if`(r,f(2),p),r=true);

                                       2

It looks like a bug.

acer

What is the purpose of the first loop in the SampleQ procedure (that loops for i from 1 to Size)?

acer

@AndreaAlp It wasn't clear to me what your exact problem was. I thought that perhaps you wanted emitted Matlab code that contained elementwise ~ operators, but weren't getting them.

Perhaps you could upload a short but representative example, so that it would be more clear what you need.
