acer


MaplePrimes Activity


These are replies submitted by acer

@Markiyan Hirnyk Windows 7 Pro, 64 bit Maple 15.01, Standard GUI, Worksheet,

restart:
expr:=1.778895759*Sigma-1831241.099/(76553.66445-.576e-5*Sigma^2)
+6600.970252*Sigma/(76553.66445-.576e-5*Sigma^2)
+.5739576533e-1*Sigma^2/(76553.66445-.576e-5*Sigma^2)
+.4735119433e-4*exp(.7618258041e-2*Sigma)*Sigma^2/exp(.9051395693e-5*Sigma)
/(76553.66445-.576e-5*Sigma^2)
-39332.76308*exp(.7618258041e-2*Sigma)*(41.17+1/(1-exp(-160)))/exp(.9051395693e-5*Sigma)
/(-69.17083220+Sigma)/(-69.17083220-Sigma)
-629324.2088*exp(.7618258041e-2*Sigma)/exp(.9051395693e-5*Sigma)/
(76553.66445-.576e-5*Sigma^2)-1.778895759*exp(.7618258041e-2*Sigma)
*Sigma/exp(.9051395693e-5*Sigma)
-8.220693466*(41.17+1/(1-exp(-160)))*Sigma^2/(-69.17083220+Sigma)/(-69.17083220-Sigma)
-323.1910570+8.220693466*exp(.7618258041e-2*Sigma)*(41.17+1/(1-exp(-160)))
*Sigma^2/exp(.9051395693e-5*Sigma)/(-69.17083220+Sigma)/
(-69.17083220-Sigma)+323.1910568*exp(.7618258041e-2*Sigma)/exp(.9051395693e-5*Sigma)
+39332.76308*(41.17+1/(1-exp(-160)))/(-69.17083220+Sigma)/(-69.17083220-Sigma) = 0:

plot(z->evalf[1000](subs(Sigma=z,lhs(expr))), -69.4..-69.0);

But it doesn't matter whether the plot is refined enough (small granularity) to detect it or not. Once you suspect that it's there, confirming it by other means is not hard.

It doesn't look like Digits must be very high to find it.

restart:

Digits:=20:

expr:=1.778895759*Sigma-1831241.099/(76553.66445-.576e-5*Sigma^2)
+6600.970252*Sigma/(76553.66445-.576e-5*Sigma^2)
+.5739576533e-1*Sigma^2/(76553.66445-.576e-5*Sigma^2)
+.4735119433e-4*exp(.7618258041e-2*Sigma)*Sigma^2/exp(.9051395693e-5*Sigma)
/(76553.66445-.576e-5*Sigma^2)
-39332.76308*exp(.7618258041e-2*Sigma)*(41.17+1/(1-exp(-160)))/exp(.9051395693e-5*Sigma)
/(-69.17083220+Sigma)/(-69.17083220-Sigma)
-629324.2088*exp(.7618258041e-2*Sigma)/exp(.9051395693e-5*Sigma)/
(76553.66445-.576e-5*Sigma^2)-1.778895759*exp(.7618258041e-2*Sigma)
*Sigma/exp(.9051395693e-5*Sigma)
-8.220693466*(41.17+1/(1-exp(-160)))*Sigma^2/(-69.17083220+Sigma)/(-69.17083220-Sigma)
-323.1910570+8.220693466*exp(.7618258041e-2*Sigma)*(41.17+1/(1-exp(-160)))
*Sigma^2/exp(.9051395693e-5*Sigma)/(-69.17083220+Sigma)/
(-69.17083220-Sigma)+323.1910568*exp(.7618258041e-2*Sigma)/exp(.9051395693e-5*Sigma)
+39332.76308*(41.17+1/(1-exp(-160)))/(-69.17083220+Sigma)/(-69.17083220-Sigma) = 0:

sol1:=fsolve(lhs(expr),Sigma=-100-10*I..100+10*I,complex);

                 -69.170832173780333987 + 0. I

Raising Digits very high corroborates that it is indeed a real-valued root.
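
For instance, a quick residual check at high precision (a sketch, assuming expr and sol1 as defined above; the choice of 100 digits is arbitrary):

```maple
# Evaluate the left-hand side at the computed root with many guard digits.
# A residual very close to 0 corroborates that sol1 is a genuine real root.
evalf[100]( eval( lhs(expr), Sigma = Re(sol1) ) );
```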

acer

@DJKeenan

If this is causing you huge grief, then you could of course write your own gradient procedure which did numeric estimation (differencing, say) without its getting too worried about precision (or accuracy).
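
A minimal sketch of such a hand-rolled gradient, using plain central differences with a fixed step h (the names mygrad and obj, and the step size, are all hypothetical; there is deliberately no precision management at all):

```maple
mygrad := proc(obj, v::Vector, h := 1e-6)
  local n, i, g, vp, vm;
  n := LinearAlgebra:-Dimension(v);
  g := Vector(n);
  for i to n do
    vp := copy(v);  vm := copy(v);
    vp[i] := vp[i] + h;
    vm[i] := vm[i] - h;
    g[i] := ( obj(vp) - obj(vm) ) / (2*h);  # central difference in slot i
  end do;
  g;
end proc:
```

Something along these lines could then be supplied to the relevant gradient option of Optimization's solvers, sidestepping fdiff's Digits handling entirely.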

I think I see what you mean: why isn't there a relationship between Optimization's `optimality` (or other) tolerances and the precision demanded for computing the gradient? It sounds like a good question, though in practice the answer may depend on quantitative properties of the specific objective function.

Optimization is invoking `fdiff` without supplying fdiff's workprec=n option, thus allowing its default behaviour. But that optional argument just controls the factor by which fdiff might add even more guard digits -- the lowest value is the default, workprec=1.
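
For illustration (a sketch, if I recall fdiff's operator-form calling sequence correctly; the numeric values are arbitrary):

```maple
# Differentiate sin with respect to its first argument, at the point 1.0.
fdiff( sin, [1], [1.0] );                  # default: workprec = 1
fdiff( sin, [1], [1.0], workprec = 2 );    # a larger working-precision factor
```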

Note that Optimization has already set Digits=15 and so that is what fdiff sees as the inbound value of Digits, rather than the original session default value of Digits=10. When Digits comes in at 10, then fdiff raises it to 17.

Glancing over showstat(fdiff), it looks like it might even add guard digits on top of whatever workprec=n specifies (I'm not sure).

I haven't looked very hard at fdiff, but writing a numeric routine which tries to handle as many kinds of examples as possible, with as little intervention by the user to tweak optional controls (tolerances, etc.), is difficult.

There are other known instances where nested calls to Library routines cause a cascade, each one trying to be clever and augment Digits by guard digits. Some routines like dsolve/numeric and fsolve seem to have some environment variables to help prevent this, by allowing them each to test whether they have been called from within one another. (Something about this mechanism makes me uneasy. What if there are ten such routines, with different rules for one another?)

One reason this is a big deal is that, for things like evalf/Int, the value of Digits can dictate whether evalhf or double-precision external-calling gets used. So keeping Digits at 15 can sometimes mean the difference between good and bad performance. (This might mean that it's at least problematic for fdiff to raise Digits from session default value 10 to 17.)

It sounds tricky to get this right and make as many examples as possible work best (and fastest). Maybe Optimization should temporarily reduce Digits (e.g., invert what fdiff will itself do! Invert its formula, to set Digits=8 when Digits=15. Sheesh). Or maybe the "best" thing is to expose everything for user control, allowing a new optional argument like digits=value and fdiffoptions=[...]. Some people are bound to find that too onerous.

But that's the rub. Since every numeric method can usually be broken by some tricky example, numerically, then exposure of all tolerances and controls is often the Best Way to go. Trying to handle it all, magically and invisibly, is a hard game.

acer

Things like this below are worrisome, indicating that codegen[GRADIENT] may be problematic for some innocuous-looking objective procedures.

> restart:
> f := proc(x)
>     x^2;
> end proc:

> codegen[GRADIENT](f);  # ok

                          proc(x) return 2*x end proc;

> evalhf( %(4.5) );

                                     9.


> restart:
> f := proc(x)
>     print(f);
>     x^2;
> end proc:

> codegen[GRADIENT](f);  # ok

                    proc(x) print(f); return 2*x; end proc;

> evalhf( %(4.5) );

                             4.50000000000000000
                                     9.

> restart:
> f := proc(x)
>     [];  # just another no-op, you'd imagine
>     x^2;
> end proc:

> codegen[GRADIENT](f);  # jeepers

                          proc(x) return  end proc;

> evalhf( %(4.5) );

                              Float(undefined)

The last example was an objective that itself was non-evalhfable. But the following example is more troublesome, and produces a wrong numeric result for the gradient.

> restart:

> f := proc(x)
>     5.7;  # not an assignment, but also not a return value!
>     x^2;
> end proc:

> codegen[GRADIENT](f);
                           proc(x) return 0 end proc

> %(4.5);
                                       0

All this means that `fdiff` may sometimes be more attractive as a means to get a useful gradient (even if numerical differentiation is "frowned upon" on stability grounds).

The procedure in the top-post could be changed to automatically handle an (indeterminate) variable number of arguments (to the original objective). It could also be altered to supply optional arguments to `fdiff`, such as the 'workprec'=n argument which controls the working precision that `fdiff` sets for itself internally. This would help in the case that one wanted `fdiff` not to raise Digits too high (internally, temporarily) when calling the objective procedure for its computation of finite differences.
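
One way that modification might look (a sketch only: the name makegrad is hypothetical, and it assumes fdiff's operator-form calling sequence taking an index list and a list of evaluation points):

```maple
makegrad := proc(p, wp := 1)
  # Returns a procedure accepting however many scalar arguments the
  # objective p takes; each gradient entry comes from a fdiff call,
  # with wp passed along as fdiff's workprec option.
  proc()
    local vals;
    vals := [args];
    Vector( nargs, i -> fdiff( p, [i], vals, workprec = wp ) );
  end proc;
end proc:
```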

acer

@PatrickT It's great that you have a workaround. (Sorry for not mentioning the change I made to the quoting, too.)

I did not know that the Standard driver's `plotoptions` height and width options even worked at all, until trying it without the `pt` units appended today. I suspect that this might be a secret that unlocks its usefulness to quite a few people (since it means that it's not 100% broken, even if its functionality is very much obscured).
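
For the record, a sketch of the kind of invocation involved (the filename and sizes are just examples; note the bare numbers, with no pt suffix appended):

```maple
# Redirect plot output to a PostScript file with explicit dimensions.
plotsetup( ps, plotoutput = "myplot.ps",
           plotoptions = "width=500,height=400" );
plot( sin(x), x = 0 .. 2*Pi );
plotsetup( default );  # restore on-screen plotting afterwards
```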

I have taken the great liberty of branching your followup, as a post in its own right. I agree with you that it is a very important topic, and should be addressed with great seriousness. I expect that several people will add their own additional commentary to it, so perhaps it's best to have it be separate.

acer

The expression is not real-valued for -3220 < cp < 3220; in particular, its imaginary component is nonzero in the region 0 < cp < 3220, 0 < x < 14. So the equation expression = 0 has no solutions in that region, and thus the implicit plot is empty there.

expr:=tan(x*sqrt(cp^2/3220^2-1))*(cp^2/3220^2-2)^2
      +4*tan(x*sqrt(cp^2/6450^2-1))*sqrt((cp^2/3220^2-1)*(cp^2/6450^2-1));

plot3d(Im(expr),x = 0 .. 14, cp = 0 .. 7000,axes=box);

acer

@PatrickT How about changing one line to,

for j from 2 to 51 do R[j]:= X[trunc(R[j-1])][j] end do:

The Mapleprimes profile page for the author, S.Arlou, has incorrect links to his Posts and Questions pages.

Currently, they incorrectly point at ...Posts from S.Arlou and ...Questions from S.Arlou. But that is wrong, as he is not user 0, and those links are stale.

The correct links that should be on his Profile page are ...Posts from S.Arlou and ...Questions from S.Arlou.

acer

@Samir Khan Thanks for clarifying. So the App Center is misleading by labeling them as for M13 and M15; one is simply a correction and update of the other. (Why would that site retain the first one? And why put version numbers in the page labels, which is misleading?)

Christopher, I can try the 32bit Maple 12 on Windows this evening.

Maybe there is another explanation of why it doesn't work for you? How about a socket timeout? Is the code robust for that, I wonder. You're using a 14.4kbps dial-up or something, at times, yes?

The second version of that stock quote importer, ImportingYahooStockQ.mw from here, seems to work ok in the X86_64_WiNDOWS versions of both Maple 12.02 as well as 15.01 on Windows 7 Pro.

I haven't checked, but what is the nature of the difference between those two versions of that Importer? Is the difference that one is (truly, only, and merely) intended for M13 and the other for M15? Or could it instead be that the older one is merely labeled for M13 and tries to connect to a now-invalid URL, while the newer version is merely labeled for M15 and connects to some newer Yahoo URL? Or some minor variation on that second possibility?

What's your platform? 32bit XP? Other? Perhaps someone can confirm whether the v2 of that importer works or not on your platform.

acer

@luigidavinci The size 2000x2000 is not very large, and should fit into about 32MB of memory for a non-sparse storage=rectangular and datatype=float[8] Matrix.
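
The arithmetic, for reference: a dense float[8] Matrix stores 8 bytes per entry, so

```maple
8 * 2000^2;                    # bytes for a dense 2000 x 2000 float[8] Matrix
evalf( 8 * 2000^2 / 2^20 );    # about 30.5 MiB, i.e. roughly "32 MB"
```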

How fast do you need the computation to be? That is to say, is it speed or memory that is the more critical constraint for you?

If you just want a few smallest eigenvalues plus associated eigenvectors then you could try the link I gave above, to a wrapperless external-call to compute "selected" eigenvectors. It might not be as fast as a dedicated routine for handling a significantly sparse system. And it really only speeds up the subsequent computation of the subset of eigenvectors. The initial eigenvalue computation step likely takes just as much time as computing them all. That, combined with its being for full storage rather than sparse storage, might well mean that it is not at all the functionality that you're after. But I felt I should mention it. I believe that there is a short paragraph in that post detailing how to invoke the attached code, so as to compute eigenvectors for only a specific number of smallest eigenvalues.
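
If fully dense computation is acceptable, one crude baseline is simply to compute all the eigenvalues and discard what isn't wanted (a sketch; the 200x200 size and k=3 are arbitrary, and the random matrix is symmetrized so its spectrum is real):

```maple
with(LinearAlgebra):
n := 200:  k := 3:
A := RandomMatrix(n, n, datatype = float[8]):
A := (A + A^%T) / 2:                 # symmetrize: guarantees real eigenvalues
evals := Eigenvalues(A):             # dense computation of all n eigenvalues
smallest := sort( map(Re, convert(evals, list)) )[1 .. k];
```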

Do you recall which F12 or ARPACK function it is which will more quickly compute (only, a selected number of) smallest eigenvalues?

 
