tomleslie

13876 Reputation

20 Badges

15 years, 177 days

MaplePrimes Activity


These are replies submitted by tomleslie

  1. Your ODEs use the function epsilon(rho), which has a piecewise definition valid only for values of rho in the range 1.38676395e10..1.898465e15. You then *seem* to want to set rhoList := [seq(200*j, j = 1 .. 5)] followed by rho=rhoList[i]. So rhoList will contain [200, 400, 600, 800, 1000], and epsilon(rho) is undefined for all of these values. Your ODEs will therefore contain an undefined quantity and thus cannot be solved.
  2. As a general rule, if you decide to use 2-D math input, then you should be very careful about using any kind of indentation in your code. As an example, when you write

dsolve
(....)

The 2-D parser will interpret this as dsolve (....) - with a space between the 'dsolve' and the argument list. The 2-D parser will then interpret this as an "implied multiplication", and you end up with dsolve*(...) - which certainly isn't what you want.

You have a choice: write code using 1-D input with as much indentation as you want, or use 2-D input with no indentation (at least where that indentation might be misinterpreted!)
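Regarding point 1 above, here is a minimal sketch (the piecewise below is hypothetical - it is *not* the OP's actual epsilon(rho)) of what happens when a piecewise function is evaluated outside its defined range:

```maple
restart;
#
# Hypothetical stand-in for the OP's epsilon(rho): defined only
# on a narrow range, 'undefined' everywhere else
#
epsilon := rho -> piecewise( rho >= 1.38676395e10 and rho <= 1.898465e15,
                             rho^(1/3),
                             undefined
                           ):
epsilon(200);   # returns 'undefined' - so any ODE containing it cannot be solved
```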
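Regarding point 2, the 1-D equivalent parses fine no matter how it is indented - the (made-up, trivial) example below would turn into dsolve*(...) in 2-D input for exactly the reason given above:

```maple
restart;
ode := diff(y(x), x) = y(x):
#
# In 1-D input, the line break and indentation between 'dsolve'
# and its argument list are harmless; in 2-D input they would be
# parsed as an implied multiplication
#
sol := dsolve
       ( ode, y(x) );
```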

restart;
ans1:=7/9;
ans2:=1/4;
ansStr:=StringTools[Join]( [ convert(ans1, string),
                             convert(ans2, string)
                           ],
                           ";"
                         );
XMLTools[Print](MathML[Export](ansStr));

maybe?

I thought my original suggestion of rounding error, solved by increasing the setting of Digits, had eliminated the problem of inconsistent results - so Acer and I went off on a bit of a tangent arguing about which was the "fastest" method. Mildly interesting for us - but not for you.

Reading your latest post - even with Digits=50 (or greater) you are still getting inconsistent results. After downloading and executing your latest worksheet JJ6.mw, I get the following


 


"restart;  Digits:=50:  r:=2.8749:a:=0.7747:b:=0.3812:B:=29000:R:=5.4813:Z:=2 :A:=17.4:  J(n,phi):= 8* Pi^(3/2)*r*R*(Sum((2* r*R)^(2* i)*pochhammer((1/2)*n, i)*pochhammer((1/2)*n+1/2, i)*(Sum((-1)^j*cos(phi)^(2* j)*(Sum((2* r*cos(phi))^(2* l)*pochhammer(n+2* i, 2* l)*hypergeom([2* j+2* l+1, .5], [2* j+2* l+1.5], -1)*(.5   *Beta(l+.5, n+2* i+l-.5)-sin(arctan(-Z/sqrt(R^2+r^2)))^(2* l+1)*hypergeom([-n-2* i-l+1.5, l+.5], [l+1.5], sin(arctan(-Z/sqrt(R^2+r^2)))^2)/(2* l+1))/(factorial(2* l)*pochhammer(2* j+2* l+1, .5)*(R^2+r^2)^(n+2* i+l-.5)),   l = 0 .. 100))/(factorial(i-j)*factorial(j)),j = 0 .. i))/factorial(i), i = 0 .. 100)):  evalf(a*b*(-A*J(3,Pi/(6))+B*J(6,Pi/(6))))"

.6541254113391928082789597628199217731335551498693

(1)

"restart;  Digits:=50:  r:=2.8749:a:=0.7747:b:=0.3812:B:=29000:R:=5.4813:Z:=2 :A:=17.4:  J(n,phi):= 8* Pi^(3/2)*r*R*(Sum((2* r*R)^(2* i)*pochhammer((1/2)*n, i)*pochhammer((1/2)*n+1/2, i)*(Sum((-1)^j*cos(phi)^(2* j)*(Sum((2* r*cos(phi))^(2* l)*pochhammer(n+2* i, 2* l)*hypergeom([2* j+2* l+1, .5], [2* j+2* l+1.5], -1)*(.5   *Beta(l+.5, n+2* i+l-.5)-sin(arctan(-Z/sqrt(R^2+r^2)))^(2* l+1)*hypergeom([-n-2* i-l+1.5, l+.5], [l+1.5], sin(arctan(-Z/sqrt(R^2+r^2)))^2)/(2* l+1))/(factorial(2* l)*pochhammer(2* j+2* l+1, .5)*(R^2+r^2)^(n+2* i+l-.5)),   l = 0 .. 100))/(factorial(i-j)*factorial(j)),j = 0 .. i))/factorial(i), i = 0 .. 100)):  evalf(a*b*(-A*J(3,Pi/(6))+B*J(6,Pi/(6))))"

.6541254113391928082789597628199217731335551498693

(2)

restart; Digits := 70; r := 2.8749; a := .7747; b := .3812; B := 29000; R := 5.4813; Z := 2; A := 17.4; J := unapply(8*Pi^(3/2)*r*R*(Sum((2*r*R)^(2*i)*pochhammer((1/2)*n, i)*pochhammer((1/2)*n+1/2, i)*(Sum((-1)^j*cos(phi)^(2*j)*(Sum((2*r*cos(phi))^(2*l)*pochhammer(n+2*i, 2*l)*hypergeom([2*j+2*l+1, .5], [2*j+2*l+1.5], -1)*(.5*Beta(l+.5, n+2*i+l-.5)-sin(arctan(-Z/sqrt(R^2+r^2)))^(2*l+1)*hypergeom([-n-2*i-l+1.5, l+.5], [l+1.5], sin(arctan(-Z/sqrt(R^2+r^2)))^2)/(2*l+1))/(factorial(2*l)*pochhammer(2*j+2*l+1, .5)*(R^2+r^2)^(n+2*i+l-.5)), l = 0 .. 100))/(factorial(i-j)*factorial(j)), j = 0 .. i))/factorial(i), i = 0 .. 100)), [n, phi]); evalf(a*b*(-A*J(3, (1/6)*Pi)+B*J(6, (1/6)*Pi)))

.654125411339192808278959762829393274107936440394141607440123001148130

(3)


restart; Digits := 50; r := 2.8749; a := .7747; b := .3812; B := 29000; R := 5.4813; Z := 2; A := 17.4; J := unapply(8*Pi^(3/2)*r*R*(Sum((2*r*R)^(2*i)*pochhammer((1/2)*n, i)*pochhammer((1/2)*n+1/2, i)*(Sum((-1)^j*cos(phi)^(2*j)*(Sum((2*r*cos(phi))^(2*l)*pochhammer(n+2*i, 2*l)*hypergeom([2*j+2*l+1, .5], [2*j+2*l+1.5], -1)*(.5*Beta(l+.5, n+2*i+l-.5)-sin(arctan(-Z/sqrt(R^2+r^2)))^(2*l+1)*hypergeom([-n-2*i-l+1.5, l+.5], [l+1.5], sin(arctan(-Z/sqrt(R^2+r^2)))^2)/(2*l+1))/(factorial(2*l)*pochhammer(2*j+2*l+1, .5)*(R^2+r^2)^(n+2*i+l-.5)), l = 0 .. 100))/(factorial(i-j)*factorial(j)), j = 0 .. i))/factorial(i), i = 0 .. 100)), [n, phi]); evalf(a*b*(-A*J(3, (1/6)*Pi)+B*J(6, (1/6)*Pi)))

.6541254113391928082789597628199217731335551498693

(4)



In other words - I get the same answer (at least to 25 digits or so) every time.

The only way I can think of for you to get inconsistent results and me/Acer to get consistent ones is that you are using either a different Maple version or operating system.

  1. I obtained the above output running 64-bit Maple 2016.2 on 64-bit Windows 7
  2. Acer and I may (still) disagree on cpu timing, but (s)he gets the same result as I do using 64-bit Maple 2016.2 on a Linux (flavour unspecified) machine
  3. So precisely which Maple version are you running, on what OS?
  4. Since Acer and I get the same, consistent results on two completely different OSes, I can only think that what you are seeing is a Maple version problem?

Consider only the case Digits=30, upper sum limit=70.

In my previous post I got a cputime of 43.26secs using the nested add() approach and 59.58secs for the nested Sum()

Converting these to use explicit proc() calls, I now get 84.91secs for the nested add() and 59.4secs for the nested Sum()

So converting from unapply(...) to proc()..end proc doubles the time taken with nested add(), and has little or no effect with nested Sum()

I now have no idea what is happening or why, so I'm going for a lie-down

You state:

Also, you miss the key point about using `Sum` here, which ends up being quite faster than `add`.  Inert `Sum` is NOT just for programmatic manipulation or evaluating via `value` (which turns it into active `sum`). Wrapping `Sum` with `evalf` causes special purpose routine `evalf/Sum` to be invoked. That uses a so-called Levin transformation to detect convergence of the summation based on a finite number of terms, and exit earlier (ie. accelerated) yet accurately.

I beg to disagree. The manual entry (with my highlighting) for evalf/Sum states

Method

  • In the case of finite sums, or sums over RootOfs, the sum is computed directly through numerical evaluation of the summand.
  • In the case of infinite sums, Levin's u-transform is used, which has the additional effect that sums that formally diverge may return a result that can be interpreted as evaluation of the analytic extension of the series for the sum (see the examples below).

Since the OP has finite sums, the first of the above methods will be used. Therefore your remarks about 'Levin's transformation' etc would seem to be inappropriate(?)

You also state that the "advantages" of evalf/Sum() over add() are demonstrated in your response. I cannot find the "apples-to-apples" comparison in your worksheet: ie same number of terms, same setting of Digits, same level of "simplification". So I tried this myself for two cases. The first case is with Digits=50 and upper sum index=100. The second case is with Digits=30 and upper sum index=70. There are two 'commands' to time: setting up the function J() and evaluating the actual result. See the attached worksheet, saved with output, or the summary plus comments below

consistency2.mw

All calculations performed using 64-bit Maple 2016.2, running on 64-bit Windows 7, with 16GB of ram

Case1: Digits=50, upper sum index =100

Using add(), timings for two stages of calculation are

memory used=10.61GiB, alloc change=0.67GiB, cpu time=80.36s, real time=76.56s, gc time=6.55s

memory used=6.53GiB, alloc change=448.00MiB, cpu time=74.04s, real time=54.49s, gc time=24.41s

Using evalf/Sum, timings for two stages of calculation are

memory used=485.20KiB, alloc change=0 bytes, cpu time=0ns, real time=4.00ms, gc time=0ns

memory used=40.43GiB, alloc change=4.00MiB, cpu time=5.52m, real time=5.52m, gc time=15.29s

So using the inert Sum command is much faster when setting up the function J() - no real surprise. However, performing the final calculation is noticeably slower when using evalf/Sum. Adding the cputimes for the two stages, total time for the add() approach is 154.4secs; total time for evalf/Sum is 5.52m, ie 331.2secs. So overall the evalf/Sum approach is slower by about a factor of 2

I don't actually know why evalf/Sum should be so much slower (to be honest I would have expected them to be about the same), but I suspect that it may have something to do with the memory used - evalf/Sum uses 485.20KiB+40.43GiB, whereas nested add() uses 10.61GiB+6.53GiB, substantially less

Case2: Digits=30, upper sum index =70

Using add(), timings for two stages of calculation are

memory used=3.06GiB, alloc change=172.01MiB, cpu time=23.48s, real time=22.67s, gc time=1.59s

memory used=2.12GiB, alloc change=384.00MiB, cpu time=19.78s, real time=17.21s, gc time=3.79s

Using evalf/Sum, timings for two stages of calculation are

memory used=462.99KiB, alloc change=0 bytes, cpu time=0ns, real time=5.00ms, gc time=0ns

memory used=7.03GiB, alloc change=4.00MiB, cpu time=59.58s, real time=59.57s, gc time=2.62s

Adding the cputimes for the add() approach gives 43.26secs, and for the evalf/Sum approach gives 59.58secs - so add() is still faster, but now only by about 35%. Interestingly(?) the discrepancy in memory used has also decreased substantially (3.06GiB+2.12GiB versus 462.99KiB+7.03GiB)

I originally suggested that the OP's inconsistency in reported results was a result of rounding error. I stand by this observation.

When posting my 'solution' (ie set Digits=50), I paid little heed to execution time, although I did point out that Digits=50 *might* be excessive. Digits=10 (ie the default) is obviously not OK, but Digits=15,20,25,30,35,40,45 etc I simply couldn't be bothered to check.

I considered that the required value of Digits was *probably* somewhat dependent on the parameter values r, a, b, B, R, Z, A. So determining a value of Digits for a given set of these parameters was probably a little pointless: a slightly different set of parameters might require Digits to be increased by 5 (or possibly decreased by 5) - so I just didn't want to go there!

Having got that off my chest, reading the rest of the posts there appear to be four issues

  1. The OP does not understand rounding error, and its 'propagation'
  2. Should the OP use add(), sum(), or Sum() in the definition of the function J()?
  3. How many Digits are actually required?
  4. How many terms in the summations are actually necessary?

So I'm going to add my $0.02 to each of these four points in turn

###################################################################

Rounding Error

If one runs the code

calc1:= acc->evalf[acc](1.0e09)*(evalf[acc](3.14159)-evalf[acc](Pi)):
seq( calc1(x), x=7..10);

Then one is essentially performing the 'same' calculation with Digits=7,8,9,10 - and one obtains four different answers. Hold that thought: the same calculation, performed with different precision settings, leads to different answers. Combine a few thousand such "simple" calculations and the result could be almost anything!

#####################################################################

sum(), add(), Sum()

sum() means that Maple will attempt to find a 'closed-form' expression for a summation. If a 'closed-form' expression can be found, then subsequent evaluations of the sum() will be rapid. For example simplify(sum(x*i, i=1..n)) will return (1/2)*x*n*(n+1), and evaluating the latter for any values of x, n is trivial. However many sums do not have such a 'closed-form' solution. In such cases, for finite sums, Maple will revert to using add() - see later

add() makes no attempt to generate a 'closed-form' solution, so will generally be faster than sum() when no 'closed-form' solution exists. (It may even be faster when a closed form solution does exist!)

Sum() is inert - it does nothing until one actually needs an evaluation, at which point sum() will be utilised (see above), which might (or might not) eventually end up calling add()

In the OP's case, no 'closed-form' solution for the sums involved can be found by Maple, so eventually add() will be called. So the general recommendation would be to avoid the overhead of sum() or Sum() and just use add()
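A quick illustration of the three behaviours (trivial example, not the OP's sums):

```maple
restart;
#
# sum() finds a closed form where one exists...
#
simplify(sum(x*i, i = 1 .. n));    # returns (1/2)*x*n*(n+1)
#
# ...while add() just evaluates the terms one by one, which
# requires explicit numeric limits
#
add(x*i, i = 1 .. 5);              # returns 15*x
#
# Sum() stays unevaluated until something like value() or
# evalf() is applied to it
#
Sum(x*i, i = 1 .. 5);
value(%);                          # now evaluates, giving 15*x
```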

#######################################################################

Digits

Obviously the default setting (ie 10) for Digits is not sufficient. My original suggestion of using Digits=50 is probably(?) overkill. However I know of no way to determine what value of Digits is *sufficient* for a given calculation. John May has suggested earlier that Digits=30 seems to be sufficient. I have no argument with this - just the warning that if the OP changes some of the parameters of the problem, then the required value of Digits may increase (or decrease!). So far as I am concerned this can only be determined "experimentally"
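If one wanted to make that "experiment" semi-automatic, something like the following sketch might do. Note that calc() here is just a placeholder for the actual computation, and the agreement tolerance of 10^(-20) is entirely arbitrary:

```maple
restart;
#
# Placeholder: substitute the real computation for the body of calc()
#
calc := () -> evalf( a*b*(-A*J(3, Pi/6) + B*J(6, Pi/6)) ):
prev := 0:
for d from 10 by 10 to 70 do
    Digits := d:
    cur := calc():
    #
    # stop as soon as two successive precision settings agree
    #
    if d > 10 and abs(cur - prev) < 10.0^(-20) then
        printf("answers agree at Digits=%d\n", d);
        break;
    end if;
    prev := cur:
end do:
```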

#########################################################################

Number of summation terms

The OP's original post had an upper limit of 100 on all three sums, which means that 1000000 terms have to be computed and added. There was no indication in the OP's original post that this value could be varied. It is not obvious to me that the magnitude of the summation terms decreases as the summation index increases. However if this is the case, then reducing the number of terms in the sums may achieve the same accuracy much faster. Reducing the number of terms in each sum to 70 means that only 343000 terms need to be computed and added. Rather obviously, this will be about 3x faster than using 1000000 terms (ie upper limit of sums=100). Once again I see no way of determining in advance whether the number of terms in the sums can be reduced: this can only be assessed experimentally (and again may be dependent on the input parameter values)
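One experimental way to assess this - again just a sketch, where Jtrunc is a *hypothetical* variant of the OP's J() taking the common upper limit N as an extra argument - is to compare the final answer at successive truncation points:

```maple
#
# Jtrunc(n, phi, N) is assumed to be the OP's triple sum with
# every upper limit replaced by N - it does not exist as written
#
for N in [50, 70, 100] do
    printf("N=%3d : %a\n", N,
           evalf( a*b*(-A*Jtrunc(3, Pi/6, N) + B*Jtrunc(6, Pi/6, N)) ));
end do:
#
# if the answers at N=70 and N=100 agree to the required number
# of digits, the smaller (and ~3x cheaper) truncation suffices
#
```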

 

I constructed my own test example, just because I wanted to be *absolutely* sure that the same code was being calculated each time.

On default Digits setting (ie 10), answers varied more or less randomly.

Setting Digits=50, answers *seem* to be consistent (at least the first 21 of the 50 digits agree). Results obtained this way bear no relation to the results obtained on the default Digits setting :-(

Using Digits=50 may(?) be excessive

See the attached, where I have saved the file with the results I obtained

consistency.mw

I used the free Maple add-on package 'DirectSearch', which provides more extensive optimization methods than those built into Maple. I'm assuming that you don't have this package installed, so you wouldn't be able to run my worksheet. However the "highlights" are

Using a brute force search for a global maximum, with

DirectSearch[Search](rhs(sol2), {theta=0..1.0, T>0, W>0, E>0, tp>0}, maximize=true);

returns

[9.60488509346773*10^33, [E = .276779420670101, T = 6.75538764002731, W = 2.51291785001108, theta = 1.00000000000000, tp = .848414169327298], 699]

The first entry in the above list is the maximum value obtained for rhs(sol2); the second entry is the list of values for the variables which produces this maximum. Note however that this 'maximum' is obtained right at the end-point of the constraint on theta. Not too surprising, since as theta->1 there are a couple of terms in the equation for TP which will become infinite.

I tried reducing the maximum allowed value of theta with

DirectSearch[Search](rhs(sol2), {theta=0..0.95, T>0, W>0, E>0, tp>0}, maximize=true);

which returned

[8.71146928133229*10^5, [E = 2.28714951791024*10^(-28), T = 3.96599424741180, W = 32.7889022679273, theta = .949999999999999, tp = .502771513016520], 457]

Notice that the returned maximum is much lower than in the first case, but the theta-value is still at the right-hand end of its constraint.

The fact that the function maximum seems to be obtained at one end of the theta-constraint suggests that this function is not concave

I tried to check this out by solving the set of first derivatives with

DirectSearch[SolveEquations]( [ diff(rhs(sol2),T) = 0,
                                diff(rhs(sol2),E) = 0,
                                diff(rhs(sol2),W) = 0,
                                diff(rhs(sol2),theta) = 0,
                                diff(rhs(sol2),tp) = 0 ],
                              [theta=0..1, T>0, W>0, E>0, tp>0]
                            );

which returns

[11.0649249099988,
  Vector[column](5, [-.108335884623482, -.312331457804231, 1.64870757071341,
                     1.47852831783600, 2.45995013490047]),
  [E = .612897417882649, T = .410079900784641, W = .965796790337650, theta = 0.950250879258380e-1, tp = 2.01880216861331*10^(-10)]]
The third entry in this list is the set of values found for each of the variables.

The second entry is the vector of 'residuals'. In other words the 'closest' DirectSearch can get to solving these equations is actually

diff(rhs(sol2),T)= -.108335884623482
diff(rhs(sol2),E)=-.312331457804231
diff(rhs(sol2),W)= 1.64870757071341
diff(rhs(sol2),theta)= 1.47852831783600
diff(rhs(sol2),tp)=2.45995013490047

The first entry in the returned solution (ie 11.0649249099988) is just the sum of the squares of these residual terms. Ideally this would be 0. A 'numerically acceptable' solution would normally be one where the sum-of-the-squares of the residuals is very much less than the values of any of the variables (because if all of the variables were (say) Order(10^6), then a residual error of ~11 would be pretty good).
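As a quick sanity check, one can confirm that the first entry really is the sum of squares of the residual vector:

```maple
resids := [ -0.108335884623482, -0.312331457804231, 1.64870757071341,
             1.47852831783600,   2.45995013490047 ]:
add(v^2, v in resids);   # returns 11.0649249... - matching the first entry above
```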

The fact that neither of the above approaches produces what I would consider to be a "satisfactory" solution strongly suggests that your original problem is not 'concave'.

The only suggestions I can make at this point are

  1. If this is a "real-world" problem where the variables E, T, W, theta, tp, have "meaning" then it may be possible to seriously tighten the constraints on these variables. This *might* restrict the domain of the problem to a region which is concave
  2. Double-check the original equation for any typos.

 

 

 

The OP's original post was not 'code' or a 'worksheet' but a 'picture' of code. Even worse (for whatever reason), the text in the provided 'picture' was tiny. Even after loading it into a graphics package, enlarging and sharpening, it was barely readable - so I didn't fancy retyping it. So my original response to this problem was to suggest that the OP upload some kind of usable code.

Now the OP has done this by editing his/her original post - rather than uploading a new post in accordance with my suggestion. My view is that editing of 'original' questions should not be possible - but I guess I can live with it

What really gets me, is that my suggestion to upload the problem in a readable/usable manner no longer exists! Exactly who has the authority to delete my (more or less sensible) suggestions???

Try downloading your own "worksheet"

question_maple.mw

from this site and then execute it. You are now in the same situation as anyone here trying to respond. We have to be able to do more than just read the output which you have obtained!

Please, if you want help, then upload a usable worksheet, or enter plaintext here as in your first post

I can't do anything with this post, because there is no way to access 'sol52' from the prettyprint. If you refuse to upload a usable worksheet, then upload the function sol52 and all parameters in plaintext, as in your first post

alpha:= 50; beta:= .7; c:= 20; h:= 4; m:= .4; o:= 10; p1:= 40; s:= 10; u:= 5; a:= 15000; b:= 2;
 TP:= (p1*(Q-q)+p1*(1-theta)*(q-E)+s*E-c*Q-o-h*((1/6)*alpha*W^beta*a*p1^(-b)*tp^3/m-(1/2)*alpha*W^beta*a*p1^(-b)*tp^2+Q*tp)-(t1-tp)*h*((1/2*(-(2/3)*t1+m-(1/3)*tp))*W^beta*a*alpha*(t1-tp)*(-p1*(-1+theta))^(-b)+W*m)/m-(1/2)*h*(W+E)*(T-t1))/T-u*W


indets(TP, 'name');

returns

{E, Q, T, W, q, t1, theta, tp}

What are 'q' and 't1' - parameters which have not been assigned a value? or more maximisation variables? or what?

Whilst I can appreciate that there may be some "ease-of-use" issues, you'd have to convince me that setting up a problem in a "stand-alone" geometry package and then "exporting" information from that package to Maple is actually a good idea. Why not just use the geometry() and geom3d() packages in Maple??

I mean, what "geometry" problem are you trying to solve which cannot be addressed by the built-in Maple packages???

See your annotated code below

restart;
with(plots); with(DEtools);
`ε` := .1;
de1 := x[0](t)+`ε`*x[1](t);
#
# presumably the above is intended to be 'ode1'
# rather than 'de1'
#

ode2 := sin(t)-`ε`*t*sin(t);
#
# Note that neither 'de1' nor 'ode2' is actually a
# differential equation. They are simple equations involving
# the functions x[0](t), x[1](t) and the independent variable 't'.
# Hence any attempt to use a differential equation solver is
# doomed to fail - because nothing anywhere is being differentiated!!!!
#

MODEL := {ode1, ode2};
VARS := {x(t), y(t)};
#
# There is no dependent variable y(t) involved anywhere in this
# code - so defining it as a variable to be solved for is doomed to fail
#

DOMAIN := t = 0 .. 20;
RANGE := x = -3 .. 3, y = -3 .. 3; COLORS := [BLACK, BLUE];
IC1 := [x[0](0) = 0, x[1](0) = 0]; IC2 := [(D(x[0]))(0) = 1, (D(x[1]))(0) = 0];
#
# Even if we had some differential equations to solve (which we don't),
# the inclusion of initial conditions below would be incorrect, since
# it would be a list of lists, rather than a simple list (or set)
#

DEplot(MODEL, VARS, DOMAIN, RANGE, [IC1, IC2], stepsize = .1, arrows = THIN, linecolor = COLORS);

Do I know why Maple is returning the error message

Error, (in DEtools/DEplot/CheckInitial) the 'number' option must be specified before initial conditions

Errrr, no - I have no idea. Since there are so many syntax errors in the arguments supplied to DEplot(), and there aren't even any differential equations!!!, I assume that DEplot() is as confused as I am

A lot depends on whether you want to stick with the 'DataFrame' representation, or whether a simple 'Matrix' would suffice. Since I have no idea what you are planning on doing with this info, I have no idea which is the better way to store the data. So the attached shows how to select the info you require from both the 'Matrix' and 'DataFrame' representations

Tours2.mw
