
## Now I get it....

I truly did read up on the error-control section of the Maple help, but I did not grasp the full implications until you posted this point here and I thought about it carefully. The point is that if you don't set BOTH error tolerances, both dverk78 and rkf45 will end up governed by the larger one, since it dominates. I was puzzled about this before, but I get it now. Thank you for making that clear, and thank you as well for all these cool variations! I will try gear... I would love to know more about how it works. I know of Richardson extrapolation, but in my experience that is used for numerical differentiation. Ah well, my experience is limited!
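As a toy illustration of that point (a sketch with a made-up ODE, not the real system): if only one of abserr/relerr is tightened, the other stays at its looser default and ends up controlling the step size, so both are set explicitly here:

```maple
# Toy ODE: y' = -y, y(0) = 1, exact solution exp(-t).
# Tighten BOTH tolerances; leaving either at its (looser) default
# would let that one dominate the step-size control.
sol := dsolve({diff(y(t), t) = -y(t), y(0) = 1},
              type = numeric, method = dverk78,
              abserr = 1e-10, relerr = 1e-10);
sol(1.0);  # compare against exp(-1.0)
```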

## Advantages of dverk78 over rkf45 (as I s...

Since I got help on my basic procedure from Allan Wittkopf, the efficiency of my overall algorithm has improved enormously: I can do in a few hours what used to take two days. However, I DID change one thing... I switched back to dverk78 from rkf45. I understand that dverk78 does not have the range or piecewise options, and that these options can make rkf45 more efficient in some ways. However, for what I am trying to do, rkf45 seems to be less accurate, and it crashes; dverk78 works quite smoothly and also quite fast. From what I understand, dverk78 and rkf45 are both Runge-Kutta methods; dverk78 is simply higher order and so involves more function evaluations per step, but presumably it makes up for that by allowing a larger stepsize (much as Simpson's rule beats the trapezoid rule even though the formula is messier). Is it possible that a future version of Maple might include the piecewise and range options in dverk78 too? The key here seems to have been setting relerr and abserr each to 1e-10 and lowering Digits so that hardware (machine) precision could be used. One thing I don't really understand is why it is necessary to set both of these, but it obviously is. Thanks again for all your help!
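For the record, a minimal sketch of the call described (a toy ODE stands in for the real system; the claim that keeping Digits at hardware precision or below lets dsolve/numeric run in fast hardware floats is my understanding of how Maple behaves):

```maple
Digits := 15:   # low enough that hardware (evalhf) floats can be used
# Toy ODE standing in for the real system: y' = -y^2, y(0) = 1,
# whose exact solution is y(t) = 1/(1 + t).
sol := dsolve({diff(y(t), t) = -y(t)^2, y(0) = 1},
              type = numeric, method = dverk78,
              abserr = 1e-10, relerr = 1e-10);
sol(2.0);  # exact answer for this toy problem is y(2) = 1/3
```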

## Wow, thank you so much....

As I say, thank you so much. I have not had a chance to try this out yet, but I will let you know how it goes. I guess the simplest answer to why I had it set up the way I did was that I didn't know any better; I was pleased just to get a version of this that ran at all and seemed to be producing reasonable results. Thanks again! Joe Harris.

## Differences...

Thank you very much! I am afraid I was somewhat confused when I made this post; the real difference seems to lie between Maple 11 and Maple 13. Results calculated on Linux and results calculated on the Windows 7 RC, both in Maple 13, agree. It may well be that not every decimal place is the same, but they agree to many decimal places. However, results calculated in Maple 11 and results calculated in Maple 13 differ by as much as 6%. I am using the exact same file, just copying it from computer to computer. In another post I give my procedure, its input, and its output for both Maple 11 and Maple 13. Thank you again!

## Results....

Here you are; thank you very much for asking. Here is all the code for gradPhi4:

```maple
with(LinearAlgebra):
Digits := 25;

gradPhi4 := proc (P0, Gamma, Omega, lambda)
  A := 1e-7;
  B := 1e4;

  # Base system: P' = A*Gamma*P*(L-P) - A*Omega*W*P, L' = -P, W' = P
  ics := P(0) = P0, L(0) = B*lambda, W(0) = 0;
  sys_Ode := diff(P(t), t) = A*Gamma*P(t)*(L(t)-P(t)) - A*Omega*W(t)*P(t),
             diff(L(t), t) = -P(t),
             diff(W(t), t) = P(t);
  sol := dsolve([sys_Ode, ics], type = numeric, method = dverk78,
                abserr = 1e-15, output = listprocedure);
  P := rhs(sol[2]);  L := rhs(sol[3]);  W := rhs(sol[4]);

  # Sensitivities with respect to P0: Y = (dP/dP0, dL/dP0, dW/dP0)
  PhiP0 := proc (N, t, Y, YP)
    YP[1] := A*Gamma*Y[1]*(L(t)-P(t)) + A*Gamma*P(t)*(Y[2]-Y[1])
             - A*Omega*Y[3]*P(t) - A*Omega*W(t)*Y[1];
    YP[2] := -Y[1];
    YP[3] := Y[1];
  end:
  icP0 := array([1, 0, 0]):
  dvarsP0 := [PP0(t), LP0(t), WP0(t)]:
  dsolP0 := dsolve(numeric, number = 3, procedure = PhiP0, start = 0,
                   initial = icP0, procvars = dvarsP0, abserr = 1e-15,
                   output = listprocedure);

  # Sensitivities with respect to Gamma
  PhiGamma := proc (N, t, Y, YP)
    YP[1] := A*P(t)*(L(t)-P(t)) + A*Gamma*Y[1]*(L(t)-P(t))
             + A*Gamma*P(t)*(Y[2]-Y[1])
             - A*Omega*Y[3]*P(t) - A*Omega*W(t)*Y[1];
    YP[2] := -Y[1];
    YP[3] := Y[1];
  end:
  icGamma := array([0, 0, 0]):
  dvarsGamma := [PGamma(t), LGamma(t), WGamma(t)]:
  dsolGamma := dsolve(numeric, number = 3, procedure = PhiGamma, start = 0,
                      initial = icGamma, abserr = 1e-15,
                      procvars = dvarsGamma, output = listprocedure);

  # Sensitivities with respect to Omega
  PhiOmega := proc (N, t, Y, YP)
    YP[1] := A*Gamma*Y[1]*(L(t)-P(t)) + A*Gamma*P(t)*(Y[2]-Y[1])
             - A*W(t)*P(t) - A*Omega*Y[3]*P(t) - A*Omega*W(t)*Y[1];
    YP[2] := -Y[1];
    YP[3] := Y[1];
  end:
  icOmega := array([0, 0, 0]):
  dvarsOmega := [POmega(t), LOmega(t), WOmega(t)]:
  dsolOmega := dsolve(numeric, number = 3, procedure = PhiOmega, start = 0,
                      initial = icOmega, procvars = dvarsOmega,
                      abserr = 1e-15, output = listprocedure);

  # Sensitivities with respect to lambda (it enters only via L(0) = B*lambda)
  Philambda := proc (N, t, Y, YP)
    YP[1] := A*Gamma*Y[1]*(L(t)-P(t)) + A*Gamma*P(t)*(Y[2]-Y[1])
             - A*Omega*Y[3]*P(t) - A*Omega*W(t)*Y[1];
    YP[2] := -Y[1];
    YP[3] := Y[1];
  end:
  iclambda := array([0, B, 0]):
  dvarslambda := [Plambda(t), Llambda(t), Wlambda(t)]:
  dsollambda := dsolve(numeric, number = 3, procedure = Philambda, start = 0,
                       initial = iclambda, procvars = dvarslambda,
                       abserr = 1e-15, output = listprocedure);

  # Experimental data and observation times
  Pop := [0.333333, 0.333333, 0.166667, 0.666667, 0.166667, 0.166667,
          1.833333, 1.166667, 1.833333, 2.5, 12.5, 27.66667, 60.66667,
          45, 81.5, 99.833333, 23.66667, 81.16667, 59, 48.83333];
  time := [30, 90, 150, 180, 210, 240, 270, 330, 360, 390, 420, 450,
           480, 510, 540, 570, 600, 630, 660, 690];

  PP0 := rhs(dsolP0[2]);
  PGamma := rhs(dsolGamma[2]);
  POmega := rhs(dsolOmega[2]);
  Plambda := rhs(dsollambda[2]);

  # Gradient components: residual times sensitivity, summed over the data
  a := sum((P(time[i]) - Pop[i])*PP0(time[i]), i = 1 .. 20);
  b := sum((P(time[i]) - Pop[i])*PGamma(time[i]), i = 1 .. 20);
  c := sum((P(time[i]) - Pop[i])*POmega(time[i]), i = 1 .. 20);
  d := sum((P(time[i]) - Pop[i])*Plambda(time[i]), i = 1 .. 20);
  Vector[column](1 .. 4, [a, b, c, d]);
end proc;
```

Notice that this routine takes as input the parameter values for a system of differential equations. The objective is to compute the gradient of the sum of squared differences between the values predicted by the differential equation system and experimental data; the gradient is taken with respect to the parameter values. There is a lot more to what I am doing, but this is the first place I noticed differences between Maple 11 and Maple 13.

I used as input:

```
[0.03628943868909384264412104]
[7.662243093848743041881986]
[8.035227714257298233205133]
[2.024474547998493950062639]
```

The output I received running Maple 13, on both Kubuntu Linux and the Windows 7 release candidate, was:

```
[-0.476619191679594382467]
[-0.9965175018113306635662]
[-0.50008152041009398628764]
[-0.9889420779552149910094]
```

The output I received from Maple 11 running on Windows XP was:

```
[-0.500658039487016739590]
[-1.0000141405193850846412]
[-0.49999905745147104826457]
[-1.0009082015824584492414]
```

I used the same file in all cases; I just moved it between the different computers. Thank you very much for any help or thoughts you can offer!
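When two builds disagree like this, one sanity check is to compare a gradient component against a central finite difference of the objective itself. This is only a sketch: the helper `Phi4` is hypothetical (it is not in the post above) and would have to be written to return the sum of squared residuals with the same dsolve settings as gradPhi4; the factor of 2 comes from differentiating the squared residuals.

```maple
# Hypothetical check (Phi4 is assumed, NOT part of the original code):
# Phi4(P0, Gamma, Omega, lambda) returns sum((P(time[i]) - Pop[i])^2).
h := 1e-6:
p1 := 0.03628943868909384264412104:  # first input parameter (P0)
# Central difference with respect to P0; if the gradient routine is right,
# this should be close to 2*a, where a is gradPhi4's first component.
fd := (Phi4(p1 + h, 7.662243093848743041881986,
            8.035227714257298233205133, 2.024474547998493950062639)
     - Phi4(p1 - h, 7.662243093848743041881986,
            8.035227714257298233205133, 2.024474547998493950062639))/(2*h);
```

Whichever of the Maple 11 and Maple 13 vectors agrees with the finite difference is presumably the trustworthy one.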

## Differences...

Thank you for your response! I do not think it can be as unstable as these results would suggest, but I am going to experiment with it. The full routine is very large (or at least it seems big to me), but I will post a representative sample. Would this phenomenon be peculiar to Linux, or might I expect it in different versions of Windows? XP and Vista seem to give the same answers, for example. Thanks again!