Preben Alsholm

13728 Reputation

22 Badges

20 years, 250 days

MaplePrimes Activity


These are replies submitted by Preben Alsholm

@torabi After removing quite a few implicit multiplication signs in your BC (a space is interpreted by the 2D parser as multiplication), so that I have:
BC := {D[1](U)(0, theta) = 0, D[1](U)(1, theta) = 0, D[1](V)(0, theta) = 0, D[1](V)(1, theta) = 0, D[1](V)(x, 0) = 0, D[1](V)(x, 1) = 0, D[1](W)(0, theta) = 0, D[1](W)(1, theta) = 0, D[1](W)(x, 0) = 0, D[1](W)(x, 1) = 0, D[2](W)(0, theta) = 0, D[2](W)(1, theta) = 0, D[2](W)(x, 1) = 0, U(0, theta) = 0, U(1, theta) = 0, U(x, 0) = 0, U(x, 1) = 0, V(0, theta) = 0, V(1, theta) = 0, V(x, 0) = 0, V(x, 1) = 0, W(0, theta) = 0, W(1, theta) = 0, W(x, 0) = 0, W(x, 1) = 0, (D[1](U))(x, 0) = 0, (D[1](U))(x, 1) = 0, (D[2](W))(x, 0) = 0}
#
then when I run
res := pdsolve({PD1, PD2, PD3}, BC, numeric);

I get


Error, (in pdsolve/numeric/process_IBCs) initial/boundary conditions can only contain derivatives which are normal to the boundary, got (D[1](U))(x, 0)

which tells you that you cannot have D[1](U)(x,0) and the similar ones. You can have e.g. D[2](U)(x,0), which is a derivative normal to the boundary theta=0.
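To illustrate the rule on a toy problem of my own (a simple heat equation, not your system): at the boundary x=0 only a derivative in x (normal) is allowed, while a derivative in t (tangential) would trigger the same error.

restart;
pde := diff(u(x,t),t) = diff(u(x,t),x,x);
bc_ok := {u(x,0) = sin(Pi*x), D[1](u)(0,t) = 0, u(1,t) = 0}; # D[1] is normal to the boundary x=0
res := pdsolve(pde, bc_ok, numeric); # accepted
# bc_bad := {u(x,0) = sin(Pi*x), D[2](u)(0,t) = 0, u(1,t) = 0}; # D[2] is tangential at x=0
# pdsolve(pde, bc_bad, numeric); # would raise the error quoted above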


@vv No, I wasn't aware of this potential problem until now.

I noticed that it doesn't help the rosenbrock method perform any faster in the example in the link given above when used non-procedurally. That may be because rosenbrock needs to compute the Jacobian, and in doing so the expansion occurs anyway. That is just a guess.

 

@vv Here is a procedure for isolating derivatives in a first order system of odes which tries to avoid expansion.

It works in the example given in the link in my question. The problem there was that DEtools[convertsys] expanded a very complicated right-hand side. It also works in the two toy examples below.

IsolateDerivatives:=proc(sys::set(equation)) local nms,SYS,vars,p;
  # Freeze any subterm that does not depend on the given (frozen derivative) names:
  p:=proc(u,vars) if not depends(u,vars) then freeze(u) else u end if end proc;
  nms:=indets(sys,name);                                  # names present before freezing
  SYS:=evalindets((lhs-rhs)~(sys),specfunc(diff),freeze); # freeze the derivatives
  vars:=indets(SYS,name) minus nms;                       # the frozen derivative names
  SYS:=map~(rcurry(p,vars),SYS);                          # freeze terms free of the derivatives
  SYS:=solve(SYS,vars);                                   # solve for the frozen derivatives
  thaw(SYS)
end proc:
sys1:={diff(x(t),t)=x(t)*(a-y(t)), diff(y(t),t)=-y(t)*(b-x(t))};
sys2:={2*diff(x(t),t)=x(t)*(1-y(t))-diff(y(t),t), diff(y(t),t)=-y(t)*(1-x(t))};
res1:=IsolateDerivatives(sys1);
res2:=IsolateDerivatives(sys2);
DEtools[convertsys](sys1,{},[x,y],t,Y,YP)[1]; #No expansion
DEtools[convertsys](sys2,{},[x,y],t,Y,YP)[1]; #Expansion
##Now use convertsys on res2:
DEtools[convertsys](res2,{},[x,y],t,Y,YP)[1]; #No expansion
##
## I think it is important that DEtools[convertsys] doesn't change a system in which the derivatives are already isolated:
the person isolating the derivatives may have known what he was doing.
The little evidence I have so far suggests that convertsys lives up to that. It also does so in the example in the link, after the derivatives have been isolated with (e.g.) IsolateDerivatives.

@vv Yes, freezing the right hand sides will work if we know that the unknowns are not present there.

Freezing could be extended to cover terms that don't depend on the unknowns as in this example:

restart;
eq1:=a+b=b*(c+d)+2*a+7*e;
eq2:=e*d=b*(c+d)+f*(8+k)+k*a;
solve({eq1,eq2},{a,e});
sys:=(lhs-rhs)~({eq1,eq2});
p:=proc(u) if not depends(u,{a,e}) then freeze(u) else u end if end proc;
map~(p,sys);
solve(%,{a,e});
thaw(%);
#####################
I should mention that `DEtools/convertsys` does not behave quite like solve when isolating the derivatives.
This is seen e.g. in this familiar system (Volterra-Lotka), where solve expands but convertsys doesn't.
restart;
sys:={diff(x(t),t)=x(t)*(1-y(t)), diff(y(t),t)=-y(t)*(1-x(t))}:
DEtools[convertsys](sys,{},[x,y],t,YP):
%[1]; 
        [YP[1] = YP[1]*(1-YP[2]), YP[2] = -YP[2]*(1-YP[1])]
solve(sys,{diff(x(t),t),diff(y(t),t)});
        {diff(x(t), t) = -x(t)*y(t)+x(t), diff(y(t), t) = x(t)*y(t)-y(t)}
## Doing the same in this example shows that convertsys expands some parts, whereas solve expands everything.
sys:={2*diff(x(t),t)=x(t)*(1-y(t))-diff(y(t),t), diff(y(t),t)=-y(t)*(1-x(t))};

1. Remove the two occurrences of units.
2. Be aware that in ?solve,detail there is a section Using Assumptions, which starts out saying
  In most cases, solve ignores assumptions on the variables for which it is solving.
3. Did you really mean L__L(g__ap*Unit('m'), N__1), i.e. that L__L is a function of two variables whose inputs (in this case) are
g__ap*Unit('m') and N__1?
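A small illustration of point 2 (a toy example of my own): solve ignores an assumption on the solving variable unless you pass the useassumptions option.

restart;
solve(x^2 = 4, x) assuming x > 0;                 # both roots are returned; the assumption is ignored
solve(x^2 = 4, x, useassumptions) assuming x > 0; # now only the positive root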

@vitato According to your link at surrey.ac.uk the denominator should be sin(theta(t)), not sin(phi(t)).

I have corrected my answer below to reflect that change.

@Carl Love I just tried giving a vote up. Now the OP has a reputation of 5.

@marc sancandi I admit to having been put off by your heavy use of sarcasm, which I think is not conducive to a sober and rational discussion.

Does the problem you have show up in an example simple enough to view on a computer screen?
If so, why not upload a worksheet not relying on an .m file?

In your reply to Tom Leslie you say that the system is huge, so I guess that for now you don't have a simpler example.
Often enough I have heard (or read) numerical analysts say that for any numerical method there exist inputs for which the method won't work.

Should we just trust your statement:

# File "MC.m" below contains an example of one of them (a second degree ODE
# reduced to a couple of 2 first order ODEs ; details about these equations
# are of no importane here).
?

@shahid Try not assigning to RE, and then do:

res := dsolve(eval({bc, eq1, eq2},RE=15), numeric); #Works for me

Now you can experiment with continuation in RE or with using an approximate solution to get success for higher values of RE.
See  ?dsolve,numeric,bvp
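The continuation technique can be sketched on a toy BVP of my own (not your equations): introduce a name, here lambda, that scales the difficult term, so that lambda=0 gives an easy problem and lambda=1 the actual one; dsolve then moves lambda from 0 to 1 automatically.

restart;
ode := diff(y(x),x,x) + lambda*15*y(x)*diff(y(x),x) = 0; # lambda=1 is the actual problem
bcs := y(0) = 0, y(1) = 1;
res := dsolve({ode, bcs}, numeric, continuation = lambda);
res(0.5);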

@xhimi I make 4 (randomly chosen) 16x16 matrices with complex entries (no symbolic entries). I give them shape=hermitian; then I know that the eigenvalues will be real and, more importantly, Maple knows it too.
If you happen to have a Hermitian matrix which has not been given the shape hermitian, then there will no doubt be imaginary roundoff errors in the eigenvalues. Those can be removed by the use of fnormal and then simplify. For an example see the bottom of the code below.
restart;
with(LinearAlgebra):
n:=16:
p:=RandomTools:-Generate(complex(float),makeproc);
A:=RandomMatrix(n,generator=p,shape=hermitian);
B:=RandomMatrix(n,generator=p,shape=hermitian);
C:=RandomMatrix(n,generator=p,shape=hermitian);
D1:=RandomMatrix(n,generator=p,shape=hermitian);
L:=[A,B,C,D1];
LE:=[Eigenvectors]~(L):
LEV:=map2(op,2,LE);
LEV[1]; #Matrix of eigenvectors of A
seq(LEV[1][..,i],i=1..n); #Eigenvectors for A
m:=nops(L);
res:=IntersectionBasis([seq([seq(LEV[j][..,i],i=1..n)],j=1..m)]);
interface(rtablesize=16);
res;
LE[1][1]; #Eigenvalues for matrix A
##############################
D2:=Matrix(D1): #Making a new matrix with the same elements as D1, but with default options
MatrixOptions(D2);
ev:=Eigenvectors(D2):
ev[1]; #Vector of eigenvalues
simplify(fnormal(ev[1])); #Stripping roundoff errors



@shahid First, you have a new equation eq1.
Secondly, you cannot use square brackets [] instead of parentheses ().

Change that before attempting again. You may have to work harder after that.

@xhimi To see matrices or vectors with dimensions higher than 10 do:
interface( rtablesize= 16); #
to set it at 16. By using
interface( rtablesize= infinity);
all sizes will be shown.
###
You say that your code stops at m := nops(L);
The answer obviously should be 4 in your case.
##
What type of elements do your matrices have? If they have datatype=float, there ought not to be any problem.
If, on the other hand, your entries have symbolic values like names, or exact expressions like sqrt(2) or sin(1), then you can easily run into problems.
Remember that to find the eigenvalues (and the eigenvectors) you have to find the roots of a polynomial of degree 16, which in general is impossible except numerically.
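To illustrate that last point with a tiny toy example of my own: with exact entries Maple must represent the eigenvalues symbolically, whereas a float version is handled purely numerically.

restart;
with(LinearAlgebra):
A := Matrix([[sqrt(2), 1], [1, sin(1)]]); # exact entries
Eigenvalues(A);        # exact result; manageable at 2x2, hopeless at 16x16
Eigenvalues(evalf(A)); # floating-point entries: computed numerically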

@Carl Love Well, you take the real part. The real part does become zero:

evalc(ex1(x,t,z));
          -1.132*10^11*exp(9.9*10^6*x)*cos(-1.95*10^6*z+2.98*10^15*t)

Thus it is zero where cos(-1.95*10^6*z+2.98*10^15*t) = 0, i.e. for
-1.95000000*10^6*z+2.980000000*10^15*t = Pi/2+p*Pi, where p is any integer, and for any x.

## Getting rid of large numbers makes this less confusing:
ex1:= (x,t,z)-> Re(-exp(x + I*(z-t)));
plots:-implicitplot3d(
     ex1(x,t,z), x= -10..0, t= 0..10, z= 0..10,
     axes= boxed, style= patchcontour, scaling= constrained, shading= z,
     grid= [20$3]
);

@xhimi Well, here is a way. I still make the matrices small so that results are easy to understand.

restart;
with(LinearAlgebra):
n:=5: #Using nxn matrices
## Taking just 3 matrices:
A:=RandomMatrix(n,datatype=float);
B:=RandomMatrix(n,datatype=float);
C:=RandomMatrix(n,datatype=float);
L:=[A,B,C]; #Putting the matrices in a list
LE:=[Eigenvectors]~(L): #Finding vectors of eigenvalues and corresponding matrices whose columns are eigenvectors
LE[1]; #Result for A
LEV:=map2(op,2,LE); #Leaving out the vectors of eigenvalues
LEV[1]; #Matrix of eigenvectors of A
seq(LEV[1][..,i],i=1..n); #Eigenvectors for A
m:=nops(L); #Number of matrices
IntersectionBasis([seq([seq(LEV[j][..,i],i=1..n)],j=1..m)]);


