Carl Love

Wayland, Massachusetts, United States
My name was formerly Carl Devore.

MaplePrimes Activity


These are answers submitted by Carl Love

Hypergeometric functions are probably not what you'd consider "ordinary", but the following is the best that I can do with this. It might be possible to convert the hypergeometric functions in this answer to other functions. To make any progress with this problem in Maple, it is necessary to make an assumption such as f > 1. The additional assumption t > 1 will clean up the signums in the answer.

dsolve(D(y)(t) = sqrt((f-1)/(t^f-t))) assuming f>1, t>1;

Your 16 equations can be divided into four independent sets of equations with four variables each. The first set is

{a1*a3 = x1111, a1*a4= x1112, a2*a3 = x1211, a2*a4= x1212}.

The other three sets are completely analogous. It is almost obvious that there is no solution with the x's completely arbitrary because we must have x1212*x1111 = x1112*x1211 since both sides equal a1*a2*a3*a4.
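As a quick sanity check of that necessary condition, here is a small SymPy sketch (Python is used here purely for illustration; the names mirror the equations above):

```python
import sympy as sp

a1, a2, a3, a4 = sp.symbols('a1 a2 a3 a4')
# The first set of four equations determines these products:
x1111, x1112, x1211, x1212 = a1*a3, a1*a4, a2*a3, a2*a4
# Both sides of the compatibility condition equal a1*a2*a3*a4:
assert sp.simplify(x1212*x1111 - x1112*x1211) == 0
```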

Thank you for posting this very interesting type of ODE. I am not sure of the exact definition of "well-posed", but this problem is solvable.

I hope that the code below is self-explanatory. If there's anything that you (or any reader) do not understand, please ask.

Numeric solution of a BVP with an interior-point condition.

4 July 2013

Here we solve the BVP  

     u'' + 3.4/(2*u(0.25) + u(0.16) + 2) + u*u' = 0, u(0) = 0.1, u(0.5) = 1.

restart:

We replace the interior-point condition with a constant: _C = 2*u(0.25) + u(0.16).

ode:= diff(u(t),t$2) + 3.4/(_C+2) + u(t)*diff(u(t),t) = 0:

BCs:= u(0)=0.1, u(0.5)=1:

We create a procedure whose input is a numeric value of that constant. The procedure calls dsolve using that value, evaluates the interior-point condition, and returns the difference between the input value and the evaluated condition. Hence, the goal is to find the input value that makes the output value zero.

F:= proc(C)
global u, t, _C, ode, BCs;
local Sol:= dsolve({eval(ode, _C= C), BCs}, [u(t)], numeric);
     userinfo(1, {F}, sprintf("Trying C = %22.15f", C));
     C - (2*eval(u(t), Sol(.25)) + eval(u(t), Sol(.16)))
end proc:
     

infolevel[F]:= 1:

C:= fsolve(F);

F: Trying C =      0.000000000000000

F: Trying C =      0.100000000000000
F: Trying C =     -0.010000000000000
F: Trying C =      1.608509256564673
F: Trying C =      1.664385631806343
F: Trying C =      1.663643097563830
F: Trying C =      1.663643320150873

F: Trying C =      1.663643320150873
F: Trying C =      1.663643320150889

1.66364332015089

Sol:= dsolve({eval(ode, _C= C), BCs}, [u(t)], numeric):

Verify solution:

C - (2*eval(u(t), Sol(.25)) + eval(u(t), Sol(.16)));

HFloat(1.27675647831893e-15)

The error is far below dsolve's error tolerances!

plots:-odeplot(Sol, [t,u(t)], t= 0..0.5);
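For readers outside Maple, the same shoot-on-a-constant idea can be sketched in Python with SciPy, where solve_bvp and brentq stand in for dsolve and fsolve. This is an illustrative sketch of the technique, not the worksheet's code:

```python
import numpy as np
from scipy.integrate import solve_bvp
from scipy.optimize import brentq

def interior_value(C):
    """Solve u'' + 3.4/(C+2) + u*u' = 0, u(0)=0.1, u(0.5)=1 for a fixed C,
    then return the interior-point expression 2*u(0.25) + u(0.16)."""
    def rhs(t, y):                      # y[0] = u, y[1] = u'
        return np.vstack([y[1], -3.4/(C + 2.0) - y[0]*y[1]])
    def bc(ya, yb):
        return np.array([ya[0] - 0.1, yb[0] - 1.0])
    t = np.linspace(0.0, 0.5, 50)
    y_guess = np.vstack([0.1 + 1.8*t, np.full_like(t, 1.8)])  # linear guess
    sol = solve_bvp(rhs, bc, t, y_guess)
    return 2.0*sol.sol(0.25)[0] + sol.sol(0.16)[0]

# Find the self-consistent constant: C = 2*u(0.25) + u(0.16)
C_star = brentq(lambda C: C - interior_value(C), 0.0, 5.0)
```

The root found this way should agree with the worksheet's fsolve result of about 1.6636.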

 



Download BVP_fsolve.mw

I believe the solution below to be more computationally stable. It uses fsolve to avoid the issue of multiple complex solutions returned by solve. For the derivative of the implicit function, I used the standard formula for that derivative in terms of the partial derivatives of the implicit equation: dy/dx = -(∂F/∂x)/(∂F/∂y), where F(y, x) = 0.
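The implicit-differentiation formula itself is easy to sanity-check. Here is a tiny SymPy sketch on a hypothetical example (the unit circle, not the pH equation):

```python
import sympy as sp

x, y = sp.symbols('x y')
F = x**2 + y**2 - 1                      # F = 0 defines y implicitly in x
dydx = -sp.diff(F, x) / sp.diff(F, y)    # dy/dx = -(dF/dx)/(dF/dy)
assert sp.simplify(dydx - (-x/y)) == 0   # matches the known derivative -x/y
```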

restart:

V[tot]:= V[0] + V__b + V[ind]:
eq:= H + C[b]*V__b/V[tot]
     = 10^(-14)/H +
       C[0]*V[0]/(V[tot]*(1+H/Ka[1])) +
       C[ind]*V[ind]/(V[tot]*(1+H/Ka[ind])
);

H+C[b]*V__b/(V[0]+V__b+V[ind]) = (1/100000000000000)/H+C[0]*V[0]/((V[0]+V__b+V[ind])*(1+H/Ka[1]))+C[ind]*V[ind]/((V[0]+V__b+V[ind])*(1+H/Ka[ind]))

H:=       10^(-pH):
C[0]:=    10^(-2):
C[ind]:=  83/100:
V[ind]:=  10^(-2):
V[0]:=    10:
Ka[1]:=   10^(-48/10):
Ka[2]:=   10^(-9):
C[b]:=    10^(-2):
Ka[ind]:= 10^(-8):

F:= unapply((lhs-rhs)(eq), (pH,V__b)):

f:= V__b-> fsolve(F(pH,V__b), pH):

plotopts:=
     numpoints= 400,
     thickness= 2,
     gridlines,
     tickmarks= [spacing(2), default]
:

plot(f, 0..40, plotopts);

`f'`:= (-D[2]/D[1])(F) @ (V__b-> (f(V__b),V__b)):

plot(`f'`, 0..40, plotopts);

Zoom in on the critical region of the plot.

plot(`f'`, 9.9..10.9);

 

 

Download pH.mw

Here's a substantially different approach to parallelizing the code. This one does show a benefit from parallelizing, provided that you can accept the solution divided into several chunks. Having the solution in chunks might also be useful for parallelizing a subsequent step.

The trick in this technique is to divide the side vectors into equal-sized (or nearly equal-sized) chunks such that there is one chunk per processor. Then we use Threads:-Seq with the [tasksize= 1] option.
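The chunk-per-CPU pattern is not Maple-specific. Here is a Python sketch of the same idea with NumPy/SciPy, working over the reals rather than a finite field; ThreadPoolExecutor plays the role of Threads:-Seq, and the sizes are arbitrary:

```python
import os
import numpy as np
from scipy.linalg import solve_triangular
from concurrent.futures import ThreadPoolExecutor

rng = np.random.default_rng(0)
N, n = 128, 1024
# Well-conditioned upper-triangular A and a block B of n side vectors
A = np.triu(rng.standard_normal((N, N))) + N*np.eye(N)
B = rng.standard_normal((N, n))

nc = os.cpu_count() or 1
chunks = np.array_split(B, nc, axis=1)     # one chunk of columns per CPU
with ThreadPoolExecutor(max_workers=nc) as ex:
    Xs = list(ex.map(lambda Bi: solve_triangular(A, Bi), chunks))
X = np.hstack(Xs)                          # reassemble, as with `<|>`(Xs[])
```

As in the worksheet, the reassembly step (np.hstack here) is the part whose cost can eat the parallel speedup, so keeping the chunks separate is often preferable.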

Parallelizing code for solving matrix equation AX=B  (over a finite field with A upper triangular) by splitting B into one piece for each CPU.

restart:

The float[8] technique used in this worksheet is extremely fast but is limited to moduli less than 2^25. If you need a larger modulus, the code is very easy to modify (let me know), and it is just a little slower.

N:= 2^7:  #order of matrix A
n:= 2^14: #number of vectors b[i] (number of columns of B)
p:= 127:  #prime modulus

isprime(p);

true

Generate a random example upper triangular nonsingular A, convert to shape= rectangular (required by LinearAlgebra:-Modular), and input to Modular.

A:= LinearAlgebra:-RandomMatrix(
     N, N,
     shape= triangular[upper, unit],
     datatype= float[8]
):

A:= Matrix(A, shape= rectangular):

A:= LinearAlgebra:-Modular:-Mod(p, A, float[8], C_order):

Modular works much faster (two to three times faster) in C_order than in Fortran_order.

 

Generate a list of random side vectors b[k].

b:= [seq](
     LinearAlgebra:-RandomVector(
          N, datatype= float[8]
     ), k= 1..n
):

We assume that all the side vectors are stored in a single Matrix to start with.

B:= `<|>`(b[]):

B:= CodeTools:-Usage(
     LinearAlgebra:-Modular:-Mod(p, B, float[8], C_order)
):

memory used=16.00MiB, alloc change=16.00MiB, cpu time=78.00ms, real time=78.00ms

 

This solving procedure is used for all solutions.

Solve:= proc(B::Matrix, A::Matrix, p::posint)
     #Command below operates on B inplace.
     LinearAlgebra:-Modular:-BackwardSubstitute(p, A, B);
     B
end proc:

 

Get the number of CPUs so that we can compute the size of the pieces into which to divvy up the side vectors such that there is one piece per CPU.

NC:= kernelopts(numcpus);

8

 

The method is to divide B into NC equal-sized pieces (or as close to equal as possible). First, I will use the method in sequential (i.e., not parallel) code, just for comparison purposes.

Xs:= CodeTools:-Usage(
     [seq](
          Solve(
               B[.., trunc((k-1)*n/NC)+1..trunc(k*n/NC)],
               A, p
          ), k= 1..NC
     )   
):

memory used=16.01MiB, alloc change=40.03MiB, cpu time=218.00ms, real time=203.00ms

 

To make the above run in parallel, only one change is needed: Change seq to Threads:-Seq[tasksize= 1].

Xs:= CodeTools:-Usage(
     [Threads:-Seq[tasksize= 1]](
          Solve(
               B[.., trunc((k-1)*n/NC)+1..trunc(k*n/NC)],
               A, p
          ), k= 1..NC
     )   
):

memory used=16.05MiB, alloc change=31.34MiB, cpu time=672.00ms, real time=94.00ms

The above solution is returned as a list of NC matrices. If we put them together into a single matrix, then the time required eliminates any benefit obtained by parallelizing. However, having the matrices already divided into NC pieces might be useful in parallelizing the next step, whatever that is.

X:= CodeTools:-Usage(`<|>`(Xs[])):

memory used=48.00MiB, alloc change=16.00MiB, cpu time=360.00ms, real time=360.00ms

 

Verify the results (not really necessary): check that A.X - B = 0 (mod p).

ArrayTools:-IsZero(
     LinearAlgebra:-Modular:-AddMultiple(
          p,
          -1 mod p,
          LinearAlgebra:-Modular:-Multiply(p, A, `<|>`(Xs[])),
          LinearAlgebra:-Modular:-Mod(p, B, float[8])
     )
);

true

Plain solve of the whole matrix at once. This is the time to beat. (Note that the command below overwrites B.)

X:= CodeTools:-Usage(Solve(B, A, p)):

memory used=0.55KiB, alloc change=0 bytes, cpu time=171.00ms, real time=172.00ms

So there is some benefit to parallelizing if we accept having the solution divided into several matrices.

 

Download ParallelModular2.mw

I got a partial analytic solution and a numeric solution. There is something that I don't understand about the analytic solution. My question is in the worksheet. Hopefully someone else will answer it.



restart:

U:= u(x,t):

PDEtools:-declare(U);

u(x, t)*`will now be displayed as`*u

pde:= diff(U,t) + U*diff(U,x) - diff(U, x$2) = 0;

diff(u(x, t), t)+u(x, t)*(diff(u(x, t), x))-(diff(diff(u(x, t), x), x)) = 0

I changed your 0.1 to 1/10. I was getting an error message with the 0.1.

bc:= u(-9,t) = 2, u(9,t) = -2, u(x, 1/10) = -2*sinh(x)/(cosh(x)-exp(-1/10));

u(-9, t) = 2, u(9, t) = -2, u(x, 1/10) = -2*sinh(x)/(cosh(x)-exp(-1/10))

We can get a solution if we omit the boundary conditions.

solA:= pdsolve(pde, U);

u(x, t) = -2*_C2*tanh(_C2*x+_C3*t+_C1)-_C3/_C2

However, I don't understand why that solution does not contain arbitrary univariate functions. I don't know whether that's a bug or whether pdsolve is "supposed to" return solutions like that sometimes.

 

Verify the above solution:

pdetest(solA, pde);

0

There's no solution returned if we include the boundary conditions:

pdsolve({pde, bc}, U);

 

We can include the boundary conditions and get a numeric solution.

solN:= pdsolve({pde}, {bc}, U, numeric, spacestep= 0.05, timestep= 0.01);

module () local INFO; export plot, plot3d, animate, value, settings; option `Copyright (c) 2001 by Waterloo Maple Inc. All rights reserved.`; end module

solN:-plot(x= -3..3, t= 3);

By overlaying plots (not done here), we can see that as t -> infinity, u -> -2*tanh(x), which corresponds with the symbolic solution.

solN:-animate(x= -3..3, t= 1/10..1, numpoints= 2^8, frames= 2^4);

 



Download numPDEsol.mw

Your first version of eqd has a coefficient of omega0^2 on the y(t) term. The coefficient is omitted in the second version. If I include the coefficient in the second version, then I get the same phase portrait: the nearly concentric circles.

plot([B(t), A(t), t= t1..t2]);

where t1..t2 is some definite range of t-values.

For your first case, the derivative of f(x,y) with respect to t is, of course, 0. Perhaps you mean Diff(f(x,y,t), t, t)?

Your second case is a partial differential equation because the unknown function has more than one independent variable. Consequently, the command is pdsolve rather than dsolve.

pdsolve((Diff(f(x,y), (x,y), (x,y)))*x+(Diff(f(x,y), (x,y)))*y = 0, f(x,y));

In Kitonum's answer, take the line with the map command, and change the last character of that line from a colon (:) to a semicolon (;). Then you will see the output.

You could also do f(0.5), f(0.9), etc.

I wasn't chastising you for bad vocabulary. I truly didn't understand which alternative you meant. But Kitonum figured it out.

When you assigned sysdiff and fcns, you used curly braces ({ }). In your dsolve command you used them again. So the first argument is now a set containing a set rather than the set containing equations that is expected. So, changing the dsolve command to dsolve(sysdiff, fcns) will produce a solution. But like your other systems from today, the solution contains spurious constants of integration. So once again, make the command

dsolve(sysdiff, fcns, method= laplace);

This is very similar to a problem you posted earlier today, where Preben suggested adding method= laplace to the dsolve arguments. That works here also, and it is the only thing that you need to add to get the plots. I don't know why it works. The reason that the result from the plain dsolve would not plot was that there were still integration constants left in the solution. (Should it be considered a bug?)

Remember, if you use dsolve(..., numeric) then you should use plots:-odeplot for the plotting (it is possible to use plot also with numeric, but it requires a different syntax).

It's essentially the same situation as with your last question. The difference is that this is solve rather than dsolve. You assign the result of the solving command to a variable. In this case, you used solution as the variable. Then my recommended technique is to access the contents of solution with the eval. To get the plot in this case, try

plot(eval(C2, solution), omega= 0..2, -3..3);

You wrote: How to get all solution?

I would think that, given the huge number of similar Questions that you've asked (and had Answered) over the past year, you'd be able to figure that out on your own by now.

I think that it is silly to ask for all solutions to this problem because there are a huge number, and most subsets of four points are a solution.

Nonetheless, all the solutions can be obtained with this code, which takes more than 30 minutes to run:

Sols:= table():
for P4 in combinat:-choose(L,4) do
     ct:= 0; #count non-right triangles for this group of 4 points
     for P3 in combinat:-choose(P4,3) do
          geom3d:-triangle(
               ABC,
               [seq](geom3d:-point(A||k, P3[k][]), k= 1..3)
          );
          if geom3d:-IsRightTriangle(ABC) then  break  end if;
          ct:= ct+1
     end do;
     # 4 triangles can be made from 4 pts: C(4,3) = 4.
     if ct = 4 then  Sols[P4]:= [][]  end if
end do:

Sols:= {indices}(Sols, nolist):
nops(Sols);

                             276392

A better technique is to use a Monte Carlo method to approximate the number of solutions. We use combinat:-randcomb to repeatedly select four points at random.

N:= 2^10:   #number of random trials
sol_ct:= 0: #number of solutions found
for k to N do
     P4:= combinat:-randcomb(L, 4);
     ct:= 0; #count non-right triangles for this group of 4 points
     for P3 in combinat:-choose(P4,3) do
          geom3d:-triangle(
               ABC,
               [seq](geom3d:-point(A||k, P3[k][]), k= 1..3)
          );
          if geom3d:-IsRightTriangle(ABC) then  break  end if;
          ct:= ct+1
     end do;
     # 4 triangles can be made from 4 pts: C(4,3) = 4.
     if ct = 4 then  sol_ct:= sol_ct+1  end if
end do:

trunc(evalf(sol_ct/N)*binomial(nops(L), 4));


                             278336
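The Monte Carlo scaling above (sample success fraction times C(|L|, 4)) is language-independent. Here is a Python sketch of the same estimator on a hypothetical point set standing in for L (a 4 x 4 x 4 integer grid, chosen only for illustration):

```python
import random
from itertools import combinations
from math import comb

def is_right_triangle(p, q, r):
    # A right angle at some vertex: the two edge vectors there are orthogonal.
    for a, b, c in ((p, q, r), (q, p, r), (r, p, q)):
        u = tuple(b[i] - a[i] for i in range(3))
        v = tuple(c[i] - a[i] for i in range(3))
        if sum(ui*vi for ui, vi in zip(u, v)) == 0:
            return True
    return False

# Hypothetical point set standing in for L: a 4 x 4 x 4 integer grid.
L = [(i, j, k) for i in range(4) for j in range(4) for k in range(4)]

random.seed(1)
N, hits = 1000, 0                        # number of random trials, successes
for _ in range(N):
    P4 = random.sample(L, 4)
    if not any(is_right_triangle(*P3) for P3 in combinations(P4, 3)):
        hits += 1                        # all 4 triangles are non-right

estimate = hits * comb(len(L), 4) // N   # scale the sample fraction
```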

There are two ways: You can use dsolve(..., numeric) and plots:-odeplot, or you can use an analytic dsolve solution with a parametric plot.

restart:

sys:= diff(x1(t), t$2) = f, diff(x2(t), t$2) = g:

fcns:= x1(t), x2(t):

ICs:= x1(0) = -1, D(x1)(0) = 0, x2(0) = 0, D(x2)(0) = 0:

f:= -k1*x1(t) + k2*(x2(t)-x1(t)):

g:= -k2*(x2(t)-x1(t)) - k3*x2(t):

k||(1..3):= 4, .8, 4:

Purely numeric solution allows you to use odeplot.

Sol_N:= dsolve({sys,ICs}, {fcns}, numeric):

plots:-odeplot(Sol_N, [D(x1)(t), x1(t)], t= -10..10, numpoints= 2^10);

And do likewise for the other plot that you want.

 

We can get the same plots with the purely analytic solution.

Sol_A:=  dsolve({sys, ICs}, {fcns});

{x1(t) = -(1/2)*cos(2*t)-(1/2)*cos((2/5)*35^(1/2)*t), x2(t) = -(1/2)*cos(2*t)+(1/2)*cos((2/5)*35^(1/2)*t)}

There's no need to use evalf. I would like to discourage you from using assign. Usually, eval is easier to work with.

plot(eval([diff(x1(t), t), x1(t), t= -10..10], Sol_A), numpoints= 2^10);

 


Download phaseportrait.mw
