rcorless


Editor-in-Chief of Maple Transactions (www.mapletransactions.org), longtime Maple user (1st use 1981, before Maple was even released). Most obscure piece of the library that I wrote? Probably `convert/MatrixPolynomialObject` which is called by LinearAlgebra[CompanionMatrix] to compute linearizations of matrix polynomials in several different bases. Do not look at the code. Seriously. Do not look. You have been warned.

MaplePrimes Activity


These are answers submitted by rcorless

Maple can do these integrals numerically, if you specify what A, B, and C are first.  In fact, you can remove one of the constants, so long as it is not zero, by dividing through and computing 1/A*Int( 1/(1 + beta*x + gam*x^(n+1)), x=0..X), where beta = B/A and gam = C/A.  You will have to specify X numerically as well.  If you want the answer for a lot of values of X, then you are better off solving the original differential equation numerically (again, you will have to specify the parameters).  Some of your equations will have movable poles (that is, they will be singular at certain points depending on the initial conditions).

eval(ODE, [A = 1, B = 2, C = 1/2, n = 1/2]);   # pick numerical values for the parameters
nsol := dsolve({diff(x(t), t) = x(t)^(3/2) + 2*x(t) + 1/2, x(0) = 0}, x(t), numeric);
plots[odeplot](nsol, [t, x(t)]);

That detects a singularity at about t = 1.7589.  That is correct: this ODE does have a singularity there.

It might be possible to get an answer in terms of hypergeometric functions by expanding 1/(A + B*x + C*x^(n+1)) in formal power series, integrating, and then summing.  But then your answer would be expressed as the solution to an equation containing hypergeometric functions, and so again you might be better off solving the original DE numerically.

One way to do this is to solve all your differential equations at once, numerically. simultaneousode.mw

I just randomly assigned initial conditions to P, Q, and R, and randomly chose a range to integrate over.  Change them to be what you want.

-r

The RegularChains package makes short work of these equations.  The following commands finish in seconds on my machine.  The solution should be complete and correct (and just the real solutions).


with(RegularChains);
sys := equations union restrictions;     # put the equations and the restrictions into one set
SuggestVariableOrder(sys);               # let RegularChains pick a good variable order
R := PolynomialRing(%);
dec := RealTriangularize(sys, R);        # a triangular decomposition of the real solutions
Display(dec, R);
 

Learning to interpret the output will take some work, though.  From the output as displayed, it appears to me that your equations have an infinite number of solutions (a nontrivial, positive-dimensional component).

Polynomial roots are continuous functions of the coefficients.  I think that's Ostrowski's theorem, but in any case they are continuous.  When the roots are multiple, they are not Lipschitz continuous, only Hölder continuous.
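For example, here is a toy illustration of that square-root (Hölder) behaviour at a double root:

solve(x^2 - 2*x + 1 - epsilon, x);    # the roots are 1 + sqrt(epsilon) and 1 - sqrt(epsilon)

So an O(epsilon) change in the constant coefficient moves the double root at x = 1 by O(sqrt(epsilon)).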

One way to display how polynomial roots vary as a coefficient varies is to use the routine algcurves:-plot_real_curve.


with(LinearAlgebra);
A := Matrix(3, 3, [[0, sqrt(2)*t*I, 0], [sqrt(2)*t*I, 4, t*I], [0, t*I, 9]]);
p := CharacteristicPolynomial(A, lambda);    # a polynomial in lambda and the parameter t
algcurves:-plot_real_curve(p, lambda, t);    # plot the real curve p(lambda, t) = 0


To get formulas for the curves, you can use the Davidenko equation.
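Differentiating p(lambda(t), t) = 0 with respect to t gives the Davidenko ODE dlambda/dt = -diff(p, t)/diff(p, lambda), evaluated at lambda = lambda(t).  Here is a minimal sketch for the matrix above; I follow the branch that starts at the eigenvalue lambda = 9 at t = 0, and note that the ODE breaks down wherever diff(p, lambda) vanishes (that is, wherever two eigenvalues collide), so a branch can only be followed up to its first collision.

davidenko := diff(lambda(t), t) = -eval(diff(p, t)/diff(p, lambda), lambda = lambda(t));
lsol := dsolve({davidenko, lambda(0) = 9}, lambda(t), numeric);
plots[odeplot](lsol, [t, lambda(t)], 0 .. 3);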

Your equation is not zero-dimensional.  Given z, there is a rational surface for y, namely y = (1 - x^(m+1)*z)/(1 - x^(n+1)*z).

In your worksheet you break x and y up into real and imaginary parts, but you do not break z up into its real and imaginary parts.  This may be a further source of confusion.

I did not see the code that you were using, although the tags say that you are using dsolve,numeric (which is what I would have guessed).

This is best explained by example.  Suppose I am solving the differential equation y' = x^2 + y^2, with initial condition y(0) = 1.  You can show that on 0 <= x <= 1 the solution of this equation lies between u(x), which satisfies u' = 1 + u^2, and v(x), which satisfies v' = v^2, with u(0) = 1 and v(0) = 1 as well.  Both of those equations have exact solutions: u(x) = tan(x + Pi/4) and v(x) = 1/(1 - x).  Both are singular on 0 <= x <= 1.  Therefore y(x) must also be singular there.  (In fact Maple can solve that equation in terms of Airy functions or Bessel functions, but we don't really need that here.)

If you want to solve y' = x^2 + y^2 numerically on the interval 0 <= x <= 1, then dsolve,numeric will try its best, and its stepsize heuristics will (quite accurately) find the singularity.  It will faithfully give you the solution as far as it can be computed.  But it will also warn you that there might be a singularity.  (There is).
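In code, the experiment looks like this (a minimal sketch):

ivp := {diff(y(x), x) = x^2 + y(x)^2, y(0) = 1};
ysol := dsolve(ivp, y(x), numeric);
plots[odeplot](ysol, [x, y(x)], 0 .. 1);    # integrates as far as it can, and warns about the singularity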

Does this help? Here is code that does this in Maple.  dsolvequestion.mw

See ?simplify,siderels for details.

In this particular case it doesn't appear to do much, however: the "simplification" may only be technical.

But perhaps it will be useful for your larger examples.

If you want to remove a particular variable using your side condition, you can use "resultant".

resultant(A, a01*(a20*b20-a30*b10)+a11*a20*b11, a01) returns a factored polynomial in all the variables except a01, for instance.

Be warned that this can frequently increase the size of the equations...
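Here is a toy illustration with made-up polynomials (not yours): eliminating y from a circle and a line leaves a polynomial in x whose roots are the x-coordinates of the common solutions.

p := x^2 + y^2 - 1;
q := x - y;
resultant(p, q, y);    # 2*x^2 - 1 (up to sign): it vanishes exactly where p and q have a common root in y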

First, your system of equations has both a Va(t) and a Va(T), which I assume is a typo.  Also, you write the derivatives as literal fractions instead of using diff (such as diff( Va(t), t )), and so you won't be able to use dsolve as a check once you have chosen your parameters.

I changed your Va(T) to a Va(t).  Maybe this was not what you intended.  Anyway, once you do that, you can find the equilibrium solutions by setting all derivatives to zero and assuming that all your variables are at equilibrium; this means replacing (e.g) Va(t) by some symbol for the equilibrium solution.  I chose the symbols with a capital E on the end: E for equilibrium.

eval(%, {Cb(t) = CbE, Df(t) = DfE, Fn(t) = FnE, Rs(t) = RsE, Sc(t) = ScE, Va(t) = VaE})

The resulting system of *polynomial* equations in these five unknowns can be solved by using Groebner:-Basis and Groebner:-NormalSet.  Your system of equations is apparently zero-dimensional, so for each choice of the parameters you have only a finite number of solutions; because the normal set has five elements, there will be (for each choice of parameters) five solutions.
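The pattern is the following sketch, shown on a small made-up zero-dimensional system (your equilibrium equations, with parameters substituted, go where mine are):

with(Groebner):
sys := [x^2 + y^2 - 5, x*y - 2];                  # a made-up stand-in for your system
tord := tdeg(x, y);
G := Basis(sys, tord);
ns, rv := NormalSet(G, tord);                     # ns is a monomial basis of the quotient ring
Mx := MultiplicationMatrix(x, ns, rv, G, tord);   # the matrix of "multiply by x" on that basis
LinearAlgebra:-Eigenvalues(Mx);                   # the x-coordinates of the solutions (here 1, -1, 2, -2)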

You can see my old paper https://dl.acm.org/doi/10.1145/242961.242968 for more details of this method.

This analysis does not work out special cases (Maple used to allow one to do it, but now it doesn't).  

You could also just try "solve" but that is somehow a bit less controlled and controllable; and the answers can be enormously large, dauntingly so.  Stopping at an eigenvalue problem usually keeps things under better control and results in more tractable answers.

I did not carry the solution further because I don't know if your T versus t was really a typo.

For the classical stability analysis, you need to compute the Jacobian matrix, evaluate it at the equilibrium in question, and decide if all its eigenvalues are in the left half plane or not.

(There's more to it than that, but that's the general outline that most people follow, at least after their first course in this stuff.)
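Here is the pattern on a made-up two-variable system (definitely not your model; the right-hand sides below are just placeholders):

F := [x*(1 - x) - x*y, y*(x - 1/2)];                     # hypothetical right-hand sides
J := VectorCalculus:-Jacobian(F, [x, y]);
eqs := [solve({F[1] = 0, F[2] = 0}, {x, y})];            # all the equilibria
for s in eqs do
    print(s, LinearAlgebra:-Eigenvalues(eval(J, s)));    # stable if every eigenvalue has negative real part
end do;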

THEN you really need to check whether the computation is correct or not: set up the differential equations with those parameter values, start at some initial point near the equilibrium, and see whether the solution tends to the equilibrium or not (if all eigenvalues have negative real part, and you start "close enough", this should happen.  But sometimes it doesn't, and sometimes it fails for interesting reasons in the modelling context).  At least with dimension only 5 you won't have to worry much about pseudospectra.

-r

Equilibrium._solutions.mw

My preferred method for this is to create eigenvalue problems.  A tdeg basis works just fine for zero-dimensional problems (those with a finite number of isolated solutions).

If you had supplied a worksheet, instead of code that I had to cut and paste and whose output I had to remove, that would have been easier; but I will try to use your example.

Ah.  Your example is not zero-dimensional.  This makes everything more complicated.  And it has multiple roots.  Which again makes things more complicated.  

So.  What you can do is take a random projection---a line---which will generically intersect the nontrivial component in isolated points.  Then the resulting Groebner basis can be solved by the multiplication matrix (normal set) method.  I attach a worksheet showing that this can be done, but I stopped short of investigating the multiple roots.  The solution I find does not match your constraints; you will have to vary the random projection in order to find all the points on the nontrivial component.

Actually the problem is small enough that I wound up computing and factoring the characteristic polynomial, which suggests that the lex order basis method will work with a random projection added.  The multiplication matrix method is more powerful, but you have to be comfortable with eigenvectors (and in this case eigenspaces of eigenvalues with nontrivial multiplicity).

"If this was easy, we'd all be doing something else."

-r

primesgroebnerexample.mw

I have made some changes to your document, absolutely guessing at what it is that you want.  0_one.mw

With a little luck, you will be able to read my changes and correct my guesses.

The main change I made is to replace things like k(n) := exp(2*Pi*n/L) (or whatever it was) with the functional form k := n -> exp(2*Pi*n/L) (or whatever it was).  This is more reliable.
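Schematically (using the same made-up formula; L is whatever you have assigned it to be):

k := n -> exp(2*Pi*n/L);    # a procedure: k(1), k(2), ... all evaluate as you expect
# whereas k(n) := exp(2*Pi*n/L), in 1-D input, merely stores a remember-table entry
# for the literal symbol n, and k(1), k(2), ... stay unevaluated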

Twice in the worksheet you used = when I suspect you wanted to use assignment.

If you had correctly used := in the second instance, you would have overwritten your definition of f(t), which I can't believe is what you wanted.

So have a look at what I have done, and see if this helps you to get towards what you really want.

-r

PS I did add some plot commands, again by guessing what it was that you wanted.

 

I have tried to upload the worksheet a few times now but I don't see that it has uploaded.  Fine.

 

Anyway I get that the equation is NOT what is shown in the image, but rather

 

You ask "how to modify the given code to solve a boundary-value problem".  One modification is to throw it all away and replace it with a call to dsolve.  But that's probably not what you want (I give a screenshot of that "solution" below, in case I am wrong).

Another way to modify your code is to write your own numerical boundary-value problem solver using (for instance) quasilinearization (your problem is a second-order nonlinear boundary-value problem, and the process of quasilinearization replaces the nonlinear problem with a sequence of linear ones --- see e.g. https://en.wikipedia.org/wiki/Draft:Quasilinearization for a description).  Is that what you want to do?  One purpose of such a thing is to learn how BVP solvers work.

Your code looks as if it might be a fixed-stepsize finite-difference approach (although because you have no comments I am not sure, and I am not going to try very hard to understand it).  Such methods are suboptimal, but perhaps an OK starting point if you are trying to learn how these things work.  Another bad method for solving a BVP, starting from an initial-value problem method, is the "shooting" method (OK, it can be not-bad sometimes, and for this problem it's probably OK).  What you do is set up a nonlinear equation where the input is the slope y'(0) and the output is the difference y(1) + y'(1) - 2*exp(1), and you call "fsolve" on that nonlinear equation; this solves for the initial slope you need (the angle to "shoot" at) in order to hit the target at the other side.

Is that what you want?  Are you trying to learn how "shooting" methods work?
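Here is a minimal sketch of the shooting idea on a stand-in problem, since I do not know your actual equation: I use y'' = y with y(0) = 1 and the far boundary condition y(1) + y'(1) = 2*exp(1), whose exact solution is exp(x), so the slope found should be close to 1.

shoot := proc(s)    # s is the guessed initial slope y'(0)
    local sol, v;
    if not type(s, numeric) then return 'procname'(s) end if;
    sol := dsolve({diff(y(x), x, x) = y(x), y(0) = 1, D(y)(0) = s}, y(x), numeric);
    v := sol(1.0);
    eval(y(x) + diff(y(x), x), v) - 2*exp(1.0);    # the residual in the boundary condition at x = 1
end proc:

fsolve(shoot, 0 .. 2);    # the slope to "shoot" at so the far boundary condition is hit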

 

The statement k:=0 assigns an integer to the variable k. 

Then your "for" loop uses the variable k as an "indexed name" (to use old Maple terminology).  But k[1] is interpreted as 0[1] which doesn't make sense because you can't index a constant.

The cure is to use different names for different things.  Maybe use n := 0 instead of k := 0, and then replace all instances of the variable "k" in your loop with "n", while keeping k[1] as k[1].  

(I am assuming you are using k[1] because you want also to use k[2], k[3], k[4], etc for some other reason.  If that's not true, then you are better off just using k or k1 or some other non-indexed name)

My own personal convention is to use upper case letters for the limits of loops.

N := 10;

for k to N do 

  blah blah blah with both k and N

end do;

Using i, j, k, and sometimes ell (I *really* dislike using the single letter l because it looks so much like I or 1) for loop variables helps me to keep from introducing bugs.  Nothing's perfect, of course.

 

There are many ways you can do this, but one simple way is to use the Array data structure.  In this picture I have converted lists of numbers into Arrays and divided them all at once.  Is this what you wanted?  Your data looks like strings, though, so you may need to convert them to numbers first.
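Something along these lines, with made-up numbers (since I am guessing at your data):

A := Array([2.0, 4.0, 6.0, 8.0]);
B := Array([1.0, 2.0, 3.0, 4.0]);
A /~ B;                              # elementwise division, all at once

parse~(["2.5", "3.5", "4.5"]);       # if the entries really are strings, convert them to numbers first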

 

As a teacher of numerical analysis I have seen many times that students will use variable names like xk when they really want a vector of values (in LaTeX subscript notation $x_0$, $x_1$, $x_2$, and so on; in Maple notation x[0], x[1], x[2], and so on), which the instructor mistakenly believes the students understand when the instructor writes $x_k$ on the board, possibly even saying afterwards k = 0, 1, 2, ... .

Then there's also xdot, meaning the value of the derivative dx/dt at time t.  In LaTeX this is $\dot x(t)$.  In Maple thanks to Edgardo (I think) there is a way to do this which I always forget but I will go look up now; Ah yes it's in https://www.mapleprimes.com/questions/132963-How-To-Use-Dot-time-Derivative .  Actually I think there was some news about that in a recent release---will look.

Then your equations look like a simple forward difference for the time derivative and for the second time derivative, and your final equation looks like a differential equation being approximated by a finite difference.  And indeed it looks like a very familiar equation from my good old vibration days: m x'' + c x' + k x = F, where m is the mass, c is the damping, k is the stiffness (you probably meant K, to keep the notation distinct from the stepping index variable k), and F is the forcing function ( $ m \ddot x(t) + c \dot x(t) + K x(t) = F(t)$ ).

So, it looks like what you REALLY want is to solve that vibration differential equation given the mass, damping, stiffness and forcing.

There are a lot of easy ways to do that in Maple; you don't have to reinvent the numerical wheel.  If you actually are trying to invent a better numerical method than the existing ones, well, okay, but you might want to read something like, say, Hairer, Nørsett, and Wanner first.  It is a fun activity, and I have done it myself.  Oscillatory problems are not so easy, but you can go a long way with standard methods.
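For instance, with constant (made-up) coefficients the whole computation is just:

m, c, K := 1, 1/2, 4:
vib := m*diff(x(t), t, t) + c*diff(x(t), t) + K*x(t) = sin(3*t);
vsol := dsolve({vib, x(0) = 0, D(x)(0) = 0}, x(t), numeric);
plots[odeplot](vsol, [t, x(t)], 0 .. 20);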

But here is an example of solving a 2d vibration problem numerically in Maple using the built-in solver.  Just to make it fun I made the mass, damping, and stiffness time-dependent.

artificialvibrationproblem.mw
