Dr. Patrick T


MaplePrimes Activity

These are answers submitted by PatrickT

There is a step-by-step analysis of the solution of a second-order ODE (a boundary value problem) in this user-written demo, showing various numerical algorithms that may be used to arrive at a numerical solution. Maybe you want something like that?

solving a simpler problem:

simplify(sum((1/(1+2/a))^(2*k+1)/(2*k+1),k = 0 .. infinity)) assuming a::posint;

                            1/2 ln(a + 1)

or up to n instead of infinity:
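Presumably the same command with an upper limit of n, something like the following; I have not checked what closed form Maple returns for the partial sum:

```maple
simplify(sum((1/(1+2/a))^(2*k+1)/(2*k+1), k = 0 .. n)) assuming a::posint, n::posint;
```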

I hope this wasn't some sort of homework we're not supposed to answer. Anyway, I haven't answered the original question.

The Help menu is really great: do a keyword search and navigate. I just did and hit LagrangeMultipliers. Here's the description; read on, there are examples of how to use the command.


Student[MultivariateCalculus][LagrangeMultipliers] - solve types of optimization problems using the method of Lagrange multipliers

Calling Sequence
     LagrangeMultipliers(f(x,y,..), [g(x,y,..), h(x,y,..),..], [x,y,..], opts)

     f(x,y,..)                 - algebraic expression; objective function
     [g(x,y,..), h(x,y,..),..] - list of algebraic expressions; constraint functions, assumed equal to 0
     [x,y,..]                  - list of names; independent variables
     opts                      - (optional) equation(s) of the form option=value where option is one of constraintoptions, levelcurveoptions, pointoptions, output, showconstraints, showlevelcurves, showpoints, title, or view; output options
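For example (my own toy problem, not from the help page), minimizing x^2 + y^2 on the line x + y = 1:

```maple
with(Student[MultivariateCalculus]):
# objective x^2 + y^2, a single constraint x + y - 1 = 0, variables x and y
LagrangeMultipliers(x^2 + y^2, [x + y - 1], [x, y]);
# the critical point should come out as x = 1/2, y = 1/2
```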

Sorry if I have offended you, Axel. I hadn't noticed it was homework; I just thought "ah, for once a question I can answer" and spat out an answer.

phi := ( 1 + sqrt(5) ) / 2:
f := n -> ( phi^n - (1-phi)^n ) / sqrt(5):
limit ( f(n-1) / f(n+2) , n = infinity );

                              -2 + 5^(1/2)

Others will no doubt be able to answer your question directly, but in the meantime, if you are certain that the answer is real, you can extract the real part with Re(). The imaginary part might have appeared as a result of imprecision in the numerical algorithms (is it very small? if so, that's probably what's going on).
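A minimal sketch of that check, with a made-up numerical result z standing in for yours:

```maple
z := 2.3 + 1.2e-13*I:        # hypothetical result polluted by a tiny imaginary part
Im(z);                       # inspect how small the imaginary part really is
simplify(fnormal(z), zero);  # fnormal rounds negligible floats to 0.; simplify(..., zero) drops the 0.*I
Re(z);                       # or simply keep the real part
```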

If your system has a stable or center manifold of dimension 2 or 3, for instance, and assuming you are interested in that manifold, you could draw the phase diagram on that surface.

There is also a field called "descriptive geometry," which is about representing objects in lower dimensions. I once saw it applied to a phase-diagram representation, but I've lost the reference and cannot recall where I might have seen it...

An obvious trick, for instance, to represent a system (x,y,z) with x>0, y>0, z>0 is to draw (x,y) in the usual way and z along the -x half-axis; you can then turn your cube around in space and check whether you are any wiser. My experience with 3-D phase diagrams is that they're a mess, so if you know good examples of understandable 3-D phase diagrams, please do give me a reference!
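To illustrate, here is the kind of 3-D phase portrait I have in mind, a sketch with a made-up linear system whose stable manifold is the plane z = 0:

```maple
with(DEtools):
sys := [diff(x(t), t) = -x(t), diff(y(t), t) = -2*y(t), diff(z(t), t) = z(t)]:
# a few trajectories started on or near the 2-D stable manifold z = 0
DEplot3d(sys, [x(t), y(t), z(t)], t = 0 .. 4,
         [[x(0) = 1, y(0) = 1, z(0) = 0.01],
          [x(0) = -1, y(0) = 1, z(0) = -0.01],
          [x(0) = 1, y(0) = -1, z(0) = 0]],
         stepsize = 0.05);
```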

Classroom Tips and Techniques: Estimating Parameters in Differential Equations

In this article, we consider estimating the parameters in a differential equation that governs a physical system from which we've extracted observational data. Although our technique would work for a boundary value problem, we will restrict our discussion to initial value problems.

You're more likely to receive help if you are specific: type up the ODE in Maple syntax, together with initial conditions, and give a sample of the experimental data you want to fit or compare it with.
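A rough sketch of one way to set up such a fit, with a made-up first-order IVP, a single unknown parameter k, and hypothetical data (replace with your own):

```maple
with(Optimization):
# model: y' = -k*y, y(0) = 1, with k to be estimated from data
sol := dsolve({diff(y(t), t) = -k*y(t), y(0) = 1}, y(t),
              numeric, parameters = [k]):
# hypothetical observations [t, y(t)]
data := [[0.5, 0.62], [1.0, 0.37], [1.5, 0.22]]:
# sum of squared residuals as a function of the candidate parameter value
SSE := proc(kk)
  local d, s;
  sol(parameters = [kk]);    # set the parameter in the numeric solution
  s := 0;
  for d in data do
    s := s + (eval(y(t), sol(d[1])) - d[2])^2;
  end do;
  s;
end proc:
NLPSolve(SSE, 0.1 .. 5);     # search for the best k on a bracket
```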

I know you have an interest in Brownian motions (dynamic random walks), so I just want to remind you of the gambler's fallacy, though I'm sure you know about it.

Let me quote from this:

"The probability of a coin coming up “heads” is independent of the outcomes of past coin flips. A run of several consecutive tails does not change the odds that heads will come up on the next flip. If we keep flipping the coin infinitely, the fraction of heads in the total number of outcomes converges to 50 percent—the long-run average. The convergence to 50 percent happens not because nature corrects deviations from the long-run average; rather, the unfolding random process dilutes deviations from the baseline frequency. To many people, the concept of dilution is not intuitive; they believe that deviations from the long-run average in games of chance will be corrected somehow as the game is played. This correction process is called mean reversion. This erroneous belief in mean reversion when outcomes are in fact independent is known as the gambler’s fallacy."
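The "dilution, not correction" point is easy to see numerically; a quick sketch of my own with simulated fair-coin flips:

```maple
with(Statistics):
randomize():
N := 100000:
flips := Sample(Bernoulli(1/2), N):   # 0/1 outcomes of a fair coin
heads := add(t, t in flips):
evalf(heads/N);                       # the relative frequency settles near 0.5 ...
heads - N/2;                          # ... but the absolute excess of heads need not shrink
```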

I'm always looking for tricks like these to learn. Some time ago I had an integral a little bit like this one; who knows, it may yield to your tricks. Thanks for sharing.

This is an application of Ito's Lemma, which is a second-order Taylor expansion applied to an arbitrary function of the geometric Brownian motion. The following proof of Ito's lemma may be useful:

you can then try to transcribe the steps outlined there into Maple...
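For reference, the statement to transcribe, for a geometric Brownian motion dS = mu*S dt + sigma*S dW and a twice-differentiable f(S, t), is (the standard form, not taken from the linked proof):

```latex
df = \left( \frac{\partial f}{\partial t} + \mu S \frac{\partial f}{\partial S}
     + \frac{1}{2}\sigma^{2} S^{2} \frac{\partial^{2} f}{\partial S^{2}} \right) dt
     + \sigma S \frac{\partial f}{\partial S}\, dW
```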

all the best,

f := x -> sin(x);
g := x -> 1+cos(x);
p := plot(f(g(x)),x=0..Pi/2):

Maple 13 / Classic

plot(cos(x) - x, x = -10^(-16) .. 10^(16));


Well, I don't know; what you have are assumptions about a stochastic process, a pricing model, a PDE, some boundary conditions, and its solution. It would probably stump anyone.

you may have seen this already:

and I noticed that Axel Vogt's blog contains stuff on Black-Scholes:
