mmcdara

7891 Reputation

22 Badges

9 years, 63 days

MaplePrimes Activity


These are replies submitted by mmcdara

Hi,
I think you will find some useful answers in the "Posts" section (search for "parallel programming"), in particular here
133632-Parallel-Programming-In-Maple

@hajnayeb 

You want to use a non-exact Monte-Carlo Method (MCM) to validate exact theoretical results, is that it?
That seems a little strange to me.

So, what do you want exactly: 

  • Assess the mean and variance of X(t) for any t with MCM?
  • Assess the autocorrelation function of X or its spectral density with MCM?
  • Draw a sample of trajectories X(t, omega) (here omega represents "the randomness") for different omega?
  • Do all this stuff in the physical space or in the complex dual space?


Three Maple packages could help you if you want to work with frequency response functions:

  • DynamicSystems
  • Finance (stochastic processes)
  • SignalProcessing

If you want to operate in the physical domain, it seems to me that you have practically all you need in my two previous replies.
OK, the autocorrelation is not computed, but you can use the Statistics or SignalProcessing packages to do that, and the latter to compute the spectral density.
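To make that last point concrete, here is a minimal, hand-rolled sketch of a sample autocorrelation estimator (my own illustration: the signal V below is just placeholder white noise standing in for a sampled trajectory, and dedicated Statistics/SignalProcessing commands can of course replace this):

```maple
# Biased sample autocorrelation at lag j of a discretized trajectory V.
# V here is placeholder white noise; in practice it would be a sampled X(t).
with(Statistics):
N  := 200:
V  := Sample(Normal(0, 1), N):
mu := Mean(V):
R  := j -> add((V[i]-mu)*(V[i+j]-mu), i = 1 .. N-j) / N:
R(0);   # close to the sample variance
R(10);  # close to 0 for white noise
```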


 

@vv 

I was never directly involved in this kind of competition; I just regularly look at the problems and try to find answers as an amusement, no more than that. I always looked at them as a game and thought it rather unfair (again, no offense) to use a CAS to find the solution.
But I understand that one can consider it a challenge to solve some of these problems with a CAS.

I recently discovered video games like Euclidea where you have to solve geometric problems. I was stuck by some of them and finally decided to use Maple to obtain the solution (sometimes successfully). But I always felt I was cheating, because it wasn't me who found the solution but Maple, even if I guided it to do so.

I understand you are a mathematician? So let me ask you this simple question: as a mathematician (not as a Maple user), which solution do you prefer, the one you produced with Maple or the one given here (and especially the first one)?
 




 

Personally, I never had any problem including images in a LaTeX file when they were exported from Maple in JPEG or PDF format.
Maybe you could try using these formats instead of encapsulated PostScript... unless you really need EPS files for some specific reason?
 

Hi, 
No offense, but I'm always dubious about this kind of exercise. The kinds of olympiads you refer to are generally aimed at developing the spirit of intuition and synthesis, the "Aha! effect" in some sense, and using a CAS to solve some of these problems gives me the impression of using a sledgehammer to kill a fly.
Of course, you may think the opposite and see here a good opportunity to prove that a CAS can solve problems that only clever reasoning is thought able to solve?

@hajnayeb

A few more results.
Please note that exact results can easily be obtained for a linear ODE with a white noise source term in some asymptotic regime (non-dimensional damping << 1), although this is not done here because I don't know if it presents any interest to you.

As relation (3) demonstrates, the expectation of X(t) depends only on the expectation of F(t), which explains why the plots of X__mean are almost the same whatever the correlation length is.
This is not the case for any higher moment of X(t): for instance, its variance depends on the value of L.

 

restart:

with(LinearAlgebra):
with(plots):
with(Statistics):

# Step 1: compute the "impulse response function" (IRF), defined as the solution of
# the ODE with a Dirac source term at time t.
# To simplify, I assume here that the ICs for the original problem are x(0)=D(x)(0)=0.
#
# It can be shown that this IRF is the solution of a homogeneous ODE complemented with
# non-homogeneous ICs:

homogeneous_edo    := m*diff(x(t),t$2)+c*diff(x(t),t)+k*x(t)=0;
non_homogeneous_ic := x(0)=0, D(x)(0)=1/m;
IRF                := unapply( rhs(dsolve([homogeneous_edo, non_homogeneous_ic],x(t))), t);

m*(diff(diff(x(t), t), t))+c*(diff(x(t), t))+k*x(t) = 0

 

x(0) = 0, (D(x))(0) = 1/m

 

proc (t) options operator, arrow; exp((1/2)*(-c+(c^2-4*k*m)^(1/2))*t/m)/(c^2-4*k*m)^(1/2)-exp(-(1/2)*(c+(c^2-4*k*m)^(1/2))*t/m)/(c^2-4*k*m)^(1/2) end proc

(1)

# Step 2: Let F(t) be the random excitation, and let mu__F(t) be its mean and R__F(t__1, t__2) its
# autocorrelation function.
# Here are some results.

# Response X(t) of the system under the random loading F(t)

X(t) = Int(F(tau)*'IRF'(t-tau), tau=-infinity..+infinity);

X(t) = Int(F(tau)*IRF(t-tau), tau = -infinity .. infinity)

(2)

# Mean value of the response

mu__X(t) = Int(mu__F(tau)*'IRF'(t-tau), tau=-infinity..+infinity);
mu__X(t) = Int(mu__F(t-tau)*'IRF'(tau), tau=-infinity..+infinity);  #equivalently

# if F(t) is stationary, so is X(t)

mu__X = mu__F*Int('IRF'(tau), tau=-infinity..+infinity);

mu__X(t) = Int(mu__F(tau)*IRF(t-tau), tau = -infinity .. infinity)

 

mu__X(t) = Int(mu__F(t-tau)*IRF(tau), tau = -infinity .. infinity)

 

mu__X = mu__F*(Int(IRF(tau), tau = -infinity .. infinity))

(3)

# Autocorrelation function of X(t)

R__X(t__1, t__2) = Int(Int(R__F(tau__1, tau__2)*'IRF'(t__1-tau__1)*'IRF'(t__2-tau__2), tau__1=-infinity..+infinity), tau__2=-infinity..+infinity);

R__X(t__1, t__2) = Int(Int(R__F(tau__1, tau__2)*IRF(t__1-tau__1)*IRF(t__2-tau__2), tau__1 = -infinity .. infinity), tau__2 = -infinity .. infinity)

(4)

# Spectral density S__X(xi)
# Let H(xi) be the Fourier transform of IRF(tau) and S__F(xi) the spectral density of F(t); then

S__X(xi) = conjugate(H(xi))*H(xi)*S__F(xi);

S__X(xi) = conjugate(H(xi))*H(xi)*S__F(xi)

(5)
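For the record (standard theory, not taken from the worksheet above), H can be written down directly from the ODE by inserting a harmonic ansatz, which gives the squared modulus appearing in relation (5):

```maple
# Frequency response of m*x'' + c*x' + k*x = F(t), obtained by substituting
# x(t) = H(omega)*exp(I*omega*t) for a forcing F(t) = exp(I*omega*t):
H := omega -> 1/(k - m*omega^2 + I*c*omega):
# |H(omega)|^2, the factor multiplying S__F in relation (5):
simplify(evalc(abs(H(omega))^2)) assuming m > 0, c > 0, k > 0, omega::real;
```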

# How to draw trajectories: a notional example.
# F(t) is a stationary Gaussian random process (look at how its point realizations F are constructed),
# with a Gaussian correlation function (look at the expression of the variable K in procedure KG)
# (feel free to ask me for any details you may need).

# Procedure KG generates a random realization of F(t).
# For each such realization there exists a trajectory X(t), solution of the ODE with random
# excitation KG(...).
# Nb_traj such trajectories are drawn.


KG := proc(X, Y, psi, sigma)
  # X: grid points, Y: point values, psi: correlation length, sigma: standard
  # deviation. Note that t is deliberately left global, so KG returns an
  # expression in t that can be used directly as an ODE source term.
  local NX, DX, K, mu, k, y:
  NX := numelems(X);
  DX := < seq(Vector[row](NX, X), k=1..NX) >^+:
  K  := (sigma, psi) -> evalf( sigma^2 *~ exp~(-((DX - DX^+) /~ psi)^~2) ):
  mu := add(Y) / NX;
  k  := (t, sigma, psi) -> evalf( convert(sigma^2 *~ exp~(-((t -~ X ) /~ psi)^~2), Vector[row]) ):
  y  := mu + k(t, sigma, psi) . (K(sigma, psi))^(-1) . convert((Y -~ mu), Vector[column]):
  return y
end proc:

exp(-2.3)

.1002588437

(6)

# A slightly colored noise (correlation length L = 0.001), to be compared to max(T)/NX.
# Here L/(max(T)/NX) = 0.001/0.1 = 0.01, so L is small compared to the grid spacing: only about
# 10% of the information at location t is transmitted at location t+2.3*L (exponential decay,
# exp(-2.3) ~ 0.1). The longer L, the farther the "value" of F at location t is transmitted.
#
# Note that theoretical results can easily be obtained in the case of a linear ODE and pure white noise.
# So take the following as a kind of "brute force" approach.

NX := 100:
T  := [$1..NX] /~ 10:

Nb_traj := 100:

for traj from 1 to Nb_traj do
  F  := convert( Statistics:-Sample(Normal(0, 1), NX), list):

  X := dsolve(
         {
           m*diff(x(t),t$2)+c*diff(x(t),t)+k*x(t)=KG(T, F, 0.001, 1), x(0)=1, D(x)(0)=0
         },
         numeric, parameters=[m, c, k]
       ):

  X(parameters=[1, 1, 1]):
  pl||traj := odeplot(X, [t, x(t)], t=0..max(T), color=ColorTools:-Color([rand()/10^12, rand()/10^12, rand()/10^12]))
end do:

display(seq(pl||traj, traj=1..Nb_traj))
 

 

# Gather the data of the Nb_traj trajectories in a single matrix (column 1 contains the times)
# and do some statistics.


data := < op([1, 1], pl||1) | < seq(op([1, 1], pl||k)[..,2], k=2..Nb_traj) >^+ >:

# X__mean(t)

pl__mean := plot(
              [
                seq(
                   [data[k, 1], Mean(data[k, 2..-1])],
                   k=1..numelems(data[..,1])
                )
              ],
              color=blue, title=typeset(X__mean(t)), labels=['t', 'X(t)']
            ):

# X__mean(t) +/- 2*standard_dev(X(t))

pl__plus := plot(
              [
                seq(
                   [data[k, 1], Mean(data[k, 2..-1])+2*StandardDeviation(data[k, 2..-1])],
                   k=1..numelems(data[..,1])
                )
              ],
              color=blue, linestyle=3
            ):
pl__minus := plot(
              [
                seq(
                   [data[k, 1], Mean(data[k, 2..-1])-2*StandardDeviation(data[k, 2..-1])],
                   k=1..numelems(data[..,1])
                )
              ],
              color=blue, linestyle=3
             ):

display(pl__mean, pl__minus, pl__plus);

 

# A strongly colored noise (correlation length L = 0.1), or "long-memory noise" (a white noise is
# a zero-memory noise)

NX := 100:
T  := [$1..NX]:

Nb_traj := 100:

for traj from 1 to Nb_traj do
  F  := convert( Statistics:-Sample(Normal(0, 1), NX), list):

  X := dsolve(
         {
           m*diff(x(t),t$2)+c*diff(x(t),t)+k*x(t)=KG(T, F, 0.1, 1), x(0)=1, D(x)(0)=0
         },
         numeric, parameters=[m, c, k]
       ):

  X(parameters=[1, 1, 1]):
  pl||traj := odeplot(X, [t, x(t)], t=0..10, color=ColorTools:-Color([rand()/10^12, rand()/10^12, rand()/10^12]))
end do:

display(seq(pl||traj, traj=1..Nb_traj))

 

# Gather the data of the Nb_traj trajectories in a single matrix (column 1 contains the times)
# and do some statistics.


data := < op([1, 1], pl||1) | < seq(op([1, 1], pl||k)[..,2], k=2..Nb_traj) >^+ >:

# X__mean(t)

pl__mean := plot(
              [
                seq(
                   [data[k, 1], Mean(data[k, 2..-1])],
                   k=1..numelems(data[..,1])
                )
              ],
              color=blue, title=typeset(X__mean(t)), labels=['t', 'X(t)']
            ):

# X__mean(t) +/- 2*standard_dev(X(t))

pl__plus := plot(
              [
                seq(
                   [data[k, 1], Mean(data[k, 2..-1])+2*StandardDeviation(data[k, 2..-1])],
                   k=1..numelems(data[..,1])
                )
              ],
              color=blue, linestyle=3
            ):
pl__minus := plot(
              [
                seq(
                   [data[k, 1], Mean(data[k, 2..-1])-2*StandardDeviation(data[k, 2..-1])],
                   k=1..numelems(data[..,1])
                )
              ],
              color=blue, linestyle=3
             ):

display(pl__mean, pl__minus, pl__plus);

 

 


 

Download RandomSourceTerm_2.mw

What do you mean by "f(t) is a random function with normal distribution"?
(I noticed your equation doesn't contain f anywhere.)

Do you mean "the source term is, at any time t, the realization of a Gaussian random variable"? In this case f(t) (I suppose F(t)) is not a random variable but a Gaussian continuous random (stationary, to simplify) process (GRP).
Let F(t, omega) be this process: for any value t* of t, Ft*(omega) is a Gaussian random variable of mean m(t*) and variance v(t*). Stationarity (at least some type of it) means that this mean and variance do not depend on the value of t*.
But even in this simpler case, giving m and v does not define the process F(t, omega): you need to give the autocorrelation function of the process, that is a two-point function R(t, t') which describes how correlated Ft(omega) and Ft'(omega) are.

Next point: what do you mean by "solve an ordinary differential equation"?
Do you want to find a particular trajectory x(t) (there exists an infinity of them), or some statistical properties such as the mean trajectory, the envelope trajectories corresponding to some confidence interval, ...?
In structural reliability, accounting for random loadings of a mechanical structure is a very common topic.
Think about all of this and come back to me if necessary.
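To fix ideas, here is a small sketch (my own illustration, with arbitrary values for sigma, the correlation length psi, and the grid) of how one realization of such a stationary GRP with Gaussian autocorrelation R(t,t') = sigma^2*exp(-((t-t')/psi)^2) can be drawn on a time grid, via a Cholesky factorization of the covariance matrix:

```maple
with(LinearAlgebra): with(Statistics):
N     := 50:       # number of grid points (arbitrary)
psi   := 0.5:      # correlation length (arbitrary)
sigma := 1.0:      # standard deviation (arbitrary)
T     := Vector(N, i -> (i-1)*0.1):   # time grid
# covariance matrix built from the Gaussian autocorrelation function
K := Matrix(N, N, (i, j) -> sigma^2*exp(-((T[i]-T[j])/psi)^2), shape=symmetric):
# small jitter on the diagonal to keep the Cholesky factorization stable
L := LUDecomposition(K + 1e-10*IdentityMatrix(N), method='Cholesky'):
Z := Vector[column](Sample(Normal(0, 1), N)):
F := L . Z:        # one zero-mean realization F(t_i, omega)
```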

 

@Carl Love 

As I said to @itsme, I'm on holiday right now and I no longer use my professional Windows PC. But if you're interested in the way I catch the worksheet title, I could give the details in 3 weeks.

@Carl Love 

Thanks for your involvement.
Your command is significantly simpler than the one I'd written.

In my case, what I wanted to capture is the full name of the "active" worksheet (the one which appears in the uppermost part of the session window when run on Windows). For instance, if I open the file foo with full name "//.../foo.mw", this is the name I'm interested in.

The reason is: suppose I have a worksheet "//.../foo.mw" which contains the code of a module "foo" that I want to use later as a package. At the end of this worksheet are a few commands to create (or replace) an mla file "//.../foo.mla". I want to keep the bond between the source code (foo.mw) and the archive (foo.mla), that is, to be sure, when I use package "foo", that the generating code corresponds to the file "//.../foo.mw" written at some given time T.
I proceed this way:

  • modification of some procedures in the worksheet 
  • execution of the module
  • saving if no error
  • capture of the full name of the source file "//.../foo.mw"
  • capture of its last modification time (plus a few other pieces of information, such as the name of the person that modified "//.../foo.mw", etc.)
  • creation of the archive "//.../foo.mla" 
  • add to this archive additional information:
      SOURCE_FILE ="//.../foo.mw"
      MODIFICATION TIME = T
      AUTHOR = ...
      ...

This is the simplest strategy I found to manage the evolutions of (the packages of) the application I develop.
For example to be sure that I use "//.../foo.mw" at time T and not at time T', or that I use the temporary variant "//.../foo_1.mw" but not variant "//.../foo_2.mw".
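A rough sketch of the stamping step, in case it helps (the paths below are placeholders for the real full names, and the exact bookkeeping is of course a matter of taste):

```maple
# Hypothetical names: "somewhere/foo.mw" and "somewhere/foo.mla" stand in
# for the real full names of the source worksheet and of the archive.
src := "somewhere/foo.mw":
T   := FileTools:-ModificationTime(src):   # last modification time of the source
LibraryTools:-Create("somewhere/foo.mla"):
# store the traceability data in the archive next to the package itself
SOURCE_INFO := table(["SOURCE_FILE" = src, "MODIFICATION_TIME" = T]):
LibraryTools:-Save('SOURCE_INFO', "somewhere/foo.mla"):
```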

@itsme 

You're right, something like
ps -ef | grep -i "maple" | grep -v grep
doesn't do the job as tasklist does.

I remember having used "wmctrl -l" on a Linux workstation to get the names of windows (plus extraction of those names with awk). I never tried this to get the name of the active Maple window, but I think it would be worth trying.
Unfortunately it doesn't work on my Mac.

There is also the Linux xwininfo command, but I think it requires you to point to the window you want to get the name of.

Hi,

I'm on holiday right now, so I can only tell you this from memory: on my professional Windows PC I usually catch the name of the active worksheet by running the tasklist Windows command within a Maple system command.
If you run the task manager, assuming only a single session of Maple is running, you will see that a task named "Maple xxxx" refers to the active worksheet. Look here tasklist.htm for details about tasklist.

From memory again (I could tell you more in 3 weeks if you are still stuck...), you need to use a TABLE or LIST output format and decode the result with Maple's StringTools to isolate the field named "Maple xxxx" and thus catch the name of the active worksheet.
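From memory as well, the skeleton of that decoding could look like this (untested here; the "Maple" filtering below is a placeholder since I can't check the exact field names right now):

```maple
# Windows only: run tasklist in verbose list format and keep the raw text.
res := ssystem("tasklist /v /fo list"):   # res[2] holds tasklist's output
# Split into lines and keep those mentioning Maple; the window title field
# of the Maple process contains the name of the active worksheet.
lines  := StringTools:-Split(res[2], "\n"):
maples := select(s -> StringTools:-Search("Maple", s) > 0, lines):
```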

I never did this on my Mac (nor on Linux), but the strategy should be the same. Let me know if you're using OSX instead of Windows.

@perr7 

Could you give the expression of P_QM ?

Here is a hint 
taylor(MyFunction, x=AboutThisValue, UpToOrder)

I guess you will be capable of replacing the names MyFunction, AboutThisValue, UpToOrder by their correct instances without sending a new question?

 

For your information, LinearFit proposes a lot of different outputs, among them residualmeansquare, which answers your question (but of course you are free to code this yourself if you prefer).

From the corresponding help page:
The output option can take as a value the name solutionmodule, or one of the following names (or a list of these names): AtkinsonTstatistic, confidenceintervals, CookDstatistic, degreesoffreedom, externallystandardizedresiduals, internallystandardizedresiduals, leastsquaresfunction, leverages, parametervalues, parametervector, residuals, residualmeansquare, residualstandarddeviation, residualsumofsquares, standarderrors, variancecovariancematrix. For more information, see the Statistics/Regression/Solution help page.
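For instance, with some invented data and a straight-line model:

```maple
with(Statistics):
X := Vector([1, 2, 3, 4, 5], datatype=float):
Y := Vector([1.1, 1.9, 3.2, 3.8, 5.1], datatype=float):
# residualmeansquare = residualsumofsquares / degreesoffreedom
LinearFit([1, x], X, Y, x, output=residualmeansquare);
```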

 

@Rouben Rostamian  
Hi, 
A clever solution, but it seems you're still greatly misunderstood ;-)

First 104 105 106 107 108 109 110 Last Page 106 of 154