mmcdara

MaplePrimes Activity


These are replies submitted by mmcdara

@tomleslie 

Sorry for this late reply (I thought the thread was closed).
About the first issue: my question was very problem-dependent.
About the second one: as I said before, I wasn't aware of this second way to do animation. I will keep it in mind and try to use it systematically to see if it is indeed a good alternative.

Thanks

@BasSPe 

You're welcome

 

@Pepini

Another simple solution to avoid getting the trivial solution t1=t2 when solving these relations

{f(t1)=f(t2), g(t1)=g(t2), t1 <> t2}

is to tell fsolve that t1 is to be searched for in the branch corresponding to increasing values of x, while t2 is to be searched for in the branch corresponding to decreasing values of x (thus the third relation is automatically satisfied).

tm := fsolve(diff(f(t), t)=0);
fsolve({f(t1)=f(t2), g(t1)=g(t2)}, {t1=-infinity..tm, t2=tm..infinity})
        {t1 = -1.67539229818264, t2 = 0.108903883303545}


 

@tomleslie 

I use Maple 2020 at the office and its DrawGraph has indeed little to do with the one I use at home (Maple 2015).
With Maple 2015 I spent a lot of time modifying the plot structure that DrawGraph returns in order to get something less depressing (as you say). This habit has not left me and I still do the same thing (though less often) with the DrawGraph of Maple 2020.
Maple 2020's DrawGraph (I don't remember what it was in the previous version and I don't have access to Maple 2021 yet) is now a tool that one can use to present graphs in a report without receiving smirks or mocking remarks.

@acer 

Thanks, my reply is here: 232596-State-Transition-Diagram

@acer 

I came here while following the thread 232600-How-To-Use-Animate-With-Matrix-Plot
I hadn't seen that you had contributed to this one, sorry.
That's fine; I'm never very happy being the only one trying to provide an answer, especially if it develops in several directions.
 

@acer 

I thank you on behalf of @ogunmiloro for these adjustments.

 @Kitonum  @acer  @tomleslie 
(the order has nothing to do with any preference among your answers, it is just lexicographical)

I first thought it was an issue with matrixplot itself, hence the workaround I thought of; but it didn't work either, and using "op" I saw that P^1 was actually P^1. (a float exponent); hence round(n) to improve the workaround.
And this is where I think I was a bit stupid: I was so happy to have found a solution that I didn't even think of using round(n) (or trunc(n)) in my original question.

You have all more or less provided a common solution, so collectively thank you all.
In detail:

  • I didn't know the insequence=true option you used, Tom; it seems interesting, for it leads to a rather "simple and natural(?)" way to generate the desired sequence.
  • Your second example, acer, is very close to my first attempt, except that yours works.
    You wrote "while retaining your original's default of 25 frames -- intentional by you, or not...": no, it wasn't intentional at all.
    To answer your last question: yes, it's related to my responses (and thus n is a strictly positive integer).
    And yes, I'm aware that the power of a matrix can be computed more efficiently if we factorize it in a suitable way (a minimal sketch follows this list).
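
For the record, here is a minimal sketch of one such factorization (plain diagonalization, assuming the matrix is diagonalizable; the matrix PP below is a toy 2x2 example, not the one from the question):

with(LinearAlgebra):
PP := Matrix([[0.9, 0.1], [0.4, 0.6]]):                      # toy diagonalizable matrix
lambda, E := Eigenvectors(PP):                               # eigenvalues and eigenvector matrix
Pow := n -> Re~(E . DiagonalMatrix(lambda ^~ n) . MatrixInverse(E)):
Norm(Pow(10) - PP^10);                                       # essentially zero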


One last question specifically for acer: since you seem to have a better memory than I do, do you remember that I asked a long time ago about the fact that the view does not automatically adapt to the current image and keeps the ranges of the first view?
I can't find the answer you gave me, does it ring a bell?


Thanks again to you all

 

it is enough to force the option showweights=true in DrawGraph.
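
For instance (a minimal sketch on a hypothetical 3-vertex weighted digraph, just to illustrate the option):

with(GraphTheory):
H := Graph({[[1, 2], 0.3], [[2, 3], 0.7], [[3, 1], 1.0]}):   # small weighted directed graph
DrawGraph(H, showweights = true);                            # weights displayed on the arcs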

I did not delete this thread because someone may have been responding to it.
A moderator can delete it if he thinks it's better.

Sorry for the inconvenience

@ogunmiloro

This little procedure will help you visualize how the weights of the edges evolve as the Markov chain advances.
 

WM := P -> proc(n)
  # Builds a 3D bar chart of the entries of P^round(n)
  # (round(n) because plots:-animate passes float values of the frame parameter n)
  local Q := P^round(n):
  local i, j:
  uses plots, plottools:

  display(
    seq(
      seq(
        translate(display(parallelepiped([1, 0, 0], [0, 1, 0], [0, 0, Q[i,j]])), i-1/2, j-1/2, 0),
        i=1..8
      ),
      j=1..8
    ),
    view=[default, default, 0..max(Q)]
  )
end proc:


# usage :
plots:-animate(WM(PP), [n], n=1..40, frames=40)

The height of the consecutive frames is not rescaled and always remains equal to the initial one.
This reminds me of a question I once posed about this problem, but I couldn't retrieve the answer.

I've found this (lengthy) workaround

WM := P -> proc(n)
  # Same procedure, but without the explicit view option
  local Q := P^round(n):
  local i, j:
  uses plots, plottools:
  display(
    seq(
      seq(
        translate(display(parallelepiped([1, 0, 0], [0, 1, 0], [0, 0, Q[i,j]])), i-1/2, j-1/2, 0),
        i=1..8
      ),
      j=1..8
    )
  )
end proc:

plots:-animate(WM(PP), [n], n=1..10, frames=10)

 

@ogunmiloro

EDITED  7/23 2:16 pm


A correction to the procedure that displays the successive transition graphs

G := proc(P, n)
  # Draws the weighted transition graph of P^n
  # (assumes the GraphTheory package has been loaded: with(GraphTheory))
  local d  := numelems(P[1]):
  local Pn := P^n:
  local Q  := Matrix(d$2, (i,j) -> Pn[i,j], datatype=anything):
  local V  := [$"a".."h"]:
  local i, j:
  local E  := {
                seq(
                  seq(
                    `if`(
                      Q[i,j]<>0, 
                      [[V[i], V[j]], evalf[3](Q[i,j])], 
                      NULL
                    ), 
                    i in subsop(j=NULL, [$1..d])  # loops not supported in Maple 2015
                  ), 
                  j=1..d
                )
              };
  print(DrawGraph(Graph(V, E), showweights=true))
end proc:


G(PP, 1);
G(PP, 2);


In your particular case the initial graph contains 28 arcs and the graph of PP^2 has 54 of them.
I have found that DrawGraph (Maple 2015) doesn't display the weights if the number of arcs exceeds 45, unless the option showweights=true is explicitly forced.

@ogunmiloro 

Thanks for voting up.

Meanwhile I have treated your last problem.
All the details are in the attached file:

  • I used the classical approach based on the computation of the left eigenvector of the stochastic matrix associated with the eigenvalue 1 (a minimal sketch follows this list).
    To do this I simply used the built-in procedure LinearAlgebra:-Eigenvectors.
    You will find in the first part of the file that this requires some precautions.
     
  • The result was very surprising and ... wrong.
    After a quick check it appeared that your matrix is not a stochastic matrix, for some of its rows do not sum to 1.
    I took it upon myself to arbitrarily modify your matrix (see the second part of the attached file) by changing the values of the 7th column: of course you will have to make the correct modifications yourself.
    This correction now gives a correct result.
     
  • Finally, the last part of the attached file contains a procedure named StationnaryState which returns the stationary state vector of probabilities.
    I have included several checks in this procedure to verify that the input matrix is correct and that the result is mathematically correct too.
    It could happen that for some reason (the precision of the computation, for instance) these checks return an error.
    Among the possible remedies, you can:
    • increase the number of digits
    • use SingularValues instead of Eigenvectors
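
For reference, here is a minimal sketch of that eigenvector computation (it is not the StationnaryState procedure of the attached file, and the 2x2 matrix is only a toy example):

with(LinearAlgebra):
P := Matrix([[0.8, 0.2], [0.3, 0.7]]):                    # toy stochastic matrix (each row sums to 1)
lambda, E := Eigenvectors(Transpose(P)):                  # left eigenvectors of P
k := min[index](convert(abs~(lambda -~ 1), list)):        # column associated with the eigenvalue 1
V := Re~(Column(E, k)):
V / add(V[i], i = 1 .. numelems(V));                      # normalized: the stationary distribution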

 

Enjoy

re-markov_mmcdara.mw

@ogunmiloro 

Listen, I can do some stuff on this new problem, but you could at least vote up if you already found my contribution useful. Don't you think?


PS: I see from this last graph that you use (like me) an "old" version of Maple.
Since Maple 2020 (I believe), such directed graphs can handle loops (for instance, if the probability of transition from state 1 to state 1 is 0.8, the graph will display a loop from vertex 1 to vertex 1 labelled 0.8).
If you have the opportunity to access such a version you will find it very useful for visualizing incidence graphs.

@ogunmiloro 

Question 1:
You have 8 states and among them 6 are "absorbing states" (for an absorbing state i, Prob(i --> i) = 1).
The two remaining states are named "interior states" ("interior state" and "absorbing state" are word-for-word translations from French, my native language, and I hope they are understandable to an English reader).
In your case none of these states really goes extinct.
Extinction is a feature of the stationary state which, for a two-state problem, is of the form described by eq. (6) in my file:

               [   -1 + P__2        -1 + P__1   ]
               [---------------, ---------------]
               [P__1 + P__2 - 2  P__1 + P__2 - 2]

If P__1 and P__2 are both strictly less than 1, neither of these two stationary probabilities is null.
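
A quick check of eq. (6) (a minimal sketch, taking P__1 and P__2 to be the diagonal entries of the two-state stochastic matrix, which is consistent with the solve result further down):

P := Matrix([[P__1, 1 - P__1], [1 - P__2, P__2]]):
V := `<|>`([(-1 + P__2)/(P__1 + P__2 - 2), (-1 + P__1)/(P__1 + P__2 - 2)]):
map(simplify, V . P - V);                       # the zero row vector: V is left-invariant
simplify(add(v, v in convert(V, list)));        # 1: V is a probability vector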


Question 2:
Does the method I used have a name?
I'm not sure.

The classical method to find the stationary state is to compute the left eigenvector V of the matrix P associated with the eigenvalue 1.
That is, V.P = V:

stat_state_as_vector      := `<|>`([u,v]);
vector_equation_to_verify := stat_state_as_vector . P - stat_state_as_vector =~ 0;

solve([entries(vector_equation_to_verify, nolist)], [u,v])

                  [[    (-1 + P__2) v       ]]
                  [[u = -------------, v = v]]
                  [[      -1 + P__1         ]]

# this vector is identical to the one of equation (6) up to a scaling constant:

simplify(stationnary_state /~ stationnary_state[2])
                         [-1 + P__2   ]
                         [---------, 1]
                         [-1 + P__1   ]


The naive way to compute the stationary state is to raise the stochastic matrix P to a large power (and check that this power is large enough by doing the same operation for an even larger power).
This is based on the fact that if P^n converges to something, then this something is the stochastic matrix of the stationary state.
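
A minimal sketch of that naive check (toy 2x2 stochastic matrix, powers chosen arbitrarily):

P := Matrix([[0.8, 0.2], [0.3, 0.7]]):
LinearAlgebra:-Norm(P^200 - P^100);   # close to 0: P^100 has essentially converged
P^100;                                # each row approximates the stationary distribution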

Another characterization (if convergence does occur) comes from observing that the iterated stochastic matrices at time steps n+1 and n are related by the recurrence relation P(n+1) = P(n).P(0).
For a two-state process it's easy to see that each P(k) depends only on 2 parameters, let's say P1(k) and P2(k); based on the relation P(n+1) = P(n).P(0) it's very easy to derive a couple of recurrence relations for P1(n+1) and P2(n+1).
If convergence does occur, then the limit values of P1(n) and P2(n) are the stationary transition probabilities we are looking for.
The stationary stochastic matrix is then < V , V > where V is the row vector `<|>`([ P1(+oo), P2(+oo) ]) and P1(+oo) means "the limit of P1(n) as n --> +oo".
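
As an illustration, here is a minimal sketch of that recurrence-based construction for a generic two-state chain (with hypothetical off-diagonal parameters a and b, not the P__1, P__2 notation used above):

P0   := Matrix([[1 - a, a], [b, 1 - b]]):                  # generic 2-state stochastic matrix
PN   := Matrix([[1 - p1(n), p1(n)], [p2(n), 1 - p2(n)]]):  # P(n) written with the 2 parameters p1(n), p2(n)
PN1  := PN . P0:                                           # the relation P(n+1) = P(n).P(0)
rec  := {p1(n+1) = expand(PN1[1, 2]), p2(n+1) = expand(PN1[2, 1])}:
rsolve(rec union {p1(0) = 0, p2(0) = 0}, {p1, p2});        # P(0) is the identity, hence p1(0) = p2(0) = 0
        # explicit expressions at any step n; for 0 < a, b < 1 the geometric term with
        # ratio (1 - a - b) vanishes as n --> +oo, leaving p1 --> a/(a+b) and p2 --> b/(a+b)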


What does convergence mean?
Take this example

R__0  := Matrix(2, 2, [0, 1, 1, 0]):  # initial stochastic matrix (the 2x2 anti-diagonal matrix)
R     := n -> R__0^n:                 # iterated stochastic matrices (R(1) = R__0)

R(3);                                 # R(3) = R__0,
                                      # thus R(2*p+1) = R__0 for all positive integer p

R(2);                                 # is the identity matrix,
                                      # thus R(2*p) = R(2*p-1).R__0 = R__0.R__0 = R(2),
                                      # this same identity matrix

As n --> +oo, R(n) is either R__0 or the identity matrix, depending on the parity of n.
Thus the characterization "the stationary state is approximated by R__0^n for n large enough" doesn't apply.

What about the recurrence relations?
They give an absurd result:

rsolve({a(n+1)=b(n), b(n+1)=a(n)} union {a(0)=0, b(0)=0}, {a, b});

                      {a(n) = 0, b(n) = 0}


Finally, what about the eigenvector approach?

stat_state_as_vector      := `<|>`([u,v]):
vector_equation_to_verify := stat_state_as_vector . R__0 - stat_state_as_vector =~ 0:

solve([entries(vector_equation_to_verify, nolist)], [u,v])
                        [[u = v, v = v]]

A unique solution is returned, and not an absurd one, for the stochastic matrix (after having normalized this eigenvector) is
the 2-by-2 matrix whose elements are all equal to 1/2.
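
In code (a minimal sketch, taking the representative eigenvector u = v and normalizing it so that its components sum to 1):

V := `<|>`([1/2, 1/2]):   # the normalized eigenvector u = v
`<,>`(V, V);              # the stationary stochastic matrix < V, V >: every entry equals 1/2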


So, is the recurrence-based construction a better alternative to the eigenvector-based construction?
Obviously not, because it can lead to wrong results (in this non-convergence case the stochastic matrix it produces would be the zero 2-by-2 matrix).
Probably not (from a computational efficiency point of view) if the number of interior states is large.
But I believe it's a simple and understandable way to explain what a stationary state is (provided convergence occurs) when there are only 2 interior states.

An argument for the recurrence-based approach is that it gives explicit expressions of the probabilities at any step of the Markov chain (equation (5) in my file).

Edited file Markov_process_2.mw


 
