Introduction
In my last post I introduced the notion of choreography as a way to deploy and manage an application. It makes it possible to implement self-healing, elasticity and, to a certain extent, self-awareness.
To do so, we must not rely on the certainty and determinism of automated tasks. Mark Burgess explains in his book In Search of Certainty that no one should rely on command and control anymore.
Actually, we grew up with the idea that a computer will do whatever we tell it to. The truth is that it simply doesn't. If that sounds astonishing to you, just consider the famous bug: a bug is a little insect that keeps a programmed behaviour from acting as it should.
In a lot of widespread software, we find if-then-else or try-catch statements. Of course, one could argue that the purpose of this conditional execution is to deal with different scenarios, which is true, but still, the keyword is try...
Back to the choreography
In the choreography principle, the automation is performed by a set of dancers that act on their own. The most logical way to program it is to let them know about the execution plan, and assume that everything will run as expected.
What I would like to study is simply deployment without knowing the deployment plan. The nodes would try to perform the tasks and, depending on the events they receive, learn and improve their probability of success.
First problem
First, I consider a single node \( A \) which can be in three states \(\alpha\), \(\beta\) and \(\gamma\). Let \(S\) be the set of states such that \( S = \left\{\alpha, \beta, \gamma\right\} \).
Actually knowing what's expected
The expected execution is: \( \alpha \mapsto \beta \mapsto \gamma\)
Therefore, the transition matrix should be:
\[ P=\begin{pmatrix} 0 & 1 & 0 \\\\ 0 & 0 & 1 \\\\ 0 & 0 & 0 \end{pmatrix} \]
Let's represent it with GNU R (see this blog post for an introduction to Markov chain representation with this software)


which is represented by:
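For illustration, the same chain can be sketched in Python/NumPy (assuming the states are ordered \(\alpha, \beta, \gamma\)); starting from \(\alpha\), the plan reaches \(\gamma\) in exactly two steps:

```python
import numpy as np

# Deterministic plan alpha -> beta -> gamma, encoded as a transition matrix
# (row i = current state, column j = next state)
P = np.array([
    [0, 1, 0],   # alpha always moves to beta
    [0, 0, 1],   # beta always moves to gamma
    [0, 0, 0],   # gamma is the final state (no outgoing transition)
])

# Starting in alpha, two steps of the plan land us in gamma
x = np.array([1, 0, 0])   # state vector: certainly in alpha
print(x @ P @ P)          # -> [0 0 1], i.e. certainly in gamma
```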
Knowing part of the plan...
Now let's consider a different scenario. I assume now that the only known hypothesis is that we cannot go directly from \(\alpha\) to \(\gamma\) and vice versa; for the rest, the execution may follow this transition matrix:
\[ P=\begin{pmatrix} \frac{1}{2} & \frac{1}{2} & 0 \\\\ \frac{1}{3} & \frac{1}{3} & \frac{1}{3} \\\\ 0 & \frac{1}{2} & \frac{1}{2} \end{pmatrix} \]
which is represented this way
The transition matrix is regular: we can see, for example, that \(P^2\) contains only non-zero entries:


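A quick numerical check of this claim, sketched here in Python/NumPy rather than R:

```python
import numpy as np

# Transition matrix: row i gives the distribution of the next state
P = np.array([
    [1/2, 1/2, 0],
    [1/3, 1/3, 1/3],
    [0,   1/2, 1/2],
])

P2 = P @ P             # matrix product P * P
print(np.round(P2, 4))
print((P2 > 0).all())  # True: every entry of P^2 is positive, so P is regular
```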
Therefore, the Markov theorem says that:
- as \(n\) approaches infinity, \(P^n \to S\), where \(S\) is a matrix of the form \([\mathbf{v}, \mathbf{v}, \dots, \mathbf{v}]\), \(\mathbf{v}\) being a constant vector;
- let \(X\) be any state vector; then \(\lim_{n\to \infty}P^nX = \mathbf{v}\), where \(\mathbf{v}\) is a fixed probability vector (the sum of its entries is 1), all of whose entries are positive.
So we can look for the vector \(\mathbf{v}\) (also known as the steady-state vector of the system) to see whether there is a good chance that our finite state machine will converge to the desired state \(\gamma\).
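We can also observe this convergence directly by raising \(P\) to a large power; a sketch in Python/NumPy:

```python
import numpy as np

P = np.array([
    [1/2, 1/2, 0],
    [1/3, 1/3, 1/3],
    [0,   1/2, 1/2],
])

# For a regular chain, P^n converges; every row of P^15 is already
# (almost) the same vector v
S = np.linalg.matrix_power(P, 15)
print(np.round(S, 3))   # each row is close to [0.286 0.429 0.286]
```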
Evaluation of the steady-state vector
Now, since \(P^{n+1}=P \cdot P^n\) and both \(P^{n+1}\) and \(P^n\) approach \(S\), we have \(S=P \cdot S\).
Note that any column of this matrix equation gives \(P\mathbf{v}=\mathbf{v}\). Therefore, the steadystate vector of a regular Markov chain with transition matrix \(P\) is the unique probability vector \(\mathbf{v}\) satisfying \(P\mathbf{v}=\mathbf{v}\).
To find the steady-state vector, we must solve the equation \(P\mathbf{v}=\mathbf{v}\); \(\mathbf{v}\) is actually an eigenvector for the eigenvalue \(\lambda = 1\).
_Note from Wikipedia_
In linear algebra, an eigenvector or characteristic vector of a square matrix is a vector that does not change its direction under the associated linear transformation. In other words: if \(v\) is a vector that is not zero, then it is an eigenvector of a square matrix \(A\) if \(Av\) is a scalar multiple of \(v\). This condition can be written as the equation \( Av = \lambda v\), where \(\lambda\) is a scalar known as the eigenvalue or characteristic value associated with the eigenvector \(v\).
To compute the eigenvalues, we should find the solutions of the characteristic equation \(\det(A-\lambda I)=0\), where \(I\) is the identity matrix. Actually, I don't remember how to do it by hand anymore, so I will simply use R's eigen function:


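For illustration, the same computation sketched with NumPy's `linalg.eig`, which (like R's `eigen`) returns the eigenvalues together with the right eigenvectors:

```python
import numpy as np

P = np.array([
    [1/2, 1/2, 0],
    [1/3, 1/3, 1/3],
    [0,   1/2, 1/2],
])

vals, vecs = np.linalg.eig(P)
print(np.round(vals, 4))                 # three eigenvalues; one of them equals 1
i = np.argmin(np.abs(vals - 1))          # index of the eigenvalue lambda = 1
print(np.round(np.real(vecs[:, i]), 4))  # its eigenvector: proportional to (1, 1, 1)
```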
Wait: it has found 3 eigenvalues, and one of them equals 1, which is coherent. But the eigenvector is not coherent at all with the evaluation of the matrix powers above.
According to stackoverflow, that's because it computes the right eigenvector, while what I need is the left eigenvector.
Here is how to evaluate it.


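A sketch of that computation: the left eigenvectors of \(P\) are the right eigenvectors of its transpose (in R one would take `eigen(t(P))`; here with NumPy):

```python
import numpy as np

P = np.array([
    [1/2, 1/2, 0],
    [1/3, 1/3, 1/3],
    [0,   1/2, 1/2],
])

# Left eigenvectors of P = right eigenvectors of P transposed
vals, vecs = np.linalg.eig(P.T)
i = np.argmin(np.abs(vals - 1))   # keep the eigenvector for lambda = 1
v = np.real(vecs[:, i])
v = v / v.sum()                   # rescale so that the entries sum to 1
print(np.round(v, 4))             # [0.2857 0.4286 0.2857], i.e. (2/7, 3/7, 2/7)
```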
We now have the steady-state vector: rescaling the eigenvector so that its entries sum to 1 gives \(\mathbf{v} = \begin{pmatrix}2/7 \\\\ 3/7 \\\\ 2/7\end{pmatrix} \approx \begin{pmatrix}0.286 \\\\ 0.429 \\\\ 0.286\end{pmatrix}\)
which simply means that, according to our theory, our finite state machine will most likely end up in state \(\beta\).
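As a sanity check, we can also simulate the chain and measure the fraction of time spent in each state; an illustrative sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.array([
    [1/2, 1/2, 0],
    [1/3, 1/3, 1/3],
    [0,   1/2, 1/2],
])

# Run the chain for many steps and count the visits to each state
state, counts = 0, np.zeros(3)
for _ in range(100_000):
    state = rng.choice(3, p=P[state])
    counts[state] += 1
freq = counts / counts.sum()
print(np.round(freq, 3))   # roughly (2/7, 3/7, 2/7): beta dominates
```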
Analysis
What did I learn? Not that much, actually. I've learned that, given a transition matrix (a model), I can easily compute the probability of success. If I consider the finite state machine as the whole deployment automaton, then given the probability of failure, I can predict whether it's worth continuing the deployment or not.
Cool, but far from my goal: I want a distributed application to learn how to deploy, cure, and take care of itself with a single piece of information: its topology.
Back to real life: the model I've described in this post could represent the observable states of the application (e.g. \(\alpha = initial\), \(\beta = configured\), \(\gamma = started\)...)
Hence, the states of the components of the application are hidden from the model (and they must remain hidden, as I don't care about observing them).
And this is precisely the definition of a hidden Markov model (HMM). So yes, there is a Markov model hidden in the choreography!
I shall continue the study and learn how the signals sent by the components give evidence and influence the Markov model of my application.
It's a matter of inference, I-maps, Bayesian networks, HMMs... It's about machine learning, which is fascinating!