# Thread: Linear Programming / Markov Chain

1. ## Linear Programming / Markov Chain

We want to minimize $\mathbf{c} \mathbf{x}$ subject to $\mathbf{A} \mathbf{x} = \mathbf{b}, \ \mathbf{x} \geq \mathbf{0}$.

Here $\mathbf{A}$ is an $m \times n$ matrix of fixed constants, $\mathbf{c} = (c_1, \ldots, c_n)$ and $\mathbf{b} = (b_1, \ldots, b_m)$ are vectors of fixed constants, and $\mathbf{x} = (x_1, \ldots, x_n)$ is the $n$-vector of nonnegative values to be chosen to minimize $\mathbf{c} \mathbf{x} \equiv \sum_{i=1}^{n} c_{i} x_{i}$. If $n > m$, then an optimal $\mathbf{x}$ can always be chosen to have at least $n-m$ components equal to $0$; such solutions correspond to extreme points of the feasible region.
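A tiny brute-force illustration of that last claim (the numbers here are made up for the example, not from the book): enumerate every choice of $m$ basic columns, solve the resulting $m \times m$ system with the other $n-m$ variables set to $0$, and keep the cheapest nonnegative solution.

```python
from itertools import combinations

# Example data (made up): m = 2 constraints, n = 4 variables.
A = [[1, 1, 1, 0],
     [0, 1, 2, 1]]
b = [4, 6]
c = [2, 3, 1, 5]
m, n = len(A), len(A[0])

def solve2x2(M, rhs):
    """Solve a 2x2 linear system by Cramer's rule; None if singular."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    if abs(det) < 1e-12:
        return None
    x0 = (rhs[0] * M[1][1] - M[0][1] * rhs[1]) / det
    x1 = (M[0][0] * rhs[1] - rhs[0] * M[1][0]) / det
    return [x0, x1]

best_cost, best_x = None, None
for cols in combinations(range(n), m):          # choose m basic columns
    M = [[A[r][j] for j in cols] for r in range(m)]
    sol = solve2x2(M, b)
    if sol is None or any(v < 0 for v in sol):  # singular or infeasible
        continue
    x = [0.0] * n                               # the other n - m components stay 0
    for j, v in zip(cols, sol):
        x[j] = v
    cost = sum(ci * xi for ci, xi in zip(c, x))
    if best_cost is None or cost < best_cost:
        best_cost, best_x = cost, x

print(best_x, best_cost)   # -> [1.0, 0.0, 3.0, 0.0] 5.0
```

Each feasible candidate examined here is an extreme point, which is why the simplex algorithm only ever visits such points.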

Now model the algorithm as a Markov chain: suppose that when the algorithm is at the $j$th best extreme point, after the next pivot the resulting extreme point is equally likely to be any of the $j-1$ better ones. We want to find $E[T_i]$, the expected number of transitions needed to go from state $i$ to state $1$.

Then why does $E[T_i] = 1 + \frac{1}{i-1} \sum_{j=1}^{i-1} E[T_j]$ hold? This accounts only for the initial transition, right? They probably conditioned on some variable, but I am not seeing it.

Ultimately $E[T_i] = \sum_{j=1}^{i-1} \frac{1}{j}$.

Source: An Introduction to Probability Models by Sheldon Ross
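That closed form is easy to sanity-check by simulation (a quick sketch; `simulate_hits` is just an illustrative helper, not from the book):

```python
import random

def simulate_hits(i, trials=200_000, seed=0):
    """Estimate E[T_i]: from state k, jump uniformly to one of the
    k-1 better states, repeating until state 1 is reached."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        state, steps = i, 0
        while state > 1:
            state = rng.randint(1, state - 1)  # uniform over states 1..state-1
            steps += 1
        total += steps
    return total / trials

def harmonic(i):
    """The claimed closed form: sum_{j=1}^{i-1} 1/j."""
    return sum(1 / j for j in range(1, i))

for i in (2, 5, 10):
    print(i, simulate_hits(i), harmonic(i))
```

The simulated averages land close to the partial harmonic sums, which matches the formula above.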

2. I guess what I am asking is: why did they even do that in the first place?

3. Originally Posted by shilz222
Now model the algorithm as a Markov chain: suppose that when the algorithm is at the $j$th best extreme point, after the next pivot the resulting extreme point is equally likely to be any of the $j-1$ better ones. We want to find $E[T_i]$, the expected number of transitions needed to go from state $i$ to state $1$.

Then why does $E[T_i] = 1 + \frac{1}{i-1} \sum_{j=1}^{i-1} E[T_j]$ hold? This accounts only for the initial transition, right? They probably conditioned on some variable, but I am not seeing it.
If $i > 1$, then after one transition the chain is in one of the states $1, \ldots, i-1$, each with probability $\frac{1}{i-1}$; conditioning on which of these states the first transition lands in is exactly the conditioning you were not seeing.

So:

$
E(T_i) = 1 + \frac{1}{i-1} E(T_1) + \frac{1}{i-1} E(T_2) + \cdots + \frac{1}{i-1} E(T_{i-1}) = 1 + \frac{1}{i-1} \sum_{j=1}^{i-1} E[T_j]$

RonL
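For completeness, here is one way to get from that recursion to the closed form $E[T_i] = \sum_{j=1}^{i-1} \frac{1}{j}$ quoted earlier (a standard telescoping step, not spelled out in the thread). Multiply the recursion through by $i-1$, write the same identity with $i-1$ in place of $i$, and compare:

$(i-1)E[T_i] = (i-1) + \sum_{j=1}^{i-1} E[T_j], \qquad (i-2)E[T_{i-1}] = (i-2) + \sum_{j=1}^{i-2} E[T_j]$

Subtracting the second from the first gives $(i-1)E[T_i] - (i-2)E[T_{i-1}] = 1 + E[T_{i-1}]$, i.e. $E[T_i] = E[T_{i-1}] + \frac{1}{i-1}$. Since $E[T_1] = 0$, induction yields $E[T_i] = \sum_{j=1}^{i-1} \frac{1}{j}$.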