# Thread: Linear Programming / Markov Chain

1. ## Linear Programming / Markov Chain

We want to minimize $\displaystyle \bold{c} \bold{x}$ subject to $\displaystyle \bold{A} \bold{x} = \bold{b}, \ \bold{x} \geq \bold{0}$.

$\displaystyle \bold{A}$ is an $\displaystyle m \times n$ matrix of fixed constants, $\displaystyle \bold{c} = (c_1, \ldots, c_n)$ and $\displaystyle \bold{b} = (b_1, \ldots, b_m)$ are vectors of fixed constants, and $\displaystyle \bold{x} = (x_{1}, \ldots, x_n)$ is the $\displaystyle n$-vector of nonnegative values that is to be chosen to minimize $\displaystyle \bold{c} \bold{x} \equiv \sum_{i=1}^{n} c_{i} x_{i}$. A basic result of linear programming is that, if $\displaystyle n > m$, the optimal $\displaystyle \bold{x}$ can always be chosen to have at least $\displaystyle n-m$ components equal to $\displaystyle 0$; such points are the extreme points of the feasible region.
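To illustrate that claim with a toy instance (my own example, not from Ross): with $\displaystyle m = 2$ and $\displaystyle n = 3$ you can enumerate every choice of $\displaystyle m$ columns, solve the resulting square system, and keep the cheapest nonnegative solution. The optimum is found among these basic solutions, each of which has at least $\displaystyle n-m$ zero components.

```python
from itertools import combinations
from fractions import Fraction

# Toy instance (made up for illustration): m = 2 equations, n = 3 variables.
A = [[1, 1, 1],
     [1, 2, 0]]
b = [4, 3]
c = [1, 3, 1]
m, n = len(A), len(A[0])

best = None  # (cost, x) of the best basic feasible solution found
for cols in combinations(range(n), m):
    # 2x2 subsystem using only the chosen columns, solved by Cramer's rule.
    p, q = A[0][cols[0]], A[0][cols[1]]
    r, s = A[1][cols[0]], A[1][cols[1]]
    det = p * s - q * r
    if det == 0:
        continue  # chosen columns are linearly dependent
    x_a = Fraction(b[0] * s - q * b[1], det)
    x_b = Fraction(p * b[1] - r * b[0], det)
    if x_a < 0 or x_b < 0:
        continue  # basic solution violates x >= 0
    x = [Fraction(0)] * n          # the other n - m components are 0
    x[cols[0]], x[cols[1]] = x_a, x_b
    cost = sum(ci * xi for ci, xi in zip(c, x))
    if best is None or cost < best[0]:
        best = (cost, x)

print(best)  # the optimum sits at an extreme point with n - m = 1 zero component
```

The simplex algorithm does exactly this search, but instead of trying all $\displaystyle \binom{n}{m}$ column sets it pivots from one extreme point to a better neighboring one.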

In the associated Markov chain, suppose that the algorithm is at the $\displaystyle j$th best extreme point; then after the next pivot the resulting extreme point is equally likely to be any of the $\displaystyle j-1$ best. We want to find $\displaystyle E[T_i]$, the expected number of transitions needed to go from state $\displaystyle i$ to state $\displaystyle 1$.

Then why does $\displaystyle E[T_i] = 1 + \frac{1}{i-1} \sum_{j=1}^{i-1} E[T_j]$? The leading $\displaystyle 1$ only accounts for the initial transition, right? They probably conditioned on some variable, but I am not seeing it.

Ultimately $\displaystyle E[T_i] = \sum_{j=1}^{i-1} \frac{1}{j}$.
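As a quick sanity check of both formulas (my code, not from the book): compute $\displaystyle E[T_i]$ from the recurrence in exact arithmetic and compare with the partial harmonic sums.

```python
from fractions import Fraction

N = 10                      # check states 1..N (N is arbitrary)
E = {1: Fraction(0)}        # T_1 = 0: state 1 is already the goal
for i in range(2, N + 1):
    # the recurrence: one transition, then a uniformly chosen better state
    E[i] = 1 + Fraction(1, i - 1) * sum(E[j] for j in range(1, i))

# claimed closed form: E[T_i] = 1/1 + 1/2 + ... + 1/(i-1)
H = {i: sum(Fraction(1, j) for j in range(1, i)) for i in range(1, N + 1)}

print(E == H)  # True
```

For example $\displaystyle E[T_4] = 1 + \tfrac{1}{3}(0 + 1 + \tfrac{3}{2}) = \tfrac{11}{6} = 1 + \tfrac{1}{2} + \tfrac{1}{3}$.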

Source: An Introduction to Probability Models by Sheldon Ross

2. I guess what I am asking is: why did they even do that in the first place?

3. Originally Posted by shilz222
In the associated Markov chain, suppose that the algorithm is at the $\displaystyle j$th best extreme point; then after the next pivot the resulting extreme point is equally likely to be any of the $\displaystyle j-1$ best. We want to find $\displaystyle E[T_i]$, the expected number of transitions needed to go from state $\displaystyle i$ to state $\displaystyle 1$.

Then why does $\displaystyle E[T_i] = 1 + \frac{1}{i-1} \sum_{j=1}^{i-1} E[T_j]$? The leading $\displaystyle 1$ only accounts for the initial transition, right? They probably conditioned on some variable, but I am not seeing it.
If $\displaystyle i>1$, then after one transition the chain is in one of the states $\displaystyle 1, \ldots, i-1$, each with probability $\displaystyle 1/(i-1)$. Conditioning on which of these states the first transition lands in:

$\displaystyle E(T_i)= 1 + \frac{1}{i-1} E(T_{1})+ \frac{1}{i-1} E(T_{2})+\ldots+\frac{1}{i-1} E(T_{i-1})=1 + \frac{1}{i-1} \sum_{j=1}^{i-1} E[T_j]$
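To get the closed form quoted in the question from this recurrence, a standard telescoping step (not spelled out in the thread) works. Multiply through by $\displaystyle i-1$ and write the same identity at $\displaystyle i-1$:

$\displaystyle (i-1)E[T_i] = (i-1) + \sum_{j=1}^{i-1} E[T_j], \qquad (i-2)E[T_{i-1}] = (i-2) + \sum_{j=1}^{i-2} E[T_j]$

Subtracting gives $\displaystyle (i-1)E[T_i] - (i-2)E[T_{i-1}] = 1 + E[T_{i-1}]$, so $\displaystyle E[T_i] = E[T_{i-1}] + \frac{1}{i-1}$. Since $\displaystyle E[T_1] = 0$, iterating yields $\displaystyle E[T_i] = \sum_{j=1}^{i-1} \frac{1}{j}$.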

RonL