You have indeed. Since the transition probability from state 2 to state 1 is positive, if state 2 is visited infinitely often, the Markov chain will also visit state 1 infinitely often (if an event has positive probability and you proceed to infinitely many independent trials, it will occur infinitely often).

With this argument, you can see that as soon as two states communicate with each other (i.e. there is a path with positive probability from each one to the other), if one is recurrent, then so is the other. Thus you can gather the states into "recurrent classes", i.e. classes whose states all communicate with each other. This helps in studying recurrence/transience, since all states in the same class share the same status.
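The class decomposition can be computed directly from a transition matrix: $i$ and $j$ communicate exactly when each is reachable from the other with positive probability. A minimal sketch (the 4-state matrix below is a made-up example, not from the thread):

```python
# Hypothetical 4-state transition matrix (rows sum to 1):
# states 0 and 1 communicate; state 2 is absorbing; state 3 feeds into 2.
P = [
    [0.5, 0.5, 0.0, 0.0],
    [0.7, 0.3, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.6, 0.4],
]
n = len(P)

def reachable(start):
    """States reachable from `start` with positive probability (DFS)."""
    seen, stack = {start}, [start]
    while stack:
        i = stack.pop()
        for j in range(n):
            if P[i][j] > 0 and j not in seen:
                seen.add(j)
                stack.append(j)
    return seen

reach = [reachable(i) for i in range(n)]

# i and j communicate iff each lies in the other's reachable set.
classes = []
for i in range(n):
    cls = frozenset(j for j in range(n) if j in reach[i] and i in reach[j])
    if cls not in classes:
        classes.append(cls)

print(classes)
```

Here the communicating classes come out as {0, 1}, {2} and {3}; only {0, 1} and {2} are closed (hence recurrent), while state 3 is transient.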

Right. 1 is recurrent, and so are 2 and 3 (they communicate with each other). But the same chain with a 4th state that can only be accessed from state 1 and only accesses itself would make state 1 a transient state, right?

What about null recurrent states?

You won't be able to find null recurrent states in a chain with a finite number of states. Those are always positive recurrent (or transient). This is connected to the existence of a steady-state distribution.
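That connection can be made concrete: a finite irreducible chain always has a stationary distribution $\pi$, and by Kac's formula the mean return time to state $i$ is $1/\pi_i$, which is finite. A minimal sketch with a hypothetical 3-state chain, using plain power iteration:

```python
# Hypothetical irreducible, aperiodic 3-state chain (rows sum to 1).
P = [
    [0.0, 1.0, 0.0],
    [0.5, 0.0, 0.5],
    [0.3, 0.3, 0.4],
]

pi = [1/3, 1/3, 1/3]
for _ in range(10000):          # power iteration: pi <- pi P
    pi = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]

print("stationary distribution:", pi)

# Kac's formula: mean return time to state i is 1 / pi[i] -- finite,
# so every state of this finite irreducible chain is positive recurrent.
mean_return = [1 / p for p in pi]
print("mean return times:", mean_return)
```

For this matrix the iteration converges to $\pi = (9/31,\ 12/31,\ 10/31)$, so every mean return time is finite, as positive recurrence requires.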

The simplest example of a null recurrent Markov chain is the symmetric random walk on $\mathbb{Z}$: it is given at time $n$ by $S_n = X_1 + \dots + X_n$, where the $X_i$ are independent random variables with $P(X_i = 1) = P(X_i = -1) = \frac{1}{2}$. Here's an elementary proof of why it is null recurrent. First I prove that $E[T_{0 \to 1}] = \infty$, where $T_{0 \to 1}$ is the time to go from $0$ to $1$. Either the first step is to the right, and $T_{0 \to 1} = 1$; or it is to the left, and the random walk now needs to go from $-1$ to $0$ and then from $0$ to $1$. The time from $-1$ to $0$ has the same distribution as the time from $0$ to $1$, so we get:

$$E[T_{0 \to 1}] = \frac{1}{2} \cdot 1 + \frac{1}{2}\left(1 + E[T_{-1 \to 0}] + E[T_{0 \to 1}]\right) = 1 + E[T_{0 \to 1}].$$

We get a contradiction if $E[T_{0 \to 1}] < \infty$; hence $E[T_{0 \to 1}] = \infty$. Now we deduce that $E[T_{0 \to 0}] = \infty$ as well, where $T_{0 \to 0}$ is the return time to $0$. This is easy: after the first step, we are reduced to the previous problem (going from $-1$ to $0$, or symmetrically from $1$ to $0$), so that

$$E[T_{0 \to 0}] = 1 + E[T_{0 \to 1}] = \infty.$$

(A rigorous justification of the previous equalities would involve the Markov property.)
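Both halves of null recurrence can be seen numerically with a small Monte Carlo sketch (the cap and sample size below are arbitrary choices): almost every simulated walk returns to $0$, yet because $E[T_{0 \to 0}] = \infty$ the sample mean return time is large and keeps growing if you raise the cap instead of settling near a finite value.

```python
import random

random.seed(0)

def return_time(cap):
    """Steps until the symmetric walk on Z first returns to 0 (or None if capped)."""
    pos, t = 0, 0
    while True:
        pos += random.choice((-1, 1))
        t += 1
        if pos == 0:
            return t
        if t >= cap:
            return None  # did not return within the cap

# Recurrence: almost every walk returns to 0 within the cap ...
times = [return_time(cap=10**6) for _ in range(200)]
returned = [t for t in times if t is not None]
print(len(returned), "of 200 walks returned to 0")

# ... null recurrence: the sample mean is dominated by rare, very long
# excursions and grows without bound as the cap is increased.
print("sample mean return time:", sum(returned) / len(returned))
```

Note that every return time is even (the walk can only be at $0$ at even times), which is a handy sanity check on the simulation.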