Well, I would not say that is the "solution set". Rather, it is a basis for the "solution space": every vector in the solution space can be written as a linear combination of those.
Yes, one way to solve such a set of equations is to row reduce the matrix form.
We can put that in reduced row echelon form by adding the second row to the first:
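As an aside, this kind of row reduction is easy to check by machine. A minimal sketch using SymPy, with a made-up 2x4 homogeneous system standing in for the one in the question (the actual coefficients are not reproduced above):

```python
from sympy import Matrix

# A hypothetical 2x4 coefficient matrix for a homogeneous system Ax = 0.
# These entries are illustrative only, not the system from the question.
A = Matrix([[ 1, -1, 0, 1],
            [-1,  2, 1, 0]])

# rref() performs exactly the kind of row operations described above
# (e.g. adding one row to another) and returns the reduced row echelon
# form together with the indices of the pivot columns.
R, pivots = A.rref()
print(R)       # the reduced matrix
print(pivots)  # the pivot columns, here (0, 1)
```

For this particular matrix, rref adds row 1 to row 2 and then the new row 2 back to row 1, which is the same sequence of steps you would do by hand.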
Now, that is the same as the two equations and
(We could also have simply added the second equation to the first. "Row echelon form" simply mimics the usual steps for eliminating unknowns in a system of equations.)
First, you must recognize that such a system of equations has an infinite number of solutions, and there are an infinite number of ways of writing those solutions. We can solve each of those two equations for one unknown, then take the others arbitrarily.
is the same as . Any such solution vector is of the form
Those are the first two vectors in your set.
One way of reducing the other vector is to solve for, say,
is the same as, say, . Any such solution vector is of the form
Those vectors are not the same as the ones you give, but they form an equivalent solution.
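The parametric approach above (solve for the pivot unknowns, let the free unknowns be arbitrary) is exactly what a nullspace computation does. A sketch with SymPy; the matrix below is illustrative, not the system from the question:

```python
from sympy import Matrix, zeros

# Hypothetical homogeneous system Ax = 0 (illustrative coefficients only).
A = Matrix([[ 1, -1, 0, 1],
            [-1,  2, 1, 0]])

# nullspace() solves for the pivot variables in terms of the free ones
# and returns one basis vector per free variable, i.e. a basis for the
# solution space.
basis = A.nullspace()
for v in basis:
    # every basis vector really does solve the system
    assert A * v == zeros(2, 1)
print(basis)
```

With two equations in four unknowns (and two pivots), there are two free variables, so the basis has two vectors, matching the hand calculation.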
What they appear to have done is to look at pairs of indices: to argue that if then we must have so . That gives .
Then, if then we must have so . That gives .
If we must have so . That gives
Finally, if we must have so . That gives .
But, again, there are many different ways to write a basis for any vector space and so many different ways to write a basis for the solution space of a set of equations. This happens to be one of them and, as I indicated earlier, not the way I personally would have arrived at a basis.
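One way to confirm that two different-looking bases describe the same solution space is to stack all the vectors as rows and check that the rank does not grow. A sketch, again with made-up vectors rather than the ones from the question:

```python
from sympy import Matrix

# Two hypothetical bases for the same 2-dimensional solution space.
# basis2's second vector is v2 - v1, so it spans the same space.
v1 = [-1, -1, 1, 0]
v2 = [-2, -1, 0, 1]
w  = [-1,  0, -1, 1]   # = v2 - v1

basis1 = Matrix([v1, v2])
basis2 = Matrix([v1, w])

# Each basis has rank 2, and stacking all four rows still gives rank 2,
# so neither basis reaches outside the other's span: they generate the
# same subspace.
assert basis1.rank() == 2
assert basis2.rank() == 2
assert Matrix([v1, v2, v1, w]).rank() == 2
print("same solution space")
```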