it turns out that a SYSTEMATIC exploration of systems of linear equations is related to "a new kind of arithmetic", the arithmetic of matrices.

in trying to understand the "how and why" of matrices, one is led naturally to a more flexible concept: that of a vector space. in the language of (finite-dimensional) vector spaces, matrices are the "numerical form" of a linear transformation (just as "coordinate-vectors" are the numerical form of abstract vectors).
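to make this concrete, here is a tiny sketch in python (using numpy; the rotation example is mine, not part of the original answer). the matrix of a transformation has, as its columns, the transformation applied to the basis vectors — so applying the matrix to a coordinate-vector agrees with applying the transformation directly:

```python
import numpy as np

# an "abstract" linear transformation: rotation of the plane by 90 degrees
def T(v):
    x, y = v
    return np.array([-y, x])

# its "numerical form": the matrix whose columns are T of the basis vectors
A = np.column_stack([T(np.array([1.0, 0.0])), T(np.array([0.0, 1.0]))])

v = np.array([3.0, 2.0])
# multiplying by the matrix agrees with applying the transformation itself
assert np.allclose(A @ v, T(v))
print(A)
```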

so questions about the number and kind of solutions lead to questions about a matrix (its rank (and the associated column space), its nullity (and the associated null space), its invertibility), which then lead to the same kinds of questions about a linear transformation (is it surjective? is it injective? is it bijective?).

it turns out that "linear-ness" is the common thread that holds all these ideas together. not all systems of equations ARE linear, but about those that are, we can say a great many things with a great economy of effort.

for example, the process of putting a matrix in reduced row echelon form (rref) is a systematic way of capturing the "high-school" process of elimination and substitution. just by looking at a matrix in rref, we can say a lot about the solutions to the system of linear equations it models. for example, we can tell at once if there are:

1. any solutions

2. a unique solution

3. many solutions (and how many pieces of information we have to supply to obtain a particular solution)
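to see all three cases at once, here is a small sketch using sympy's `rref` (assuming sympy is installed; the three toy systems are my own examples). the verdict is read straight off the pivot columns: a pivot in the constants column means an inconsistent row, a pivot in every variable column means a unique solution, and anything left over is a free variable we must supply:

```python
import sympy as sp

# three tiny systems, written as augmented matrices [A | b]
systems = {
    "none":   sp.Matrix([[1, 1, 2], [1, 1, 3]]),   # x + y = 2 and x + y = 3
    "unique": sp.Matrix([[1, 1, 2], [1, -1, 0]]),  # x + y = 2 and x - y = 0
    "many":   sp.Matrix([[1, 1, 2], [2, 2, 4]]),   # second equation is twice the first
}

verdicts = {}
for name, aug in systems.items():
    R, pivots = aug.rref()
    b_col = aug.cols - 1                  # index of the constants column
    if b_col in pivots:                   # some row reads 0 = 1
        verdicts[name] = "inconsistent"
    elif len(pivots) == b_col:            # a pivot in every variable column
        verdicts[name] = "unique"
    else:                                 # leftover columns are free variables
        verdicts[name] = f"{b_col - len(pivots)} free variable(s)"
    print(name, "->", verdicts[name])
```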

it turns out that understanding vector spaces (in general) explains why this works, and not only that, but allows us to "simplify" the study of vector spaces by considering, in depth, special subsets called bases. a basis is like "the reader's digest version" of a vector space: it allows us, in many cases, to express an infinite set by studying a finite subset. for example, although there are infinitely many points in the euclidean plane, we can understand all of them by focusing on "the x-axis and the y-axis", which are just 2 things (more specifically: the unit x-vector (1,0), and the unit y-vector (0,1)).
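a quick numerical sketch of that idea (the second, non-standard basis is a hypothetical example of mine, not from the answer): any point in the plane is a combination of just two basis vectors, and finding coordinates relative to another basis is just solving a small linear system:

```python
import numpy as np

# any point in the plane is a combination of the two standard basis vectors
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
p = np.array([3.0, -5.0])
assert np.allclose(p, p[0] * e1 + p[1] * e2)

# the same works for any other basis, e.g. this one:
b1, b2 = np.array([1.0, 1.0]), np.array([1.0, -1.0])
B = np.column_stack([b1, b2])
coords = np.linalg.solve(B, p)    # coordinates of p relative to b1, b2
assert np.allclose(p, coords[0] * b1 + coords[1] * b2)
print(coords)
```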

as one gets deeper into the study of linear algebra, it turns out that several "new kinds of names" for different kinds of matrices (or linear transformations) and/or vectors become useful: so one learns about things like determinants, the trace, the transpose, orthogonal matrices, eigenvectors, eigenvalues, and so on. there turn out to be unexpected relationships between matrices and polynomials, and things we know about one can be used to find out things about the other.
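one such matrix-polynomial relationship is the characteristic polynomial; a small sketch with numpy (the 2x2 matrix is my own example), which also checks the cayley-hamilton theorem — a matrix satisfies its own characteristic polynomial:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

print(np.trace(A), np.linalg.det(A))  # trace and determinant
print(np.linalg.eigvals(A))           # the eigenvalues (here: 1 and 3, in some order)

# the characteristic polynomial of A, as a list of coefficients:
# here [1, -4, 3], i.e. lambda^2 - 4*lambda + 3
coeffs = np.poly(A)

# cayley-hamilton: plugging A into its own characteristic polynomial gives 0
I = np.eye(2)
assert np.allclose(coeffs[0] * (A @ A) + coeffs[1] * A + coeffs[2] * I, 0)
```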

this is not a very comprehensive answer, and it's not very "rigorous", but then again, your question is kind of vague, too.