The "theorem", as you state it, is NOT true. It is not the matrix that must be non-zero, but its **determinant**. If, for example, \(\displaystyle f_1(x)= f_2(x)= f_3(x)= \frac{1}{\sqrt{b- a}}\), then the functions are obviously not independent, but \(\displaystyle \begin{bmatrix}c_{11} & c_{12} & c_{13} \\ c_{12} & c_{22} & c_{23} \\ c_{13} & c_{23} & c_{33}\end{bmatrix}= \begin{bmatrix}1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1\end{bmatrix}\), which is non-zero. Its determinant is 0, of course.
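A quick numerical check of this counterexample (a sketch, assuming \(a= 0\), \(b= 1\), since the interval is left general above):

```python
import numpy as np
from scipy.integrate import quad

a, b = 0.0, 1.0  # assumed interval; the argument works for any a < b
fs = [lambda x: 1 / np.sqrt(b - a)] * 3  # f1 = f2 = f3, clearly dependent

# Gram matrix: c_ij = integral of f_i(x) * f_j(x) over [a, b]
C = np.array([[quad(lambda x: fi(x) * fj(x), a, b)[0] for fj in fs]
              for fi in fs])
print(C)                  # every entry is 1, so the matrix is non-zero
print(np.linalg.det(C))   # but its determinant is 0
```

The matrix itself is as non-zero as can be, yet the determinant vanishes, which is exactly the distinction the "theorem" misses.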

Suppose \(\displaystyle pf_1(x)+ qf_2(x)+ rf_3(x)= 0\) for all x. Multiply that equation by \(\displaystyle f_1(x)\) and integrate from a to b. That gives \(\displaystyle c_{11}p+ c_{12}q+ c_{13}r= 0\). Similarly, multiplying by \(\displaystyle f_2(x)\) and integrating from a to b gives \(\displaystyle c_{12}p+ c_{22}q+ c_{23}r= 0\), and multiplying by \(\displaystyle f_3(x)\) and integrating from a to b gives \(\displaystyle c_{13}p+ c_{23}q+ c_{33}r= 0\).

That system of equations is equivalent to the matrix equation \(\displaystyle \begin{bmatrix}c_{11} & c_{12} & c_{13} \\ c_{12} & c_{22} & c_{23} \\ c_{13} & c_{23} & c_{33}\end{bmatrix}\begin{bmatrix}p \\ q\\ r\end{bmatrix}= \begin{bmatrix} 0 \\ 0 \\ 0\end{bmatrix}\)

That matrix equation has the unique solution \(\displaystyle p= q= r= 0\) (and so \(\displaystyle f_1(x)\), \(\displaystyle f_2(x)\), and \(\displaystyle f_3(x)\) are independent) if and only if the matrix of coefficients is invertible, which is true if and only if its determinant is non-zero.
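To see the criterion succeed, here is the same computation for functions that *are* independent (a sketch with the assumed choices \(f_1(x)= 1\), \(f_2(x)= x\), \(f_3(x)= x^2\) on \([0, 1]\), which are not from the post above):

```python
import numpy as np
from scipy.integrate import quad

a, b = 0.0, 1.0  # assumed interval
fs = [lambda x: 1.0, lambda x: x, lambda x: x**2]  # independent monomials

# Gram matrix c_ij = integral of f_i(x) * f_j(x) over [a, b];
# for these functions it is the 3x3 Hilbert matrix [[1, 1/2, 1/3], ...]
C = np.array([[quad(lambda x, fi=fi, fj=fj: fi(x) * fj(x), a, b)[0]
               for fj in fs] for fi in fs])

# det(C) = 1/2160, which is non-zero, so p = q = r = 0 is the only
# solution and the three functions are independent
print(np.linalg.det(C))
```

A non-zero (if small) determinant, so the Gram-matrix test correctly reports independence here.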