Suppose we wish to compute

$\displaystyle \lim_{x \rightarrow 1} \frac{x^2 + x + 1}{x + 3} \, ,$

where $\displaystyle \small f(x) = \frac{x^2 + x + 1}{x + 3}$. Since limits only care about what happens as we approach $\displaystyle x = 1$, not about the value at $\displaystyle x = 1$ itself, why do we then compute the limit by simply plugging $\displaystyle x = 1$ into $\displaystyle f(x)$?
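
For concreteness, plugging in gives

$\displaystyle f(1) = \frac{1^2 + 1 + 1}{1 + 3} = \frac{3}{4} \, ,$

so the procedure claims the limit equals $\displaystyle \frac{3}{4}$.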

What if we were to discover that $\displaystyle f(1) = 1337$, but that $\displaystyle f(x)$ heads towards negative infinity as $\displaystyle x$ approaches $\displaystyle 1$ from below (say at $\displaystyle x = 0.99999$) and towards $\displaystyle \pi$ as $\displaystyle x$ approaches from above (say at $\displaystyle x = 1.0000001$), or something random like that? How can we be sure this doesn't happen (without actually checking values close to $\displaystyle x = 1$)?

What's the justification for equating $\displaystyle \lim_{x \rightarrow 1} f(x)$ with $\displaystyle f(1)$?
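
Stated precisely, the equation in question is

$\displaystyle \lim_{x \rightarrow 1} f(x) = f(1) \, ,$

which, if I understand the definitions, is exactly the statement that $\displaystyle f$ is continuous at $\displaystyle x = 1$. So I suppose I'm really asking why this $\displaystyle f$ is continuous there.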