A completeness theorem is a statement that some theory or logic is complete. There are two kinds of completeness: syntactic and semantic. A theory T (i.e., a set of formulas) is syntactically complete (with respect to some logic, i.e., a set of axioms and inference rules) if T is consistent, i.e., does not derive a contradiction, but becomes inconsistent as soon as any formula not already derivable from T is added to it; equivalently, for every sentence P, either P or its negation is derivable from T. Note that this definition does not refer to models, interpretations, or truth; it refers only to derivability. A logic L, on the other hand, is semantically complete if every valid formula P (i.e., one true in every interpretation) is derivable in L. There is also strong semantic completeness for a logic L: if a theory T logically implies P (i.e., P is true in every interpretation in which all formulas of T are true), then P is derivable from T in L. Weak completeness is the special case where T is empty. The converse of semantic completeness is soundness: everything derivable is valid.
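The definitions above can be summarized formally; this is a sketch using standard notation (⊢ for derivability, ⊨ for validity/semantic consequence), not tied to any particular textbook's conventions:

```latex
% Syntactic completeness of a consistent theory T:
% every sentence P is decided by T.
\forall P:\quad T \vdash P \ \text{ or } \ T \vdash \lnot P

% (Weak) semantic completeness of a logic L:
% every valid formula is derivable.
\models P \ \Longrightarrow\ \vdash_L P

% Strong semantic completeness of L:
% semantic consequence implies derivability.
T \models P \ \Longrightarrow\ T \vdash_L P

% Soundness, the converse implication:
T \vdash_L P \ \Longrightarrow\ T \models P
```

Taking T to be empty in the strong form recovers the weak form, since a formula implied by the empty theory is exactly a valid formula.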
Thus, there is Gödel's completeness theorem, which says that classical first-order logic is semantically complete; it has nothing to do with arithmetic specifically. Gödel's first incompleteness theorem, on the other hand, says that arithmetic (which is a theory, e.g. Peano arithmetic) and every consistent, recursively axiomatizable theory containing it are syntactically incomplete. The fact that there exists a formula that is true in the standard model of the natural numbers but not derivable in arithmetic is an easy corollary: if neither A nor ~A is derivable, then, since one of the two must be true in the standard model, that one is a true but underivable formula. Gödel's second incompleteness theorem supplies a specific example of a formula such that neither it nor its negation is derivable in arithmetic (assuming arithmetic is sound): the formula expressing the consistency of arithmetic.
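The two incompleteness theorems can be stated schematically as follows; this is a sketch under the usual hypotheses (T consistent, recursively axiomatizable, and containing enough arithmetic, e.g. Peano arithmetic), with Con(T) denoting the arithmetized consistency statement:

```latex
% First incompleteness theorem:
% some sentence G_T is undecided by T.
\exists G_T:\quad T \nvdash G_T \ \text{ and } \ T \nvdash \lnot G_T

% Second incompleteness theorem:
% a concrete witness is the consistency statement itself.
T \nvdash \mathrm{Con}(T)

% If T is moreover sound (all its theorems hold in the
% standard model N), then the negation is underivable too,
% since \lnot\mathrm{Con}(T) is false in N:
\mathbb{N} \models \mathrm{Con}(T) \ \Longrightarrow\ T \nvdash \lnot \mathrm{Con}(T)
```

Note that the second theorem by itself only rules out deriving Con(T); ruling out the derivability of ~Con(T) uses the additional soundness assumption.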