Inductive proofs are used in situations where only integer (or possibly rational) values make sense...
it depends on what is being proved.
for some statements, the claim only makes SENSE if n is an integer (such as when we talk about divisibility).
SOME inductive proofs can be extended to the rational numbers, by factoring the numerator and denominator into prime numbers, and cancelling all the common prime factors. the heart of most such proofs is:
two rational numbers a/b and c/d are equal if and only if ad = bc, which is a statement about integers.
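the cross-multiplication test is easy to sketch in a few lines of python (the function name is mine; Fraction is used only as an independent cross-check):

```python
from fractions import Fraction

def rationals_equal(a, b, c, d):
    """Test whether a/b == c/d using only integer arithmetic:
    cross-multiplying reduces the question to ad == bc."""
    if b == 0 or d == 0:
        raise ValueError("denominators must be nonzero")
    return a * d == b * c

# 2/4 and 3/6 both reduce to 1/2, so they are equal:
print(rationals_equal(2, 4, 3, 6))   # True
# agrees with python's exact rational type:
print(rationals_equal(2, 4, 3, 6) == (Fraction(2, 4) == Fraction(3, 6)))  # True
```

notice the function never divides: the whole question about fractions is settled with integer multiplication alone, which is exactly why induction on integers can reach statements about rationals.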
but to give a "non-integer" example:
the rule: if f(x) = x^n, then f'(x) = nx^(n-1) for x ≥ 0,
holds true when n is any real number, but clearly one cannot use induction here. the problem is: what does x^n even mean, when n is irrational?
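although induction can't reach irrational exponents, we can at least check the rule numerically (a sketch; the helper name is mine, and the tolerance is a rough choice):

```python
import math

def numerical_derivative(f, x, h=1e-6):
    # symmetric difference quotient approximation to f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

# spot-check the power rule f(x) = x^n, f'(x) = n*x^(n-1)
# for an irrational exponent, where induction cannot reach:
n = math.pi
f = lambda x: x ** n
for x in [0.5, 1.0, 2.0, 3.0]:
    assert abs(numerical_derivative(f, x) - n * x ** (n - 1)) < 1e-4
print("power rule holds numerically for n = pi")
```

of course, a numerical check is not a proof; it just shows the statement is plausible for real n, which is a different thing entirely.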
if we think of this as a function of n, then for each fixed value x = a > 0, we seek a continuous (and even more, differentiable) function g so that:
g(0) = 1, g(1) = a, and g(n) = a^n for all rational n, and which satisfies the rule:
g(m+n) = g(m)g(n). this, it turns out, is a delicate matter, which involves things like logarithms and exponential functions.
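the standard resolution is to DEFINE a^x as exp(x·ln a), and then the functional equation above is inherited from exp. a quick numerical sketch of this (the function name g is taken from the text):

```python
import math

def g(a, x):
    """For a fixed base a > 0, define a^x for arbitrary real x via the
    exponential and logarithm: a^x := exp(x * ln a). This is the usual
    way to extend x -> a^x from the rationals to all the reals."""
    return math.exp(x * math.log(a))

a = 3.0
# g agrees with repeated multiplication at integer exponents:
print(abs(g(a, 4) - a ** 4) < 1e-9)                  # True
# and it satisfies the rule g(m+n) = g(m)g(n), even for irrational n:
m, n = 1.7, math.sqrt(2)
print(abs(g(a, m + n) - g(a, m) * g(a, n)) < 1e-9)   # True
```

the subtlety hiding in those two lines is the construction of exp and ln themselves, which is where the real analysis lives.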
(oddly enough, logarithms, which are "exponentials backwards" (inverse functions of exponential functions), came FIRST: that is, functions which turn products into sums, f(xy) = f(x) + f(y),
were more convenient for sailors who had to do math in order to navigate accurately, since sums are easier to estimate than products are).
the point is: even when a statement makes sense on a larger set than the integers, and even when the statement is TRUE for a larger set than the integers, the method of proof may be quite a different story, and the inductive proof may give no clue as to how to prove the more "general case".
you can turn this around, and ask: for how big an enlargement of the integers does a statement which is true for integers remain true? to give an example:
the statement (1+x)^n ≥ 1 + nx is true for all real numbers x ≥ -1 and all non-negative integers n. it turns out it is ALSO true for all real numbers x ≥ -1 and all real numbers n ≥ 1 or n ≤ 0. but the proof of this (in the case that n is real) uses VERY different methods. note that the real version has a "gap" (the exponents 0 < n < 1, where the inequality reverses), whereas the integer version does not.
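this is bernoulli's inequality, and while a random search proves nothing, it is easy to spot-check both regimes (a sketch; the ranges and tolerance are my choices):

```python
import math
import random

random.seed(0)
# spot-check bernoulli's inequality (1+x)^n >= 1 + n*x for x >= -1,
# in the two real-exponent regimes where it holds:
for _ in range(10000):
    x = random.uniform(-0.99, 5.0)
    n = random.choice([random.uniform(1.0, 10.0),    # n >= 1
                       random.uniform(-10.0, 0.0)])  # n <= 0
    assert (1 + x) ** n >= 1 + n * x - 1e-9
# and in the "gap" 0 < n < 1, the inequality reverses:
assert (1 + 1.0) ** 0.5 <= 1 + 0.5 * 1.0
print("all checks passed")
```

the integer case falls to a two-line induction; the real case needs calculus (convexity of (1+x)^n in x), which is exactly the "VERY different methods" point.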
the process by which we get an "enlargement" of the integers, or rational numbers, may be an involved one...the actual construction of the real numbers involves concepts that "seem to come out of nowhere". the real numbers, for example, have a finely-patterned structure that is more "liquid" than the "sand" of the rationals. often, to "complete" a proof that can be proved by induction on the integers (or naturals, or rationals, or whatever), methods that take advantage of this fine structure HAVE to be used. and there is no guarantee that even the best methods we have available will yield results.
for example, quite a bit is known about the numbers a^b, when a and b are rational. comparatively little is known about the irrational counterparts of these numbers. for example, it is currently unknown if π^e is rational or not (although it "probably" is irrational). there's "so many" real numbers that, in point of fact, it's a major triumph when we're able to prove anything at all about them! the sad fact is: we know relatively little about mathematics in general (although it may SEEM like a lot, to you). most of what we DO know is "special cases" (because these are simple enough for us to PROVE). most of the structures which we can conceive (and there's probably many more we just haven't thought of, yet) get extremely complicated, even with just a few building blocks.
here is an example:
suppose i have a 2-letter alphabet: A and B. i can make "words" in this alphabet by just stringing letters together:
A, B, AB, BA, ABB, BABA,
and so on.
now suppose i have an "anti-A" and an "anti-B" (if you like, you can think of these as -A, and -B, but i will write A* and B*), which i add to the alphabet. to our one rule of combination, string+string = longer string, i add one rule of cancellation:
whenever A, and A* or B and B* are adjacent, i erase them both, and "close the gap":
ABBA*BB*A = ABBA*A (i eliminated the pair BB* near the end)
= ABB (i eliminated the pair A*A at the end).
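the cancellation rule above can be carried out mechanically with a stack (a sketch; the function name is mine):

```python
def reduce_word(word):
    """Cancel adjacent inverse pairs (A next to A*, B next to B*) until
    none remain.  letters are 'A', 'B', 'A*', 'B*'; a word is a list of
    letters.  one left-to-right pass with a stack suffices, because each
    cancellation can only expose a new pair at the top of the stack."""
    inverse = {'A': 'A*', 'A*': 'A', 'B': 'B*', 'B*': 'B'}
    stack = []
    for letter in word:
        if stack and stack[-1] == inverse[letter]:
            stack.pop()      # adjacent inverse pair: erase both, close the gap
        else:
            stack.append(letter)
    return stack

# the worked example from the text: ABBA*BB*A reduces to ABB
print(reduce_word(['A', 'B', 'B', 'A*', 'B', 'B*', 'A']))  # ['A', 'B', 'B']
```

note the code may cancel the pairs in a different order than the text does, but it lands on the same fully-cancelled word.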
it turns out that this is already a very unwieldy system that contains "sub-systems" of greater complexity than it itself possesses! it's sort of like a "master code" that contains all other coding systems "inside" it (if you think of all the different possible combinations of A, B, A* and B*, some of which might be VERY long, you can perhaps glimpse why this might be so). but its building blocks are very simple: we have just 4 things we're combining. it turns out that saying anything at all about this mathematical system is VERY difficult. there's even a copy of the integers inside it:
[ ] (blank) <--> 0
A <--> 1
AA <--> 2
A* <--> -1
A*A* <--> -2
and so on.
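this "copy of the integers" is easy to demonstrate: under the correspondence above, sticking two words together and then cancelling acts exactly like addition (a sketch; the function names are mine):

```python
def encode(n):
    """Represent the integer n as a word: n copies of 'A' if n > 0,
    |n| copies of 'A*' if n < 0, and the empty word for 0."""
    return ['A'] * n if n >= 0 else ['A*'] * (-n)

def decode(word):
    """Read a reduced all-A word back as an integer."""
    return sum(1 if letter == 'A' else -1 for letter in word)

def reduce_word(word):
    # cancel adjacent inverse pairs with a stack (the rule from the text)
    inverse = {'A': 'A*', 'A*': 'A', 'B': 'B*', 'B*': 'B'}
    stack = []
    for letter in word:
        if stack and stack[-1] == inverse[letter]:
            stack.pop()
        else:
            stack.append(letter)
    return stack

# concatenation followed by cancellation behaves exactly like addition:
print(decode(reduce_word(encode(3) + encode(-5))))  # -2
```

so the words built only from A and A* really do behave like the integers; the difficulty in the full system comes from the way the A's and B's interleave.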
well, extending facts about integers to this "big system" (with just another thing like the integers "mixed in") is HARD. some of the things about integers just aren't true in this larger system, and some of the things are, and it's often impossible to tell without a LOT of effort.
now imagine a similar system, with an INFINITE number of "building blocks". it simply boggles the mind: how can we separate the "useful stuff" from the "random stuff"?
i bring this up just to point out that things are not as cut-and-dried as you might imagine. even "small" structures can quickly "get out of hand" because the ways of combining things can grow VERY fast. we often ignore the "general case" and focus on the "special case" just because that's all we CAN do, with our present limited understanding. very smart people are working on very hard problems for decades at a time, and success is NOT guaranteed.