When I first started calculus, my teacher taught me to find the derivative of simple expressions using the power rule. It is simple enough to work with, but I wanted some sort of proof or reasoning for how it really works. He showed me this:

Say you have the function $y = x^2$.

First you will write this expression: $dy + y = (x + dx)^2$.

Then: $dy = (x + dx)^2 - x^2$.

Then: $dy = x^2 + (dx)^2 + 2x(dx) - x^2$.

Then you divide both sides by $dx$ to obtain: $dy/dx = 2x + dx$.
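
I later saw what I think is the same computation written with limit notation, which I'm including in case it is the standard way to phrase the step he was describing:

$$\frac{dy}{dx} = \lim_{dx \to 0} \frac{(x + dx)^2 - x^2}{dx} = \lim_{dx \to 0} \,(2x + dx) = 2x$$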

Then he said something about limits: about how $dx$ "tends" to zero (what does that mean?), which makes the expression $2x + dx$ become $2x + 0 = 2x$. Now I don't get how you can make one $dx$ zero while the $dx$ under the $dy$ remains unchanged. What exactly are we doing in this final step? Why and how does $dx$ become zero, and what are all these limits? I have tried and failed to understand what this business about limits in calculus is, indeed what they even are. And are these the same limits that you put on definite integrals?
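
To get a feel for what "tends to zero" might mean, I tried a small numerical experiment on my own (everything here, including the name `diff_quotient`, is just my own sketch, not anything my teacher showed me):

```python
# Numerical sketch: compute the difference quotient
# (f(x + dx) - f(x)) / dx for f(x) = x^2 at x = 3,
# letting dx shrink toward 0.
def diff_quotient(f, x, dx):
    return (f(x + dx) - f(x)) / dx

f = lambda x: x ** 2
x = 3.0
for dx in (1.0, 0.1, 0.01, 0.001, 0.0001):
    print(f"dx = {dx:<8} quotient = {diff_quotient(f, x, dx):.6f}")

# The printed values approach 2 * x = 6, which seems to be what
# "2x + dx becomes 2x as dx tends to 0" is saying.
```

The quotients do get closer and closer to $6 = 2x$, but I still don't see why we are allowed to treat $dx$ as zero in one place and nonzero in another.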

In the end, I understood that this works, but it still remains unanswered what you are really doing when you bring the power of $x$ down as a coefficient and subtract 1 from the power. Or is the power rule merely a fluke that happens to give the correct answer?
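
From the pattern above, my guess (assuming the binomial theorem, which we covered in algebra) is that the same trick works for a general power $n$:

$$(x + dx)^n = x^n + n x^{n-1}\,dx + \binom{n}{2} x^{n-2} (dx)^2 + \cdots$$

so that

$$\frac{(x + dx)^n - x^n}{dx} = n x^{n-1} + \binom{n}{2} x^{n-2}\,dx + \cdots \;\to\; n x^{n-1} \text{ as } dx \to 0,$$

but I can't tell whether this counts as a real justification or is just the same sleight of hand with $dx$ again.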

Please pay attention to each and every question, even if it seems merely rhetorical.