What is the basic principle of numerical differentiation?
Hey Suvadip.
What kind of derivative do you want to calculate? Do you want, for example, to take some signal, find a smooth function that approximates it, and take the derivative of that function?
Can you give an example of what you want to do?
I can't imagine what kind of answer was expected to that! My first thought was that you cannot just "do the obvious" and convert the limit of $\displaystyle \frac{f(a+h)- f(a)}{h}$ into the fraction itself with small $h$: both numerator and denominator become so small that round-off error will be too large. What most numerical algorithms do instead is approximate $f(x)$ by some specific kind of function, such as a polynomial or exponential, and take the derivative of that function. But I don't think I would call that the "basic principle".
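To illustrate the round-off problem, here is a small sketch (the function and step sizes are my own choices, just for demonstration): as $h$ shrinks, the truncation error of the forward difference falls, but round-off error from subtracting two nearly equal numbers grows, so the total error is smallest at some intermediate $h$.

```python
import math

def forward_diff(f, a, h):
    """Forward difference quotient (f(a+h) - f(a)) / h."""
    return (f(a + h) - f(a)) / h

# Differentiate sin at a = 1.0; the exact derivative is cos(1.0).
a = 1.0
exact = math.cos(a)

# Error for progressively smaller h: it first decreases
# (truncation error shrinks), then increases again once
# round-off in the subtraction dominates.
errors = {h: abs(forward_diff(math.sin, a, h) - exact)
          for h in (1e-1, 1e-4, 1e-8, 1e-12)}
```

Printing `errors` shows the error dropping from `h = 1e-1` down to around `h = 1e-8`, then growing again at `h = 1e-12`, which is exactly the behavior described above.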
Hi,
I would disagree a little with the previous response. Sometimes you can get away with using the difference quotient with a carefully chosen h. A short discussion is found in Numerical Recipes by Press et al. I don't have the 3rd edition, but the 2nd edition discussion starts on page 186.
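The flavor of that discussion can be sketched as follows (this is my own paraphrase, not a quotation of the book's code): for a forward difference, a step size of roughly $\sqrt{\varepsilon_{\text{mach}}}$ times a characteristic scale of $x$ balances truncation against round-off, and recomputing $h$ from a stored sum makes the step exactly representable in floating point.

```python
import math

def deriv(f, x, eps=2.220446049250313e-16):
    """Forward-difference derivative with a step size chosen to
    balance truncation and round-off error: roughly sqrt(machine
    epsilon) times a characteristic scale of x."""
    h = math.sqrt(eps) * max(abs(x), 1.0)
    # Make h exactly representable: compute x + h, then recover
    # the step actually taken, so the division by h is exact.
    temp = x + h
    h = temp - x
    return (f(x + h) - f(x)) / h
```

With this choice of h, `deriv(math.sin, 1.0)` agrees with `math.cos(1.0)` to roughly eight significant digits, which is about the best a forward difference can do in double precision.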