There are two extreme approaches to function fitting: at one extreme you assume absolutely nothing about the underlying model; at the other you assume a specific model for the data and fit the data to that model.
The first extreme is what is known as interpolation. If you want a model that you can actually decipher to extract patterns, this method is pretty much useless, but you can do it if you want to.
The second extreme is based on projections: you start with a known form for the model (an exponential function, a Normal distribution, a mixture of Normals, a quadratic, whatever), construct an orthonormal basis for that model family, and project your data onto it to get the best approximation of your signal within that model.
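As a minimal sketch of that projection idea, assuming NumPy and made-up noisy quadratic data (all names and values here are illustrative, not from the text): build a design matrix for the model, orthonormalize its columns with QR, and project the data onto that basis.

```python
import numpy as np

# Illustrative data: a noisy quadratic y = 2x^2 - x + 1.
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 50)
y = 2 * x**2 - x + 1 + rng.normal(scale=0.1, size=x.size)

# Vandermonde (design) matrix for a quadratic model: columns 1, x, x^2.
V = np.vander(x, 3, increasing=True)

# QR factorization gives an orthonormal basis Q for the model's column space.
Q, R = np.linalg.qr(V)

# Projecting y onto that basis yields the best least-squares approximation
# of the signal within the quadratic model family.
y_hat = Q @ (Q.T @ y)

# The same fit expressed as coefficients in the original basis;
# these should land near the true values [1, -1, 2].
coeffs = np.linalg.solve(R, Q.T @ y)
print(coeffs)
```

The projection `Q @ (Q.T @ y)` and the coefficient solve are two views of the same least-squares fit; in practice you would usually just call `np.linalg.lstsq(V, y)`.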
If you are trying to fit probability data, you would use what is called the EM algorithm (or a similar one), where you start off with a distribution that you assume is true and then use your data to estimate the best set of parameters that distribution could take given the data.
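A hedged sketch of that EM loop for a two-component 1D Gaussian mixture (the data, initial guesses, and variable names are all made up for illustration; real code would typically use something like scikit-learn's `GaussianMixture`):

```python
import numpy as np

# Illustrative data: two Gaussian clusters centered at 0 and 5.
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0, 1, 300), rng.normal(5, 1, 300)])

# Assumed model: mixture of two Gaussians. Initial parameter guesses.
w = np.array([0.5, 0.5])        # mixture weights
mu = np.array([-1.0, 1.0])      # component means
sigma = np.array([1.0, 1.0])    # component standard deviations

def gauss_pdf(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

for _ in range(100):
    # E-step: responsibility of each component for each data point.
    resp = w[:, None] * gauss_pdf(data[None, :], mu[:, None], sigma[:, None])
    resp /= resp.sum(axis=0)
    # M-step: re-estimate parameters from the responsibilities.
    n_k = resp.sum(axis=1)
    w = n_k / data.size
    mu = (resp @ data) / n_k
    sigma = np.sqrt((resp * (data[None, :] - mu[:, None]) ** 2).sum(axis=1) / n_k)

print(np.sort(mu))  # typically converges near the true means 0 and 5
```

Each iteration alternates between asking "given the current parameters, which component probably generated each point?" and "given those assignments, what parameters fit best?" until the estimates stabilize.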
So basically you have to decide where you want to sit between these extremes: you can either assume quite a lot about the underlying model and fit the whole thing to that model, or you can assume absolutely nothing and get something super complex that fits all the data exactly while telling you absolutely nothing about the underlying function.
There are middle grounds, and it depends on how general your model is and how much variation it allows: for example, a simple quadratic f(x) = ax^2 + bx + c has a lot less variation than, say, a quartic f(x) = ax^4 + bx^3 + cx^2 + dx + e. Interpolation is the extreme that fits a polynomial of degree one less than the number of points, so if you have 1000 points you get a degree-999 polynomial, and you can probably begin to see how useless that is.
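The contrast between the two extremes can be sketched directly (an illustrative example with made-up data, assuming NumPy): interpolate 15 points exactly with a degree-14 polynomial, and compare against a simple quadratic least-squares fit of the same data.

```python
import numpy as np

# Illustrative data: a quadratic trend plus a little noise.
rng = np.random.default_rng(2)
x = np.linspace(-1, 1, 15)
y = x**2 + rng.normal(scale=0.05, size=x.size)

# One extreme: exact interpolation, degree 14 through all 15 points.
interp = np.polynomial.Polynomial.fit(x, y, deg=x.size - 1)

# The other direction: a low-variation quadratic model, fit by least squares.
quad = np.polynomial.Polynomial.fit(x, y, deg=2)

# The interpolant reproduces every data point (noise included), while the
# quadratic leaves small residuals but captures the underlying trend.
# Between the sample points the high-degree interpolant typically
# oscillates far more than the quadratic.
x_fine = np.linspace(-1, 1, 400)
print(np.max(np.abs(interp(x_fine))), np.max(np.abs(quad(x_fine))))
```

The interpolant is "perfect" at the data points precisely because it has memorized the noise, which is why it tells you nothing about the function that generated the data.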