The solution of your problem is

y(x) = y(0) - ∫_0^x r(x')/x'^2 dx'.

Why not try using some of the usual techniques to numerically integrate the RHS (although I see a potential problem at x' = 0)?
BTW - what does r(x) look like?
Hi everyone,
I am trying to solve a system of the form

dy/dx = -r(x)/x^2,

where r(x) is a decreasing function of x. It's similar to a power law but not quite, so I expect y(x) to also be nearly a power law. I have a boundary condition y(0). The solution I want will be the initial condition of a differential equation in time, so obviously I want to get this as right as possible.
My problem is this: the values of y are defined on a grid with varying grid spacing, and I am using a centered, three-point finite difference formula to represent derivatives at each point (except at the outer boundary, where I use a one-sided one instead). If I merely set up my array of y values the naive way (i.e. by numerically integrating the RHS and evaluating at each grid point), I find that the result does not satisfy the differential equation using the finite difference derivatives. They start off in agreement, but by the time I get to the last few grid points they differ by four orders of magnitude.
Instead, I have tried solving for the y values using the system of simultaneous equations defined by the finite difference formula:

a_i y_{i-1} + b_i y_i + c_i y_{i+1} = -r(x_i)/x_i^2,

where the coefficients a_i, b_i, c_i come from the finite difference formula (setting the FD approximation to dy/dx at each grid point x_i equal to the RHS there).
This system is tridiagonal and easily solved; I just have to be careful about the last row, because it uses a one-sided FD rather than a centered one, but that's not much of a problem. Unfortunately, the result I get starts off well but ends up zigzagging between two very different functions, neither of which resembles the approximate power law I expect. I've been tearing my hair out trying to figure out what's going on. I should be able to get an approximate solution to a well-behaved differential equation using long double precision in C, and it annoys me that I can't. Has anyone seen this kind of behaviour before? Is it typical of the kind of FD formula I am using, and if so, how can I deal with it?
Alternatively, if you think my whole approach is stupid and you can suggest a better one, I am all ears.
Thanks,
H.
Hi,
I made a typo in my original post, sorry about that; it should read y(x_0) rather than y(0), so that the solution to the problem is

y(x) = y(x_0) - ∫_{x_0}^x r(x')/x'^2 dx',

and the problem with zero is avoided.
As for what r(x) looks like: it is close to a power law, with the deviations from the power law less than about one part in a thousand.
I have already tried some of the usual techniques for numerically integrating the RHS. This of course gets me a function that looks like what I expect, but when I apply my finite difference scheme to it, I get errors that are small compared to the function itself but still a few orders of magnitude higher than I am able to tolerate. That's why I am doing this thing with the tridiagonal matrix: I figure that I should set up the y(x) array so that applying the FD formula to it gives me exactly -r(x)/x^2.
I find that this works stunningly well in terms of accuracy. My errors are reduced to about a billionth of what they were, and are well within what I can afford to put up with. The trouble is that the resulting function does not look like what I expect, particularly for high x values. There seem to be two distinct solutions that y(x) zigzags between, and I still get humongous errors at the inner and outer boundaries.
By playing around with my code, I have found that I can suppress the worst of it by increasing the number of grid points and making the grid spacing very small near the lower boundary. But I'm still confused as to why it happens at all.