Hi everyone. I'm having a lot of trouble with the idea of a solution to a differential equation HAVING to be paired with an interval in order for it to be considered a solution. So, here's a quick example directly from my book...
Solve $x\,\dfrac{dy}{dx} - 4y = x^6 e^x$.

Solution: Dividing by $x$, we get the standard form

$$\dfrac{dy}{dx} - \dfrac{4}{x}\,y = x^5 e^x.$$

From this we identify $P(x) = -\dfrac{4}{x}$ and $f(x) = x^5 e^x$ and further observe that $P$ and $f$ are continuous on $(0, \infty)$. Hence the integrating factor is

$$e^{-4\int dx/x} = e^{-4\ln x} = x^{-4}.$$

...
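For what it's worth, here is how that solve finishes, assuming (as it appears) that the example is the standard Zill one, $x\,\frac{dy}{dx} - 4y = x^6 e^x$, with integrating factor $x^{-4}$:

```latex
% Multiply the standard form by the integrating factor x^{-4}:
\frac{d}{dx}\!\left[x^{-4}\,y\right] = x^{-4}\cdot x^{5}e^{x} = x\,e^{x}
% Integrate both sides (integration by parts on the right):
x^{-4}\,y = (x-1)e^{x} + C
% Hence, on the interval (0, \infty):
y = x^{5}e^{x} - x^{4}e^{x} + C\,x^{4}
```

Notice that the interval $(0, \infty)$ rides along with the answer; that pairing is exactly what the book is insisting on.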
They go on to solve the equation, but my question is: why are they justified in dropping the absolute value? It is unclear to me.
I think, technically, the solution is valid either on $(-\infty, 0)$ or on $(0, \infty)$.
I always keep the absolute value signs, unless I'm dealing with an integrating factor that's an even power of $x$. When I'm all done solving the equation and I'm ready to apply the initial condition, I choose which interval I'm going to use based on which interval contains the initial condition.
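To make that procedure concrete, here is a quick SymPy check (a sketch, assuming the thread's equation really is $x\,y' - 4y = x^6 e^x$): the same ODE is solved twice, once with an initial condition at $x_0 = 1 > 0$ and once at $x_0 = -1 < 0$, and each initial condition pins down the constant for the branch that contains it.

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# The (assumed) equation from the thread: x*y' - 4*y = x^6 * e^x
ode = sp.Eq(x*y(x).diff(x) - 4*y(x), x**6*sp.exp(x))

# Initial condition on the right branch, x0 = 1 > 0:
# the resulting formula is the solution on (0, oo)
sol_right = sp.dsolve(ode, y(x), ics={y(1): 0})

# Initial condition on the left branch, x0 = -1 < 0:
# a different constant, valid on (-oo, 0)
sol_left = sp.dsolve(ode, y(x), ics={y(-1): 0})

print(sol_right)
print(sol_left)
```

Each call returns a one-parameter family with the parameter already fixed; which interval the formula is "the" solution on is decided by where the initial condition sits, just as described above.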
Make sense?
As Ackbeet pointed out, unless there is some other condition we weren't told about, we are NOT justified in just dropping the absolute value. But we can say that if $x > 0$, then $|x| = x$ and the given solution is correct. And if $x < 0$, then $|x| = -x$, so the integrating factor comes out as $(-x)^{-4} = x^{-4}$ anyway.
The critical point is that we have to choose one or the other: the solution cannot be continued past $x = 0$. Which solution is correct depends on some additional fact, such as an initial value $y(x_0) = y_0$; if $x_0 > 0$, then $x$ will always be positive, and if $x_0 < 0$, then $x$ will always be negative.
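That independence of the two sides can be checked directly (again a sketch, assuming the equation is $x\,y' - 4y = x^6 e^x$): the homogeneous solution is a multiple of $x^4$, so formulas with completely unrelated constants $C_1$ and $C_2$ satisfy the ODE on $x < 0$ and $x > 0$ respectively, and nothing in the equation ties one constant to the other across $x = 0$.

```python
import sympy as sp

x, C1, C2 = sp.symbols('x C1 C2')

# (Assumed) ODE from the thread: x*y' - 4*y = x^6*e^x.
# Its homogeneous solutions are the multiples of x**4, so the constant
# can be chosen independently on each side of x = 0:
left  = x**4*(C1 + (x - 1)*sp.exp(x))   # candidate solution for x < 0
right = x**4*(C2 + (x - 1)*sp.exp(x))   # candidate solution for x > 0

def residual(expr):
    # Plug into x*y' - 4*y - x^6*e^x; zero means "solves the ODE".
    return sp.simplify(x*expr.diff(x) - 4*expr - x**6*sp.exp(x))

print(residual(left), residual(right))  # prints: 0 0, for ANY C1, C2
```

Since both residuals vanish identically no matter what $C_1$ and $C_2$ are, an initial condition at some $x_0 \neq 0$ determines the constant only on the side containing $x_0$; the other side is simply not part of that solution's interval.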