This is from a water resources paper I'm reading that uses Bayesian inference. It is a simple problem, but I don't know if I am missing something.

Assume that you have a vector of imperfect observations Xo of a true unknown variable Xt and that the errors e are additive:

Xo = Xt + e (EQ 1)

I would like to compute the conditional probability p(Xo|Xt) for the case where the variable is exact.

The solution is given in the paper as:

p(Xo|Xt) = p_e(Xo - Xt) (EQ 2)

where p_e denotes the pdf of the error e.
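
To make EQ 2 concrete, here is a quick simulation sketch, assuming (my assumption, purely for illustration; the paper may use a different error model) that e ~ N(0, 1). For a fixed Xt, the histogram of Xo = Xt + e should then trace the error pdf evaluated at Xo - Xt:

```python
import numpy as np

rng = np.random.default_rng(0)

Xt = 5.0                           # a fixed true value
e = rng.normal(0.0, 1.0, 200_000)  # assumed Gaussian errors
Xo = Xt + e                        # EQ 1: additive observation errors

# Empirical density of Xo given this fixed Xt ...
hist, edges = np.histogram(Xo, bins=100, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# ... versus the error pdf evaluated at (Xo - Xt), which is what
# EQ 2 claims for p(Xo|Xt).
pdf_e = np.exp(-0.5 * (centers - Xt) ** 2) / np.sqrt(2.0 * np.pi)

print(np.max(np.abs(hist - pdf_e)))  # small: histogram matches EQ 2
```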

So, according to Bayes' theorem:

p(Xo|Xt) = p(Xt|Xo) * p(Xo) / p(Xt) (EQ 3)

If the variable is exact, I assume that means the following conditional probability equals one: given an observation Xo, the probability that it equals its true value Xt. That is:

p(Xt|Xo) = 1 (EQ 4)

That leaves me with the ratio of the priors:

p(Xo|Xt) = 1 * p(Xo) / p(Xt) (EQ 5)

I can substitute EQ 1 into EQ 5:

p(Xo|Xt) = 1 * p(Xt + e) / p(Xt) (EQ 6)

This is where I have problems.

First, I know there is the possibility of observing an Xo that yields the corresponding true value Xt with perfect accuracy, since the variable is assumed to be exact. But according to Bayes' theorem there also exists the possibility that observing not-Xo (~Xo) would yield the true value Xt as well, i.e. a false positive? In other examples the concept of ~Xo is easier for me to understand, but what does it really mean here that we are observing ~Xo rather than Xo?

Secondly, I know that the distribution of the sum of two random variables (such as in the numerator of EQ 6) is the convolution of the individual pdfs, and therefore depends on the statistical distributions of the original variables. Is this how the problem should be solved, or is the solution simpler?
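
To illustrate what I mean by the convolution, here is a small numerical sketch, assuming (my choice, purely for illustration) that Xt and e are independent standard normals, so their sum should be N(0, sqrt(2)):

```python
import numpy as np

# Symmetric grid with an odd number of points, so the centered
# convolution aligns exactly with the grid.
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

p_Xt = normal_pdf(x, 0.0, 1.0)  # assumed pdf of the true value
p_e = normal_pdf(x, 0.0, 1.0)   # assumed error pdf

# Pdf of the sum Xo = Xt + e: numerical convolution of the two pdfs.
p_Xo = np.convolve(p_Xt, p_e, mode="same") * dx

# Known result: the sum of two independent N(0,1) variables is N(0, sqrt(2)).
p_theory = normal_pdf(x, 0.0, np.sqrt(2.0))

print(np.max(np.abs(p_Xo - p_theory)))  # ~0: convolution matches
```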

I appreciate any help you can provide me.