Generally speaking, R^2 isn't the best criterion for model selection: you can make R^2 large just by throwing in meaningless predictors. Also, why can't you cross-validate the un-bootstrapped model? If your goal is to make good predictions, you should probably look at something like an estimate of mean squared prediction error.
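To make the suggestion concrete, here is a minimal sketch of estimating mean squared prediction error by k-fold cross-validation. It assumes a simple one-predictor linear model fit by ordinary least squares; the function names (`fit_ols`, `cv_mspe`) and the toy data are my own, purely for illustration.

```python
import random

def fit_ols(xs, ys):
    # Least-squares intercept a and slope b for the model y = a + b*x.
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

def cv_mspe(xs, ys, k=5, seed=0):
    # k-fold cross-validated estimate of mean squared prediction error:
    # each observation is predicted by a model fit without it.
    idx = list(range(len(xs)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    sq_errs = []
    for fold in folds:
        held_out = set(fold)
        train_x = [xs[i] for i in idx if i not in held_out]
        train_y = [ys[i] for i in idx if i not in held_out]
        a, b = fit_ols(train_x, train_y)
        sq_errs += [(ys[i] - (a + b * xs[i])) ** 2 for i in fold]
    return sum(sq_errs) / len(sq_errs)

# Hypothetical toy data: y is roughly 2 + 3x plus alternating noise.
xs = [float(i) for i in range(20)]
ys = [2 + 3 * x + ((-1) ** i) * 0.5 for i, x in enumerate(xs)]
print(cv_mspe(xs, ys))
```

Unlike in-sample R^2, this estimate gets worse, not better, when you add meaningless predictors, which is exactly why it is the more useful quantity for choosing between models when prediction is the goal.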
It isn't clear (at least to me) what you mean by a bootstrapped model. Bootstrapping is a very general technique, so what exactly is going on with the "bootstrapped" model?