It depends on how numbers are represented.
In the formula $n > \frac{\log(b-a)-\log(\text{error})}{\log 2}$, $n$ is the number of iterations, $a$ and $b$ are the ends of the interval that contains the root, error is the desired absolute error (i.e., the maximum difference between the answer and the actual root), and $\log$ can be computed to any base. In your case, $a = 2^m$ and $b = 2^{m+1}$ (please re-read your post before submitting to avoid typos).
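To make the bound concrete, here is a minimal Python sketch that evaluates it (the function name `bisection_iterations` is mine, chosen for illustration):

```python
import math

def bisection_iterations(a, b, error):
    """Number of bisection steps needed so that the bracketing
    interval [a, b] shrinks to the desired absolute error.
    Each step halves the interval, so we solve (b - a) / 2^n <= error."""
    return math.ceil(math.log2((b - a) / error))

# e.g. the interval [2^5, 2^6] with error 2^-18 needs 23 steps,
# while the naive error 2^-32 would demand 37 steps.
```

The `ceil` rounds the real-valued bound up to a whole number of iterations.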
The issue is what the error is. Naively, it may be $2^{-32}$. The standard floating-point numbers are represented as $\pm s \cdot 2^e$, where $s$ is the significand (or mantissa) and $e$ is the exponent. In the 32-bit (also called single-precision) floating-point numbers, the significand takes 24 bits. So, for example, numbers between $2^5 = 32$ and $2^6 = 64$ are represented by 6 bits for the integer part plus $24 - 6 = 18$ bits for the fractional part. Thus, the error is $2^{-18}$. Therefore, $n > \frac{\log(2^5) - \log(2^{-18})}{\log 2} = 5 + 18 = 23$. (Note that $23 = 24 - 1$.) One can check that $n$ is the same for other intervals: $[2^6, 2^7)$, $[2^7, 2^8)$, etc.
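You can verify the spacing of single-precision numbers directly. The sketch below (my own helper, not a library function) round-trips a value through its 32-bit pattern to find the distance to the next representable float:

```python
import struct

def float32_ulp(x):
    """Spacing between x and the next larger single-precision float,
    obtained by incrementing the 32-bit bit pattern by one."""
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    nxt = struct.unpack('<f', struct.pack('<I', bits + 1))[0]
    return nxt - x

# Around 32 = 2^5 the spacing is 2^-18, matching the
# "18 fractional bits" count above.
```

The same call at $2^m$ returns $2^{m-23}$, confirming that the relative precision is fixed by the 24-bit significand.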
In your course, it must have been explained how to connect the number of bits in the machine word with the machine precision.
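To connect the two facts experimentally, one can run bisection in simulated single precision and count how many steps make progress before the midpoint collides with an endpoint. This is an illustrative sketch under one assumption: the midpoint is computed in double and then rounded to single, as C code with a `float` result would do.

```python
import struct

def to_f32(x):
    """Round a Python float to the nearest single-precision value."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

def bisection_steps_f32(a, b):
    """Halve [a, b] in single precision until the midpoint is no
    longer strictly between the endpoints, and count the steps."""
    steps = 0
    while True:
        mid = to_f32((a + b) / 2)  # assumed rounding: double -> float
        if not (a < mid < b):
            return steps
        b = mid  # which half we keep does not affect the count
        steps += 1

# On [2^m, 2^(m+1)] this stalls after 23 steps: 24 significand
# bits minus 1, independent of m.
```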