Why is outer product not backward stable?

So I am new to the topic of numerical analysis and was trying to figure out why the outer product of 2 vectors is not backward stable. I understand the function would be f(a,b) = ab* and that the resulting matrix has rank 1. My book says it is not backward stable simply because ~f(a,b) is in general not of rank 1, which means you cannot rewrite it as ~f(a,b) = (a(1 + e))(b(1 + e))* for perturbed inputs. I was trying to set up the proof but I did not even know how to start. I assume the reason it is not backward stable is that each element of the resulting m×n matrix is going to have a different rounding error associated with it, so that's why the computed matrix is unlikely to have rank 1. If anyone can clear this up for me I would greatly appreciate it!
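Not from the book, but here is a quick numerical sketch of that intuition (the vectors and the 4-digit rounding are my own made-up example): if we simulate machine rounding by rounding every product to 4 significant digits, the computed 2×2 outer product generically gets a nonzero determinant, i.e. rank 2, so it cannot equal any exact rank-1 outer product of perturbed vectors.

```python
# Sketch: simulate low-precision arithmetic by rounding each product to
# 4 significant digits, then test the rank of the computed outer product
# of two 2-vectors via its determinant.

def fl(x, sig=4):
    """Round x to `sig` significant digits -- a stand-in for machine rounding."""
    return float(f"{x:.{sig - 1}e}")

a = [1.234567, 2.345678]
b = [3.456789, 4.567891]

# Computed outer product: every entry is rounded independently.
M = [[fl(ai * bj) for bj in b] for ai in a]

# An exact rank-1 matrix [a_i * b_j] satisfies
#   (a0*b0)(a1*b1) - (a0*b1)(a1*b0) = 0,
# but the independently rounded entries generically break this cancellation.
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
print(det)  # nonzero => the computed matrix has rank 2, not rank 1
```

The same effect happens in actual double precision; rounding to 4 digits just makes it visible without a rank-revealing factorization.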

Re: Why is outer product not backward stable?

Hey redhawk87.

Just to clarify: when you say the product, do you mean a^T b (as in transpose), where you get a number? (So basically it's an inner product?)

Re: Why is outer product not backward stable?

Thanks for your response!

My book calls it an outer product. As in, multiply 2 vectors of sizes m and n and the result is a matrix of size m×n => (m×1)(1×n). The inner product ((1×m)(m×1)) is backward stable.
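A minimal sketch (the function names are mine) just to pin down the two shapes being discussed:

```python
# Outer product of an m-vector and an n-vector gives an m x n matrix;
# the inner product of two m-vectors gives a single number.

def outer(a, b):
    """(m x 1)(1 x n) -> m x n matrix of all pairwise products a_i * b_j."""
    return [[ai * bj for bj in b] for ai in a]

def inner(a, b):
    """(1 x m)(m x 1) -> scalar sum of the products a_i * b_i."""
    return sum(ai * bi for ai, bi in zip(a, b))

A = outer([1, 2, 3], [4, 5])     # 3 x 2 matrix: [[4, 5], [8, 10], [12, 15]]
s = inner([1, 2, 3], [4, 5, 6])  # 1*4 + 2*5 + 3*6 = 32
```

The outer product has m·n independently rounded outputs but only m+n inputs to push the errors back into, which is the crux of the backward-stability question.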

Re: Why is outer product not backward stable?

What is this e term you are referring to? Also what is the difference between ~f(a,b) and f(a,b)?

Re: Why is outer product not backward stable?

The e term is epsilon. Epsilon refers to an error of some kind. In numerical analysis there are two main types of error: round-off errors and truncation (discretization) errors. With computers, there is a range of small positive numbers e for which the computed value of 1 + e is exactly 1. This is due to the nature of representing floating-point numbers with a finite number of bits. The largest e satisfying fl(1 + e) = 1 is known as machine epsilon.

The ~ represents an approximation, so ~f(x) is an approximate (computed) solution to f(x). For instance, the Taylor polynomial for sin(x) is an approximation of sin(x). The more terms you add onto the Taylor polynomial, the more accurate it is, which in turn means less error.

The algorithm here just multiplies an element of 'a' with an element of 'b' (a_i * b_j). This introduces some error: instead of computing a_i * b_j exactly, you are computing a_i(1 + e_i) * b_j(1 + d_j). So I think what needs to be done is show that these m·n independent errors cannot all be absorbed into perturbations of a and b alone, which I do not know how to do. Because to prove backward stability, you have to look at the magnitude of the resulting error. I think something is considered backward stable if ~f(x) = f(~x) for some ~x whose relative distance from x is on the order of machine epsilon, or something like that. In other words, a_i(1 + e_i) * b_j(1 + d_j) = a_i * b_j * (1 + e), where |e| is less than or equal to some constant times machine epsilon. I am not well versed in the subject (that's why I'm asking a question :)) so hopefully what I said was correct.
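To make the "fl(1 + e) = 1" point concrete, here is a small experiment of my own for IEEE double precision: halving e until adding it to 1 no longer changes the result lands on half of Python's `sys.float_info.epsilon` (the latter is defined as the gap between 1.0 and the next representable float).

```python
import sys

# Find the largest power of two e for which the computed value of 1 + e
# is exactly 1 (IEEE double precision, round-to-nearest).
e = 1.0
while 1.0 + e > 1.0:
    e /= 2.0

print(e)                       # 2**-53: adding this to 1.0 changes nothing
print(sys.float_info.epsilon)  # 2**-52: the gap between 1.0 and the next float

assert 1.0 + e == 1.0 and 1.0 + 2.0 * e > 1.0
```

Note the two common conventions: some books call 2**-53 (the unit roundoff) "machine epsilon", while Python's `sys.float_info.epsilon` is 2**-52.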