For sake of argument, I want to propose this definition of good randomness:

A piece of data can be considered "random" if it is impossible to write a computer program* that is considerably smaller (say, at least 1% fewer bits) and that generates the same data.

(* to be executed on a generic computer that is constructed or defined independently of the given data, to rule out nonsense workarounds such as a theoretical computer that just so happens to emit the required data with a single instruction)

Even though randomness may be hard to measure this way (proving that no such program exists can be quite difficult), it is at least easy to show that a piece of data is non-random if you know how to reproduce it (for example with a pseudorandom generator given specific parameters). Structured or compressible data is also typically non-random.
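To make the reproducibility point concrete, here is a minimal sketch (my own illustration, not part of the original question): a short program that regenerates a large block of noisy-looking data from a fixed seed, so the data is non-random under the proposed definition.

```python
# Hypothetical example: data that looks like noise but is non-random
# under the definition above, because this short program regenerates
# it exactly from a fixed seed and length.
import random

def generate(seed: int, n_bytes: int) -> bytes:
    rng = random.Random(seed)  # deterministic pseudorandom generator
    return bytes(rng.randrange(256) for _ in range(n_bytes))

data = generate(seed=42, n_bytes=1_000_000)  # 1 MB of "noise"

# This program is a few hundred bytes, far smaller than 99% of 1 MB,
# so `data` is non-random by the proposed criterion.
assert generate(42, 1_000_000) == data  # fully reproducible
```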

Now here's my question: suppose that, by the above criterion, I have a piece of random data A. I also have a piece of non-random data B of the same size. If I mix the two by XOR'ing the chunks bit by bit, I get another piece of data of the same size: C = A xor B. Is this new data C likely to be random as well?
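For clarity, the mixing operation can be sketched as follows (my own illustration with placeholder data, not code from the question). Note that XOR is its own inverse, so A can always be recovered from C and B:

```python
# Sketch of the bitwise XOR mix C = A xor B over equal-length chunks.
import os

def xor_bytes(x: bytes, y: bytes) -> bytes:
    assert len(x) == len(y), "chunks must be the same size"
    return bytes(a ^ b for a, b in zip(x, y))

A = os.urandom(32)   # stand-in for the "random" data A
B = bytes(32)        # trivially non-random data B (all zero bytes here)
C = xor_bytes(A, B)  # the mixed data, same size as A and B

# XOR is self-inverse: applying B again recovers A exactly.
assert xor_bytes(C, B) == A
```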