Encode a sequence of natural numbers into a sequence of 0's and 1's. Any data can be encoded into a sequence of 0's and 1's.
Is the following function defined properly?
$f\big((a_n)_{n \in \mathbb{N}}\big) = c(a_0)\,c(a_1)\,c(a_2)\cdots$

We take a sequence of natural numbers; first we write each number $a$ in its binary representation, then we form $c(a)$ by putting a $0$ before each digit, except the last digit, before which we put a $1$. For example, $5 = 101_2 \mapsto 010011$. We do this so that every sequence of natural numbers gets a different sequence of 0's and 1's, because a $1$ in an odd position signifies the end of some natural number. Without this, using only the binary representations, the encoding of $(1, 0, \dots)$ would begin the same way as the encoding of $(2, \dots)$, since both start with $10$.
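If I understand the scheme correctly (a $0$ before every binary digit of a number, a $1$ before its last digit), it can be sketched in Python on finite prefixes of the sequence; the function names are mine:

```python
def encode_number(n):
    """Encode one natural number: put 0 before every binary digit,
    except a 1 before the last digit (which marks the number's end)."""
    bits = bin(n)[2:]  # binary representation, e.g. 5 -> "101"
    return "".join(("1" if i == len(bits) - 1 else "0") + d
                   for i, d in enumerate(bits))

def encode_prefix(seq):
    """Encode a finite prefix of the sequence by concatenation."""
    return "".join(encode_number(n) for n in seq)

print(encode_number(5))       # -> 010011
print(encode_prefix([1, 0]))  # -> 1110
print(encode_prefix([2]))     # -> 0110  (differs from [1, 0], unlike raw binary)
```

Note that in raw binary both `[1, 0]` and `[2]` would begin `10`, while here they already differ.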
I am not sure I understand.
A finite or infinite sequence? In the end, you need to encode an infinite sequence.
Again, why do you want to mark the end of a finite sequence when you have to encode an infinite sequence? Also, does 0110011 encode two numbers 11 and 01 or one number 11001?
By "position," do you mean the position of the digit? How can you control it when binary numbers may have an even or odd number of digits?
You are asking a rather complicated question in that it requires several lemmas.
Note the set of all infinite sequences of 0's and 1's is usually denoted by $2^{\mathbb{N}}$.
Here is a standard theorem: For any infinite set $A$ it is the case that $|2^A| = |A^A|$. That theorem takes some work to prove.
You need the Schröder–Bernstein theorem; use the characteristic function and the fact that $|\mathbb{N} \times \mathbb{N}| = |\mathbb{N}|$.
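The fact $|\mathbb{N} \times \mathbb{N}| = |\mathbb{N}|$ can be witnessed concretely, for example by the Cantor pairing function; a quick sketch (my own illustration, not part of the proof above):

```python
def cantor_pair(x, y):
    """Cantor pairing function: a bijection N x N -> N."""
    return (x + y) * (x + y + 1) // 2 + y

def cantor_unpair(z):
    """Inverse of the pairing function."""
    # w is the largest integer with w*(w+1)/2 <= z
    w = int(((8 * z + 1) ** 0.5 - 1) // 2)
    t = w * (w + 1) // 2
    y = z - t
    x = w - y
    return x, y

# Every pair gets a distinct code, and decoding recovers the pair:
pairs = [(x, y) for x in range(50) for y in range(50)]
codes = [cantor_pair(x, y) for x, y in pairs]
assert len(set(codes)) == len(pairs)                    # injective on this range
assert all(cantor_unpair(c) == p for c, p in zip(codes, pairs))
```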
For any infinite set.
This seems like a nice method: because a function is a relation, $\mathbb{N}^{\mathbb{N}} \subseteq \mathcal{P}(\mathbb{N} \times \mathbb{N})$, and $|\mathcal{P}(\mathbb{N} \times \mathbb{N})| = |\mathcal{P}(\mathbb{N})| = |2^{\mathbb{N}}|$.
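On finite prefixes this method can be made concrete: restrict a function $f$ to $\{0, \dots, k-1\}$, code its graph as a subset of $\mathbb{N}$ via a pairing function, and take the characteristic sequence of that subset. A sketch (the names and the choice of Cantor pairing are mine):

```python
def pair(x, y):
    # Cantor pairing: bijection N x N -> N
    return (x + y) * (x + y + 1) // 2 + y

def graph_characteristic_prefix(f, domain_size, length):
    """First `length` bits of the characteristic sequence of
    {pair(n, f(n)) : n < domain_size}, a subset of N coding f's graph."""
    graph_codes = {pair(n, f(n)) for n in range(domain_size)}
    return "".join("1" if k in graph_codes else "0" for k in range(length))

# Two different functions give different characteristic sequences:
f = lambda n: n + 1
g = lambda n: 2 * n
print(graph_characteristic_prefix(f, 4, 30))
print(graph_characteristic_prefix(g, 4, 30))
```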
However, encoding natural numbers in binary is also easy. One way is to repeat every bit of a number twice and insert 01 between the numbers. For example, 2 5 4 in binary is 10 101 100, so the encoding is 11000111001101110000.
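This bit-doubling encoding is easy both to implement and to invert, since the separator `01` can never occur inside a doubled bit pair; a sketch (function names are mine):

```python
def encode(numbers):
    """Repeat every bit twice and insert '01' between the numbers."""
    doubled = ["".join(bit + bit for bit in bin(n)[2:]) for n in numbers]
    return "01".join(doubled)

def decode(s):
    """Invert the encoding: scan two bits at a time; '01' separates numbers."""
    chunks, current = [], ""
    for i in range(0, len(s), 2):
        two = s[i:i + 2]
        if two == "01":             # separator: finish the current number
            chunks.append(current)
            current = ""
        else:                       # '00' or '11': a doubled bit
            current += two[0]
    chunks.append(current)
    return [int(c, 2) for c in chunks]

print(encode([2, 5, 4]))  # the example from the answer
assert encode([2, 5, 4]) == "11000111001101110000"
assert decode(encode([2, 5, 4])) == [2, 5, 4]
```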
Yes, I'm encoding an infinite sequence.
I don't think it matters whether the binary representation of a number has an even or odd number of digits. I probably wasn't clear on this, but we put a $0$ before EVERY digit of a binary representation, not just the first one, and a $1$ before the last digit. Since we add a single digit before every digit, we get a string of 0's and 1's exactly twice as long as before. To be explicit: $5 = 101_2 \mapsto 010011$. Another example: $2 = 10_2 \mapsto 0110$.

In the end the only $1$ in an odd position is the one placed before the last digit of the original representation. I do this because otherwise, if I only put the binary representations of the numbers into the sequence, the encodings of $(1, 0, \dots)$ and $(2, \dots)$ would begin the same way. This way every possible infinite sequence of natural numbers receives a unique sequence of 0's and 1's.
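Under this clarified rule the encoding is decodable digit pair by digit pair: the first bit of each pair is a marker saying whether the digit that follows is the last digit of the current number. A sketch of both directions (names `encode_seq`/`decode_seq` are mine), which also checks the claimed uniqueness on a small example:

```python
def encode_seq(seq):
    """0 before every binary digit of each number, 1 before its last digit."""
    out = []
    for n in seq:
        bits = bin(n)[2:]
        out.append("".join(("1" if i == len(bits) - 1 else "0") + d
                           for i, d in enumerate(bits)))
    return "".join(out)

def decode_seq(s):
    """Read bit pairs; a marker bit 1 means the digit that follows is last."""
    numbers, current = [], ""
    for i in range(0, len(s), 2):
        marker, digit = s[i], s[i + 1]
        current += digit
        if marker == "1":
            numbers.append(int(current, 2))
            current = ""
    return numbers

print(encode_seq([5, 2]))  # -> 0100110110
assert decode_seq(encode_seq([5, 2, 7, 0])) == [5, 2, 7, 0]
assert encode_seq([1, 0]) != encode_seq([2])   # no collision, unlike raw binary
```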