I am reading (or rather, trying to read) *Computational Complexity: A Modern Approach*. I have both the draft that circulates online and the book.

In both the book and the draft there is a claim about simulating a multi-tape machine with a single-tape one. The claim has slightly different (but probably inconsequential) variations in the two versions.

The draft:

For every $f:\{0,1\}^* \to \{0,1\}$ and time-constructible $T:\mathbb{N}\to\mathbb{N}$: if $f$ is computable in time $T(n)$ by a Turing machine $M$ using $k$ tapes (plus additional input and output tapes), then it is computable in time $5kT(n)^2$ by a Turing machine $M'$ using only a single work tape (plus additional input and output tapes).

The book:

Define a single-tape Turing machine to be a TM that has only one read-write tape, which is used as input, work, and output tape. For every $f:\{0,1\}^* \to \{0,1\}$ and time-constructible $T:\mathbb{N}\to\mathbb{N}$: if $f$ is computable in time $T(n)$ by a Turing machine $M$ using $k$ tapes, then it is computable in time $5kT(n)^2$ by a single-tape Turing machine $M'$.

The basic idea is that you can "compress" the k tapes into one for the price of at most a 5kT(n)-factor increase in running time. The chosen encoding places the first symbol of each tape at the first k positions of the tape of M'; the second symbols go to the next k positions, and so on. For example:

```
tape1: >alpha
tape2: >beta
tape3: >gamma
```

becomes

```
>abgleaptmhama_a
```

The alphabet of the machine is doubled: each symbol gets a "twin" (a hatted version) which is used only if the emulated cursor of the corresponding emulated tape sits on that cell.

So M' works by sweeping the tape once forward to read the k symbols under the emulated cursors, then uses the transition function of M to determine what should be written and in which direction each of the k emulated cursors should move. The backward sweep is used to write this information.
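The two sweeps can be modelled like this (again a toy sketch of my own; it models only the data movement, not a real transition table, and uses a `(symbol, hatted)` cell representation that is my assumption):

```python
# Toy model of one simulated step of M on the interleaved tape of M'.
# Track i occupies positions p with p % k == i, and hatted == True marks
# the cell under the i-th emulated cursor.

def read_sweep(tape, k):
    """Forward sweep: collect the k symbols under the emulated cursors."""
    under = [None] * k
    for pos, (sym, hatted) in enumerate(tape):
        if hatted:
            under[pos % k] = sym
    return under

def write_sweep(tape, k, writes, moves):
    """Backward sweep: write the new symbol on each track and move its
    hat by one emulated cell, i.e. by k positions on the real tape."""
    new_hats = []
    for pos in range(len(tape) - 1, -1, -1):
        sym, hatted = tape[pos]
        if hatted:
            i = pos % k
            tape[pos] = (writes[i], False)        # write, drop the old hat
            new_hats.append(pos + k * moves[i])   # -1 / 0 / +1 emulated move
    for pos in new_hats:                          # re-mark the cursors
        sym, _ = tape[pos]
        tape[pos] = (sym, True)

# Two tracks: track 0 holds ">ab_", track 1 holds ">xy_";
# the cursors sit on 'a' and 'x'.
k = 2
tape = [(">", False), (">", False), ("a", True), ("x", True),
        ("b", False), ("y", False), ("_", False), ("_", False)]
print(read_sweep(tape, k))                 # ['a', 'x']
write_sweep(tape, k, writes=["A", "X"], moves=[+1, 0])
print(tape[2], tape[3], tape[4])           # ('A', False) ('X', True) ('b', True)
```

The real M' also has to extend the tape with fresh blanks and respect the start-of-tape marker; the sketch skips that.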

Since in T(n) steps the cursor of any tape of M can reach at most the T(n)-th position, the cursor of M' can't go much beyond the kT(n)-th position of its tape. One sweep forward and back therefore takes at most about 2kT(n) steps per simulated step of M.

Why 5, then, and not something closer to 2?

I've consulted the book by Papadimitriou, but there the same problem is addressed with a different encoding: all the tapes are concatenated one after another. So when an emulated cursor moves right onto a blank position, all the following symbols must be shifted one cell to the right. As far as I understand, this gives at most a little more than 4kT(n) steps per step of M, which makes 5kT(n) more plausible.

But the Turing machine described before doesn't seem to have that problem: nothing needs to be shifted when a cursor moves right onto a blank. I can't see where the extra cost comes from, and because I'm dumb and lazy I don't want to sit in Excel and write hundreds of rows of Turing machine code. Can someone help me?