Hello there. As I said in my other thread, I'm new here and I'm struggling with two problems in coding theory.

This one is described below:

An error-correction encoding/decoding system for storing 4-digit binary sequences is set up using a 5-frame delay cross-interleave of two codes C1 and C2. The code C1 has length 7 and distance 3, and has the 4-digit binary strings as its message words. The code C2 is the first-order
Reed-Muller code R(6). The data is stored as a sequence of binary digits on the tape with no alteration (e.g. modulation) other than the error-correction encoding.

(a) The system must be designed so that, for some integer r, received words at distance r or greater from every C2 codeword have all of their symbols marked as erasures. Received words at distance less than r from a codeword must be corrected by the C2 decoder, and the number of errors which the C2 decoder corrects must be maximised. However, it is also required that the chance of a random received word (a random sequence of 64 binary digits resulting from a burst error) going undetected by the R(6) decoder is less than 1 × 10^−11. Find r.

(b) Determine the smallest integer t for which there is a burst error of length t that the system cannot correct (with no other errors present).
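
To show where I've got to on part (a): my reading is that a random 64-bit word goes undetected exactly when it falls within distance r−1 of one of the 2^7 = 128 codewords of R(6) (which have minimum distance 32, so those balls are disjoint while r−1 ≤ 15). I wrote a quick Python snippet to tabulate that probability for each candidate r, but I'm not sure this model of the problem is right:

```python
# Part (a) sanity check: chance that a random 64-bit word lies within
# distance r-1 of some codeword of R(6), so the decoder "corrects" it
# instead of marking erasures. This is only my reading of the problem.
from math import comb

N = 64            # block length of R(6)
CODEWORDS = 2**7  # R(6) has dimension 7, so 128 codewords

for r in range(1, 17):  # balls stay disjoint while r - 1 <= (32 - 1) // 2 = 15
    ball = sum(comb(N, i) for i in range(r))  # words at distance < r of one codeword
    p = CODEWORDS * ball / 2**N               # exact, since the balls are disjoint
    print(f"r = {r:2d}: P(undetected) = {p:.3e}")
```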

I have also attempted to find the generator matrices (my attempt is below), but I don't know if they're right, or even whether this is the right way to start the problem. The whole concept of a 5-frame delay cross-interleave is still confusing to me.
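
Here is what I tried for the generator matrices. I'm assuming that C1 is (up to equivalence) the [7,4,3] Hamming code, since it takes 4-bit messages to length-7 words with distance 3, and that R(6) means the first-order Reed-Muller code of length 2^6 = 64; either assumption may be wrong:

```python
import numpy as np

# C1: my guess at a generator matrix for a [7,4,3] code, in systematic
# form G = [I | P]. I *think* any binary [7,4,3] code is equivalent to
# the Hamming code, but please correct me if that's the wrong start.
G1 = np.array([
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
], dtype=int)

# C2: first-order Reed-Muller R(6). The rows are the all-ones vector
# and the six "coordinate bit" vectors v1..v6 of length 2^6 = 64.
m = 6
rows = [np.ones(2**m, dtype=int)]
for j in range(m):
    # v_{j+1}: bit j of the position index, over all 64 positions
    rows.append(np.array([(x >> j) & 1 for x in range(2**m)], dtype=int))
G2 = np.vstack(rows)  # 7 x 64 generator matrix

print(G2.shape)  # (7, 64)
# Sanity check: minimum weight of the nonzero codewords should be 2^(m-1) = 32.
weights = {int(((msg @ G2) % 2).sum())
           for msg in (np.array([(k >> i) & 1 for i in range(7)])
                       for k in range(1, 2**7))}
print(min(weights))  # expect 32
```

If these are right, I can at least encode with each code separately; it's how the 5-frame delay cross-interleave ties C1 and C2 together that I can't picture.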
Any help or pointers on the topic would be greatly appreciated. Thanks so much!