# Thread: Help converting 2d matrix coordinates to 1d array

1. ## Help converting 2d matrix coordinates to 1d array

Hello kind folk, I am in the unfortunate position of being terrible at math but still trying to program computers. My apologies for my ignorance and slowness (liberal arts major, go figure).

As the title says, I've been trying to figure out how to convert 2d matrix coordinates (where x = column and y = row) to a 1d array (a long list of all the points in the matrix). The array starts at 0, as in most programming languages. So the array is just a list of all the matrix cells, as if you took each row of the matrix and set it at the end of the previous row. For example, from this matrix:
Code:
._ _ _.
|_|_|_|
|_|_|x|
|_|_|_|
to this array:
Code:
._ _ _ _ _ _ _ _ _.
|_|_|_|_|_|x|_|_|_|
Unfortunately, it's not as simple as x * y. I'm actually "embedding" a smaller matrix into a larger parent matrix -- a matrix within a matrix (metamatrix?). So my trouble has been finding an equation that will convert the embedded matrix coordinates to the larger matrix coordinates, THEN convert those coords into the corresponding index/position in the list (of the parent matrix). So far, I've actually gotten this to work:
Code:
parent matrix array index = parentMatrixNumColumns * embedMatrixCell.y - (parentMatrixNumColumns - embedMatrixCell.x)
The thing is, that only works for arrays that are 1-based, as in they start at 1 (instead of starting at 0). Does that make any sense? Any ideas?
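For what it's worth, the 1-based formula above can be sanity-checked in code against the usual 0-based row-major formula, index = y * numColumns + x. This is just a sketch; the names (cols, x, y) are made up for illustration:

```python
def index_1based(cols, x, y):
    # The poster's formula: works when x, y, and the index all start at 1.
    return cols * y - (cols - x)

def index_0based(cols, x, y):
    # Standard row-major formula: works when x, y, and the index start at 0.
    return y * cols + x

# The "x" cell in the 3x3 example above: column 2, row 1 (0-based) -> index 5.
assert index_0based(3, 2, 1) == 5
# Same cell 1-based: column 3, row 2 -> index 6, i.e. the 0-based index + 1.
assert index_1based(3, 3, 2) == 6
```

So the two formulas agree up to a shift of one, which is why the posted equation only works for 1-based arrays.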

2. Originally Posted by vonWolfehaus
Hello kind folk, I am in the unfortunate position of being terrible at math but still trying to program computers. My apologies for my ignorance and slowness (liberal arts major, go figure).

As the title says, I've been trying to figure out how to convert 2d matrix coordinates (where x = column and y = row) to a 1d array (a long list of all the points in the matrix). The array starts at 0, as in most programming languages. So the array is just a list of all the matrix cells, as if you took each row of the matrix and set it at the end of the previous row. For example, from this matrix:
Code:
._ _ _.
|_|_|_|
|_|_|x|
|_|_|_|
to this array:
Code:
._ _ _ _ _ _ _ _ _.
|_|_|_|_|_|x|_|_|_|
Unfortunately, it's not as simple as x * y. I'm actually "embedding" a smaller matrix into a larger parent matrix -- a matrix within a matrix (metamatrix?). So my trouble has been finding an equation that will convert the embedded matrix coordinates to the larger matrix coordinates, THEN convert those coords into the corresponding index/position in the list (of the parent matrix). So far, I've actually gotten this to work:
Code:
parent matrix array index = parentMatrixNumColumns * embedMatrixCell.y - (parentMatrixNumColumns - embedMatrixCell.x)
The thing is, that only works for arrays that are 1-based, as in they start at 1 (instead of starting at 0). Does that make any sense? Any ideas?
Let offset_x and offset_y denote the sub-array reference cell coordinates in main-array coordinates (that is, the (0,0) cell of the sub-array is cell (offset_x, offset_y) in the main array).

then:

main_x=offset_x+sub_x
main_y=offset_y+sub_y

where main_. denotes the cell index in the main array corresponding to sub_. in the sub-array.
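Combining those offsets with 0-based row-major flattening gives the list index directly. A sketch (the function and parameter names here are illustrative, not from the post):

```python
def main_index(main_cols, offset_x, offset_y, sub_x, sub_y):
    # Translate sub-array coordinates into main-array coordinates...
    main_x = offset_x + sub_x
    main_y = offset_y + sub_y
    # ...then flatten to a 0-based list index (row-major).
    return main_y * main_cols + main_x

# Example: a sub-array whose (0,0) cell sits at (5, 6) in a 10-column
# main array. Its cell (1, 2) lands at main (6, 8) -> index 8*10 + 6 = 86.
assert main_index(10, 5, 6, 1, 2) == 86
```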

CB

3. Thanks for the reply CaptainBlack!

My main trouble (I think) though is finding offset_x and offset_y. The sub-array (what I call a matrix, because "array" is something else entirely in programming), always being smaller, is surrounded by the main array. So there are usually a few main cells to the left and right of the sub-array's cells.

This means that in order to get to the main array's next row in the list's index, I have to increase the index "cursor" (current position in the list) by a specific amount (determined by the position and dimensions of the sub-array). I should mention that I iterate through the main list index. What I'm trying to do is catch the iterations that run into cells "owned" by the sub-array, in order (so the form of the sub-array appears as it would when sketched on a 2d grid).

I've attached an image of my problem. I need to find equations that can give me the main list index number and the index offset from the end of one sub-array row to the beginning of the sub-array in the next row. The only givens I have are the sub-array's position and dimensions and the main array's dimensions (its position is always 0,0).

Thanks for your time. Matrix math and lists are really, really confusing (I'm embarrassed by how much time I've spent on this problem).

4. Originally Posted by vonWolfehaus

My main trouble (I think) though is finding offset_x and offset_y. The sub-array (what I call a matrix, because "array" is something else entirely in programming), always being smaller, is surrounded by the main array. So there are usually a few main cells to the left and right of the sub-array's cells.

This means that in order to get to the main array's next row in the list's index, I have to increase the index "cursor" (current position in the list) by a specific amount (determined by the position and dimensions of the sub-array). I should mention that I iterate through the main list index. What I'm trying to do is catch the iterations that run into cells "owned" by the sub-array, in order (so the form of the sub-array appears as it would when sketched on a 2d grid).

I've attached an image of my problem. I need to find equations that can give me the main list index number and the index offset from the end of one sub-array row to the beginning of the sub-array in the next row. The only givens I have are the sub-array's position and dimensions and the main array's dimensions (its position is always 0,0).

Thanks for your time. Matrix math and lists are really, really confusing (I'm embarrassed by how much time I've spent on this problem).
That was the whole purpose of having "offset_x" and "offset_y". In the example you give, because the sub-array starts at (5, 6), offset_x is 5 and offset_y is 6 (assuming 0-based arrays).
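Both quantities asked for in post 3, the starting list index and the row-to-row skip, follow from those givens. A sketch, assuming 0-based indices and row-major layout (all names are made up for illustration):

```python
def sub_array_indices(main_cols, offset_x, offset_y, sub_cols, sub_rows):
    """Yield the main-list indices "owned" by the sub-array, in row order."""
    start = offset_y * main_cols + offset_x  # index of the sub-array's (0,0) cell
    row_skip = main_cols - sub_cols          # cells to jump from the end of one
                                             # sub-row to the start of the next
    for row in range(sub_rows):
        # Each sub-row begins one full main-array row after the previous one:
        # sub_cols + row_skip == main_cols.
        row_start = start + row * (sub_cols + row_skip)
        for col in range(sub_cols):
            yield row_start + col

# A 2x2 sub-array placed at (1, 1) inside a 4-column main array occupies
# main-list indices 5, 6 (first sub-row) and 9, 10 (second sub-row).
assert list(sub_array_indices(4, 1, 1, 2, 2)) == [5, 6, 9, 10]
```

Iterating the main list and testing membership against this sequence (or simply jumping by row_skip at the end of each sub-row) reproduces the sub-array's shape as it would look sketched on a 2d grid.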