My name is Eduardo. I'm building a low-cost eye-tracking tool: I use a webcam and move the mouse according to the user's gaze direction.
To do this I first had to find the pupil coordinate in the webcam image, and then work out how to map that pupil coordinate to the corresponding screen coordinate.
The first part was not difficult to achieve, but the second one is giving me a lot of headaches, and this is where I need your help.
The goal is to obtain a transformation matrix through calibration and then apply it to each pupil coordinate to get the corresponding screen coordinate.
Calibration consists of showing 9 points on the screen at known coordinates (e.g. (0,0), (600,800), ...). While the user looks at each point, I record the pupil coordinate measured relative to the eye corner (e.g. (12,34), (25,17), ...). I then build the matrix as described on the third page of the attached paper and apply it to each point as it says.
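I don't have the attached paper, so as a sketch of the linear approach: assuming the calibration matrix is an affine map fitted by least squares (all data values below are hypothetical placeholders, not from the actual calibration), the 9 point pairs over-determine the 6 unknowns and `numpy.linalg.lstsq` solves for them directly:

```python
import numpy as np

# Hypothetical calibration data: 9 pupil coordinates (relative to the
# eye corner) and the 9 screen coordinates shown during calibration.
pupil = np.array([[12, 34], [25, 17], [38, 33],
                  [13, 22], [26, 23], [39, 21],
                  [14, 10], [27, 11], [40, 12]], dtype=float)
screen = np.array([[0, 0], [400, 0], [800, 0],
                   [0, 300], [400, 300], [800, 300],
                   [0, 600], [400, 600], [800, 600]], dtype=float)

# Design matrix [x, y, 1]; solve screen ~= X @ A in the least-squares
# sense. A is 3x2: a full affine transform (rotation/scale/shear + offset).
X = np.hstack([pupil, np.ones((len(pupil), 1))])
A, residuals, rank, _ = np.linalg.lstsq(X, screen, rcond=None)

def pupil_to_screen(p):
    """Map one pupil coordinate to an estimated screen coordinate."""
    return np.array([p[0], p[1], 1.0]) @ A
```

This is only a stand-in for whatever matrix construction the paper actually uses; the point is that any linear (affine) model can be fitted this way from the 9 calibration pairs.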
Everything works as planned if the eye behaves linearly.
The problem is that the paper assumes the eye movement is linear, and our eyes obviously don't behave that way: for example, when you look from a point on the left to a point on the right, the pupil does not stay on the same y coordinate all along the path. Since the eye's behavior is not linear, I have to find another way to do this. A colleague told me I should move to a non-linear model, but unfortunately my math skills are not very deep.
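One common way to handle the non-linearity (this is my suggestion, not something from the paper) is to keep the same least-squares machinery but fit a second-order polynomial instead of a linear map: each screen coordinate becomes a combination of 1, x, y, xy, x², y², so the quadratic terms can absorb the curvature. With 9 calibration points and 6 unknowns per axis the system is still over-determined. A minimal sketch:

```python
import numpy as np

def poly_features(pupil):
    """Second-order features [1, x, y, xy, x^2, y^2] per pupil point."""
    x, y = pupil[:, 0], pupil[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

def calibrate(pupil, screen):
    """Fit screen ~= poly_features(pupil) @ C by least squares.

    pupil, screen: (N, 2) arrays of calibration pairs, N >= 6.
    Returns C, a (6, 2) coefficient matrix (one column per screen axis).
    """
    C, *_ = np.linalg.lstsq(poly_features(pupil), screen, rcond=None)
    return C

def gaze_to_screen(C, p):
    """Map one pupil coordinate to a screen coordinate."""
    p = np.asarray(p, dtype=float).reshape(1, 2)
    return (poly_features(p) @ C)[0]
```

The calibration procedure stays exactly as you have it: show the 9 points, collect the 9 pupil coordinates, call `calibrate` once, then apply `gaze_to_screen` to every tracked pupil position.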
Could anyone help me find a way to do this?
Best regards, and thanks in advance.
PS: sorry for my English