Recently, Martin implemented a new camera interface in the ITU Gaze Tracker. This interface allows us to make all the code in the tracking library camera-independent, since all camera-specific features are now handled by the interface. It also makes it easier to add support for new cameras, so if you have a camera you would like to add support for (for example, Point Grey cameras) and have some coding skills, feel free to give it a try!
The camera/device management has been extracted into a new namespace, GTHardware. When the Camera class is created, it checks which devices are connected and initializes the device. Each device extends the CameraBase class, which defines the set of required methods. GTHardware.Camera contains a static singleton reference to the active camera, which can be accessed through the Instance property.
| Method | Description |
| --- | --- |
| Initialize | Prepares and initializes the camera: creates the handle, allocates image memory, etc. |
| Start | Starts capture; raises the OnImage event of type ImageEventArgs when a frame arrives |
| SetROI | Takes a System.Drawing.Rectangle and sets the Region of Interest on cameras that support it (indicated by the fixed property IsSupportingROI) |
| GetROI | Returns a System.Drawing.Rectangle describing the current ROI |
| ClearROI | Clears the Region of Interest |
| Cleanup | Disposes memory, closes handles, and stops the camera if running |
| Type | Member | Description |
| --- | --- | --- |
| int | Width | Gets the current (last) image width |
| int | Height | Gets the current (last) image height |
| int | DefaultWidth | Gets the width of the full-frame image (non-ROI) |
| int | DefaultHeight | Gets the height of the full-frame image (non-ROI) |
| int | FPS | Current frames per second |
| bool | IsSupportingROI | False for DirectShow devices, true for some machine vision cameras |
| bool | IsROISet | Returns true if the Region of Interest has been set |
| bool | IsSettingROI | Returns true if the camera sets the Region of Interest asynchronously and that operation has not completed yet |
| event | OnImage | The tracker subscribes to this event of type ImageEventArgs, raised when a new frame arrives; it carries an Emgu/OpenCV image of type Image |
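To make the table concrete, here is a minimal C# sketch of what the CameraBase contract described above could look like. Member names follow the table, but this is only an illustration; the exact signatures in the actual ITU Gaze Tracker source may differ.

```csharp
using System;
using System.Drawing;

// Hedged sketch of the camera abstraction described above.
// ImageEventArgs is simplified: in the real tracker it carries an
// Emgu/OpenCV image, which is stubbed as 'object' here.
public class ImageEventArgs : EventArgs
{
    public object Image { get; set; }
}

public abstract class CameraBase
{
    public abstract void Initialize();          // create handle, allocate image memory
    public abstract void Start();               // begin capture, raise OnImage per frame
    public abstract void SetROI(Rectangle roi); // only meaningful if IsSupportingROI
    public abstract Rectangle GetROI();
    public abstract void ClearROI();
    public abstract void Cleanup();             // dispose memory, close handles, stop camera

    public int Width { get; protected set; }          // current (last) image width
    public int Height { get; protected set; }         // current (last) image height
    public int DefaultWidth { get; protected set; }   // full-frame width (non-ROI)
    public int DefaultHeight { get; protected set; }  // full-frame height (non-ROI)
    public int FPS { get; protected set; }
    public abstract bool IsSupportingROI { get; }
    public bool IsROISet { get; protected set; }
    public bool IsSettingROI { get; protected set; }

    public event EventHandler<ImageEventArgs> OnImage;

    // Concrete devices call this when a new frame has been captured.
    protected void RaiseOnImage(ImageEventArgs e)
    {
        var handler = OnImage;
        if (handler != null) handler(this, e);
    }
}
```

A consumer would then subscribe through the singleton, along the lines of `GTHardware.Camera.Instance.OnImage += ...` followed by `Start()`.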
Edit: See the forum discussion on adding new devices.
Yesterday, 23rd November, we reached the impressive milestone of 10,000 downloads of the binaries of our open-source gaze tracker from SourceForge. This comes approximately one and a half years after our first release, back in April 2009. Furthermore, the trend shows an increase in downloads every month, which encourages us to continue developing the software, adding new features and making it as robust as possible.
As always, we welcome collaboration from everyone interested in giving something back to the community. This can take the form of a donation via PayPal so we can buy and test new hardware; helping with the development of the software and/or gaze-based applications; or writing guides in our forum explaining a custom setup.
We are very happy that more and more people have access to a traditionally expensive technology, and hope that the spread of the open-source eye tracker will lead to new applications and human-computer interaction paradigms in the near future. We encourage you to be a part of it.
Pushing the limits: 500+ Hz remote monocular tracking at 60 cm. These aggressive settings demonstrate the capabilities of the GT2 tracking engine. A more practical configuration would be 300 Hz, which gives a little more room to move.
We have released the first beta of the Gaze Tracker 2.0!
- Automatic camera Region of Interest (ROI) setting based on detected features (currently only for the Thorlabs camera).
- Repositioning of the ROI as the head moves around.
- The camera settings panel is now integrated in the main Settings panel.
- The calibration report now shows the accuracy in degrees. This is calculated using a default user-to-screen distance of 60 cm; the value can be changed by clicking on it.
- The visualization mode is selected in the main screen.
- New configuration panel to place the light sources according to their physical location. Users can select 1, 2, or 4 light sources (the code to detect 4 light sources is not implemented yet). The information about the location of the light sources is used to improve the robustness of the glint detection routines.
- Performance information (frames per second, CPU use, RAM use) when hovering over the main screen.
- Many fixes in the calibration routines.
- New sliders in the pupil and glint configuration; values can also be entered by typing the number.
- Fixes in fixation detection; the cursor position should be more stable now.
- More fixes I cannot remember now ^^
Screencast showing how to configure the different parameters:
We aim for a final 2.0 version before Christmas.
Discuss in our forum.
HD video available (click 360p and select 720p)
The latest version now includes performance counters that keep track of processor load, memory usage, and tracking speed. If processor or memory utilization is above 50%, the numbers turn red. If the actual tracking speed is less than 50% of the camera capture rate, the fps number turns red. Simple and intuitive.
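The 50% thresholds could be implemented along these lines using the standard .NET PerformanceCounter API. This is an illustrative sketch with assumed names, not the tracker's actual code.

```csharp
using System;
using System.Diagnostics;

// Sketch of the 50% warning thresholds described above.
// The counter category/instance names are the standard Windows ones;
// IsTrackingTooSlow mirrors the "less than 50% of capture rate" rule.
class PerformanceReadout
{
    readonly PerformanceCounter cpu =
        new PerformanceCounter("Processor", "% Processor Time", "_Total");

    public bool IsCpuOverloaded()
    {
        // Note: the first NextValue() call always returns 0;
        // sample at a fixed interval in real use.
        return cpu.NextValue() > 50f;
    }

    public static bool IsTrackingTooSlow(double trackingFps, double cameraFps)
    {
        return trackingFps < 0.5 * cameraFps;
    }
}
```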
The new 2.0 adds a feature to simplify the setup by a drag and drop interface to specify the physical placement of components. The information will then be used in the image processing routines to detect features matching the setup. The configuration window will automatically launch if no previous configuration exists.
I have incorporated the instant accuracy reporting function that Javier implemented to give direct feedback upon completed calibration. While it is an estimate based on default distances, it provides an instant accuracy number that I’ve never seen in any of the commercial offerings. Later on we’ll have the option to configure the variables so that values are calculated for the individual configuration (for example, a 70 cm distance instead of the 60 cm default).
While the 0.75 degrees of accuracy reported by Javier is impressive, especially at this low price point, I have obtained numbers of 0.3-0.6 degrees while doing remote binocular tracking at 120-150 Hz. The difference is the camera and optics used, which raise the cost to around $450, still affordable in my opinion.
Binocular tracking @ 150 Hz, accuracy estimated at 0.3 degrees of visual angle on a 24″ monitor at 60 cm.
High-speed remote tracking relies on switching between the full image, used to detect the eyes, and a smaller region of interest around them. This allows a partial sensor readout on more advanced cameras, which increases the speed dramatically. I have successfully tracked monocularly (one eye) at 350 Hz!
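The switching strategy can be sketched roughly like this. ICamera mirrors the camera interface from the earlier post, the detector input is a hypothetical placeholder, and the real tracker logic is of course more involved.

```csharp
using System.Drawing;

// Rough sketch of the high-speed strategy described above: search the
// full frame for the eyes, then restrict the camera to a small ROI
// around them for a faster partial sensor readout; fall back to the
// full frame when the eyes leave the ROI. Names are illustrative.
interface ICamera
{
    bool IsSupportingROI { get; }
    void SetROI(Rectangle roi);
    void ClearROI();
}

class RoiSwitcher
{
    bool usingRoi;

    // Called for every frame with the eye region found by the
    // (hypothetical) feature detector, or null if detection failed.
    public void OnFrame(ICamera camera, Rectangle? detectedEyes)
    {
        if (!usingRoi)
        {
            // Full-frame pass: once the eyes are found, shrink the readout.
            if (detectedEyes.HasValue && camera.IsSupportingROI)
            {
                camera.SetROI(WithMargin(detectedEyes.Value, 20));
                usingRoi = true;
            }
        }
        else if (!detectedEyes.HasValue)
        {
            // Lost the eyes (e.g. a head movement): back to the full frame.
            camera.ClearROI();
            usingRoi = false;
        }
    }

    static Rectangle WithMargin(Rectangle r, int margin)
    {
        r.Inflate(margin, margin); // add slack so the eyes can move a bit
        return r;
    }
}
```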
We have finished coding the calculation of the accuracy in degrees of visual angle. This is the standard way to report the accuracy of an eye tracking system. Most commercial systems report an accuracy of around 0.5º to 1º.
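For reference, an on-screen gaze error can be converted to degrees of visual angle as sketched below: map pixels to centimeters via the physical screen size, then take the angle subtended at the viewing distance. The helper name and screen numbers are made up for illustration; the actual GT2 code may compute it differently.

```csharp
using System;

// Converts an on-screen gaze error (in pixels) to degrees of visual angle.
class VisualAngle
{
    public static double ErrorInDegrees(double errorPx, double screenWidthCm,
                                        double screenWidthPx, double distanceCm)
    {
        // Pixel error -> physical error on the screen surface.
        double errorCm = errorPx * (screenWidthCm / screenWidthPx);
        // Angle subtended by that error at the viewing distance.
        return Math.Atan2(errorCm, distanceCm) * (180.0 / Math.PI);
    }

    static void Main()
    {
        // e.g. a 30 px mean error on a 47 cm wide, 1680 px display
        // viewed from 60 cm is roughly 0.8 degrees of visual angle.
        Console.WriteLine(ErrorInDegrees(30, 47.0, 1680, 60.0));
    }
}
```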
I’ve just tried running a calibration using a Sandberg webcam with a 16mm lens from DealExtreme and two IR light sources. This is the result:
The reported accuracy (still not displayed in the results) is 0.75º. That’s pretty impressive for a ~$50 system!