~~~
:arrow_right: deactivate all pins (2-9)
Keep in mind that every option mentioned in this chapter needs a valid connection via parallel port!
## Tracking
#### List of necessary files and scripts to start
- [default camera configuration file](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/Tracking/defaultCam_cfg.mat)
- [frameAcquired_box.m](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/Tracking/frameAcquired_Box.m) / [frameAcquired_arena.m](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/Tracking/frameAcquired_Arena.m)
- mask (optional)
- [startPointGrey.m](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/Tracking/startPointGrey.m)
- [videoTracker.m](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/Tracking/videoTracker.m)
- [getMouseTracking.m](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/Tracking/getMouseTracking.m)
- [conversionMatrixCreator_Birds](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/Tracking/conversionMatrixCreator_Birds.m) / [conversionMatrixCreator_Human](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/Tracking/conversionMatrixCreator_Human.m)
:arrow_right: these are the most basic files; essentially everything in [the folder](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/tree/Feature_CameraTracking/Tracking) is needed as well
#### Start Tracking Step by Step – Skinner Box
1. set up the hardware: cameras and, if needed, infrared lights (to better see the reflector used for tracking and to increase the contrast between the beak and its surroundings)
2. get the necessary add-ons for MATLAB
3. start the FlyCapture app and adjust the settings until the picture is sharp and well lit; if nothing works, manually adjust shutter and focus on the cameras themselves; set the frame rate to 75 fps (see the sketch after this list)
4. open MATLAB and adjust the file names and paths in [myHardwareSetup](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/master/myHardwareSetup.m)
5. start tracking and adjust the values in the preview; they are saved automatically in the configuration files
6. define your mask and set the mask to custom in the preview (or use one of the default masks)
7. use a conversion matrix creator to calibrate your tracking (only necessary in the Skinner box)
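The frame rate can also be set in MATLAB instead of FlyCapture. A minimal sketch of the relevant entry, assuming the field name `SETUP.tracker.frameRate` mentioned in the troubleshooting note further below; the surrounding code in your setup file may differ:
```matlab
% Hedged sketch: frame rate entry in myHardwareSetup.m. The field name
% SETUP.tracker.frameRate is taken from the troubleshooting note below;
% the rest of the setup file is not shown here.
SETUP.tracker.frameRate = 75;   % 75 fps for beak tracking in the Skinner box
% SETUP.tracker.frameRate = 50; % use 50 fps for position tracking in the arena
```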
#### Start Tracking Step by Step – Arena
1. set up the hardware: cameras and, if needed, infrared lights (to better see the reflector used for tracking and to increase the contrast between the beak and its surroundings)
2. get the necessary add-ons for MATLAB
3. start the FlyCapture app and adjust the settings until the picture is sharp and well lit; if nothing works, manually adjust shutter and focus on the cameras themselves; set the frame rate to 50 fps
4. open MATLAB and adjust the file names and paths in [myHardwareSetup](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/master/myHardwareSetup.m)
5. start tracking and adjust the values in the preview; they are saved automatically in the configuration files
6. define your mask and set the mask to custom in the preview (or use one of the default masks)
#### In-depth explanations
The camera provides a 640 x 512 pixel image (full screen). Since not all components of this camera image are necessary for tracking (it might contain walls or parts of the apparatus irrelevant for tracking), an additional mask can be used to specify the region of interest. This logical mask has the same size as the (fullscreen) camera image and defines the pixels used for tracking (any shape made of 'true' within a matrix predefined as 'false' can be used). Artifacts outside of the area of interest can be ignored via the mask. The [video tracker](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/Tracking/videoTracker.m) produces the x and y coordinates and the angle of either a black or a white target object (or both). The tracking coordinates can then be used to reconstruct the path of an animal or the coordinates of a peck on a screen. For tracking with two cameras in a Skinner box, only the x-values of each camera are processed further. One camera is set up so that it forms the x-axis relative to the experimental screen, the other so that it forms the y-axis. For both, the relevant information is given by the x-value of the tracked point in the camera image. These values are converted to points on the screen.
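To illustrate how such a mask gates the camera image, here is a minimal sketch; the variable names are illustrative and not part of the toolbox (note that a 640 x 512 pixel image corresponds to a 512 x 640 matrix in MATLAB's rows-by-columns convention):
```matlab
% Illustrative sketch only: how a logical mask restricts tracking to a
% region of interest. Variable names are not from the toolbox.
mask = false(512, 640);          % same size as the (fullscreen) camera image
mask(100:400, 150:500) = true;   % any shape of 'true' defines the region of interest
frame = rand(512, 640);          % stand-in for one grayscale camera frame
frame(~mask) = 0;                % pixels outside the mask are ignored for tracking
```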
**Start Tracking**
In the very beginning, you should start by looking at the camera images in FlyCapture. Configure each camera image until it is sharp and well lit. If nothing works, you can manually adjust shutter and focus on the cameras themselves. Start MATLAB and make sure all necessary files and MATLAB scripts are on your current path. Specify in [myHardwareSetup](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/myHardwareSetup.m) whether you are tracking in a Skinner box or an arena so that [the correct frameAcquired function](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/wikis/How-tos#list-of-necessary-files-and-scripts-to-start) will be chosen automatically. To start tracking for the first time, you need the default camera files ([defaultCam_cfg](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/Tracking/defaultCam_cfg.mat), [defaultCam_dat](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/Tracking/defaultCam_dat.txt), and [defaultCam_ctrl](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/Tracking/defaultCam_ctrl.txt)). Once you start tracking, the corresponding information (camera settings and tracking parameters; see below) will be written into new files. Adjust the file names and directories in [myHardwareSetup: (11) memory mapped file(s) for gaze tracker](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/master/myHardwareSetup.m). You need to specify the serial numbers of your cameras here so that the files are automatically named correctly. To get the video tracker started, name your tracker object and call the [startPointGrey function](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/Tracking/startPointGrey.m), e.g. `vT = startPointGrey`. This is a wrapper script that initializes the video tracker object and the necessary configurations. It calls the [videoTracker script](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/Tracking/videoTracker.m), which runs the actual tracking. It creates the preview window and extracts the information you change in there, like the camera gain and shutter, to be used by the tracker later. It also writes the memory mapped files, which essentially contain the information of what was tracked where in each of your frames.
Now you can open the preview window for one of your cameras by first stopping the tracking with that camera
```matlab
vT{i}.stop % i specifies the camera number
```
and then calling the preview function.
```matlab
vT{i}.preview
```
You can only open the preview for one camera at a time! In the preview you will see your camera image twice – the left side shows the raw version, the right side the thresholded version. Now you can adjust different settings. First, specify whether you want to track a black object in front of a bright background ('dark'), a bright object, e.g. a reflector, in front of a dark background ('white'), or both simultaneously, e.g. a black bird in the arena with a reflector attached ('both').
**Mask / Region of Interest**
The region of interest of your tracking is specified by your mask. To get started, or if you want to use the whole camera image, you can select 'noMask'. The mask needs to be a logical matrix, which is predefined with 'false'. The region that should be used for tracking is specified by setting the respective sections of the matrix to 'true'. The name of the logical variable itself must be 'mask'; the name of the file must be one of the predefined mask names (in [video tracker](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/Tracking/videoTracker.m)). You can also change the names the mask needs to have in the code; make sure to change the names for loading as well as for saving, which are two different sections in the code. The mask is applied to the camera image (each element of the matrix corresponds to one pixel of the camera image). Only those pixels that are set to 'true' are pixels in which something will be tracked.
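A minimal sketch for creating and saving such a custom mask follows; the file name 'customMask.mat' is illustrative, so check (or adjust) the predefined mask names in [video tracker](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/Tracking/videoTracker.m) before using it:
```matlab
% Hedged sketch: build and save a custom mask for a 640 x 512 pixel camera
% image (512 x 640 in MATLAB's rows-by-columns convention).
mask = false(512, 640);          % predefined with 'false'
mask(50:460, 80:560) = true;     % set the region used for tracking to 'true'
% The variable must be named 'mask'; 'customMask.mat' is an illustrative
% file name and must match one of the names expected by videoTracker.m.
save('customMask.mat', 'mask');
```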
**Image thresholding**
As mentioned earlier, the figure on the right side of the preview window contains the thresholded version of the camera image. This means that here you should see your target object either in black on a white background (target color: black), in white on a black background (target color: white), or both (target color: both). Per default, the initial target color is set to 'black', so you will see only the black target displayed on a white background. Your target object is represented as a 'blob' and its detection is based on a [blob analysis in Matlab](https://de.mathworks.com/help/vision/ref/blobanalysis.html). It is a rectangle by default, but its form can be changed. By changing the value of the blob minimum, you can change its size in pixels. For tracking in a Skinner box, the blob minimum should be set to 1. Here, locating the blob is done by finding the median of all thresholded pixels, not by an 'actual' blob analysis. To track a target within your camera image, you need a sufficient contrast between the blob and its surroundings. The camera image contains the RGB information of each pixel, which is reduced to the information 'something is here' or 'nothing is here'. This is done by adding up the RGB values and comparing the sum to the value specified at Threshold. When tracking a black and a white target simultaneously, you can specify a 'Threshold' and 'Blob minimum' for each target individually. Since RGB is an additive color space, a white object is tracked if the Threshold is lower than the sum of the RGB values, while a black object is tracked if the Threshold is higher than the sum of the RGB values. This means that if you have trouble tracking your object, you should reduce the value specified at Threshold for a white object and increase it for a black object. Do it the other way around if your problem is not that you cannot track your object, but that you track too much of its surroundings. If you want to track a less distinct target, e.g. a grey pigeon, you might not be able to exclude some parts of the arena from also being tracked. You can circumvent this problem by adjusting the blob minimum or by removing the distracting elements of the arena using a custom mask. In addition, you should also adjust the individual camera properties such as brightness, gain, gamma, and shutter. Adjusting the gain is especially useful for removing shadow artifacts. To save and use your settings, close the preview by selecting the button 'Stop Preview'. Since you stopped tracking for each of your cameras, you now have to close them (`vT{i}.destroy`) and restart the tracking.
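The thresholding rule just described can be sketched as follows; this is an illustration, not the exact implementation in [videoTracker](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/Tracking/videoTracker.m), and the threshold values are placeholders for the 'Threshold' settings from the preview:
```matlab
% Illustrative sketch of the thresholding rule (not the exact
% videoTracker.m implementation).
frame = rand(512, 640, 3) * 255;     % stand-in for one RGB camera frame
whiteThreshold = 600;                % placeholder for the white target's 'Threshold'
blackThreshold = 150;                % placeholder for the black target's 'Threshold'
rgbSum   = sum(double(frame), 3);    % add up the RGB values per pixel
whiteHit = rgbSum > whiteThreshold;  % white target: sum above its Threshold
blackHit = rgbSum < blackThreshold;  % black target: sum below its Threshold
% Skinner box case (blob minimum = 1): the target position is the median of
% all thresholded pixels rather than a full blob analysis.
[row, col] = find(blackHit);
x = median(col);                     % x coordinate of the black target
y = median(row);                     % y coordinate of the black target
```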
**Important tracking commands**
To start tracking, you need to create a video tracker object by calling the [startPointGrey function](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/Tracking/startPointGrey.m)
```matlab
vT = startPointGrey
```
Once you have started it, you can open the preview for one of your cameras at a time. To do that, you first need to stop the tracking
```matlab
vT{i}.stop % stop tracking for camera i
vT{i}.preview % open the preview window for camera i
```
If you changed values in the preview, it's best to destroy each camera and restart the tracking
```matlab
vT{i}.destroy % close camera i; afterwards restart with vT = startPointGrey
```
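Putting these commands together, a typical adjustment session could look like the following sketch (camera index 1 is used as an example):
```matlab
vT = startPointGrey;   % create the video tracker object(s) and start tracking
vT{1}.stop;            % stop tracking for camera 1
vT{1}.preview;         % adjust settings, then click 'Stop Preview' to save them
vT{1}.destroy;         % close the camera
vT = startPointGrey;   % restart the tracking with the saved configuration
```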
**What are these files with the camera serial info and what do they do?**
- Camera configuration files (either [defaultCam_cfg](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/Tracking/defaultCam_cfg.mat) or serialNumber_cfg) :arrow_right: Configuration of ROI, brightness, blob values etc.
- Camera data document (either [defaultCam_dat](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/Tracking/defaultCam_dat.txt) or serialNumber_dat) :arrow_right: memory mapped file, used to save tracking data
- Camera control files (either [defaultCam_ctrl](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/Tracking/defaultCam_ctrl.txt) or serialNumber_ctrl) :arrow_right: memory mapped file, used to save event codes
There are supposed to be three files per camera that have each camera's serial number in the name (four if you write the binary). You can change the name of the binary by giving a filename input to the [video tracker](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/Tracking/videoTracker.m). To start the tracking, you need the three default files (see above). When you start tracking, the files specific to your camera settings will be written. To write the data document (_dat file) and the control file (_ctrl file), it is sufficient to just start the tracking; for the configuration matrix you have to open the preview. Be careful not to just close the preview but to click the button 'Stop Preview' to save your current configurations into the configuration file. The files contain different types of information important to your tracking. The camera configuration files (cameraSerial_cfg.mat) contain all settings you make when you use the camera preview. To start tracking, you need some default parameters, saved in the default files. You can just copy them and rename them according to your cameras' serial numbers; if you also adjusted the paths in [myHardwareSetup](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/myHardwareSetup.m), anything you change in the preview will be written into the right file. The remaining two files are both 'memory mapped files' and are overwritten with every new frame. One specifies the data sent to the tracker ('..._ctrl', datIn), the other the results of the tracking/blob analysis ('_dat', datOut). The datIn file (cameraSerial_ctrl.txt) contains four values: **1st** is true per default; if set to -1, the object is destroyed and acquisition stops; **2nd** is -1 per default; if set to true, raw data will be written into the binary; **3 & 4** represent event codes (decimal + timepoint per event). The datOut file (cameraSerial_dat.txt) contains eight values:
1. x coordinate of the black object
2. y coordinate of the black object
3. angle of the black object
4. x coordinate of the white object
5. y coordinate of the white object
6. angle of the white object
7. frame number
8. time
The first six values are predefined with 'NaN' and are only replaced if a corresponding target was tracked. The last file is a binary file (cameraSerial.txt), for which a custom filename can be specified as an additional input argument in the call to the [videoTracker](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/Tracking/videoTracker.m). It contains the information from the memory mapped files (tracked positions and event codes). Data will only be saved in this bin file if the user sets the second value within the 'datIn' file to 1 (true). Make sure that you are actually tracking something. A common error message is 'insufficient physical memory', which is usually caused by a frame rate that is too high :arrow_right: set the frame rate in FlyCapture or in [myHardwareSetup (11) memory mapped files for gaze tracker](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/myHardwareSetup.m) (SETUP.tracker.frameRate); use 75 fps for beak tracking (box) or 50 fps for position tracking (arena).
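For inspecting or writing these memory mapped files from a separate MATLAB session, a hedged sketch using `memmapfile` follows; the value layout is taken from the description above, but the storage type (doubles) and the serial number in the file names are assumptions:
```matlab
% Hedged sketch: read the tracking results from the datOut file. That the
% eight values are stored as doubles is an assumption not confirmed by this
% wiki; the serial number '12345678' in the file names is illustrative.
datOut  = memmapfile('12345678_dat.txt', 'Format', 'double');
xyBlack = datOut.Data(1:2);  % x and y of the black object (NaN if not tracked)
frameNo = datOut.Data(7);    % frame number
% The datIn file can be opened writable, e.g. to enable the binary output:
datIn = memmapfile('12345678_ctrl.txt', 'Writable', true, 'Format', 'double');
datIn.Data(2) = 1;           % 2nd value set to true: raw data written into binary
```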
**Conversion matrices**
Conversion matrices are necessary for using the tracking in a Skinner box. The output generated by the camera tracking are the coordinates of the tracked object in each of the cameras, while the [keyBuffer](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/wikis/User-or-animal-response#keybuffer-animal) function requires pixels on the screen. The conversionMatrixCreator scripts can be used to generate a matrix that transforms the camera coordinates into pixels on the screen. To optimally calibrate the tracking, you can either use the [version for birds](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/Tracking/conversionMatrixCreator_Birds.m) – then a bird is supposed to peck the dot on the screen – or the [version for humans](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/Tracking/conversionMatrixCreator_Human.m), where you can do the pecking yourself. If you use the bird version of the calibrator, you still have to do a manual round of pecking first, so that there will be pre-defined peck fields around the dot. This is done to prevent artifacts in the calibration due to the bird not pecking on the dot, but somewhere else entirely. The calibration will then be done using the data generated by the bird. You can specify the distances and the area in which the dots appear on the screen. After you have pecked all the dots, you need to execute the following sections of the script, where the camera coordinates are transformed and the missing pixels are added via calculation. Start by executing only the second section, as this will also plot your camera values. This can be used as a check of the quality of your calibration: one of the curves in the plot should increase relatively steadily (with plateaus) while the other should oscillate. Then run the rest of the script and remember to change the names of the matrices in [myHardwareSetup](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/myHardwareSetup.m).
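To illustrate where the matrix fits into the pipeline, here is a purely illustrative sketch; whether the stored matrix is applied exactly like this (a linear mapping of the two cameras' x-values) is an assumption, so consult the creator scripts for the actual transformation:
```matlab
% Illustrative only: mapping the x-values of the two cameras (one per screen
% axis, see the in-depth explanations above) to a pixel on the screen.
% conversionMatrix, xCam1, and xCam2 are placeholders, not toolbox variables.
conversionMatrix = [1 0 0; 0 1 0];        % placeholder; use the matrix the creator script saves
xCam1 = 123; xCam2 = 310;                 % tracked x-values from the two cameras
camCoords = [xCam1; xCam2; 1];            % homogeneous camera coordinates
screenXY  = conversionMatrix * camCoords; % [x; y] in screen pixels
```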