If you changed values in the preview, it's best to destroy each camera and restart:

```
vT{i}.destroy   % destroy the i-th camera object; repeat for every camera
```
When you have adjusted everything and are ready to start an experimental paradigm, you need to run the tracking and the paradigm in two different Matlab instances.
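For example, a minimal sketch of such a setup, assuming [videoTracker](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/Tracking/videoTracker.m) can be started without arguments (a custom filename can be passed as an additional argument, see below); `myParadigm` is a hypothetical name for your experiment script:

```
% --- Matlab instance 1: run the camera tracking ---
videoTracker                 % optionally pass a custom bin filename

% --- Matlab instance 2 (a separate Matlab process): run the paradigm ---
myParadigm                   % hypothetical name of your experiment script
```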
**What are these files with the camera serial info and what do they do?**
- Camera configuration files (either [defaultCam_cfg](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/Tracking/defaultCam_cfg.mat) or serialNumber_cfg) :arrow_right: Configuration of ROI, brightness, blob values etc.
- Camera data document (either [defaultCam_dat](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/Tracking/defaultCam_dat.txt) or serialNumber_dat) :arrow_right: memory mapped file, used to save tracking data
...

There are supposed to be three files per camera that have each camera's serial number in their names.
The first six values are predefined with 'NaN' and are only replaced if a corresponding target was tracked. The last file is a binary file (cameraSerial.txt); a custom filename can be specified as an additional input argument in the call to [videoTracker](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/Tracking/videoTracker.m). It contains the information from the memory mapped files (tracked positions and event codes). Data will only be saved in this bin file if the user sets the second value within the 'datIn' file to 1 (true). Make sure that you are actually tracking something. A common error message is "insufficient physical memory", which is probably caused by a frame rate that is too high :arrow_right: set the frame rate in flyCapture or via SETUP.tracker.frameRate in [myHardwareSetup (11) memory mapped files for gaze tracker](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/myHardwareSetup.m): 75 fps for beak tracking (box) or 50 fps for position tracking (arena).
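As an illustration, a paradigm could inspect these memory mapped files with Matlab's `memmapfile`. This is only a sketch: the exact value layout is defined by videoTracker, and the 'defaultCam_datIn.txt' filename is an assumption based on the naming scheme above.

```
% Sketch, not the OTBR API: read the tracked positions from the dat file.
m = memmapfile('defaultCam_dat.txt', 'Format', 'double');
vals = m.Data(1:6);     % first six values; NaN until a target is tracked

% Hypothetical datIn layout: setting the second value to 1 (true)
% tells the tracker to save data into the bin file.
mIn = memmapfile('defaultCam_datIn.txt', 'Format', 'double', 'Writable', true);
mIn.Data(2) = 1;
```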
**Conversion matrices**
Conversion matrices are necessary for using the tracking in a Skinner box. The camera tracking outputs the coordinates of the tracked object in each camera, but the [keyBuffer](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/wikis/User-or-animal-response#keybuffer-animal) function requires pixels on the screen. The conversionMatrixCreator scripts generate a matrix that transforms the camera coordinates into pixels on the screen. To calibrate the tracking optimally, you can either use the [version for birds](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/Tracking/conversionMatrixCreator_Birds.m) – then a bird is supposed to peck the dot on the screen – or the [version for humans](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/Tracking/conversionMatrixCreator_Human.m), where you do the pecking yourself. If you use the bird version of the calibrator, you still have to do a manual round of pecking first, so that there are pre-defined peck fields around the dot. This prevents artifacts in the calibration when the bird pecks not on the dot but somewhere else entirely. The calibration is then done using the data generated by the bird. The code that creates the conversion matrices and the camera tracking have to run in two different Matlab instances, just like when running experiments.

You can specify the distances and the area in which the dots appear on the screen. After you have pecked all the dots, execute the remaining sections of the script, where the camera coordinates are transformed and the missing pixels are calculated. Start by executing only the second section, as this will also plot your camera values; use this plot to check the quality of your calibration. One of the curves should increase relatively steadily (with plateaus) while the other should oscillate. Then run the rest of the script and remember to change the names of the matrices in [myHardwareSetup](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/myHardwareSetup.m).
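For intuition, the conversion can be thought of as an affine mapping from camera coordinates to screen pixels, fitted to the calibration pecks. A minimal sketch of this idea (not the actual conversionMatrixCreator code; `camXY`, `screenXY`, and `newCamXY` are hypothetical variables):

```
% Hypothetical calibration data: camXY = camera coordinates of n pecks,
% screenXY = the corresponding dot positions on the screen (both n-by-2).
A = [camXY, ones(size(camXY, 1), 1)];   % homogeneous camera coordinates
M = A \ screenXY;                       % least-squares 3-by-2 affine matrix

% Convert a new camera sample into screen pixels:
px = [newCamXY, 1] * M;
```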