Update How tos, authored by Aylin Carmen Klarer
The region of interest for your tracking is specified by your mask. To get started, or if you want to use the whole camera image, you can select ‘noMask’. The mask needs to be a logical matrix, which is predefined with 'false'. The region that should be used for tracking is specified by setting the respective sections of the matrix to 'true'. The name of the logical variable itself must be ‘mask’, and the name of the file must be one of the predefined mask names (see [video tracker](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/Tracking/videoTracker.m)). You can also change the names the mask needs to have in the code; if you do, make sure to change the names for loading as well as for saving, which are two different sections in the code. The mask is applied to the camera image (each element of the matrix corresponds to one pixel of the camera image). Only those pixels that are set to ‘true’ are pixels in which something will be tracked.
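The mask setup described above can be sketched as follows. This is a minimal sketch: the camera resolution and the file name 'arenaMask.mat' are assumptions (in practice the file name must be one of the mask names predefined in videoTracker.m); only the variable name 'mask' is required by the toolbox.

```matlab
% Minimal sketch, assuming a camera resolution of 480x640 pixels; the
% file name 'arenaMask.mat' is only an example -- in practice it must be
% one of the mask names predefined in videoTracker.m.
mask = false(480, 640);          % logical matrix, predefined with 'false'
mask(100:400, 150:500) = true;   % mark the rectangular tracking region
save('arenaMask.mat', 'mask');   % the saved variable must be named 'mask'
```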
**Image thresholding**
As mentioned earlier, the figure on the right side of the preview window contains the thresholded version of the camera image. This means that here you should see your target object either in black on a white background (target color: black), in white on a black background (target color: white), or both (target color: both). By default, the initial target color is set to 'black', so you will see only the black target displayed on a white background. Your target object is represented as a 'blob' and its detection is based on a [blob analysis in Matlab](https://de.mathworks.com/help/vision/ref/blobanalysis.html). It is a rectangle by default, but its form can be changed. By changing the value of the Blob Minimum, you can change its size in pixels. For tracking in a Skinner box, the blob minimum should be set to 1. In that case, locating the blob is done by finding the median of all thresholded pixels, not an ‘actual’ blob analysis. To track a target within your camera image, you need sufficient contrast between the blob and its surroundings. The camera image contains the RGB information of each pixel, which is reduced to the binary information ‘something is here’ or ‘nothing is here’. This is done by adding together the RGB values of a pixel and comparing the sum to the value specified at Threshold. When tracking a black and a white target simultaneously, you can specify a 'Threshold' and 'Blob minimum' for each target individually. Since RGB is an additive color space, a white object is tracked if the Threshold is lower than the sum of the RGB values, while a black object is tracked if the Threshold is higher than the sum of the RGB values. This means that if you have trouble tracking your object, you should reduce the Threshold value for a white object and increase it for a black object. You should do it the other way around if your problem is not that you cannot track your object, but that you track too much of its surroundings.
If you want to track a less distinct target, e.g. a grey pigeon, you might not be able to exclude some parts of the arena from also being tracked. You can circumvent this problem by adjusting the blob minimum or by removing the distracting elements of the arena using a custom mask. In addition, you should also adjust the individual camera properties such as brightness, gain, gamma, and shutter. Adjusting the gain is especially useful for removing shadow artifacts. To save and use your settings, close the preview by selecting the button ‘Stop Preview’. Since this stops tracking for each of your cameras, you now have to close them (vT{i}.destroy) and restart tracking.
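The thresholding and median-based localization described above can be sketched like this. This is an illustration of the logic only, not the toolbox code; the frame source and the threshold values are assumptions and would be tuned in the preview window.

```matlab
% Sketch of the thresholding logic (illustration only, not toolbox code).
frame = imread('frame.png');          % example camera frame (assumed file)
rgbSum = sum(double(frame), 3);       % per-pixel R+G+B sum, range 0..765
whiteThreshold = 600;                 % example values -- tune in the preview
blackThreshold = 150;
whiteBlob = rgbSum > whiteThreshold;  % white target: sum above Threshold
blackBlob = rgbSum < blackThreshold;  % black target: sum below Threshold

% With blob minimum 1 (the Skinner box case), the target position is the
% median of all thresholded pixel coordinates rather than a blob analysis:
[rows, cols] = find(blackBlob);
position = [median(cols), median(rows)];  % [x, y] in image coordinates
```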
**Important tracking commands**
To start tracking, you need to create a video tracker object by calling the [startPointGrey function](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/Tracking/startPointGrey.m).
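A typical session might look like the sketch below. Only the names startPointGrey and destroy are taken from this guide; the camera-index argument is an assumption, so check startPointGrey.m for the actual signature.

```matlab
% Hypothetical session outline -- the camera index argument is an
% assumption; see startPointGrey.m for the actual signature.
vT{1} = startPointGrey(1);   % create a video tracker object for camera 1
% ... open the preview, adjust mask, Threshold, and Blob Minimum ...
% ... select 'Stop Preview' to save the settings ...
vT{1}.destroy;               % close the camera, then restart tracking
```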