#### In-depth explanations
The camera provides a 640 x 512 pixel image (full screen). Since not all parts of this camera image are needed for tracking (it might contain walls or parts of the apparatus irrelevant for tracking), an additional mask can be used to specify the region of interest. This logical mask has the same size as the (fullscreen) camera image and defines the pixels used for tracking: within a matrix predefined as 'false', any shape of 'true' pixels can mark the region of interest. Artifacts outside of this region are ignored via the mask. The [video tracker](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/Tracking/videoTracker.m) produces x, y coordinates and the angle of either a black or a white target object (or both). The tracking coordinates can then be used to reconstruct the path of an animal or the coordinates of a peck on a screen. For tracking with two cameras in a Skinner box, only the x-values of each camera are processed further: one camera is set up so that it forms an x-axis relative to the experimental screen, the other so that it forms the y-axis. For both, the relevant information is the x-value of the tracked point in the camera image, and these values are converted to points on the screen.
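As a rough illustration, such a logical mask could be built in MATLAB like this. This is only a sketch: the variable names (`mask`, `cx`, `cy`, `r`) and the circular shape are assumptions for the example, not the names or shape used in the OTBR scripts.

```matlab
% Sketch: build a circular region-of-interest mask for a 640 x 512 camera image.
w = 640; h = 512;                     % camera image size (full screen)
[X, Y] = meshgrid(1:w, 1:h);          % pixel coordinate grids
cx = 320; cy = 256; r = 200;          % example circle centre and radius
mask = false(h, w);                   % predefine everything as 'false' (excluded)
mask((X - cx).^2 + (Y - cy).^2 <= r^2) = true;  % mark the region of interest 'true'
% Pixels where mask is false are ignored by the tracker.
```

Any other shape (rectangle, polygon, hand-drawn region) works the same way, as long as the mask matrix matches the camera image size.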
**Start Tracking**
In the very beginning, you should start by looking at the camera images in FlyCapture. Configure each camera image until it is sharp and well lit. If nothing works, you can manually adjust shutter and focus on the cameras themselves. Start MATLAB and make sure all necessary files and MATLAB scripts are on your current path. Specify in [myHardwareSetup](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/myHardwareSetup.m) whether you are tracking in a Skinner box or an arena, so that [the correct frameAcquired function](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/wikis/How-tos#list-of-necessary-files-and-scripts-to-start) will be chosen automatically. To start tracking for the first time, you need the default camera files ([defaultCam_cfg](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/Tracking/defaultCam_cfg.mat), [defaultCam_dat](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/Tracking/defaultCam_dat.txt), and [defaultCam_ctrl](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/Tracking/defaultCam_ctrl.txt)). Once you start it, the corresponding information (camera settings and tracking parameters; see below) will be written into new files. Adjust the file names and directories in [myHardwareSetup: (11) memory mapped file(s) for gaze tracker](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/master/myHardwareSetup.m). You need to specify the serial numbers of your cameras here, so that the files are automatically named correctly. To get the video tracker started, name your tracker object and call the [startPointGrey function](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/Tracking/startPointGrey.m), e.g.: `vT = startPointGrey`. This is a wrapper script to initialize the video tracker object and the necessary configurations.
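A minimal start-up session following the steps above might look like this in the MATLAB command window. The variable name `vT` is taken from the text; the comments describe the steps, and the assumption is simply that the OTBR scripts and the defaultCam_* files are already on the path.

```matlab
% Make sure the OTBR scripts and the defaultCam_* files are on the path:
addpath('path/to/OTBR/Tracking');   % adjust to your local checkout

% Initialize the video tracker object via the wrapper script.
% This reads the default camera files and writes your camera settings
% and tracking parameters into the files configured in myHardwareSetup.
vT = startPointGrey;
```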
It calls the [videoTracker script](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/Tracking/videoTracker.m), which runs the actual tracking. It creates the preview window and extracts the settings you change there, such as the camera gain and shutter, for later use by the tracker. It also writes the memory-mapped files, which record where the target was tracked in each of your frames.
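As a rough illustration of how such memory-mapped tracking output could be read back in MATLAB, consider the sketch below. The file name and the per-frame record layout (`[x y angle]` as doubles) are assumptions for the example, not the actual OTBR file format.

```matlab
% Hypothetical example: map a tracker output file and read x, y, angle.
m = memmapfile('camA_dat.bin', ...               % file name is an assumption
    'Format', {'double', [1 3], 'xyAngle'}, ...  % assumed per-frame record
    'Repeat', Inf, 'Writable', false);
latest = m.Data(end).xyAngle;                    % most recent frame: [x y angle]
fprintf('x = %.1f, y = %.1f, angle = %.1f\n', latest(1), latest(2), latest(3));
```

Reading through `memmapfile` avoids copying the whole file and lets another process (here, the tracker) keep appending frames while you read.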