All functions of this toolbox involved in the presentation of stimuli on the screen (e.g. words and images) will display their results immediately. This means that everything the function is told to present will be presented on the screen right after the function is called (for advanced users: the 'Flip' command from the Psychophysics Toolbox is executed in every function call).
A problem may occur when you want to combine different presenting functions, e.g. to present text and an image, or different texts at different locations on the screen. That is why most presenting functions offer the input argument 'dontFlip'. When this option is used, the stimuli are drawn off-screen one after the other (for advanced users: everything is saved to an internal buffer). As soon as a presenting function is called without the 'dontFlip' option, everything that was stored internally is displayed at once. This makes it possible to build up a presentation screen from several function calls and to combine images and text. This process is fast enough that you can make several 'dontFlip' calls and still display them all within the next frame.
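As an illustration, here is a minimal sketch of this pattern. The presenting function names `presentText` and `presentImage` are placeholders for whichever OTBR presenting functions you use; only the 'dontFlip' argument is taken from the description above.

```matlab
% Sketch: build up one composite screen (presentText/presentImage are placeholder names)
presentText('Hello', 'dontFlip');          % drawn to the off-screen buffer only
presentImage('myPicture.png', 'dontFlip'); % still nothing visible on the screen
presentText('Press a key to continue');    % no 'dontFlip': everything is shown at once
```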
## Using a Raspberry Pi as local or remote computer

The following chapter explains how to use the OTBR Toolbox on a Raspberry Pi as a local computer and how to connect from a local computer (PC or Raspberry Pi) to a remote computer (PC or Raspberry Pi).
The Raspberry Pi is an affordable single-board computer that is powerful enough to run a Linux-based operating system and Octave, an open-source alternative to MATLAB. We designed the OTBR Toolbox to be fully compatible with Octave and the Linux operating system on a Raspberry Pi 2 and 3.
To use our toolbox on a Raspberry Pi, all you have to do is download the Raspberry Pi image and write it to a compatible SD card with at least 16 GB. Once this is done, insert the SD card into the Raspberry Pi and boot it. Everything is already set up and you can start using our toolbox immediately.
To update the OTBR or the Psychophysics Toolbox manually, you can download the newest versions and replace or add them in the folder XXXX/XXXX/XXX/XXX. All steps presented in chapter [1.1](/Getting-started#installation-and-setup) and [1.2](/Getting-started#setup-an-experiment) remain the same.
## Using a Raspberry Pi as remote computer
If you want to use a Raspberry Pi as a remote device, you can do that without any problem. The OTBR Toolbox helps you control one or more Raspberry Pi computers remotely with a few easy steps. This allows you to
- control several Skinner Boxes, assuming that each Raspberry Pi controls one Skinner Box
- run complex experiments with multiple screens or other stimulus presentation devices, assuming that each Raspberry Pi is connected to a screen or is controlling a stimulus device with its IO pins
The first step is similar to the procedure described above: download the image file for the Raspberry Pi and flash an SD card. After the Raspberry Pi is up and running, modify one file to change its behavior:
Modify the Octave file ‘/home/pi/.octaverc’. This file is executed when Octave starts. You can open it with a simple text editor and enter MATLAB code at the end of the file. This code will then be executed directly after Octave starts and before any user input can be made. Use this option to call [initOTBR](/Initialization#initotbr) in this file, followed by [connect2ControlComputer](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/master/Network/ExperimentalComputer/connect2ControlComputer.m) (see [section 14.1.](/Network-remote-computer#connect2controlcomputer)). This will initialize the toolbox on the Raspberry Pi. With these changes the Raspberry Pi will wait for input from the control computer after a reboot.
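A minimal sketch of what the end of ‘/home/pi/.octaverc’ could look like, assuming both functions are called without arguments (check their documentation for the arguments you need):

```matlab
% appended to the end of /home/pi/.octaverc
initOTBR;                 % initialize the OTBR Toolbox
connect2ControlComputer;  % wait for commands from the control computer
```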
Before rebooting the Raspberry Pi, make sure to check its IP address. For this purpose we created a link on the desktop, “Change IP address here”, that helps you modify the IP address if necessary.
All other steps can now be performed on the control computer. First make sure to enter all remote host IP addresses in section 10 of [myHardwareSetup](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/master/myHardwareSetup.m). If you use only one Raspberry Pi, of course only one IP address is needed. In your experimental script you now need to call [connect2Host](/Network-control-computer#connect2host) to establish a network connection to all remote hosts (Raspberry Pis) that are defined in [myHardwareSetup.m](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/master/myHardwareSetup.m).
To run an actual experiment the Raspberry Pi needs the experimental files first. These include stimuli, commands and a place to put results. Send your entire experimental folder by using the [sendExperiment2Host](/Network-control-computer#sendexperiment2host) command.
The next thing you want to do is send a command to the remote host. Call [sendCommand2Host](/Network-control-computer#sendcommand2host) for this. This function enables you to send any MATLAB command you like to the host via the established connection. Be aware that the commands need to be entered as strings. The function [printCmd](/Network-control-computer#printcmd) will do this for you. Use the output from [printCmd](/Network-control-computer#printcmd) as input for [sendCommand2Host](/Network-control-computer#sendcommand2host), e.g. `sendCommand2Host(printCmd([], 'keyBuffer', 10, 'goodkey', (1:3), 'badkey', 4));`
During the experiment your [keyBuffer](/User-or-animal-response#keybuffer-animal) results will be saved as variables on the remote host (Raspberry Pi). Do not forget to call [getKeyBufferResults](/Network-control-computer#getkeybufferresult) on your control computer to transfer these results, otherwise you will not be able to save them. After finishing an experiment and transferring your results, always call [disconnectFromHost](/Network-control-computer#disconnectfromhost) (as implied in [1.4. Function Tree](/Getting-started#function-tree)) to shut down the connection as well as the remote host. After disconnecting from a remote host we recommend rebooting the Raspberry Pi before starting a new experiment. Call disconnectFromHost('reboot') to reboot, and consider restarting MATLAB or Octave on your local computer.
You can include all of the commands mentioned above in your experimental script on your control computer. When everything is set up, you don’t need to do more than run this script to conduct an experiment.
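A minimal sketch of such a control-computer script is shown below. Only the function names and the printCmd example are taken from this page; the exact arguments of connect2Host, sendExperiment2Host and getKeyBufferResults are assumptions, so check their documentation before use.

```matlab
% Sketch of a control-computer script (argument details are assumptions)
initOTBR;                                   % initialize the toolbox locally
connect2Host;                               % connect to all hosts defined in myHardwareSetup
sendExperiment2Host('myExperimentFolder');  % transfer stimuli, commands and result folders
sendCommand2Host(printCmd([], 'keyBuffer', 10, 'goodkey', (1:3), 'badkey', 4));
results = getKeyBufferResults;              % transfer the keyBuffer results back
disconnectFromHost('reboot');               % close the connection and reboot the remote host
```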
## How to flash an SD card for the OTBR Toolbox
If you want to use a Raspberry Pi with the OTBR Toolbox, you need to copy a suitable operating system and the OTBR Toolbox, including the Psychophysics Toolbox, to the SD card. The easiest way to do this is with the tool “Win32 Disk Imager”. All you need is a valid system image for the OTBR Toolbox, the disk imager and a 16 GB SD card (bigger cards are supported as well).
To start the program “Win32 Disk Imager” you need administrative rights, since it can also flash hard drives and other devices on your computer! Please be careful and double-check all your settings to avoid data loss!
• Select the image file for the OTBR Toolbox, which has to be located on your computer or an available network share. The file extension is *.img
• Make sure to select the correct drive letter! A wrong drive letter will delete the data on the selected device, even if it is an external hard drive, USB stick or SD card.
• Take your time and double-check that the drive letter corresponds to the letter of the SD card!
• After checking that the drive letter is correct, press the “Write” button (“Schreiben” on a German system) to start the copy process from the image file to the SD card.
• Once the copy process has finished, use the “Verify Only” button to verify that the process was successful.
• Remove the SD card from your computer and put it in your Raspberry Pi
• After booting, make sure to copy the latest version of the OTBR Toolbox onto the Raspberry Pi
## Updating the OTBR Toolbox and the Psychophysics Toolbox
• After booting the Raspberry Pi for the first time, Octave starts in a window without a graphical user interface. Close this window by typing exit in Octave and open the “File Manager” (comparable to the File Explorer on Windows).
• Browse to the folder /home/pi/Octave/Toolbox
• This folder contains two subfolders: “OTBRToolbox” and “PsychToolbox”
• To update the toolboxes, simply replace one of these folders with a new version, but keep the folder names. **It is good practice to rename the old versions instead of deleting them!**
This way it is easy to go back to old versions if needed
• After copying the new files the Raspberry Pi is ready for your experiments
## Configure OTBR Toolbox and Raspberry Pi for use in an OTBR network setup
• If the OTBR Toolbox is used in a [network setup](/How-tos#using-a-raspberry-pi-as-local-or-remote-computer) (the Raspberry Pi or computer executes code that is sent via the network)
• Open the file /home/pi/.octaverc by double-clicking it and add the command “connect2ControlComputer” at the very end of the file.
This makes sure that Octave starts and calls the function [connect2ControlComputer](/Network-remote-computer#connect2controlcomputer) after the initialization is done
• Now we have to configure the IP address of your Raspberry Pi. To do so, double-click the icon changeIPAdressHere on your desktop
• Scroll down and change the line “static ip_address=192.168.0.50/24” if you want to use an IP address other than 192.168.0.50
• Reboot your Raspberry Pi; once it is up and running, it expects commands from a control computer
## Coding an MRI experiment
This toolbox provides the option to communicate between an MRI scanner and an experimental computer. To use this option you need to connect the devices via the parallel port and call the functions described below in the correct order.
First you need to initialize the basic functions. To do so, call [initMRIChatPart1](/MRI-or-trigger#initmrichatpart1). This will start the program triggerCounter2_MCR9-2.exe, which organizes the communication with the parallel port. The program opens its own window displaying user information as well as the number of triggers sent, their timing and the intervals between scans. After the experiment is done and the window is closed properly, the trigger information will be saved to a file using the date and time of the experiment as its name. The application can be stopped by pressing escape.
Before continuing you need to open a presentation window ([initWindow](/Stimuli-Display#initwindow)); this is mandatory for the following functions. After you have initialized the presentation window, you can call [initMRIChatPart2](/MRI-or-trigger#initmrichatpart2). Here you define the number of scans you want to wait for before actually starting the experiment. User information about the overall process will appear on the presentation window automatically.
Lastly, if you have to send triggers from the experimental computer to external devices, you have to use the function [sendTriggerMRI](/MRI-or-trigger#sendtriggermri). During an MRI experiment you will not be able to use [parChat](/MRI-or-trigger#parchat) or [parChatWord](/MRI-or-trigger#parchatword) for trigger events. That is because these functions connect directly to the parallel port, which in this case is occupied by the program triggerCounter2. In contrast, [sendTriggerMRI](/MRI-or-trigger#sendtriggermri) uses the triggerCounter2 application to send triggers via the parallel port.
The following example provides a very simple MRI experiment that contains all the necessary features, as well as the use of [sendTriggerMRI](/MRI-or-trigger#sendtriggermri) connected to the display of user information. You can use it as a baseline for your own experiments.
~~~matlab
%% Example for Scanner
% trigger signal must be set to pin 10 at the parallel port!
% (minimal sketch: the argument values of initMRIChatPart2 and sendTriggerMRI are assumptions)
initMRIChatPart1;     % start triggerCounter2 and the parallel port communication
initWindow;           % open the presentation window (mandatory before part 2)
initMRIChatPart2(5);  % wait for a number of scans (here 5) before the experiment starts
sendTriggerMRI(1);    % mark an event by sending a trigger to external devices
~~~
## Sending triggers via the parallel port

When your experiment focuses on a specific time course, it may be helpful to mark specific events, such as a stimulus display, with event codes, for example triggers. Especially in EEG and MRI experiments this can be very helpful for connecting recordings to behavioral responses. The OTBR Toolbox can send triggers from e.g. an experimental computer to a control computer using a parallel port or other supported IO devices. Functions that can use the parallel port directly are [sendTriggerMRI](/MRI-or-trigger#sendtriggermri), [parChat](/MRI-or-trigger#parchat) and [parChatWord](/MRI-or-trigger#parchatword). The first one has been explained in the previous chapter. The other two can be used, for example, during EEG or other behavioral experiments that use external devices.
The function [parChat](/MRI-or-trigger#parchat) can activate or deactivate single pins of the parallel port, which can be used as triggers. The function requires one or two input arguments. When two arguments are given, the function activates or deactivates pins 2-9 of the parallel port: the first argument defines the pin (‘1’=pin 2, ‘2’=pin 3, etc.), the second argument defines the status (on/off). If only one argument is given, the function listens to pins 10-15 of the parallel port (‘1’=pin 10, ‘2’=pin 11, etc.). The following list shows all possible outcomes and the necessary arguments to achieve them.
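In addition to that argument list, here is a short sketch of typical calls (the numeric on/off encoding and the returned status variable are assumptions):

```matlab
parChat(1, 1);       % activate pin 2 (first data pin)
parChat(1, 0);       % deactivate pin 2 again
state = parChat(3);  % listen to pin 12 and read its current status
```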
In contrast to [parChat](/MRI-or-trigger#parchat), the function [parChatWord](/MRI-or-trigger#parchatword) has the advantage of activating several pins at once. To do so, it uses all 8 bits of the data port (pins 2-9). It needs only one input argument: the decimal value of an array representing a binary number. Using this option takes several steps:
First you should think about which pins you want to activate and convert this into an 8-digit array.
**Example**
>You want to activate pins 8, 6 and 2 at the same time. Converted to an 8-digit array it would look like this: '01010001'. The first digit represents pin 9, the second pin 8, etc., and the last digit represents pin 2. That means the numerical order of the pins is inverted.
Next you have to convert your array into a decimal value. You can use the MATLAB built-in function [bin2dec](https://de.mathworks.com/help/matlab/ref/bin2dec.html?s_tid=doc_ta) for this.
**Example**
~~~matlab
bin2dec('01010001')
ans=81
~~~
This also works the other way around; for this you use [dec2bin](https://de.mathworks.com/help/matlab/ref/dec2bin.html).
**Example**
~~~matlab
dec2bin(81,8)
ans='01010001'
% the first input argument is the decimal value, the second argument is the minimum number of digits
~~~
Now you can use this decimal number as input for [parChatWord](/MRI-or-trigger#parchatword).
**Example**
~~~matlab
parChatWord(81)
~~~
:arrow_right: activates pins 2, 6 and 8
Finally, you can deactivate all pins at once by using 0 as the input argument.
**Example**
~~~matlab
parChatWord(0)
~~~
:arrow_right: deactivates all pins (2-9)
Keep in mind that every option mentioned in this chapter needs a valid connection via the parallel port!
## Tracking
#### List of necessary files and scripts to start
- [default camera configuration file](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/Tracking/defaultCam_cfg.mat)
:arrow_right: these are the most basic files; essentially everything in [the folder](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/tree/Feature_CameraTracking/Tracking) is needed as well
#### Start Tracking Step by Step – Skinner Box
1. set up hardware: cameras and, if needed, infrared lights (to better see the reflector used for tracking and to increase the contrast between beak and surroundings)
2. get the necessary add-ons for MATLAB
3. start the FlyCapture app and adjust the settings until the picture is sharp and well lit; if nothing works, manually adjust shutter and focus on the cameras; set the frame rate to 75
4. open MATLAB and adjust the file names and paths in [myHardwareSetup](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/master/myHardwareSetup.m)
5. start tracking and adjust the values in the preview function; they will be saved automatically in the configuration files
6. define your mask and set the mask to custom in the preview (or use one of the default masks)
7. use a conversion matrix creator to calibrate your tracking (only necessary in a Skinner Box)
In a Skinner Box, a tracked peck on the screen is treated just as a peck on a touchscreen would be. Therefore, in [myHardwareSetup: (6) TOUCH SCREEN SETTINGS](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/master/myHardwareSetup.m), the line
```matlab
SETUP.touchScreen.on=0;
```
has to be set to 1.
#### Start Tracking Step by Step – Arena
1. set up hardware: cameras and, if needed, infrared lights (to better see the reflector used for tracking and to increase the contrast between beak and surroundings)
2. get the necessary add-ons for MATLAB
3. start the FlyCapture app and adjust the settings until the picture is sharp and well lit; if nothing works, manually adjust shutter and focus on the cameras; set the frame rate to 50
4. open MATLAB and adjust the file names and paths in [myHardwareSetup](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/master/myHardwareSetup.m)
5. start tracking and adjust the values in the preview function; they will be saved automatically in the configuration files
6. define your mask and set the mask to custom in the preview (or use one of the default masks)
#### In-depth explanations
The camera provides a 640 x 512 pixel image (full screen). Since not all parts of this camera image are necessary for tracking (it might contain walls or parts of the apparatus that are irrelevant for tracking), an additional mask can be used to specify the region of interest. This logical mask has the same size as the (fullscreen) camera image and defines the pixels used for tracking (any shape made of 'true' values within a matrix predefined as 'false' can be used). Artifacts outside of the area of interest can be ignored via the mask. The [video tracker](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/Tracking/videoTracker.m) produces the x and y coordinates and the angle of either a black or a white target object (or both). The tracking coordinates can then be used to reconstruct the path of an animal or the coordinates of a peck on a screen. For tracking with two cameras in a Skinner Box, only the x-values of each camera are processed further. One camera is set up so that it forms an x-axis relative to the experimental screen, the other so that it forms the y-axis. For both, the relevant information is given by the x-value of the tracked point in the camera image. These values are converted to points on the screen.
**Start Tracking**
In the very beginning, you should start by looking at the camera images in FlyCapture. You should configure each camera image until it is sharp and well lit. If nothing works, you can manually adjust shutter and focus on the cameras themselves. Start MATLAB and make sure all necessary files and MATLAB scripts are on your current path. Specify in [myHardwareSetup](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/myHardwareSetup.m) whether you are tracking in a Skinner Box or an arena so that [the correct frameAcquired function](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/wikis/How-tos#list-of-necessary-files-and-scripts-to-start) will be chosen automatically. To start tracking for the first time, you need the default camera files ([defaultCam_cfg](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/Tracking/defaultCam_cfg.mat), [defaultCam_dat](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/Tracking/defaultCam_dat.txt), and [defaultCam_ctrl](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/Tracking/defaultCam_ctrl.txt)). Once you start it, the corresponding information (camera settings and tracking parameters; see below) will be written into new files. Adjust the file names and directories in [myHardwareSetup: (11) memory mapped file(s) for gaze tracker](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/master/myHardwareSetup.m). You need to specify the serial numbers of your cameras here, so that the files are automatically named correctly. To get the video tracker started, you need to name your tracker object and call the [startPointGrey function](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/Tracking/startPointGrey.m), e.g. vT = startPointGrey. This is a wrapper script that initializes the video tracker object and the necessary configurations. It calls the [videoTracker script](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/Tracking/videoTracker.m), which runs the actual tracking. It creates the preview window and extracts the information you change in there, like the camera gain and shutter, to be used by the tracker later. It also writes the memory mapped files, which essentially contain the information about what was tracked where in each of your frames.
Now you can open the preview window for one of your cameras by first stopping the tracking for that camera
```matlab
vT{i}.stop % i specifies the camera number
```
and then calling the preview window.
```matlab
vT{i}.preview
```
You can only open the preview for one camera at a time! In the preview you will see your camera image twice: on the left side there is the raw version, on the right side the thresholded version. Now you can adjust different settings. First, specify whether you want to track a black object in front of a bright background (‘dark’), a bright object, e.g. a reflector, in front of a dark background (‘white’), or both simultaneously, e.g. a black bird in the arena with a reflector attached ('both').
The region of interest of your tracking is specified by your mask. To get started, and/or if you want to use the whole camera image, you can select ‘noMask’. The mask needs to be a logical matrix, which is predefined with 'false'. The region that should be used for tracking is specified by setting the respective sections of the matrix to 'true'. The name of the logical itself must be ‘mask’; the name of the file must be one of the predefined mask names (in [video tracker](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/Tracking/videoTracker.m)). You can also change the names the mask needs to have in the code; here you need to make sure to change the names for loading as well as for saving, which are two different sections in the code (adjust the cell array 'allMasks' in lines 370 and 579 of the [video tracker](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/Tracking/videoTracker.m) function). The mask is applied to the camera image (each element of the matrix corresponds to one pixel of the camera image, [example file](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/Tracking/arenaScreenshot.mat)). Only those pixels that are set to ‘true’ are pixels in which something will be tracked.
```matlab
%%% Create custom mask based on camera image screenshot
dataRaw=load('arenaScreenshot.mat');% black tape to mark boundaries of arena
% figure; image(dataRaw.data); % to visualize
data=dataRaw.data<35;% identify black pixels within image (manually adjust threshold)
dataROI=imfill(data,'holes');% fill space within hexagon with 'true'
mask=dataROI(1:2:end,1:2:end);% downscale image/matrix size to 512 x 640
%%% Visualize scaled tracking mask (512 x 640) used as tracking mask:
figure;spy(mask);% visualize created mask
save('maskHexagon.mat','mask');% finally save 'mask' logical as 'maskHexagon'
```
**Image thresholding**
As mentioned earlier, the figure on the right side of the preview window contains the thresholded version of the camera image. This means that here you should see your target object either in black on a white background (target color: black), in white on a black background (target color: white), or both (target color: both). By default, the initial target color is set to 'black', so you will see only the black target displayed on a white background. Your target object is represented as a 'blob' and its detection is based on a [blob analysis in MATLAB](https://de.mathworks.com/help/vision/ref/blobanalysis.html). It is a rectangle by default, but its form can be changed. By changing the value of the Blob Minimum, you can change its size in pixels. For tracking in a Skinner Box, the blob minimum should be set to 1. Here, locating the blob is done by finding the median of all thresholded pixels, not by an ‘actual’ blob analysis.

To track a target within your camera image, you need sufficient contrast between the blob and its surroundings. The camera image contains the RGB information of each pixel, which is translated to only contain the information ‘something is here’ or ‘nothing is here’. This is done by adding together the RGB values and comparing this sum to the value specified at Threshold. When tracking a black and a white target simultaneously, you can specify a 'Threshold' and 'Blob minimum' for both targets individually. Since RGB is an additive color space, a white object is tracked if the Threshold is lower than the sum of the RGB values, while a black object is tracked if the Threshold is higher than the sum of the RGB values. This means that if you have trouble tracking your object, you should reduce the value specified at Threshold for a white object and increase it for a black object. You should do it the other way around if your problem is not that you cannot track your object, but that you track too much of its surroundings. If you want to track a less distinct target, e.g. a grey pigeon, you might not be able to exclude some parts of the arena from also being tracked. You can circumvent this problem by adjusting the blob minimum or by removing the distracting elements of the arena using a custom mask. In addition, you should also adjust the individual camera properties such as brightness, gain, gamma, and shutter. Adjusting the gain is especially useful for removing shadow artifacts. To save and use your settings, close the preview by selecting the button ‘Stop Preview’. Since you stopped tracking for each of your cameras, you now have to close them (vT{i}.destroy) and restart tracking.
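The thresholding logic described above can be illustrated with a few lines of plain MATLAB (this is not toolbox code, just a sketch of the principle; the variable names are made up):

```matlab
% frame: h x w x 3 camera image; the thresholds correspond to the values set in the preview
rgbSum = sum(double(frame), 3);        % summed RGB value of each pixel
whiteTarget = rgbSum > whiteThreshold; % pixels counted as the white (reflector) target
blackTarget = rgbSum < blackThreshold; % pixels counted as the black target
```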
**Important tracking commands**
To start tracking, you need to create a video tracker object by calling the [startPointGrey function](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/Tracking/startPointGrey.m)
```matlab
vT=startPointGrey
```
Once you have started it, you can open the preview for one camera at a time. To do that, you first need to stop the tracking
```matlab
vT{i}.stop
vT{i}.preview
```
If you changed values in the preview, it's best to destroy each camera and restart the tracking
```matlab
vT{i}.destroy
```
When you have adjusted everything and are ready to start an experimental paradigm, you need to run the tracking and the paradigm in two different MATLAB instances.
**What are these files with the camera serial info and what do they do?**
- Camera configuration files (either [defaultCam_cfg](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/Tracking/defaultCam_cfg.mat) or serialNumber_cfg) :arrow_right: Configuration of ROI, brightness, blob values etc.
- Camera data document (either [defaultCam_dat](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/Tracking/defaultCam_dat.txt) or serialNumber_dat) :arrow_right: memory mapped file, used to save tracking data
- Camera control files (either [defaultCam_ctrl](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/Tracking/defaultCam_ctrl.txt) or serialNumber_ctrl) :arrow_right: memory mapped file, used to save event codes
There are supposed to be three files per camera that have each camera’s serial number in the name (four if you write the binary). You can change the name of the binary by giving a filename input to the [video tracker](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/Tracking/videoTracker.m). To start the tracking, you need the three default files (see above). When you start tracking, the files specific to your camera settings will be written. To write the data document (_dat file) and the control file (_ctrl file) it is sufficient to just start the tracking; for the configuration matrix you have to open the preview. Be careful not to just close the preview but to click the button ‘stop preview’ to save your current configurations into the configuration file. The files contain different types of information important to your tracking. The camera configuration files (cameraSerial_cfg.mat) contain all settings you make when you use the camera preview. To start tracking, you need some default parameters, which are saved in the default files. You can just copy them and rename them according to your cameras’ serial numbers; if you have also adjusted the paths in [myHardwareSetup](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/myHardwareSetup.m), anything you change in the preview will be written into the right file. The remaining two files are both ‘memory mapped files’ and are overwritten with every new frame. One specifies the data sent to the tracker ('..._ctrl', datIn), the other the results of the tracking/blob analysis ('_dat', datOut). The 'datIn' file (cameraSerial_ctrl.txt) contains four values: the **1st** is true per default; if set to -1, the object is destroyed and acquisition stops. The **2nd** is -1 per default; if set to true, raw data will be written into the binary file. Values **3 & 4** represent event codes (decimal + timepoint per event). The 'datOut' file (cameraSerial_dat.txt) contains eight values:
1. x coordinate of the black object,
2. y coordinate of the black object,
3. angle of the black object,
4. x coordinate of the white object,
5. y coordinate of the white object,
6. angle of the white object,
7. frame number,
8. time
The first six values are predefined as 'NaN' and will only be replaced if a corresponding target was tracked. The last file is a binary file (cameraSerial.txt), for which a custom filename can be specified as an additional input argument in the call to the [videoTracker](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/Tracking/videoTracker.m). It contains the information from the memory mapped files (tracked positions and event codes). Data will only be saved in this bin file if the user sets the second value within the 'datIn' file to 1 (true). Also make sure that you are actually tracking something. A common error message is “insufficient physical memory”, which is usually caused by a frame rate that is too high :arrow_right: set the frame rate in FlyCapture or via SETUP.tracker.frameRate in [myHardwareSetup (11) memory mapped files for gaze tracker](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/myHardwareSetup.m): 75 fps for beak tracking (box) or 50 fps for position tracking (arena).
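As a rough illustration of how a paradigm could read the tracking results and write event codes, here is a minimal MATLAB sketch using memmapfile. It assumes the memory mapped files hold plain double values in the order listed above; the file names, the data layout, and the use of GetSecs as the time source are assumptions for this sketch and may differ from the toolbox's actual implementation:

```matlab
% Read the datOut file (tracking results, eight values per frame)
datOut = memmapfile('12345678_dat.txt', 'Format', 'double');   % camera serial is a placeholder
vals   = datOut.Data;        % [xBlack yBlack angBlack xWhite yWhite angWhite frameNo time]
xBlack = vals(1);            % NaN if the black target was not tracked in this frame
yBlack = vals(2);

% Write to the datIn file (control values and event codes)
datIn = memmapfile('12345678_ctrl.txt', 'Format', 'double', 'Writable', true);
datIn.Data(2) = 1;           % 2nd value: start writing raw data into the binary
datIn.Data(3) = 42;          % 3rd value: event code (decimal)
datIn.Data(4) = GetSecs;     % 4th value: timepoint of the event (Psychtoolbox GetSecs, as an example)
% datIn.Data(1) = -1;        % 1st value: would destroy the tracker object and stop acquisition
```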
**Conversion matrices**
Conversion matrices are necessary for using the tracking in a Skinner box. The output generated by the camera tracking consists of the coordinates of the tracked object in each of the cameras, while the [keyBuffer](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/wikis/User-or-animal-response#keybuffer-animal) function requires pixels on the screen. The conversionMatrixCreator scripts can be used to generate a matrix that transforms the camera coordinates into pixels on the screen. To calibrate the tracking, you can either use the [version for birds](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/Tracking/conversionMatrixCreator_Birds.m), in which a bird is supposed to peck the dot on the screen, or the [version for humans](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/Tracking/conversionMatrixCreator_Human.m), where you do the pecking yourself. If you use the bird version of the calibrator, you still have to do a manual round of pecking first, so that pre-defined peck fields exist around each dot. This prevents artifacts in the calibration caused by the bird pecking somewhere other than the dot. The calibration itself is then done with the data generated by the bird. The code to create the conversion matrices and the camera tracking have to run in two different Matlab instances, just like when running experiments. You can specify the distances and the area in which the dots appear on the screen. After you have pecked all the dots, execute the following sections of the script, in which the camera coordinates are transformed and the missing pixels are calculated. Start by executing only the second section, as this also plots your camera values and serves as a check of the quality of your calibration: one of the curves in the plot should increase fairly steadily (with plateaus) while the other should oscillate. Then run the rest of the script and remember to change the names of the matrices in [myHardwareSetup](https://gitlab.ruhr-uni-bochum.de/ikn/OTBR/-/blob/Feature_CameraTracking/myHardwareSetup.m).
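To make the role of such a matrix more concrete, here is a purely illustrative sketch of how camera coordinates might be converted into screen pixels. The variable names and the assumption of a simple linear mapping via a 3-by-2 matrix are not taken from the toolbox; the actual transformation is defined by the conversionMatrixCreator scripts and applied inside keyBuffer:

```matlab
% Illustrative only: assume convMat maps homogeneous camera coordinates
% [camX camY 1] to screen pixels [pixX pixY].
convMat = [0.5 0; 0 0.5; 10 20];          % dummy 3-by-2 matrix for illustration
camX = 412;  camY = 288;                  % example camera coordinates of a peck
screenXY = [camX camY 1] * convMat;       % apply the assumed linear mapping
pixX = round(screenXY(1));
pixY = round(screenXY(2));
fprintf('Peck at screen pixel (%d, %d)\n', pixX, pixY);
```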