Abstract
Social behavior is an important aspect of rodent models in behavioral neuroscience. Abnormal social behavior can indicate the onset of conditions such as Huntington’s disease. Studying social behavior requires objectively quantifying the occurrence of specific rodent interactions. While this can be done manually by annotating occurrences in videos, manual
annotation is a time-consuming and sometimes subjective process. Human observers need five to ten times the length of the video. We therefore aim to reduce the manual effort by automating the annotation process. Automated annotation involves a computational model that distinguishes between the different behaviors using visual information from the video, such as the relative motion and pose of the rodents. Before the model can be applied, it is trained with labeled examples of every behavior. Rodent social behavior classification is a challenging task. The classification method has to deal with highly unbalanced occurrence rates of the different behaviors, causing less frequent behaviors to be underrepresented. Furthermore, behavior categories sometimes leave room for interpretation, which causes even human observers to disagree on specific occurrences. Similarly, the precise temporal extent of interactions is often ambiguous. Finally, tracking multiple, visually similar rodents is a demanding task, in particular during close-contact interactions where occlusion is frequent. We find that limited tracking quality inhibits the recognition of close-contact interactions. Once a classification model is trained for a set of interactions, it can be applied to novel videos recorded in the same environment. We demonstrate that it can be difficult to comply with the requirements of a constant environment, because they include not only controllable, external factors such as illumination and cage size, but also variations in the tested animal population. In a cross-dataset experiment we use juvenile and adult rats to show that behavior variations due to age can reduce recognition accuracy. We argue for adequate cross-dataset validation and more research into adaptation methods to deal with such variations systematically.
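One common way to counter such unbalanced occurrence rates (a generic remedy sketched here for illustration, not necessarily the method used in this work) is to reweight classes inversely to their frequency when training the classifier, so that errors on a rare behavior cost proportionally more. A minimal sketch with scikit-learn on synthetic stand-in data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for per-frame behavior features: a frequent
# behavior (label 0) heavily outnumbers a rare one (label 1),
# mimicking the unbalanced occurrence rates described above.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (950, 4)),
               rng.normal(1.5, 1.0, (50, 4))])
y = np.array([0] * 950 + [1] * 50)

# class_weight="balanced" scales each class by
# n_samples / (n_classes * n_samples_in_class), so misclassifying
# the rare behavior is penalized more heavily during training.
clf = LogisticRegression(class_weight="balanced").fit(X, y)
```

Without the reweighting, a classifier on such data can reach high accuracy simply by ignoring the rare behavior; the weighting trades some majority-class accuracy for sensitivity to the underrepresented one.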
If no previous classification model is available, for example because behavior categories are changed or added, the human observer is left with manual annotation. We aim to reduce the effort in such scenarios by formulating the annotation task as an interactive labeling problem. The human starts annotating examples of interactions while the classifier learns to distinguish them. Once the classifier has learned sufficiently, it may take over the annotation and relieve the user of much of the work. To reduce the time further, we experiment with different strategies that guide the user to annotate particularly useful interaction examples. We demonstrate that placing the human in the annotation loop reduces the annotation time substantially compared to traditional, sequential labeling. Participants in a user study trained an accurate classifier in less than half an hour, which allowed the annotations to be propagated throughout the remaining two hours of video. This interactive annotation approach enables neuroscientists to analyze behavioral data more quickly than before and to study previous data in a new light with limited manual work.
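Guidance strategies of the kind mentioned above are closely related to active learning. As an illustration (the data and loop structure below are hypothetical, not the study's actual protocol), uncertainty sampling repeatedly asks the human annotator to label the example the current classifier is least sure about:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Unlabeled pool of interaction descriptors (synthetic stand-in).
rng = np.random.default_rng(1)
pool = np.vstack([rng.normal(-1, 1, (200, 3)), rng.normal(1, 1, (200, 3))])
true_labels = np.array([0] * 200 + [1] * 200)  # the "human annotator"

# Small seed set containing examples of both interaction types.
labeled_idx = [0, 1, 2, 200, 201, 202]

for _ in range(5):  # a few interactive annotation rounds
    clf = LogisticRegression().fit(pool[labeled_idx], true_labels[labeled_idx])
    proba = clf.predict_proba(pool)
    # Uncertainty sampling: query the example with the least
    # confident prediction, and never re-query labeled examples.
    uncertainty = 1.0 - proba.max(axis=1)
    uncertainty[labeled_idx] = -1.0
    labeled_idx.append(int(uncertainty.argmax()))
```

After a handful of such rounds, the classifier can be applied to the remaining pool, which mirrors how a model trained in a short interactive session can propagate annotations through hours of video.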