Limbs Detection and Tracking of Head-Fixed Mice for Behavioral Phenotyping Using Motion Tubes and Deep Learning

Abstract

The broad accessibility of affordable and reliable recording equipment, and its relative ease of use, has enabled neuroscientists to record large amounts of neurophysiological and behavioral data. Because most of this raw data is unlabeled, great effort is required to adapt it for behavioral phenotyping or signal extraction, for behavioral and neurophysiological data, respectively. Traditional approaches to labeling datasets rely on human annotators, a resource- and time-intensive process that often produces data prone to reproducibility errors. Here, we propose a deep learning-based image segmentation framework to automatically extract and label limb movements from movies capturing frontal and lateral views of head-fixed mice. The method decomposes the image into elemental regions (superpixels) with similar appearance and concordant dynamics, and stacks them following their partial temporal trajectory. These 3D descriptors (referred to as motion cues) are used to train a deep convolutional neural network (CNN). We use the features extracted at the last fully connected layer of the network to train a Long Short-Term Memory (LSTM) network that introduces spatio-temporal coherence to the limb segmentation. We tested the pipeline in two video acquisition settings. In the first, the camera is installed on the right side of the mouse (lateral setting). In the second, the camera is installed facing the mouse directly (frontal setting). We also investigated the effect of the noise present in the videos and the amount of training data needed, and we found that reducing the number of training samples does not result in a drop of more than 5% in detection accuracy, even when as little as 10% of the available data is used for training.
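The motion-cue construction described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: fixed-size rectangular crops stand in for SLIC superpixels, and the per-frame centers are assumed to come from an external tracker (e.g. optical flow); the resulting 3D tube is the kind of descriptor that would be fed to the CNN.

```python
import numpy as np

def extract_motion_tube(frames, centers, size=16):
    """Stack same-sized crops around per-frame centers into a 3D 'motion tube'.

    frames:  (T, H, W) grayscale video array.
    centers: (T, 2) array of (row, col) region centers per frame
             (hypothetical output of a superpixel tracker).
    Returns a (T, size, size) array: the region's appearance stacked
    along its partial temporal trajectory.
    """
    half = size // 2
    T, H, W = frames.shape
    tube = np.zeros((T, size, size), dtype=frames.dtype)
    for t, (r, c) in enumerate(centers):
        # Clip the center so the crop stays inside the frame.
        r = int(np.clip(r, half, H - half))
        c = int(np.clip(c, half, W - half))
        tube[t] = frames[t, r - half:r + half, c - half:c + half]
    return tube

# Toy usage: 5 frames of a 64x64 video, region tracked at a fixed center.
frames = np.stack([np.full((64, 64), t, dtype=float) for t in range(5)])
centers = np.array([[32, 32]] * 5)
tube = extract_motion_tube(frames, centers, size=16)
```

In the paper's pipeline, one such tube is built per superpixel, the CNN classifies tubes as limb/non-limb, and the sequence of last-fully-connected-layer features is then passed to the LSTM to enforce temporal coherence.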


Citation (APA)

Abbas, W., Masip, D., & Giovannucci, A. (2020). Limbs Detection and Tracking of Head-Fixed Mice for Behavioral Phenotyping Using Motion Tubes and Deep Learning. IEEE Access, 8, 37891–37901. https://doi.org/10.1109/ACCESS.2020.2975926


