An Assistive Computer Vision Tool to Automatically Detect Changes in Fish Behavior in Response to Ambient Odor

2020
Author(s):  
Sreya Banerjee ◽  
Lauren Alvey ◽  
Paula Brown ◽  
Sophie Yue ◽  
Lei Li ◽  
...  

The analysis of fish behavior in response to odor stimulation is a crucial component of the general study of cross-modal sensory integration in vertebrates. In zebrafish, the centrifugal pathway runs between the olfactory bulb and the neural retina, originating at the terminalis neuron in the olfactory bulb. Any changes in the ambient odor of a fish's environment warrant a change in visual sensitivity and can trigger mating-like behavior in males due to increased GnRH signaling in the terminalis neuron. Behavioral experiments to study this phenomenon are commonly conducted in a controlled environment where a video of the fish is recorded over time before and after the application of chemicals to the water. Given the subtleties of behavioral change, trained biologists are currently required to annotate such videos as part of a study. This process of manually analyzing the videos is time-consuming, requires multiple experts to avoid human error and bias, and cannot be easily crowdsourced on the Internet. Machine learning algorithms from computer vision, on the other hand, have proven effective for video annotation tasks because they are fast, accurate, and, if designed properly, can be less biased than humans. In this work, we propose to automate the entire process of analyzing videos of behavioral changes in zebrafish using tools from computer vision, relying on minimal expert supervision. The overall objective of this work is to create a generalized tool that predicts animal behaviors from videos using state-of-the-art deep learning models, with the dual goal of advancing understanding in biology and engineering a more robust and powerful artificial information-processing system for biologists.
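As a toy illustration of the kind of measurement such a tool automates, the sketch below compares swim speed before and after stimulus application from per-frame fish centroids. The tracker output format, frame rate, and change criterion here are all hypothetical assumptions for illustration, not the paper's method.

```python
import math

def mean_speed(centroids, fps):
    """Mean swim speed (pixels/s) from a list of per-frame (x, y) centroids."""
    steps = [math.dist(a, b) for a, b in zip(centroids, centroids[1:])]
    return sum(steps) / len(steps) * fps

def behavior_changed(pre, post, fps=30, ratio=1.5):
    """Flag a behavioral change when post-stimulus swim speed differs from the
    pre-stimulus baseline by more than `ratio` (an illustrative criterion)."""
    base, after = mean_speed(pre, fps), mean_speed(post, fps)
    return max(base, after) / max(min(base, after), 1e-9) > ratio
```

In practice the centroids would come from a detection/tracking model run on the recorded video, and a real analysis would use richer features (turn rate, tank-position preference) rather than speed alone.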

2021, Vol 11 (1)
Author(s):  
Sreya Banerjee ◽  
Lauren Alvey ◽  
Paula Brown ◽  
Sophie Yue ◽  
Lei Li ◽  
...  



Author(s):  
Shiyu Deng ◽  
Chaitanya Kulkarni ◽  
Tianzi Wang ◽  
Jacob Hartman-Kenzler ◽  
Laura E. Barnes ◽  
...  

Context-dependent gaze metrics, derived from eye movements explicitly associated with how a task is being performed, are particularly useful for formative assessment that includes feedback on specific behavioral adjustments for skill acquisition. In laparoscopic surgery, context-dependent gaze metrics are underinvestigated and are commonly derived either by qualitatively inspecting videos frame by frame or by mapping fixations onto a static surgical task field. This study collected eye-tracking and video data from 13 trainees practicing the peg transfer task. Machine learning algorithms from computer vision were employed to derive metrics of tool speed, fixation rate on (moving or stationary) target objects, and fixation rate on tool-object combinations. Preliminary results from a clustering analysis of the measurements from 499 practice trials indicated that the metrics were able to differentiate three skill levels among the trainees, suggesting high sensitivity and potential of context-dependent gaze metrics for surgical assessment.
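A minimal sketch of how such context-dependent metrics can be computed once per-frame fixations and object detections are available; the data layout and function names are hypothetical, not the study's implementation.

```python
import math

def fixation_rate_on_target(fixations, boxes):
    """Fraction of fixations landing inside the target object's bounding box
    for the corresponding frame (object boxes may move frame to frame).
    fixations: list of (x, y); boxes: list of (x1, y1, x2, y2)."""
    hits = sum(
        x1 <= fx <= x2 and y1 <= fy <= y2
        for (fx, fy), (x1, y1, x2, y2) in zip(fixations, boxes)
    )
    return hits / len(fixations)

def tool_speed(tool_xy, fps):
    """Mean tool-tip speed (pixels/s) from per-frame tool detections."""
    steps = [math.dist(a, b) for a, b in zip(tool_xy, tool_xy[1:])]
    return sum(steps) / len(steps) * fps
```

Because the boxes are supplied per frame, the same fixation-rate computation covers both stationary and moving targets, which is what distinguishes these metrics from mapping fixations onto a static task field.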


Author(s):  
Bingshan Niu ◽  
Guangyao Li ◽  
Fang Peng ◽  
Jing Wu ◽  
Long Zhang ◽  
...  

Author(s):  
Yizhi Zhou ◽  
Hong Yu ◽  
Junfeng Wu ◽  
Zhen Cui ◽  
Fangyan Zhang

Sensors, 2020, Vol 20 (23), pp. 6713
Author(s):  
Andrzej Brodzicki ◽  
Joanna Jaworek-Korjakowska ◽  
Pawel Kleczek ◽  
Megan Garland ◽  
Matthew Bogyo

Clostridioides difficile infection (CDI) is an enteric bacterial disease that is increasing in incidence worldwide. Symptoms of CDI range from mild diarrhea to severe, life-threatening inflammation of the colon. While antibiotics are the standard-of-care treatment for CDI, they are also the biggest risk factor for the development of CDI and its recurrence. Therefore, novel therapies that successfully treat CDI and protect against recurrence are an unmet clinical need. Screening for novel drug leads often relies on manual image analysis, a process that is slow, tedious, and subject to human error and bias. So far, little work has focused on computer-aided screening for drug leads based on fluorescence images. Here, we propose a novel method to identify characteristic morphological changes in human fibroblast cells exposed to C. difficile toxins based on computer vision algorithms supported by deep learning methods. Classical image processing algorithms for the pre-processing stage are used together with an adjusted pre-trained deep convolutional neural network responsible for cell classification. In this study, we take advantage of transfer learning methodology by examining pre-trained VGG-19, ResNet50, Xception, and DenseNet121 convolutional neural network (CNN) models with adjusted, densely connected classifiers. We compare the obtained results with those of other machine learning algorithms and also visualize and interpret them. The proposed models have been evaluated on a dataset containing 369 images with 6112 cases. DenseNet121 achieved the best results, with 93.5% accuracy, 92% sensitivity, and 95% specificity.
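The reported accuracy, sensitivity, and specificity follow the usual binary confusion-matrix definitions; treating toxin-exposed cells as the positive class and controls as the negative class is an assumption here, not stated in the abstract.

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity (recall on positives), and specificity
    (recall on negatives) from binary confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity
```

Reporting sensitivity and specificity alongside accuracy matters here because a screening assay with many control cells could reach high accuracy while still missing toxin-exposed ones.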


Sensors, 2020, Vol 20 (9), pp. 2684
Author(s):  
Obed Tettey Nartey ◽  
Guowu Yang ◽  
Sarpong Kwadwo Asare ◽  
Jinzhao Wu ◽  
Lady Nadia Frempong

Traffic sign recognition is a classification problem that poses challenges for computer vision and machine learning algorithms. Although both computer vision and machine learning techniques have constantly been improved to solve this problem, the sudden rise in the number of unlabeled traffic signs has made it even more challenging. Large-scale data collation and labeling are tedious and expensive tasks that demand much time, expert knowledge, and fiscal resources to satisfy the data demands of deep neural networks. Unbalanced data poses a further challenge to computer vision and machine learning algorithms seeking better performance. These problems raise the need for algorithms that can fully exploit a large amount of unlabeled data, use a small amount of labeled samples, and be robust to data imbalance in order to build an efficient and high-quality classifier. In this work, we propose a novel semi-supervised classification technique that is robust to small and unbalanced data. The framework integrates weakly-supervised learning and self-training with self-paced learning to generate attention maps that augment the training set, and utilizes a novel pseudo-label generation and selection algorithm to generate and select pseudo-labeled samples. The method improves performance by: (1) normalizing the class-wise confidence levels to prevent the model from ignoring hard-to-learn samples, thereby addressing the imbalanced data problem; (2) jointly learning a model and optimizing pseudo-labels generated on unlabeled data; and (3) enlarging the training set to meet the data requirements of deep learning models. Extensive evaluations on two public traffic sign recognition datasets demonstrate the effectiveness of the proposed technique and provide a potential solution for practical applications.
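The class-wise confidence normalization in step (1) can be illustrated with a small sketch: each sample's confidence is rescaled by the highest confidence seen for its predicted class, so low-confidence (hard-to-learn) classes still contribute pseudo-labels. The specific normalization and threshold below are illustrative assumptions, not the paper's exact selection rule.

```python
def select_pseudo_labels(probs, threshold=0.9):
    """Select (index, class) pseudo-labels from per-sample class
    probabilities on unlabeled data, using class-wise normalization."""
    preds = [max(range(len(p)), key=p.__getitem__) for p in probs]
    confs = [max(p) for p in probs]
    # Highest confidence observed per predicted class.
    class_max = {}
    for c, conf in zip(preds, confs):
        class_max[c] = max(class_max.get(c, 0.0), conf)
    # Keep samples whose normalized confidence clears the threshold.
    return [
        (i, c) for i, (c, conf) in enumerate(zip(preds, confs))
        if conf / class_max[c] >= threshold
    ]
```

With a raw (unnormalized) threshold, a class the model is still learning might never produce a pseudo-label; normalizing per class keeps its best samples in the self-training loop.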


Author(s):  
Osama Alfarraj ◽  
Amr Tolba

The computer vision (CV) paradigm is introduced to improve the efficiency of computational and processing systems through visual inputs. These visual inputs are processed using sophisticated techniques to improve the reliability of human–machine interactions (HMIs). The processing of visual inputs requires multi-level data computations to achieve application-specific reliability. Therefore, in this paper, a two-level visual information processing (2LVIP) method is introduced to meet the reliability requirements of HMI applications. The 2LVIP method handles both structured and unstructured data through classification learning to extract the maximum gain from the inputs. The introduced method identifies the gain-related features at its first level and optimizes the features to improve information gain. At the second level, the error is reduced through a regression process to stabilize the precision and meet the demands of HMI applications. The two levels are interoperable and fully connected to achieve better gain and precision through the reduction of information processing errors. The analysis results show that the proposed method achieves 9.42% higher information gain and a 6.51% smaller error under different classification instances compared with conventional methods.
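The first level's gain-related feature selection can be illustrated with the standard entropy-based information gain; this is a generic sketch of that quantity, not the 2LVIP formulation itself.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a label sequence."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(labels, feature_values):
    """Reduction in label entropy after splitting on a discrete feature:
    H(labels) minus the size-weighted entropy of each split partition."""
    n = len(labels)
    split = {}
    for v, y in zip(feature_values, labels):
        split.setdefault(v, []).append(y)
    remainder = sum(len(ys) / n * entropy(ys) for ys in split.values())
    return entropy(labels) - remainder
```

A feature that perfectly separates the classes attains the full label entropy as its gain, while an uninformative feature scores near zero, which is the ranking criterion a gain-based first level would exploit.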


2019, Vol 263, pp. 288-298
Author(s):  
Innocent Nyalala ◽  
Cedric Okinda ◽  
Luke Nyalala ◽  
Nelson Makange ◽  
Qi Chao ◽  
...  
