Ultrasound Segmentation of Cervical Muscle during Head Motion: A Dataset and a Benchmark Using Deconvolutional Neural Networks

Author(s):  
Ryan Cunningham ◽  
María B. Sánchez ◽  
Ian D. Loram

Objectives: To automate online segmentation of cervical muscles from transverse ultrasound (US) images of the human neck during functional head movement, and to extend ground-truth labelling methodology beyond its dependence on MRI of static head positions, which is required for application to participants with involuntary movement disorders.
Method: We collected sustained sequences (> 3 minutes) of US images of human posterior cervical neck muscles at 25 fps from 28 healthy adults performing visually guided pitch and yaw head motions. We sampled 1,100 frames (approx. 40 per participant) spanning the experimental range of head motion. We manually labelled all 1,100 US images and trained deconvolutional neural networks (DCNN) with a spatial SoftMax regression layer to classify every pixel in the full-resolution (525 × 491) US images as one of 14 classes (10 muscles, ligamentum nuchae, vertebra, skin, background). We investigated 'MaxOut' and exponential linear unit (ELU) transfer functions and compared them with our previous benchmark (analytical shape modelling).
Results: These DCNNs showed a higher Jaccard index (53.2%) and a lower Hausdorff distance (5.7 mm) than the previous benchmark (40.5%, 6.2 mm). SoftMax confidence corresponded with correct classification. 'MaxOut' marginally outperformed ELU.
Conclusion: The DCNN architecture accommodates challenging images and imperfect manual labels. The SoftMax layer gives the user feedback on likely correct classification. The 'MaxOut' transfer function benefits from near-linear operation, compatibility with deconvolution operations, and the dropout regulariser.
Significance: This methodology for labelling ground truth and training automated labelling networks is applicable to dynamic segmentation of moving muscles and to participants with involuntary movement disorders who cannot remain still.
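A minimal PyTorch sketch of the kind of architecture the abstract describes (not the authors' exact network): an encoder-decoder with transposed ("deconvolution") layers and a per-pixel softmax over 14 classes, trained with per-pixel cross-entropy. Layer sizes and the 524 × 492 dummy frame size are illustrative placeholders, not values from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySegDCNN(nn.Module):
    """Toy encoder-decoder producing per-pixel class logits."""
    def __init__(self, n_classes: int = 14):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ELU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ELU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ELU(),
            nn.ConvTranspose2d(16, n_classes, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))  # (N, 14, H, W) class logits

model = TinySegDCNN()
frame = torch.randn(1, 1, 524, 492)            # placeholder ultrasound frame
labels = torch.randint(0, 14, (1, 524, 492))   # placeholder manual labels
logits = model(frame)
loss = F.cross_entropy(logits, labels)         # spatial softmax regression
loss.backward()
# Per-pixel confidence map, in the spirit of the SoftMax confidence feedback.
confidence = F.softmax(logits, dim=1).max(dim=1).values
```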

2020 ◽  
Author(s):  
Jingbai Li ◽  
Patrick Reiser ◽  
André Eberhard ◽  
Pascal Friederich ◽  
Steven Lopez

Photochemical reactions are increasingly used to construct complex molecular architectures under mild and straightforward reaction conditions. Computational techniques are increasingly important for understanding the reactivities and chemoselectivities of photochemical isomerization reactions because they provide molecular bonding information along the excited-state photodynamics. These photodynamics simulations are resource-intensive and are typically limited to 1–10 picoseconds and 1,000 trajectories owing to their high computational cost. Most organic photochemical reactions have excited-state lifetimes exceeding 1 picosecond, which places them beyond the reach of such studies. Westermayr et al. demonstrated that a machine learning approach could significantly lengthen photodynamics simulation times for a model system, the methylenimmonium cation (CH₂NH₂⁺).

We have developed a Python-based code, Python Rapid Artificial Intelligence Ab Initio Molecular Dynamics (PyRAI²MD), to accomplish the unprecedented 10 ns cis-trans photodynamics of trans-hexafluoro-2-butene (CF₃–CH=CH–CF₃) in 3.5 days. The same simulation would take approximately 58 years with ground-truth multiconfigurational dynamics. We proposed a scheme combining Wigner sampling, geometrical interpolations, and short-time quantum chemical trajectories to sample the initial data effectively, facilitating adaptive sampling to generate an informative and data-efficient training set of 6,232 data points. Our neural networks achieved chemical accuracy (mean absolute error of 0.032 eV). Our 4,814 trajectories reproduced the S₁ half-life (60.5 fs) and the photochemical product ratio (trans : cis = 2.3 : 1), and autonomously discovered a pathway towards a carbene. The neural networks also generalized the full potential energy surface from chemically incomplete data (trans → cis but not cis → trans pathways), which may enable future automated photochemical reaction discoveries.
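An illustrative sketch only, not the PyRAI²MD implementation: a small feed-forward network mapping a flattened geometry descriptor to ground- and excited-state energies, trained and evaluated with the mean-absolute-error criterion quoted in the abstract (chemical accuracy of roughly 0.03 eV). The descriptor length, layer sizes, and synthetic data are assumptions.

```python
import torch
import torch.nn as nn

n_descriptors = 66            # assumed length of a geometry descriptor
model = nn.Sequential(
    nn.Linear(n_descriptors, 128), nn.SiLU(),
    nn.Linear(128, 128), nn.SiLU(),
    nn.Linear(128, 2),        # predicted S0 and S1 energies (eV)
)

# Synthetic stand-in for the ~6,000-point adaptive-sampling training set.
X = torch.randn(6232, n_descriptors)
y = torch.randn(6232, 2)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(5):                          # a few epochs for illustration
    opt.zero_grad()
    loss = nn.functional.l1_loss(model(X), y)  # MAE, the accuracy metric used
    loss.backward()
    opt.step()

mae_ev = nn.functional.l1_loss(model(X), y).item()
print(f"training MAE: {mae_ev:.3f} eV")
```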


Animals ◽  
2021 ◽  
Vol 11 (3) ◽  
pp. 721
Author(s):  
Krzysztof Adamczyk ◽  
Wilhelm Grzesiak ◽  
Daniel Zaborski

The aim of the present study was to verify whether artificial neural networks (ANN) can be an effective tool for predicting the culling reasons of cows from routinely collected first-lactation records. Data on Holstein-Friesian cows culled in Poland between 2017 and 2018 were used. A general discriminant analysis (GDA) was applied as a reference method for the ANN. Considering all predictive performance measures, the ANN were most effective in predicting culling due to old age (99.76–99.88% of correctly classified cases). In addition, a very high correct classification rate (99.24–99.98%) was obtained for culling due to reproductive problems. This is significant because infertility is one of the most difficult conditions to eliminate in dairy herds. The correct classification rate for individual culling reasons obtained with GDA (0.00–97.63%) was, in general, lower than that of the multilayer perceptrons (MLP). The results indicated that, to predict the aforementioned culling reasons effectively, the following first-lactation parameters should be used: calving age, calving difficulty, and the characteristics of the lactation curve based on Wood's model parameters.
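A minimal sketch, not the authors' pipeline: fit Wood's lactation curve y(t) = a · t^b · exp(−c·t) to per-cow test-day yields, then feed its fitted parameters together with calving age and calving difficulty to a multilayer perceptron that predicts the culling reason. All data below are synthetic placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit
from sklearn.neural_network import MLPClassifier

def wood(t, a, b, c):
    """Wood's lactation curve: daily yield as a function of days in milk."""
    return a * t**b * np.exp(-c * t)

rng = np.random.default_rng(0)
days = np.arange(5, 305, 30, dtype=float)          # synthetic test-day schedule

rows, culling_reason = [], []
for _ in range(200):                                # 200 synthetic first lactations
    a, b, c = rng.uniform(15, 25), rng.uniform(0.1, 0.3), rng.uniform(0.002, 0.005)
    yields = wood(days, a, b, c) + rng.normal(0, 0.5, days.size)
    (a_hat, b_hat, c_hat), _ = curve_fit(wood, days, yields, p0=[20, 0.2, 0.003])
    calving_age = rng.uniform(22, 30)               # months (illustrative)
    calving_difficulty = rng.integers(1, 4)         # ordinal score (illustrative)
    rows.append([calving_age, calving_difficulty, a_hat, b_hat, c_hat])
    culling_reason.append(rng.integers(0, 3))       # e.g. old age / infertility / other

clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000, random_state=0)
clf.fit(rows, culling_reason)
print(clf.score(rows, culling_reason))              # in-sample correct classification rate
```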


2021 ◽  
Vol 23 ◽  
pp. 100313
Author(s):  
Nicholas A. Thurn ◽  
Taylor Wood ◽  
Mary R. Williams ◽  
Michael E. Sigman

2020 ◽  
Author(s):  
Mathilda Froesel ◽  
Quentin Goudard ◽  
Marc Hauser ◽  
Maëva Gacoin ◽  
Suliann Ben Hamed

Background: Heart rate is extremely valuable in the study of complex behaviours and their physiological correlates in non-human primates. However, collecting this information is often challenging, involving either invasive implants or tedious behavioural training.
New Method: In the present study, we implement a Eulerian Video Magnification (EVM) heart-tracking method in the macaque monkey, combined with a wavelet transform. It is based on a measure of image-to-image fluctuations in skin reflectance due to changes in blood influx.
Results: We show a strong temporal coherence and amplitude match between EVM-based heart tracking and ground-truth ECG, from both color (RGB) and infrared (IR) videos, in anesthetized macaques, to a level comparable to what can be achieved in humans. We further show that this method allows us to identify consistent heart-rate changes following the presentation of conspecific emotional voices or faces.
Comparison with Existing Method(s): Eulerian Video Magnification (EVM) has been used to extract heart rate in humans but has never been applied to non-human primates. Video photoplethysmography can extract the heart rate of awake macaques from RGB videos. In contrast, our method extracts the heart rate of awake macaques from both RGB and IR videos and is particularly resilient to the head motion observed in awake behaving monkeys.
Conclusions: Overall, we believe that this method can be generalized as a tool to track the heart rate of the awake behaving monkey, for ethological, behavioural, neuroscience or welfare purposes.
Highlights:
- Heart rate varies during complex non-human primate (NHP) behaviour and cognition.
- We apply Eulerian Video Magnification to track NHP heart rate (EVM-HR).
- EVM-HR can be used with RGB and IR videos, and with anesthetized or awake NHPs.
- NHP EVM-HR varies with the emotional content of presented stimuli.
- EVM-HR is of interest for ethological, behavioural, neuroscience and welfare purposes.
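A simplified sketch of the underlying signal-extraction idea, not the authors' full EVM plus wavelet pipeline: track the mean intensity of a skin region across frames, band-pass it around plausible cardiac frequencies, and read the heart rate from the dominant spectral peak. The video signal below is synthetic, and the heart-rate band is an assumption.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fps = 25.0
t = np.arange(0, 60, 1 / fps)                      # one minute of video
true_hr_hz = 2.0                                   # 120 bpm, for illustration
# Synthetic "video": per-frame mean reflectance of a skin ROI with a weak cardiac ripple.
roi_mean = 0.5 + 0.01 * np.sin(2 * np.pi * true_hr_hz * t) + 0.005 * np.random.randn(t.size)

# Band-pass 1-4 Hz (60-240 bpm) to isolate blood-volume fluctuations.
b, a = butter(3, [1.0, 4.0], btype="band", fs=fps)
pulse = filtfilt(b, a, roi_mean)

# Heart rate from the dominant frequency of the filtered reflectance signal.
spectrum = np.abs(np.fft.rfft(pulse))
freqs = np.fft.rfftfreq(pulse.size, d=1 / fps)
est_hr_bpm = 60 * freqs[np.argmax(spectrum)]
print(f"estimated heart rate: {est_hr_bpm:.0f} bpm")
```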


Author(s):  
Zhengsu Chen ◽  
Jianwei Niu ◽  
Xuefeng Liu ◽  
Shaojie Tang

Convolutional neural networks (CNNs) have achieved remarkable success in image recognition. Although the internal patterns of the input images are effectively learned by CNNs, these patterns constitute only a small proportion of the useful patterns contained in the input images. This can be attributed to the fact that CNNs stop learning once the learned patterns are sufficient for correct classification. Network regularization methods such as dropout and SpatialDropout can ease this problem: during training, they randomly drop features. These dropout methods, in essence, change the patterns learned by the networks and, in turn, force the networks to learn other patterns to make the correct classification. However, these methods have an important drawback: randomly dropping features is generally inefficient and can introduce unnecessary noise. To tackle this problem, we propose SelectScale. Instead of randomly dropping units, SelectScale selects the important features in networks and adjusts them during training. Using SelectScale, we improve the performance of CNNs on CIFAR and ImageNet.
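A hedged sketch of the general idea as described in the abstract, not the authors' SelectScale implementation: unlike dropout, which zeroes random channels, this toy module identifies the most strongly activated channels in each batch and scales them down, nudging the network to exploit other patterns. The selection criterion, the fraction of channels selected, and the scaling factor are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SelectScale2d(nn.Module):
    """Toy selective feature scaling (illustrative stand-in, not the paper's method)."""
    def __init__(self, top_fraction: float = 0.25, scale: float = 0.5):
        super().__init__()
        self.top_fraction = top_fraction
        self.scale = scale

    def forward(self, x):                            # x: (N, C, H, W)
        if not self.training:
            return x
        importance = x.abs().mean(dim=(0, 2, 3))     # per-channel mean activation
        k = max(1, int(self.top_fraction * x.size(1)))
        top = torch.topk(importance, k).indices
        weights = x.new_ones(x.size(1))
        weights[top] = self.scale                    # damp the "important" channels
        return x * weights.view(1, -1, 1, 1)

# Drop-in usage where SpatialDropout would normally sit.
block = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), SelectScale2d())
out = block(torch.randn(8, 3, 32, 32))
```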


2019 ◽  
Vol 11 (4) ◽  
pp. 1 ◽  
Author(s):  
Tobias de Taillez ◽  
Florian Denk ◽  
Bojana Mirkovic ◽  
Birger Kollmeier ◽  
Bernd T. Meyer

Different linear models have been proposed to establish a link between an auditory stimulus and the neurophysiological response obtained through electroencephalography (EEG). We investigate whether non-linear mappings can be modelled with deep neural networks trained on continuous speech envelopes and EEG data obtained in an auditory-attention two-speaker scenario. An artificial neural network was trained to predict the EEG response related to the attended and unattended speech envelopes. After training, the properties of the DNN-based model were analyzed by measuring the transfer function between input envelopes and predicted EEG signals, using click-like stimuli and frequency sweeps as input patterns. Using sweep responses makes it possible to separate the linear and nonlinear response components, also with respect to attention. The responses from the model trained on normal speech resemble event-related potentials, even though the DNN was not trained to reproduce such patterns. These responses are modulated by attention: we obtain significantly lower amplitudes at latencies of 110 ms, 170 ms and 300 ms after stimulus presentation for unattended processing than for attended processing. The comparison of linear and nonlinear components indicates that the largest contribution arises from linear processing (75%), while the remaining 25% is attributed to nonlinear processes in the model. Furthermore, a spectral analysis showed a stronger 5 Hz component in the modelled EEG for attended than for unattended predictions. The results indicate that the artificial neural network produces responses consistent with recent findings and presents a new approach for quantifying the model properties.
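A minimal sketch of the probing idea only, not the trained model from the study: a small network maps a window of the speech envelope to one EEG sample, and feeding it click-like impulses afterwards reads out an impulse-response-style transfer function. The window length, layer sizes, and synthetic data are assumptions.

```python
import torch
import torch.nn as nn

win = 64                                   # envelope samples per input window (assumed)
model = nn.Sequential(nn.Linear(win, 32), nn.Tanh(), nn.Linear(32, 1))

# Dummy training pairs: envelope windows -> EEG samples (placeholders).
envelope_windows = torch.randn(256, win)
eeg_samples = torch.randn(256, 1)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(50):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(envelope_windows), eeg_samples)
    loss.backward()
    opt.step()

# Probe the trained mapping with click-like stimuli: slide an impulse through
# the input window and record the predicted EEG, i.e. an estimated transfer function.
with torch.no_grad():
    clicks = torch.eye(win)                # one impulse per lag position
    impulse_response = model(clicks).squeeze(1)
print(impulse_response.shape)              # (64,) model response vs. stimulus lag
```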


Author(s):  
Rui Xu ◽  
Donald C. Wunsch II

Classifying objects based on their features and characteristics is one of the most important and most basic activities of human beings. The task becomes even more challenging when no ground truth is available. Cluster analysis opens new opportunities for exploring the unknown nature of data: its aim is to separate a finite data set, with little or no prior information, into a finite and discrete set of "natural," hidden data structures. Here, the authors introduce and discuss clustering algorithms related to machine learning and computational intelligence, particularly those based on neural networks. Neural networks are well known for their good learning capabilities, adaptation, ease of implementation, parallelization, speed, and flexibility, and they have demonstrated many successful applications in cluster analysis. The applications of cluster analysis to real-world problems are also illustrated. Portions of the chapter are taken from Xu and Wunsch (2008).
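A minimal sketch of one neural-network clustering scheme from the family of methods the chapter covers: online competitive learning, in which the best-matching prototype (the "winning neuron") is pulled toward each sample. The two-cluster data and the number of prototypes are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 0.3, (100, 2)),
                  rng.normal(3, 0.3, (100, 2))])      # two hidden clusters
prototypes = rng.normal(1.5, 1.0, (2, 2))             # two competitive units

lr = 0.1
for epoch in range(20):
    for x in rng.permutation(data):
        winner = np.argmin(np.linalg.norm(prototypes - x, axis=1))
        prototypes[winner] += lr * (x - prototypes[winner])   # move winner toward x

# Assign each sample to its nearest prototype, i.e. its discovered cluster.
labels = np.argmin(np.linalg.norm(data[:, None] - prototypes[None], axis=2), axis=1)
print(prototypes)        # prototypes settle near the two cluster centres
```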

