Comparing human and convolutional neural network performance on scene segmentation

2017 ◽  
Vol 17 (10) ◽  
pp. 1344
Author(s):  
Noor Seijdel ◽  
Max Losch ◽  
Edward de Haan ◽  
Steven Scholte
2020 ◽  
Vol 162 (12) ◽  
pp. 3067-3080
Author(s):  
Yizhou Wan ◽  
Roushanak Rahmat ◽  
Stephen J. Price

Abstract Background Measurement of volumetric features is challenging in glioblastoma. We investigate whether volumetric features derived from preoperative MRI using convolutional neural network–assisted segmentation are correlated with survival. Methods Preoperative MRI scans of 120 patients were scored using Visually Accessible Rembrandt Images (VASARI) features. We trained and tested a multilayer, multi-scale convolutional neural network on multimodal brain tumour segmentation challenge (BRATS) data before testing it on our dataset. The automated labels were manually edited to generate ground-truth segmentations. Network performance on our data and on BRATS data was compared. Multivariable Cox regression analysis, corrected for multiple testing using the false discovery rate, was performed to correlate clinical and imaging variables with overall survival. Results Median Dice coefficients in our sample were (1) whole tumour 0.94 (IQR, 0.82–0.98) compared to 0.91 (IQR, 0.83–0.94; p = 0.012), (2) FLAIR region 0.84 (IQR, 0.63–0.95) compared to 0.81 (IQR, 0.69–0.8; p = 0.170), (3) contrast-enhancing region 0.91 (IQR, 0.74–0.98) compared to 0.83 (IQR, 0.78–0.89; p = 0.003) and (4) necrosis region 0.82 (IQR, 0.47–0.97) compared to 0.67 (IQR, 0.42–0.81; p = 0.005). Contrast-enhancing region/tumour core ratio (HR 4.73 [95% CI, 1.67–13.40], corrected p = 0.017) and necrotic core/tumour core ratio (HR 8.13 [95% CI, 2.06–32.12], corrected p = 0.011) were independently associated with overall survival. Conclusion Semi-automated segmentation of glioblastoma using a convolutional neural network trained on independent data is robust when applied to routine clinical data. The segmented volumes have prognostic significance.
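The Dice coefficients reported above measure the overlap between the network's segmentation and the manually edited ground truth. A minimal sketch of that computation for binary voxel masks (function and variable names here are illustrative, not the authors' code):

```python
def dice_coefficient(pred, truth):
    """Dice similarity between two binary masks given as sequences of 0/1.

    Dice = 2*|A ∩ B| / (|A| + |B|); returns 1.0 for two empty masks
    by convention, since there is nothing to disagree about.
    """
    pred = list(pred)
    truth = list(truth)
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    if total == 0:
        return 1.0
    return 2.0 * intersection / total

# Toy flattened "masks": prediction and truth agree on 3 labelled voxels.
pred  = [1, 1, 1, 0, 0, 1]
truth = [1, 1, 1, 1, 0, 0]
print(round(dice_coefficient(pred, truth), 3))  # → 0.75
```

In practice the 3-D masks are flattened before this computation; the score is computed separately for each region (whole tumour, FLAIR, contrast-enhancing, necrosis).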


2020 ◽  
Vol 127 ◽  
pp. 21-29 ◽  
Author(s):  
Julia K. Winkler ◽  
Katharina Sies ◽  
Christine Fink ◽  
Ferdinand Toberer ◽  
Alexander Enk ◽  
...  

2021 ◽  
Vol 15 ◽  
Author(s):  
Lixing Huang ◽  
Jietao Diao ◽  
Hongshan Nie ◽  
Wei Wang ◽  
Zhiwei Li ◽  
...  

The memristor-based convolutional neural network (CNN) exploits the advantages of memristive devices, such as low power consumption, high integration density, and strong network recognition capability. It is therefore well suited to building wearable embedded application systems and has broad application prospects in image classification, speech recognition, and other fields. However, limited by the manufacturing process of memristive devices, high-precision weight devices are currently difficult to apply at large scale. At the same time, high-precision neuron activation functions further increase the complexity of hardware implementation. In response, this paper proposes a configurable full-binary convolutional neural network (CFB-CNN) architecture whose inputs, weights, and neurons are all binary values. The neurons can be configured to one of two modes to suit different non-ideal situations. The architecture's performance is verified on the MNIST data set, and the influence of device yield and resistance fluctuations on network performance under the different neuron configurations is also analyzed. The results show that the recognition accuracy of the 2-layer network is about 98.2%. When the yield rate is about 64% and the hidden-neuron mode is configured as −1 and +1 (±1 MD), the CFB-CNN architecture achieves about 91.28% recognition accuracy, whereas when the resistance variation is about 26% and the hidden-neuron mode is configured as 0 and 1 (01 MD), it reaches about 93.43%. Furthermore, memristors have been demonstrated to be among the most promising devices in neuromorphic computing owing to their synaptic plasticity. The memristor-based CFB-CNN architecture is therefore SNN-compatible, which is verified in this paper by encoding pixel values as numbers of pulses.
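The two hidden-neuron modes differ only in their output range: the ±1 mode maps the binary pre-activation through a sign-like function, while the 01 mode uses a step function. A minimal sketch of one full-binary neuron under both modes (a simplified illustration under assumed thresholding behaviour, not the paper's circuit implementation):

```python
def binary_neuron(inputs, weights, mode="pm1", threshold=0):
    """One full-binary neuron: inputs and weights are ±1 values.

    mode="pm1": output in {-1, +1} (the ±1 MD configuration).
    mode="01":  output in {0, 1}   (the 01 MD configuration).
    """
    # Binary multiply-accumulate, as a memristive crossbar would compute it.
    pre_activation = sum(x * w for x, w in zip(inputs, weights))
    fired = pre_activation > threshold
    if mode == "pm1":
        return 1 if fired else -1
    return 1 if fired else 0

x = [1, -1, 1, 1]
w = [1, 1, -1, 1]
print(binary_neuron(x, w, mode="pm1"))  # pre-activation 0, not above threshold → -1
print(binary_neuron(x, w, mode="01"))   # same pre-activation → 0
```

Keeping inputs, weights, and activations binary means each synapse needs only a two-state device, which is what makes the architecture tolerant of limited device precision.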


2021 ◽  
Author(s):  
Lachlan D Barnes ◽  
Kevin Lee ◽  
Andreas W Kempa-Liehr ◽  
Luke E Hallum

Abstract Sleep apnea (SA) is a common disorder involving the cessation of breathing during sleep. It can cause daytime hypersomnia, accidents, and, if allowed to progress, serious, chronic conditions. Continuous positive airway pressure is an effective SA treatment. However, long waitlists impede timely diagnosis; overnight sleep studies involve trained technicians scoring a polysomnograph, which comprises multiple physiological signals including multi-channel electroencephalography (EEG). Therefore, it is important to develop simplified and automated approaches to detect SA. We have developed an explainable convolutional neural network (CNN) to detect SA from single-channel EEG recordings which generalizes across subjects. The network architecture consisted of three convolutional layers. We tuned hyperparameters using the Hyperband algorithm, optimized parameters using Adam, and quantified network performance with subjectwise 10-fold cross-validation. Our CNN performed with an accuracy of 76.7% and a Matthews correlation coefficient (MCC) of 0.54. This performance was reliably above the conservative baselines of 50% (accuracy) and 0.0 (MCC). To explain the mechanisms of our trained network, we used critical-band masking (CBM): after training, we added bandlimited noise to test recordings; we parametrically varied the noise band center frequency and noise intensity, quantifying the deleterious effect on performance. We reconciled the effects of CBM with lesioning, wherein we zeroed the trained network’s 1st-layer filter kernels in turn, quantifying the deleterious effect on performance. These analyses indicated that the network learned frequency-band information consistent with known SA biomarkers, specifically, delta and beta band activity. Our results indicate single-channel EEG may have clinical potential for SA diagnosis.
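The MCC baseline of 0.0 corresponds to chance-level prediction, which is why it is the natural reference for an imbalanced detection task. A minimal sketch of computing MCC from binary labels (illustrative code, not the authors' pipeline):

```python
import math

def matthews_corrcoef(y_true, y_pred):
    """Matthews correlation coefficient for binary labels (0/1).

    Ranges from -1 (total disagreement) through 0 (chance level)
    to +1 (perfect prediction); returns 0.0 when any confusion-matrix
    marginal is empty and the coefficient is undefined.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    if denom == 0:
        return 0.0
    return (tp * tn - fp * fn) / denom

# tp=2, tn=2, fp=1, fn=1 for this toy example.
y_true = [1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 0, 1, 1]
print(round(matthews_corrcoef(y_true, y_pred), 3))  # → 0.333
```

Unlike raw accuracy, MCC accounts for all four confusion-matrix cells, so a classifier that always predicts the majority class scores 0.0 rather than the class prevalence.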


Author(s):  
Girindra Wardhana ◽  
Hamid Naghibi ◽  
Beril Sirmacek ◽  
Momen Abayazid

Abstract Purpose We investigated the parameter configuration of automatic liver and tumor segmentation using a convolutional neural network based on a 2.5D model. The 2.5D model shows promising results since it allows the network to have a deeper and wider architecture while still accommodating 3D information. However, there has been no detailed investigation of parameter configurations for this type of network model. Methods Parameters such as the number of stacked slices, image contrast, and the number of network layers were studied and implemented in neural networks based on the 2.5D model. Networks were trained and tested on the dataset from the liver and tumor segmentation challenge (LiTS). Network performance was further evaluated by comparing the network's segmentations with manual segmentations from nine technical physicians and an experienced radiologist. Results Slice-arrangement testing shows that multiple stacked slices perform better than a single-slice network; however, Dice scores start decreasing when more than three slices are stacked, as adding more causes overfitting on the training set. In the contrast-enhancement test, applying contrast enhancement did not make a statistically significant difference to network performance, while in the network-layer test, adding more layers to the architecture did not always increase the network's Dice score. Conclusions This paper compares the performance of networks based on the 2.5D model using different parameter configurations. The results show the effect of each parameter and allow selection of the best configuration to improve network performance in automatic liver and tumor segmentation.
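In a 2.5D model, each training sample is the slice of interest plus its neighbours stacked as input channels, so a 2-D network still sees local 3-D context. A minimal sketch of building such a stack from a volume, with out-of-range neighbours clamped at the edges (a hypothetical helper, assuming the common clamping convention rather than the paper's exact pipeline):

```python
def stack_slices(volume, index, n_stacked=3):
    """Return n_stacked adjacent slices centred on `index` as channels.

    `volume` is a list of 2-D slices; out-of-range neighbour indices
    are clamped to the first/last slice so every sample has the same
    channel count regardless of its position in the volume.
    """
    half = n_stacked // 2
    return [volume[min(max(index + offset, 0), len(volume) - 1)]
            for offset in range(-half, half + 1)]

# Toy volume of five 1x1 "slices" labelled by value.
volume = [[[0]], [[1]], [[2]], [[3]], [[4]]]
print(stack_slices(volume, 2))  # → [[[1]], [[2]], [[3]]]
print(stack_slices(volume, 0))  # edge case: first slice repeated
```

Raising `n_stacked` adds 3-D context at the cost of more input channels, which matches the abstract's finding that performance drops once more than three slices are stacked.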


Author(s):  
Haiming Liu ◽  
Shixuan Guan ◽  
Weizhong Lu ◽  
Haiou Li ◽  
Hongjie Wu

The growth state of flowers is affected by many factors, such as temperature, humidity, and light, so maintaining flowers often requires specialist knowledge. Ordinary people are often at a loss when faced with the various conditions flowers can present and do not know where the problem lies. In response, this article proposes using deep learning to identify the growth status of flowers and help people raise them successfully. We observe that mainstream convolutional neural networks are limited to image input alone; in terms of network input, data on the flower's current growth environment are therefore also fed to the network to supplement the image data. To address the lack of information interaction within the network, its shallow and deep features are fused, giving the network a performance advantage. Experiments show that this method effectively improves the recognition rate of flower growth status, correctly distinguishing the current growth status of flowers.
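Both ideas in the abstract, supplementing image input with environment data and integrating shallow with deep features, reduce to concatenating vectors before the classification head. A minimal sketch of that fusion step (names, dimensions, and the plain-concatenation choice are illustrative assumptions, not the authors' architecture):

```python
def fuse_features(shallow_feats, deep_feats, env_readings):
    """Concatenate shallow-layer CNN features, deep-layer CNN features,
    and environment readings (e.g. temperature, humidity, light) into
    one vector for the classification head."""
    return list(shallow_feats) + list(deep_feats) + list(env_readings)

shallow = [0.12, 0.87]            # early-layer features: edges, texture
deep = [0.45, 0.03, 0.91]         # late-layer features: semantic content
env = [22.5, 0.60, 800.0]         # temperature °C, relative humidity, lux
fused = fuse_features(shallow, deep, env)
print(len(fused))  # → 8
```

In a real network the environment readings would typically be normalised to the same scale as the learned features before concatenation, so no single modality dominates the classifier's input.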

