P2.11-13 What Is the Impact of Localised Data When Training Deep Neural Networks for Lung Cancer Prediction?

2019 ◽ Vol 14 (10) ◽ pp. S797
Author(s):  
A. Devaraj ◽  
I. Pulzato ◽  
S. Kemp ◽  
C. Ridge ◽  
S. Padley ◽  
...  
2021
Author(s):  
Jackson Zhou ◽  
Matloob Khushi ◽  
Mohammad Ali Moni ◽  
Shahadat Uddin ◽  
Simon K. Poon

Sensors ◽ 2021 ◽ Vol 21 (3) ◽ pp. 676
Author(s):  
Andrej Zgank

Acoustic monitoring of animal activity is becoming one of the necessary tools in agriculture, including beekeeping, as it can assist in supervising beehives in remote locations. Such approaches make it possible to classify bee swarm activity from audio signals. An IoT-based acoustic swarm classification system using deep neural networks is proposed in this paper. Audio recordings were obtained from the Open Source Beehive project, and Mel-frequency cepstral coefficient (MFCC) features were extracted from the audio signal. The lossless WAV and lossy MP3 audio formats were compared for IoT-based solutions, and the impact of the deep neural network parameters on the classification results was analysed. The best overall classification accuracy with uncompressed audio was 94.09%, while MP3 compression degraded the DNN accuracy by over 10%. The evaluation showed that the proposed IoT-based deep neural network classifier of bee activity improves on the previous hidden Markov model system.
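As a rough illustration of this kind of pipeline, the sketch below extracts MFCC features with librosa and feeds a summary of them to a small fully connected classifier in PyTorch. The file path, feature summarisation, and network sizes are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch: MFCC extraction + small feed-forward DNN for two-class swarm activity.
# Assumes librosa and PyTorch are available; paths and hyperparameters are illustrative.
import numpy as np
import librosa
import torch
import torch.nn as nn

def mfcc_features(wav_path, n_mfcc=13, sr=16000):
    """Load an audio file and return a fixed-length feature vector
    (mean and standard deviation of each MFCC over time)."""
    signal, sr = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

class SwarmDNN(nn.Module):
    """Small fully connected classifier: swarming vs. normal hive activity."""
    def __init__(self, n_features=26, n_hidden=64, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_classes),
        )

    def forward(self, x):
        return self.net(x)

# Usage with a hypothetical recording:
# x = torch.tensor(mfcc_features("hive_001.wav"), dtype=torch.float32)
# logits = SwarmDNN()(x.unsqueeze(0))
```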


2018 ◽ Vol 28 (4) ◽ pp. 735-744
Author(s):  
Michał Koziarski ◽  
Bogusław Cyganek

Abstract Due to the advances made in recent years, methods based on deep neural networks have been able to achieve state-of-the-art performance in various computer vision problems. In some tasks, such as image recognition, neural-based approaches have even been able to surpass human performance. However, the benchmarks on which neural networks achieve these impressive results usually consist of fairly high-quality data. In practical applications, on the other hand, we are often faced with images of low quality, affected by factors such as low resolution, the presence of noise, or a small dynamic range. It is unclear how resilient deep neural networks are to such factors. In this paper we experimentally evaluate the impact of low resolution on the classification accuracy of several notable neural architectures of recent years. Furthermore, we examine the possibility of improving neural networks' performance in low-resolution image recognition by applying super-resolution prior to classification. The results of our experiments indicate that contemporary neural architectures remain significantly affected by low image resolution. By applying super-resolution prior to classification we were able to alleviate this issue to a large extent, as long as the resolution of the images did not decrease too severely. In the case of very low-resolution images, however, the classification accuracy remained considerably affected.
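The evaluation protocol can be sketched roughly as follows: downscale the test images, optionally resize them back to the classifier's native input size, and measure accuracy. Plain bicubic interpolation stands in here for a learned super-resolution model; the classifier, data loader, and scale factors are assumed placeholders.

```python
# Sketch: measure how classification accuracy degrades with input resolution,
# with optional upsampling back to the native size before classification.
import torch
import torch.nn.functional as F

@torch.no_grad()
def accuracy_at_resolution(model, loader, scale, upsample=True, device="cpu"):
    """Downscale each batch by `scale`, optionally resize back, and return accuracy."""
    model.eval()
    correct = total = 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        h, w = images.shape[-2:]
        low_res = F.interpolate(images, scale_factor=1.0 / scale,
                                mode="bicubic", align_corners=False)
        inputs = F.interpolate(low_res, size=(h, w), mode="bicubic",
                               align_corners=False) if upsample else low_res
        preds = model(inputs).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total

# Usage with a hypothetical classifier and test set:
# acc = accuracy_at_resolution(resnet, test_loader, scale=4)
```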


Author(s):  
Maria Refinetti ◽  
Stéphane d'Ascoli ◽  
Ruben Ohana ◽  
Sebastian Goldt

Abstract Direct Feedback Alignment (DFA) is emerging as an efficient and biologically plausible alternative to backpropagation for training deep neural networks. Despite relying on random feedback weights for the backward pass, DFA successfully trains state-of-the-art models such as Transformers. On the other hand, it notoriously fails to train convolutional networks. An understanding of the inner workings of DFA that explains these diverging results remains elusive. Here, we propose a theory of feedback alignment algorithms. We first show that learning in shallow networks proceeds in two steps: an alignment phase, where the model adapts its weights to align the approximate gradient with the true gradient of the loss function, is followed by a memorisation phase, where the model focuses on fitting the data. This two-step process has a degeneracy-breaking effect: out of all the low-loss solutions in the landscape, a network trained with DFA naturally converges to the solution which maximises gradient alignment. We also identify a key quantity underlying alignment in deep linear networks: the conditioning of the alignment matrices. The latter enables a detailed understanding of the impact of data structure on alignment, and suggests a simple explanation for the well-known failure of DFA to train convolutional neural networks. Numerical experiments on MNIST and CIFAR10 clearly demonstrate degeneracy breaking in deep non-linear networks and show that the align-then-memorize process occurs sequentially from the bottom layers of the network to the top.
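A minimal sketch of the DFA update for a two-layer network may help fix ideas: the output error is projected back to the hidden layer through a fixed random feedback matrix rather than through the transpose of the forward weights, which is what backpropagation would use. The architecture, loss, and learning rate below are illustrative, not those used in the paper.

```python
# Sketch: Direct Feedback Alignment (DFA) for a two-layer network.
# The hidden-layer update uses a fixed random feedback matrix B instead of W2.T.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 784, 256, 10
W1 = rng.normal(0, 1 / np.sqrt(n_in), (n_hidden, n_in))
W2 = rng.normal(0, 1 / np.sqrt(n_hidden), (n_out, n_hidden))
B = rng.normal(0, 1 / np.sqrt(n_out), (n_hidden, n_out))  # fixed random feedback

def dfa_step(x, y, lr=0.01):
    """One DFA update on a single example x (n_in,) with one-hot target y (n_out,)."""
    global W1, W2
    h = np.tanh(W1 @ x)            # hidden activation
    out = W2 @ h                   # linear readout
    e = out - y                    # output error (squared-loss gradient)
    # Backpropagation would use W2.T @ e; DFA projects e through the random matrix B.
    delta_h = (B @ e) * (1 - h ** 2)
    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer(delta_h, x)
    return 0.5 * np.sum(e ** 2)    # per-example loss, for monitoring
```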


2021 ◽ pp. 1-28
Author(s):  
Hai Guo ◽  
Yifan Song ◽  
Haoran Tang ◽  
Jingying Zhao

In recent years, pollution of lakes has become increasingly serious, making water quality monitoring increasingly important. The concentration of total organic carbon (TOC) in lakes is an important indicator for monitoring the emission of organic pollutants, so determining the TOC concentration in lakes is of great significance. In this paper, a water quality dataset for the middle and lower reaches of the Yangtze River is obtained; temperature, transparency, pH value, dissolved oxygen, conductivity, chlorophyll and ammonia nitrogen content are taken as the impact factors, and a stacking of different epochs' deep neural networks (SDE-DNN) model is constructed to predict the TOC concentration in water. Five deep neural networks and a linear regression are integrated into a strong prediction model by the stacking ensemble method. The experimental results show good prediction performance: the Nash-Sutcliffe efficiency coefficient (NSE) is 0.5312, the mean absolute error (MAE) is 0.2108 mg/L, the symmetric mean absolute percentage error (SMAPE) is 43.92%, and the root mean squared error (RMSE) is 0.3064 mg/L. Compared with common machine learning models, traditional ensemble learning models and existing TOC prediction methods, the prediction error of this model is lower, making it more suitable for predicting the TOC concentration. The model can use a wireless sensor network to obtain water quality data and thus predict the TOC concentration of lakes in real time, reducing the cost of manual testing and improving detection efficiency.
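A rough sketch of a stacking ensemble in the spirit of SDE-DNN, assuming scikit-learn: several MLP regressors trained for different numbers of epochs act as base learners, and a linear regression acts as the meta-learner. The feature matrix, targets, and hyperparameters below are placeholders, not the authors' setup.

```python
# Sketch: stacking several neural-network regressors under a linear meta-learner,
# in the spirit of the SDE-DNN model; scikit-learn MLPs stand in for the DNNs.
import numpy as np
from sklearn.ensemble import StackingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Base learners differ only in training epochs (max_iter), echoing "different epochs".
base_learners = [
    (f"dnn_{epochs}", make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=epochs, random_state=0)))
    for epochs in (100, 200, 300, 400, 500)
]
stack = StackingRegressor(estimators=base_learners,
                          final_estimator=LinearRegression(), cv=5)

# X columns: temperature, transparency, pH, dissolved oxygen, conductivity,
# chlorophyll, ammonia nitrogen; y: TOC concentration (mg/L). Data are synthetic here.
X = np.random.rand(200, 7)
y = np.random.rand(200)
stack.fit(X, y)
print(stack.predict(X[:5]))
```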


2020
Author(s):  
Pierre Jacquier ◽  
Azzedine Abdedou ◽  
Azzeddine Soulaïmani

Key Words: Uncertainty Quantification, Deep Learning, Space-Time POD, Flood Modeling

While impressive results have been achieved in the well-known fields where Deep Learning allowed for breakthroughs, such as computer vision, language modeling, or content generation [1], its impact on different, older fields is still vastly unexplored. In computational fluid dynamics, and especially in flood modeling, many phenomena are very high-dimensional, and predictions require the use of finite element or finite volume methods, which, while very robust and well tested, are computationally heavy and may not prove useful in the context of real-time predictions. This has led to various attempts at developing reduced-order modeling techniques, both intrusive and non-intrusive. One recent relevant addition is the combination of Proper Orthogonal Decomposition with Deep Neural Networks (POD-NN) [2]. Yet, to our knowledge, in this example and more generally in the field, little work has been conducted on quantifying uncertainties through the surrogate model.

In this work, we aim at comparing different novel methods addressing uncertainty quantification in reduced-order models, pushing forward the POD-NN concept with ensembles, latent-variable models, as well as encoder-decoder models. These are tested on benchmark problems and then applied to a real-life application: flooding predictions in the Mille-Iles river in Laval, QC, Canada.

For the flood modeling application, our setup involves a set of input parameters resulting from onsite measures. High-fidelity solutions are then generated using our own finite-volume code CuteFlow, which solves the highly nonlinear Shallow Water Equations. The goal is then to build a non-intrusive surrogate model that is able to know what it knows and, more importantly, know when it doesn't, which is still an open research area as far as neural networks are concerned [3].

REFERENCES
[1] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. A. Alemi, "Inception-v4, Inception-ResNet and the impact of residual connections on learning", in Thirty-First AAAI Conference on Artificial Intelligence, 2017.
[2] Q. Wang, J. S. Hesthaven, and D. Ray, "Non-intrusive reduced order modeling of unsteady flows using artificial neural networks with application to a combustion problem", Journal of Computational Physics, vol. 384, pp. 289-307, May 2019.
[3] B. Lakshminarayanan, A. Pritzel, and C. Blundell, "Simple and scalable predictive uncertainty estimation using deep ensembles", in Advances in Neural Information Processing Systems, 2017, pp. 6402-6413.
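A minimal sketch of a non-intrusive POD-NN surrogate with ensemble-based uncertainty, under the assumptions that snapshots are compressed with a truncated SVD (POD) and that a small ensemble of networks maps input parameters to POD coefficients, with the ensemble spread serving as an uncertainty estimate. All shapes, data, and network sizes below are hypothetical.

```python
# Sketch: POD-NN surrogate with a small ensemble for uncertainty estimates.
# Snapshots -> truncated POD basis; networks map parameters -> POD coefficients.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_snapshots, n_dof, n_params, n_modes = 100, 2000, 3, 10

params = rng.uniform(size=(n_snapshots, n_params))       # input parameters
snapshots = rng.normal(size=(n_snapshots, n_dof))        # high-fidelity solutions (placeholder)
mean_snapshot = snapshots.mean(axis=0)

# POD via truncated SVD of the centred snapshot matrix.
U, S, Vt = np.linalg.svd(snapshots - mean_snapshot, full_matrices=False)
basis = Vt[:n_modes]                                      # (n_modes, n_dof)
coeffs = (snapshots - mean_snapshot) @ basis.T            # projection coefficients

# Deep-ensemble-style UQ: train several networks from different initialisations.
ensemble = [MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=k)
            .fit(params, coeffs) for k in range(5)]

def predict_with_uncertainty(p):
    """Return the mean reconstructed field and its per-DOF ensemble standard deviation."""
    fields = np.stack([mean_snapshot + (m.predict(p.reshape(1, -1)) @ basis)[0]
                       for m in ensemble])
    return fields.mean(axis=0), fields.std(axis=0)

mean_field, std_field = predict_with_uncertainty(rng.uniform(size=n_params))
```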

