Mapping of hydraulic transmissivity field from inversion of tracer test data using convolutional neural networks. CNN-2T

2022 ◽  
pp. 127443
Author(s):  
M.T. Vu ◽  
A. Jardani

2020 ◽  
Vol 12 (5) ◽  
pp. 765 ◽  
Author(s):  
Calimanut-Ionut Cira ◽  
Ramon Alcarria ◽  
Miguel-Ángel Manso-Callejo ◽  
Francisco Serradilla

Remote sensing imagery combined with deep learning strategies is often regarded as an ideal solution for interpreting scenes and monitoring infrastructures, achieving remarkable performance levels. The road network plays an important part in transportation, and one of the main related challenges is currently detecting and monitoring changes as they occur in order to keep the existing cartography up to date. This task is challenging due to the nature of the object (continuous, and often with no clearly defined borders) and the nature of remotely sensed images (noise, obstructions). In this paper, we propose a novel framework based on convolutional neural networks (CNNs) to classify secondary roads in high-resolution aerial orthoimages divided into tiles of 256 × 256 pixels. We evaluate the framework’s performance on unseen test data and compare the results with those obtained by other popular CNNs trained from scratch.
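
The paper does not reproduce its network here; as a rough illustration only, a minimal Keras sketch of a binary classifier for 256 × 256 RGB orthoimage tiles (road vs. no road) could look as follows. Every layer choice below is an assumption, not the authors’ architecture.

# Minimal sketch (assumed architecture, not the authors') of a binary CNN
# classifier for 256x256 RGB orthoimage tiles: road vs. no road.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_tile_classifier():
    model = models.Sequential([
        layers.Input(shape=(256, 256, 3)),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),                  # 256 -> 128
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),                  # 128 -> 64
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),                  # 64 -> 32
        layers.GlobalAveragePooling2D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # P(tile contains a road)
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_tile_classifier()
model.summary()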


2020 ◽  
Vol 42 (4-5) ◽  
pp. 213-220 ◽  
Author(s):  
Tomoyuki Fujioka ◽  
Leona Katsuta ◽  
Kazunori Kubota ◽  
Mio Mori ◽  
Yuka Kikuchi ◽  
...  

We aimed to use deep learning with convolutional neural networks (CNNs) to discriminate between images of benign and malignant breast masses on ultrasound shear wave elastography (SWE). We retrospectively gathered 158 images of benign masses and 146 images of malignant masses as SWE training data. A deep learning model was constructed using several CNN architectures (Xception, InceptionV3, InceptionResNetV2, DenseNet121, DenseNet169, and NASNetMobile) trained for 50, 100, and 200 epochs. We analyzed SWE images of 38 benign masses and 35 malignant masses as test data. Two radiologists interpreted these test data in a consensus reading using a 5-point visual color assessment (SWEc) and the mean elasticity value in kPa (SWEe). Sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) were calculated. The best CNN model (DenseNet169 with 100 epochs), SWEc, and SWEe had sensitivities of 0.857, 0.829, and 0.914 and specificities of 0.789, 0.737, and 0.763, respectively. The CNNs exhibited a mean AUC of 0.870 (range, 0.844–0.898), while SWEc and SWEe had AUCs of 0.821 and 0.855. The CNNs had a diagnostic performance equal to or better than the radiologist readings. DenseNet169 with 100 epochs, Xception with 50 epochs, and Xception with 100 epochs performed significantly better than SWEc (P = 0.018–0.037). Deep learning with CNNs exhibited an AUC equal to or higher than that of radiologists when discriminating benign from malignant breast masses on ultrasound SWE.
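
As an illustration of the kind of model evaluated above, a minimal transfer-learning sketch built on DenseNet169 (the best-performing architecture) might look like the following; the input size, classification head, and optimizer settings are assumptions, as the abstract does not specify them.

# Sketch of a DenseNet169-based binary classifier for SWE images
# (benign vs. malignant). Input size, head, and training settings
# are assumptions; only the backbone is taken from the paper.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import DenseNet169

base = DenseNet169(weights="imagenet", include_top=False,
                   input_shape=(224, 224, 3))
base.trainable = False  # assumption: frozen ImageNet feature extractor

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # P(malignant)
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
# model.fit(train_images, train_labels, epochs=100, ...)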


2018 ◽  
Vol 2018 ◽  
pp. 1-8 ◽  
Author(s):  
Xiaoyan Xu ◽  
Shoushui Wei ◽  
Caiyun Ma ◽  
Kan Luo ◽  
Li Zhang ◽  
...  

Atrial fibrillation (AF) is a serious cardiovascular disease characterized by an irregular heartbeat. It is a major cause of a variety of heart diseases, such as myocardial infarction. Automatic AF beat detection remains a challenging task that needs further exploration. A new framework combining the modified frequency slice wavelet transform (MFSWT) and convolutional neural networks (CNNs) was proposed for automatic AF beat identification. MFSWT was used to transform 1 s electrocardiogram (ECG) segments into time-frequency images, which were then fed into a 12-layer CNN for feature extraction and AF/non-AF beat classification. On the MIT-BIH Atrial Fibrillation Database, a mean accuracy (Acc) of 81.07% from 5-fold cross-validation was achieved on the test data. The corresponding sensitivity (Se), specificity (Sp), and area under the ROC curve (AUC) were 74.96%, 86.41%, and 0.88, respectively. When an ECG recording with extremely poor signal quality was excluded from the test data, a mean Acc of 84.85% was achieved, with corresponding Se, Sp, and AUC values of 79.05%, 89.99%, and 0.92. This study indicates that it is possible to accurately identify AF and non-AF ECGs from a short-term signal episode.
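
A rough sketch of this pipeline is shown below. The MFSWT itself is not reproduced here, so a standard spectrogram stands in for it, and the sampling rate, image size, and network depth are all assumptions rather than the paper’s settings.

# Sketch: turn a 1 s ECG segment into a time-frequency image, then classify
# it with a small CNN. A scipy spectrogram stands in for the paper's MFSWT.
import numpy as np
from scipy.signal import spectrogram
import tensorflow as tf
from tensorflow.keras import layers, models

FS = 250  # Hz; MIT-BIH Atrial Fibrillation recordings are sampled at 250 Hz

def ecg_to_tf_image(segment, size=(64, 64)):
    """Map a 1 s ECG segment (FS samples) to a fixed-size time-frequency image."""
    f, t, sxx = spectrogram(segment, fs=FS, nperseg=32, noverlap=24)
    img = np.log1p(sxx)                              # compress dynamic range
    img = tf.image.resize(img[..., None], size).numpy()
    return img / (img.max() + 1e-8)                  # normalize to [0, 1]

# A small CNN over the resulting images (depth is illustrative,
# not the 12-layer network of the paper).
model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),
    layers.Conv2D(16, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),           # P(AF beat)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])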


2020 ◽  
Vol 10 (1) ◽  
pp. 55-60
Author(s):  
Owais Mujtaba Khanday ◽  
Samad Dadvandipour

Deep neural networks (DNNs) have in the past few years revolutionized computer vision by providing the best results on a large number of problems such as image classification, pattern recognition, and speech recognition. One of the essential deep learning models for image classification is the convolutional neural network. These networks integrate varying numbers of features, or so-called filters, in a multi-layer fashion through convolutional layers. They use convolutional and pooling layers for feature abstraction and have neurons arranged in three dimensions: height, width, and depth. Filters of three different sizes were used: 3×3, 5×5, and 7×7. The accuracy on the training data decreased from 100% to 97.8% as the filter size increased, and the accuracy on the test data set decreased likewise: 98.7% for 3×3, 98.5% for 5×5, and 97.8% for 7×7. The loss after 10 epochs on the training and test data increased drastically with filter size, from 3.4% to 27.6% and from 12.5% to 23.02%, respectively. It is thus clear that filters with smaller dimensions give lower loss than those with larger dimensions. However, the smaller filter sizes come at the cost of computational complexity, which is crucial in the case of larger data sets.
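
The parameter-count side of this trade-off is easy to see by instantiating the same small network with each kernel size; the architecture and input size in this sketch are arbitrary assumptions, not those of the study.

# Sketch: identical small CNNs with 3x3, 5x5, and 7x7 kernels, to show how
# larger filters inflate the parameter count (and hence compute) per layer.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn(kernel_size):
    return models.Sequential([
        layers.Input(shape=(28, 28, 1)),  # assumed MNIST-sized input
        layers.Conv2D(32, kernel_size, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, kernel_size, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(10, activation="softmax"),
    ])

for k in (3, 5, 7):
    print(f"{k}x{k} filters -> {build_cnn(k).count_params():,} parameters")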


2020 ◽  
Vol 2020 (10) ◽  
pp. 28-1-28-7 ◽  
Author(s):  
Kazuki Endo ◽  
Masayuki Tanaka ◽  
Masatoshi Okutomi

Classification of degraded images is very important in practice because images are usually degraded by compression, noise, blurring, etc. Nevertheless, most research on image classification focuses only on clean images without any degradation. Some papers have already proposed deep convolutional neural networks composed of an image restoration network and a classification network to classify degraded images. This paper proposes an alternative approach in which a degraded image and an additional degradation parameter are used for classification. The proposed classification network has two inputs: the degraded image and the degradation parameter. An estimation network for the degradation parameters is also incorporated when the degradation parameters of the degraded images are unknown. The experimental results showed that the proposed method outperforms a straightforward approach in which the classification network is trained with degraded images only.
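
A minimal sketch of the two-input idea follows; the fusion point, layer sizes, and class count are chosen arbitrarily here, since the abstract does not specify the authors’ design.

# Sketch: a classifier that takes both a degraded image and a scalar
# degradation parameter (e.g. a noise level) as inputs.
import tensorflow as tf
from tensorflow.keras import layers, Model

image_in = layers.Input(shape=(32, 32, 3), name="degraded_image")
param_in = layers.Input(shape=(1,), name="degradation_parameter")

x = layers.Conv2D(32, 3, activation="relu", padding="same")(image_in)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
x = layers.GlobalAveragePooling2D()(x)

p = layers.Dense(16, activation="relu")(param_in)  # embed the parameter
x = layers.Concatenate()([x, p])                   # fuse image and parameter
x = layers.Dense(128, activation="relu")(x)
out = layers.Dense(10, activation="softmax")(x)    # assumed 10-class task

model = Model(inputs=[image_in, param_in], outputs=out)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])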

