Finding the Differences in Capillaries of Taste Buds between Smokers and Non-Smokers Using the Convolutional Neural Networks

2021 ◽  
Vol 11 (8) ◽  
pp. 3460
Author(s):  
Hang Nguyen Thi Phuong ◽  
Choonsung Shin ◽  
Hieyong Jeong

Taste function may serve as a rapid, objectively measurable indicator of the effect of smoking on a subject's own body, because smokers exhibit significantly lower taste sensitivity than non-smokers. This study proposed a visual method to measure the capillaries of taste buds with capillaroscopy and classified the difference between smokers and non-smokers using convolutional neural networks (CNNs). The dataset was collected from 26 human subjects (13 smokers and 13 non-smokers) through capillaroscopy at low and high magnification, yielding 2600 images in total. Gradient-weighted class activation mapping (Grad-CAM) enabled us to understand the difference in the capillaries of taste buds between smokers and non-smokers. The CNNs achieved a good performance of 79% accuracy. In contrast, conventional methods such as the structural similarity index (SSIM) and the scale-invariant feature transform (SIFT) extracted too few features to classify the two groups reliably.
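The SSIM baseline mentioned above can be made concrete. Below is a minimal single-window SSIM sketch in NumPy; the standard formulation slides a Gaussian window over the image, but the formula is the same. Constants follow the usual K1 = 0.01, K2 = 0.03 choice. This is an illustrative sketch, not the authors' implementation:

```python
import numpy as np

def global_ssim(x, y, data_range=255.0):
    """Single-window SSIM over the whole image (simplified sketch)."""
    c1 = (0.01 * data_range) ** 2  # stabilizer for the luminance term
    c2 = (0.03 * data_range) ** 2  # stabilizer for the contrast term
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

An image compared with itself scores exactly 1.0; a brightness shift lowers the score, since the luminance term penalizes the mean difference.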

2020 ◽  
Vol 9 (1) ◽  
pp. 7-10
Author(s):  
Hendry Fonda

ABSTRACT Riau batik has been known since the 18th century and was worn by royalty. It is made with a stamp dipped in dye and printed onto fabric, usually silk. Over the course of its development, and in comparison with Javanese batik, Riau batik has been accepted by the public only slowly. Convolutional Neural Networks (CNN) combine artificial neural networks with deep learning methods. A CNN consists of one or more convolutional layers, often with a subsampling layer, followed by one or more fully connected layers as in a standard neural network. In this process, the CNN is trained and tested on Riau batik so that a collection of batik models classified by the characteristic features of Riau batik is obtained, allowing images to be determined as Riau batik or non-Riau batik. Classification using the CNN distinguishes Riau batik from non-Riau batik with an accuracy of 65%. The accuracy is limited to 65% because many motifs are essentially shared between Riau batik and other batik styles, with the difference lying mainly in the dye colours of Riau batik. Keywords: Batik; Batik Riau; CNN; Image; Deep Learning
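The building blocks named above (a convolutional layer followed by subsampling) can be sketched in a few lines of NumPy. This is an illustrative toy, not the paper's network: a valid 2D convolution and a 2×2 max-pooling step.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid (no-padding) 2D convolution of a single-channel image."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return out

def max_pool(x, s=2):
    """s-by-s max-pooling (the 'subsampling layer' of the abstract)."""
    h, w = x.shape
    return x[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s).max(axis=(1, 3))
```

An 8×8 input convolved with a 3×3 kernel gives a 6×6 feature map, which 2×2 pooling reduces to 3×3; stacking such stages before fully connected layers is exactly the architecture the abstract describes.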


2016 ◽  
Vol 2016 ◽  
pp. 1-17 ◽  
Author(s):  
Diego José Luis Botia Valderrama ◽  
Natalia Gaviria Gómez

The measurement and evaluation of QoE (Quality of Experience) have become one of the main focuses in telecommunications for providing services with the quality expected by users. However, factors such as network parameters and video encoding can affect video quality, limiting the correlation between objective and subjective metrics and making it more complex to evaluate the real video quality perceived by users. In this paper, a model based on artificial neural networks, namely BPNNs (Backpropagation Neural Networks) and RNNs (Random Neural Networks), is applied to evaluate the subjective quality metric MOS (Mean Opinion Score) together with PSNR (Peak Signal-to-Noise Ratio), SSIM (Structural Similarity Index Metric), VQM (Video Quality Metric), and QIBF (Quality Index Based Frame). The proposed model allows establishing QoS (Quality of Service) based on the DiffServ strategy. The metrics were analyzed through Pearson's and Spearman's correlation coefficients, RMSE (Root Mean Square Error), and the outlier rate. Correlation values greater than 90% were obtained for all the evaluated metrics.
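The evaluation criteria listed at the end (Pearson, Spearman, RMSE) are simple to compute. A minimal NumPy sketch, assuming distinct values (the rank step below ignores ties, which a production Spearman would handle):

```python
import numpy as np

def pearson(x, y):
    """Pearson linear correlation coefficient."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return (xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum())

def spearman(x, y):
    """Spearman rank correlation: Pearson on the ranks (no tie handling)."""
    rank = lambda v: np.argsort(np.argsort(np.asarray(v))).astype(float)
    return pearson(rank(x), rank(y))

def rmse(pred, true):
    """Root mean square error between predicted and reference values."""
    pred, true = np.asarray(pred, float), np.asarray(true, float)
    return float(np.sqrt(np.mean((pred - true) ** 2)))
```

A perfectly monotonic relationship gives a Spearman coefficient of 1 even when the Pearson coefficient is slightly below 1, which is why both are reported.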


Geosciences ◽  
2021 ◽  
Vol 11 (8) ◽  
pp. 336
Author(s):  
Rafael Pires de Lima ◽  
David Duarte

Convolutional neural networks (CNN) are currently the most widely used tool for the classification of images, especially if such images have large within-group and small between-group variance. Thus, one of the main factors driving the development of CNN models is the creation of large, labelled computer vision datasets, some containing millions of images. Thanks to transfer learning, a technique that modifies a model trained on a primary task to execute a secondary task, the adaptation of CNN models trained on such large datasets has rapidly gained popularity in many fields of science, geosciences included. However, the trade-off between two main components of the transfer learning methodology for geoscience images is still unclear: the difference between the datasets used in the primary and secondary tasks; and the amount of available data for the primary task itself. We evaluate the performance of CNN models pretrained with different types of image datasets—specifically, dermatology, histology, and raw food—that are fine-tuned to the task of petrographic thin-section image classification. Results show that CNN models pretrained on ImageNet achieve higher accuracy due to the larger number of samples, as well as a larger variability in the samples in ImageNet compared to the other datasets evaluated.
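The fine-tuning setup can be illustrated in miniature: freeze the pretrained backbone and train only a new classification head on its extracted features. The sketch below stands in for that last step with synthetic "backbone features" and a logistic-regression head trained by gradient descent; all names, sizes, and numbers here are illustrative, not the authors' setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for frozen features extracted by a pretrained backbone
# (in the paper: activations of an ImageNet-pretrained CNN).
features = rng.normal(size=(200, 16))
# Toy binary labels that depend linearly on two feature dimensions.
labels = (features[:, 0] + features[:, 1] > 0).astype(float)

# Train only the new classification head (logistic regression).
w, b = np.zeros(16), 0.0
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(features @ w + b)))   # sigmoid predictions
    w -= 0.1 * features.T @ (p - labels) / len(labels)
    b -= 0.1 * (p - labels).mean()

acc = ((features @ w + b > 0) == (labels > 0.5)).mean()
```

Because the backbone stays fixed, only the small head is optimized, which is what makes transfer learning cheap even when the primary-task dataset was enormous.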


Sensors ◽  
2019 ◽  
Vol 19 (5) ◽  
pp. 1175 ◽  
Author(s):  
Jing Han ◽  
Jian Yao ◽  
Jiao Zhao ◽  
Jingmin Tu ◽  
Yahui Liu

License plate detection (LPD) is the first and key step in license plate recognition. State-of-the-art object-detection algorithms based on deep learning provide a promising form of LPD. However, there still exist two main challenges. First, existing methods often enclose objects with horizontal rectangles. However, horizontal rectangles are not always suitable, since license plates in images are multi-oriented, reflected by rotation and perspective distortion. Second, the scale of license plates often varies, leading to the difficulty of multi-scale detection. To address the aforementioned problems, we propose a novel method of multi-oriented and scale-invariant license plate detection (MOSI-LPD) based on convolutional neural networks. Our MOSI-LPD tightly encloses the multi-oriented license plates with bounding parallelograms, regardless of the license plate scales. To obtain bounding parallelograms, we first parameterize the edge points of license plates by relative positions. Next, we design mapping functions between oriented regions and horizontal proposals. Then, we enforce symmetry constraints in the loss function and train the model with a multi-task loss. Finally, we map region proposals to three edge points of a nearby license plate, and infer the fourth point to form bounding parallelograms. To achieve scale invariance, we first design anchor boxes based on inherent shapes of license plates. Next, we search different layers to generate region proposals with multiple scales. Finally, we up-sample the last layer and combine proposal features extracted from different layers to recognize true license plates. Experimental results have demonstrated that the proposed method outperforms existing approaches in terms of detecting license plates with different orientations and multiple scales.
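The final parallelogram step relies on a basic geometric fact: three vertices of a parallelogram determine the fourth. A one-line sketch of that inference (illustrative, not the paper's code, which predicts the three points with a CNN first):

```python
import numpy as np

def fourth_vertex(p1, p2, p3):
    """Given three consecutive vertices of a parallelogram, infer the fourth.

    Opposite sides are equal vectors, so p4 = p1 + (p3 - p2).
    """
    return np.asarray(p1, float) + np.asarray(p3, float) - np.asarray(p2, float)
```

For vertices (0,0), (2,0), (3,1) the missing corner is (1,1), completing a parallelogram whose sides are the vectors (2,0) and (1,1).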


Electronics ◽  
2021 ◽  
Vol 10 (16) ◽  
pp. 1979
Author(s):  
Wazir Muhammad ◽  
Zuhaibuddin Bhutto ◽  
Arslan Ansari ◽  
Mudasar Latif Memon ◽  
Ramesh Kumar ◽  
...  

Recent research on single-image super-resolution (SISR) using deep convolutional neural networks has made a breakthrough and achieved tremendous performance. Despite this significant progress, many convolutional neural network (CNN) models remain limited in practical applications owing to their heavy computational cost. This paper proposes a multi-path network for SISR: a multi-path deep CNN with a residual inception network for single-image super-resolution. In detail, a residual (ResNet) block combined with an Inception block supports the main framework of the entire network architecture. In addition, we remove the batch normalization layer from the ResNet block and the max-pooling layer from the Inception block to further reduce the number of parameters and to prevent over-fitting during training. Moreover, the conventional rectified linear unit (ReLU) is replaced with the Leaky ReLU activation function to speed up training. Specifically, we propose a novel upscale module that adopts three paths to upscale the features by jointly using deconvolution and upsampling layers, instead of using a single deconvolution layer or upsampling layer alone. Extensive experimental results on image super-resolution (SR) using five publicly available test datasets show that the proposed model not only attains higher peak signal-to-noise ratio/structural similarity index (PSNR/SSIM) scores but also enables faster and more efficient computation than existing image SR methods. For instance, on the SET5 dataset with the challenging upscale factor of 8×, our method improves the overall PSNR by 1.88 dB over the baseline bicubic method and reduces the computational cost in terms of the number of parameters by 62% compared with the deeply-recursive convolutional neural network (DRCN) method.
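The PSNR figure quoted above follows directly from the mean squared error between a reference image and its reconstruction. A minimal sketch of the standard definition, PSNR = 10·log10(MAX² / MSE):

```python
import numpy as np

def psnr(ref, distorted, data_range=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((ref.astype(float) - distorted.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * np.log10(data_range ** 2 / mse)
```

A uniform error of 16 intensity levels on an 8-bit image gives an MSE of 256 and hence a PSNR of about 24.05 dB; the 1.88 dB gain reported above is measured on this same scale.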


2019 ◽  
Vol 9 (23) ◽  
pp. 5065
Author(s):  
Gabriel Rojas-Albarracín ◽  
Miguel Ángel Chaves ◽  
Antonio Fernández-Caballero ◽  
María T. López

Cardiovascular diseases are the leading cause of death worldwide, so getting help in time makes the difference between life and death. In many cases, help is not obtained in time when a person suffers a heart attack while alone, mainly because the pain prevents him/her from asking for help. This article presents a novel proposal to identify people with an apparent heart attack in colour images by detecting postures characteristic of a heart attack. The method of identifying infarcts makes use of convolutional neural networks, trained with a specially prepared set of images that contain people simulating a heart attack. The promising classification results show 91.75% accuracy and 92.85% sensitivity.
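Accuracy and sensitivity measure different things: accuracy is the fraction of all predictions that are correct, while sensitivity is the fraction of true heart-attack postures that are detected. A small sketch of both from confusion-matrix counts (the counts below are made up for illustration, not the paper's):

```python
def metrics(tp, fp, tn, fn):
    """Accuracy and sensitivity (recall) from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)
    return accuracy, sensitivity
```

For a detection task like this one, sensitivity is arguably the more important number: a missed heart attack (false negative) is far more costly than a false alarm.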


Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3874
Author(s):  
Nagesh Subbanna ◽  
Matthias Wilms ◽  
Anup Tuladhar ◽  
Nils D. Forkert

Recent research in computer vision has shown that original images used for training of deep learning models can be reconstructed using so-called inversion attacks. However, the feasibility of this attack type has not been investigated for complex 3D medical images. Thus, the aim of this study was to examine the vulnerability of deep learning techniques used in medical imaging to model inversion attacks and investigate multiple quantitative metrics to evaluate the quality of the reconstructed images. For the development and evaluation of model inversion attacks, the public LPBA40 database, consisting of 40 brain MRI scans with corresponding segmentations of the gyri and deep grey matter brain structures, was used to train two popular deep convolutional neural networks, namely a U-Net and SegNet, and corresponding inversion decoders. The Matthews correlation coefficient, the structural similarity index measure (SSIM), and the magnitude of the deformation field resulting from non-linear registration of the original and reconstructed images were used to evaluate the reconstruction accuracy. A comparison of the similarity metrics revealed that the SSIM is best suited to evaluate the reconstruction accuracy, followed closely by the magnitude of the deformation field. The quantitative evaluation of the reconstructed images revealed SSIM scores of 0.73±0.12 and 0.61±0.12 for the U-Net and the SegNet, respectively. The qualitative evaluation showed that training images can be reconstructed with some degradation due to blurring but can be correctly matched to the original images in the majority of the cases. In conclusion, the results of this study indicate that it is possible to reconstruct patient data used for training of convolutional neural networks and that the SSIM is a good metric to assess the reconstruction accuracy.
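Of the three metrics compared above, the Matthews correlation coefficient is the least familiar; it scores binary agreement (here, between original and reconstructed segmentation voxels) on a scale from −1 to +1. A minimal sketch of the standard definition:

```python
import math

def mcc(tp, fp, tn, fn):
    """Matthews correlation coefficient from confusion-matrix counts."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0  # convention: 0 when undefined
```

Perfect agreement yields +1, perfect disagreement −1, and chance-level agreement roughly 0, which makes MCC more informative than raw accuracy when the foreground class is small, as it is for most brain structures.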


Diagnostics ◽  
2021 ◽  
Vol 11 (9) ◽  
pp. 1676
Author(s):  
Philipp Sager ◽  
Lukas Näf ◽  
Erwin Vu ◽  
Tim Fischer ◽  
Paul M. Putora ◽  
...  

Introduction: Many proposed algorithms for tumor detection rely on 2.5/3D convolutional neural networks (CNNs) and the input of segmentations for training. The purpose of this study is therefore to assess the performance of tumor detection on single MRI slices containing vestibular schwannomas (VS) as a computationally inexpensive alternative that does not require the creation of segmentations. Methods: A total of 2992 T1-weighted contrast-enhanced axial slices containing VS from the MRIs of 633 patients were labeled according to tumor location, of which 2538 slices from 539 patients were used for training a CNN (ResNet-34) to classify them according to the side of the tumor as a surrogate for detection and 454 slices from 94 patients were used for internal validation. The model was then externally validated on contrast-enhanced and non-contrast-enhanced slices from a different institution. Categorical accuracy was noted, and the results of the predictions for the validation set are provided with confusion matrices. Results: The model achieved an accuracy of 0.928 (95% CI: 0.869–0.987) on contrast-enhanced slices and 0.795 (95% CI: 0.702–0.888) on non-contrast-enhanced slices from the external validation cohorts. The implementation of Gradient-weighted Class Activation Mapping (Grad-CAM) revealed that the focus of the model was not limited to the contrast-enhancing tumor but to a larger area of the cerebellum and the cerebellopontine angle. Conclusions: Single-slice predictions might constitute a computationally inexpensive alternative to training 2.5/3D-CNNs for certain detection tasks in medical imaging even without the use of segmentations. Head-to-head comparisons between 2D and more sophisticated architectures could help to determine the difference in accuracy, especially for more difficult tasks.
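The accuracies above are reported with 95% confidence intervals. The abstract does not say how they were computed; one common choice is the normal-approximation (Wald) interval for a binomial proportion, sketched below as an assumption, not as the authors' method:

```python
import math

def accuracy_ci(correct, total, z=1.96):
    """Point estimate and Wald 95% CI for a classification accuracy."""
    p = correct / total
    half = z * math.sqrt(p * (1 - p) / total)  # normal-approximation half-width
    return p, max(0.0, p - half), min(1.0, p + half)
```

With 93 of 100 slices correct this gives roughly 0.93 (0.88–0.98); the interval widens as the validation set shrinks, which is why the non-contrast cohort's interval is broader.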


Stroke ◽  
2017 ◽  
Vol 48 (suppl_1) ◽  
Author(s):  
Stefan Winzeck ◽  
Mark J Bouts ◽  
Elissa McIntosh ◽  
Raquel Bezerra ◽  
Izzuddin Diwan ◽  
...  

Background: In acute ischemic stroke (AIS), therapeutic decisions are increasingly being based upon the volume of likely-unsalvageable brain tissue, which is often estimated using DWI. Deep learning algorithms, e.g. convolutional neural networks (CNN), have been employed for chronic stroke lesion segmentation. Here we investigate the applicability of CNN for DWI lesion measurement in acute stroke. Methods: 50 AIS patients underwent DWI <12 h from last known well. Apparent diffusion coefficient maps, T2WI, and DWI were used as covariates in a 2D CNN (5-fold cross-validation). A 15-layer CNN comprising convolutional, inception, and fully connected dense layers was trained using manually outlined DWI lesions. To avoid overfitting, statistical dropout, L1- and L2-regularization, and batch normalization were used. Automatically segmented lesion volumes (ALV) using a 50% risk threshold were compared to the manual lesion volumes (MLV) using the Dice similarity index (DSI, a measure of overlap) and Spearman's correlation coefficient. A subset analysis compared results between small (<10 mL) and large lesions (Wilcoxon rank-sum test). Results: The figure shows examples of CNN segmentation. The median [IQR] measured lesion volume and DSI were 25 [13-46] mL and 66% [35-75%], respectively. The correlation of MLV with ALV was 86% (P<0.001). 21 subjects (42%) had lesion volumes less than 10 mL. DSI for small lesions (28% [14-46%]) was significantly lower (P<0.001) than for large lesions (73% [67-79%]). Correlations of ALV with MLV were 31% for small and 84% for large lesions, and differed significantly (P=0.001). Discussion: Automatic DWI lesion segmentation for large lesions is feasible using CNN. CNN tended to overestimate the volumes of small lesions. Prior methods have used a priori heuristics or morphometric operations to remove artifacts. CNN methods show promise for "learning" to discriminate artifacts from real lesions.
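The Dice similarity index (DSI) used to score the segmentations is the ratio of twice the overlap to the total size of the two masks. A minimal NumPy sketch on binary lesion masks (illustrative, not the study's pipeline):

```python
import numpy as np

def dice(a, b):
    """Dice similarity index between two binary masks, in [0, 1]."""
    a, b = np.asarray(a).astype(bool), np.asarray(b).astype(bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2 * inter / denom if denom else 1.0  # two empty masks agree
```

Identical masks score 1.0; a mask overlapping half of a twice-as-large reference scores 2/3. The sensitivity of this ratio to small denominators is one reason DSI drops so sharply for lesions under 10 mL.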

