Re: machine learning “red dot”: open-source, cloud, deep convolutional neural networks in chest radiograph binary normality classification

2019 ◽  
Vol 74 (2) ◽  
pp. 161
Author(s):  
S. Halligan ◽  
A.A.O. Plumb
Author(s):  
Fawziya M. Rammo ◽  
Mohammed N. Al-Hamdani

Many language identification (LID) systems rely on language models that use machine learning (ML) approaches, and they typically require rather long recording periods to achieve satisfactory accuracy. This study aims to extract enough information from short recording intervals to successfully classify the spoken languages under test. The classification process is based on frames of 2-18 seconds, whereas most previous LID systems relied on much longer time frames (from 3 seconds to 2 minutes). This research defined and implemented many low-level features using MFCC (Mel-frequency cepstral coefficients). The data source is voxforge.org, an open-source corpus of user-submitted audio clips in various languages, from which speech files in five languages (English, French, German, Italian, Spanish) were drawn. A CNN (convolutional neural network) algorithm was applied for classification with near-perfect results: binary language classification reached an accuracy of 100%, and five-language classification reached an accuracy of 99.8%.
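As a rough illustration of the pipeline described above, the sketch below pairs MFCC extraction with a small CNN classifier over the five languages. The library choices (librosa, PyTorch), layer sizes, and the placeholder clip path are assumptions for illustration, not the authors' exact configuration.

```python
# Hypothetical sketch: MFCC front-end + small CNN for spoken-language ID.
# Libraries (librosa, PyTorch) and layer sizes are assumptions, not the
# configuration reported in the paper.
import librosa
import torch
import torch.nn as nn

def mfcc_features(wav_path, sr=16000, n_mfcc=13, duration=5.0):
    """Load a short clip and return an (n_mfcc, frames) MFCC matrix."""
    y, _ = librosa.load(wav_path, sr=sr, duration=duration)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)

class LangCNN(nn.Module):
    """Small CNN over MFCC 'images'; five output classes for five languages."""
    def __init__(self, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, x):  # x: (batch, 1, n_mfcc, frames)
        return self.classifier(self.features(x).flatten(1))

# Example forward pass on one clip (the file path is a placeholder):
# m = mfcc_features("clip_en.wav")
# logits = LangCNN()(torch.tensor(m, dtype=torch.float32)[None, None])
```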


Author(s):  
Alae Chouiekh ◽  
El Hassane Ibn El Haj

Several machine learning models have been proposed to address customer churn problems. In this work, the authors used a novel method, applying deep convolutional neural networks to a labeled dataset of 18,000 prepaid subscribers to classify/identify customer churn. The learning technique was based on call detail records (CDR) describing customers' activity over two months of traffic from a real telecommunication provider. The authors used this method to address a new business use case by treating each subscriber as a single input image describing the churning state. Different experiments were performed to evaluate the performance of the method. The authors found that deep convolutional neural networks (DCNN) outperformed traditional machine learning algorithms (support vector machines, random forest, and gradient boosting classifier) with an F1 score of 91%. Thus, this approach can reduce the cost related to customer loss and better fits the churn prediction business use case.
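The "subscriber as a single input image" idea can be sketched as follows: per-subscriber CDR aggregates are reshaped into a small 2D grid and fed to a CNN, with the F1 score computed on the predictions. The 8×8 grid, the architecture, and the helper function are illustrative assumptions, not the paper's actual setup.

```python
# Illustrative sketch of the "subscriber as an image" idea: a per-subscriber
# CDR feature vector is reshaped into a 2D grid and fed to a small CNN.
# The 8x8 grid (64 CDR aggregates) and the architecture are assumptions.
import torch
import torch.nn as nn
from sklearn.metrics import f1_score

class ChurnCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(16 * 8 * 8, 2),  # two classes: churn / no churn
        )

    def forward(self, x):  # x: (batch, 1, 8, 8)
        return self.net(x)

def churn_f1(model, cdr, labels):
    """cdr: (n_subscribers, 64) matrix of CDR aggregates; labels: 0/1 churn flags."""
    x = torch.tensor(cdr, dtype=torch.float32).view(-1, 1, 8, 8)
    preds = model(x).argmax(dim=1).numpy()
    return f1_score(labels, preds)
```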


2020 ◽  
Vol 7 (6) ◽  
pp. 1089
Author(s):  
Iwan Muhammad Erwin ◽  
Risnandar Risnandar ◽  
Esa Prakarsa ◽  
Bambang Sugiarto

<p class="Abstrak">Identifikasi kayu salah satu kebutuhan untuk mendukung pemerintah dan kalangan bisnis kayu untuk melakukan perdagangan kayu secara legal. Keahlian khusus dan waktu yang cukup dibutuhkan untuk memproses identifikasi kayu di laboratorium. Beberapa metodologi penelitian sebelumnya, proses identifikasi kayu masih dengan cara menggabungkan sistem manual menggunakan anatomi DNA kayu. Sedangkan penggunaan sistem komputer diperoleh dari citra penampamg melintang kayu secara proses mikrokopis dan makroskopis. Saat ini, telah berkembang teknologi computer vision dan machine learning untuk mengidentifikasi berbagai jenis objek, salah satunya citra kayu. Penelitian ini berkontribusi dalam mengklasifikasi beberapa spesies kayu yang diperdagangkan menggunakan Deep Convolutional Neural Networks (DCNN). Kebaruan penelitian ini terletak pada arsitektur DCNN yang bernama Kayu7Net. Arsitektur Kayu7Net yang diusulkan memiliki tiga lapisan konvolusi terhadap tujuh spesies dataset citra kayu. Pengujian dengan merubah citra input menjadi berukuran 600×600, 300×300, dan 128×128 piksel serta masing-masing diulang pada epoch 50 dan 100. DCNN yang diusulkan menggunakan fungsi aktivasi ReLU dengan batch size 32. ReLU bersifat lebih konvergen dan cepat saat proses iterasi. Sedangkan Fully-Connected (FC) berjumlah 4 lapisan akan menghasilkan proses training yang lebih efisien. Hasil eksperimen memperlihatkan bahwa Kayu7Net yang diusulkan memiliki nilai akurasi sebesar 95,54%, precision sebesar 95,99%, recall sebesar 95,54%, specificity sebesar 99,26% dan terakhir, nilai F-measure sebesar 95,46%. Hasil ini menunjukkan bahwa arsitektur Kayu7Net lebih unggul sebesar 1,49% pada akurasi, 2,49% pada precision, dan 5,26% pada specificity dibandingkan penelitian sebelumnya.</p><p class="Abstrak"> </p><p class="Abstrak"><em><strong>Abstract</strong></em></p><p class="Abstrak"><em>Wood identification is one of the needs to support the government and the wood business community for a legally wood trading system. Special expertise and sufficient time are needed to process wood identification in the laboratory. Some previous research works show that the process of identifying wood combines a manual system using a wood DNA anatomy. While, the use of a computer system is obtained from the wood image of microscopic and macroscopic process. Recently, the latest technology has developed by using the machine learning and computer vision to identify many objects, the one of them is wood image. This research contributes to classify several the traded wood species by using Deep Convolutional Neural Networks (DCNN). The novelty of this research is in the DCNN architecture, namely Kayu7Net. The proposed of Kayu7Net Architecture has three convolution layers of the seven species wood image dataset. The testing changes the wood image input to 600×600, 300×300, and 128×128 pixel, respectively, and each of them repeated until 50 and 100 epoches, respectively. The proposed DCNN uses the ReLU activation function and batch size 32. The ReLU is more convergent and faster during the iteration process. Whereas, the 4 layers of Fully-Connected (FC) will produce a more efficient training process. The experimental results show that the proposed Kayu7Net has an accuracy value of 95.54%, a precision of 95.99%, a recall of 95.54%, a specificity of 99.26% and finally, an F-measure value of 95.46%. These results indicate that Kayu7Net is superior by 1.49% of accuracy, 2.49% of precision, and 5.26% of specificity compared to the previous work. 
</em></p><p class="Abstrak"> </p>
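A rough sketch of a Kayu7Net-style network is given below. Only the counts stated in the abstract (three convolutional layers, four fully-connected layers, ReLU activation, seven classes, batch size 32) follow the text; channel widths, kernel sizes, and FC widths are assumptions.

```python
# Sketch of a Kayu7Net-style network: three convolutional layers, four
# fully-connected layers, ReLU activations, seven output classes.
# Channel counts, kernel sizes, and FC widths are illustrative assumptions.
import torch
import torch.nn as nn

class Kayu7NetSketch(nn.Module):
    def __init__(self, n_species=7):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)),
        )
        self.fc = nn.Sequential(
            nn.Linear(64 * 8 * 8, 512), nn.ReLU(),
            nn.Linear(512, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, n_species),
        )

    def forward(self, x):  # x: e.g. a batch of 32 images of shape (3, 128, 128)
        return self.fc(self.conv(x).flatten(1))
```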


AI ◽  
2020 ◽  
Vol 1 (3) ◽  
pp. 361-375
Author(s):  
Lovemore Chipindu ◽  
Walter Mupangwa ◽  
Jihad Mtsilizah ◽  
Isaiah Nyagumbo ◽  
Mainassara Zaman-Allah

Maize kernel traits such as kernel length, kernel width, and kernel number determine the total kernel weight and, consequently, maize yield. Therefore, the measurement of kernel traits is important for maize breeding and the evaluation of maize yield. There are a few methods that allow the extraction of ear and kernel features through image processing. We evaluated the potential of deep convolutional neural networks and binary machine learning (ML) algorithms (logistic regression (LR), support vector machine (SVM), AdaBoost (ADB), classification tree (CART), and k-nearest neighbors (kNN)) for accurate maize kernel abortion detection and classification. The algorithms were trained using 75% of 66 total images, and the remaining 25% was used for testing their performance. Confusion matrix, classification accuracy, and precision were the major metrics for evaluating the performance of the algorithms. The SVM and LR algorithms were highly accurate and precise (100%) under all the abortion statuses, while the remaining algorithms had a performance greater than 95%. Deep convolutional neural networks were further evaluated using different activation and optimization techniques. The best performance (100% accuracy) was reached using the rectified linear unit (ReLU) activation procedure and the Adam optimization technique. Maize ears with abortion were accurately detected by all tested algorithms, with minimal training and testing time compared to ears without abortion. The findings suggest that deep convolutional neural networks, supplemented with the binary machine learning algorithms, can be used to detect maize ear abortion status in maize breeding programs. By using a convolutional neural network (CNN) method, more data (big data) can be collected and processed for hundreds of maize ears, accelerating the phenotyping process.
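A minimal sketch of the binary-classifier comparison might look like the following, assuming the ear images have already been reduced to feature vectors X with abortion labels y; the feature-extraction step is not shown, and the hyperparameters are left at scikit-learn defaults rather than the study's settings.

```python
# Sketch of the five-classifier comparison: 75%/25% train/test split,
# scored with accuracy, precision, and a confusion matrix.
# X: (n_images, n_features) feature matrix; y: 0/1 abortion labels (assumed inputs).
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score, precision_score, confusion_matrix

MODELS = {
    "LR": LogisticRegression(max_iter=1000),
    "SVM": SVC(),
    "ADB": AdaBoostClassifier(),
    "CART": DecisionTreeClassifier(),
    "kNN": KNeighborsClassifier(),
}

def compare_classifiers(X, y):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.75, random_state=0)
    for name, model in MODELS.items():
        model.fit(X_tr, y_tr)
        pred = model.predict(X_te)
        print(name,
              accuracy_score(y_te, pred),
              precision_score(y_te, pred),
              confusion_matrix(y_te, pred))
```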


2020 ◽  
Vol 2020 (10) ◽  
pp. 28-1-28-7 ◽  
Author(s):  
Kazuki Endo ◽  
Masayuki Tanaka ◽  
Masatoshi Okutomi

Classification of degraded images is very important in practice because images are usually degraded by compression, noise, blurring, etc. Nevertheless, most research in image classification focuses only on clean images without any degradation. Some papers have already proposed deep convolutional neural networks composed of an image restoration network and a classification network to classify degraded images. This paper proposes an alternative approach in which a degraded image and an additional degradation parameter are used for classification. The proposed classification network has two inputs: the degraded image and the degradation parameter. An estimation network for degradation parameters is also incorporated when the degradation parameters of degraded images are unknown. The experimental results showed that the proposed method outperforms a straightforward approach in which the classification network is trained with degraded images only.
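The two-input idea can be sketched as a CNN branch for the degraded image fused with a small branch for the degradation parameter before the classification head; the layer sizes below are illustrative assumptions, not the architecture from the paper.

```python
# Sketch of a two-input classifier: a CNN branch for the degraded image and
# a small MLP branch for a scalar degradation parameter (e.g. noise level),
# fused before the final classification layer. Layer sizes are assumptions.
import torch
import torch.nn as nn

class DegradationAwareClassifier(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.image_branch = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten(),
        )
        self.param_branch = nn.Sequential(nn.Linear(1, 16), nn.ReLU())
        self.head = nn.Linear(16 * 4 * 4 + 16, n_classes)

    def forward(self, image, degradation_param):
        # image: (batch, 3, H, W); degradation_param: (batch, 1)
        feats = torch.cat([self.image_branch(image),
                           self.param_branch(degradation_param)], dim=1)
        return self.head(feats)
```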

