Disease Classification on Citrus Leaf Images Using the MobileNet Architecture on a Mobile Platform

Author(s):  
Riswandi Riswandi
Rosmiati Jamiah
Nisa Mardhatillah
Hady Prasetya Hamid

Disease diagnosis on citrus leaves is generally carried out through laboratory examination by a plant pathologist, who inspects the visual symptoms appearing on the plant in order to help citrus farmers apply the right treatment based on the symptoms visible on the leaves. Classifying citrus leaf diseases with image processing methods can provide a fast and accurate diagnostic reference. This study proposes a deep learning approach using the MobileNet CNN architecture for the classification. The method is evaluated on citrus leaf disease images in three categories, namely normal, HLB (huanglongbing), and CTV (citrus tristeza virus), at an image size of 150x150 pixels. Testing uses the RMSprop optimization algorithm with a learning rate of 0.001. Training uses a binary cross-entropy loss with a sigmoid activation function. Disease classification on the citrus leaf images reaches a training accuracy of 98% at epoch 15.
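As a side note on the optimizer named above: RMSprop scales each gradient step by a running average of squared gradients. A minimal single-parameter sketch in plain Python, using the paper's learning rate of 0.001 (the quadratic objective and iteration count are illustrative, not from the paper):

```python
def rmsprop_step(w, grad, avg_sq, lr=0.001, rho=0.9, eps=1e-8):
    """One RMSprop update: keep an exponential moving average of
    squared gradients and divide the step by its square root."""
    avg_sq = rho * avg_sq + (1 - rho) * grad ** 2
    w = w - lr * grad / (avg_sq ** 0.5 + eps)
    return w, avg_sq

# Minimize f(w) = (w - 3)^2, whose gradient is 2*(w - 3).
w, avg_sq = 0.0, 0.0
for _ in range(10000):
    grad = 2 * (w - 3)
    w, avg_sq = rmsprop_step(w, grad, avg_sq)
```

Because the gradient is normalized by its own recent magnitude, the step size stays close to the learning rate regardless of how steep the objective is, which is why the choice of learning rate matters so much in the experiments above.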

2021, pp. 923
Author(s):  
Prayudha Hartanto
Nugroho Purwono
Danang Budi Susetyo
Fahrul Hidayat
Mochamad Irwan Hariyono

Artificial intelligence is a state-of-the-art technology that relies on computers to recognize and predict objects of interest, in this case building features on large-scale Rupabumi Indonesia (RBI) maps. The field is very broad; this study applies one of its most complex branches, deep learning. The deep learning method used to extract building features from large-scale RBI maps is semantic segmentation, in which objects are not only detected but also segmented along their edges without regard to individual building units, so that buildings form one segment and everything else forms another. The chosen semantic segmentation algorithm is U-Net, in two variants: a Small U-Net architecture with 18 convolutional layers and a Full U-Net with 19. The training data consist of UAV imagery of the Badan Informasi Geospasial (BIG) office area, aerial photographs from Wuhan University, and aerial photographs of Austin, Texas. The training-testing split is 80%:20%, with a learning rate of 10^-4, the Adam optimizer, and a binary cross-entropy loss. Model training was performed with TensorFlow on the Google Colaboratory platform. The Small U-Net yields a model loss of 0.119, a pixel accuracy of 0.932, and a mean Intersection over Union (IoU) of 0.698. The Full U-Net gives relatively better results: 0.112, 0.943, and 0.773 for model loss, pixel accuracy, and IoU, respectively.
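The mean IoU figures reported above compare predicted and reference building masks pixel by pixel. A minimal sketch of the IoU computation on binary masks in plain Python (the tiny 4x4 masks are illustrative, not from the paper's data):

```python
def iou(pred, truth):
    """Intersection over Union of two binary masks given as
    flat lists of 0/1 values of equal length."""
    inter = sum(p & t for p, t in zip(pred, truth))
    union = sum(p | t for p, t in zip(pred, truth))
    return inter / union if union else 1.0

# 4x4 masks flattened row by row: the prediction overlaps the
# reference in 3 pixels, and their union covers 5 pixels.
pred  = [1, 1, 0, 0,  1, 1, 0, 0,  0, 0, 0, 0,  0, 0, 0, 0]
truth = [0, 1, 1, 0,  1, 1, 0, 0,  0, 0, 0, 0,  0, 0, 0, 0]
print(iou(pred, truth))  # → 0.6
```

In a real evaluation the same computation runs per class over every test image and the per-class IoUs are averaged into the mean IoU.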


Author(s):  
Mohammad Shorfuzzaman
M. Shamim Hossain
Abdulmotaleb El Saddik

Diabetic retinopathy (DR) is one of the most common causes of vision loss in people who have had diabetes for a prolonged period. Convolutional neural networks (CNNs) have become increasingly popular for computer-aided DR diagnosis using retinal fundus images. While these CNNs are highly reliable, their lack of sufficient explainability prevents them from being widely used in medical practice. In this article, we propose a novel explainable deep learning ensemble model in which weights from different models are fused into a single model to extract salient features from the various retinal lesions found on fundus images. The extracted features are then fed to a custom classifier for the final diagnosis of DR severity level. The model is trained on the APTOS dataset, which contains retinal fundus images of various DR grades, using a cyclical learning rate strategy with an automatic learning rate finder for decaying the learning rate to improve model accuracy. We develop an explainability approach by leveraging gradient-weighted class activation mapping (Grad-CAM) and Shapley additive explanations (SHAP) to highlight the areas of fundus images that are most indicative of different DR stages, allowing ophthalmologists to view our model's decisions in a way they can understand. Evaluation results on three different datasets (APTOS, MESSIDOR, IDRiD) show the effectiveness of our model, which achieves superior classification rates with high precision (0.970), sensitivity (0.980), and AUC (0.978). We believe the proposed model, which jointly offers state-of-the-art diagnostic performance and explainability, will address the black-box nature of deep CNN models in the robust detection of DR grading.
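A cyclical learning rate strategy, as used above, sweeps the rate between a lower and an upper bound instead of holding it fixed. A minimal sketch of the common triangular schedule in plain Python (the bounds and step size here are illustrative placeholders; the paper does not state its actual values):

```python
def triangular_clr(step, base_lr=1e-4, max_lr=1e-2, step_size=2000):
    """Triangular cyclical learning rate: the rate rises linearly
    from base_lr to max_lr over step_size steps, falls back over
    the next step_size steps, then the cycle repeats."""
    cycle = step // (2 * step_size) + 1
    x = abs(step / step_size - 2 * cycle + 1)
    return base_lr + (max_lr - base_lr) * max(0.0, 1.0 - x)

# One full cycle: base at step 0, max at step 2000, base at 4000.
schedule = [triangular_clr(s) for s in (0, 1000, 2000, 3000, 4000)]
```

An automatic learning rate finder typically picks `base_lr` and `max_lr` by briefly training while increasing the rate and watching where the loss starts improving and where it diverges.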


Author(s):  
O.E. Apolo-Apolo
D. Andujar-Sánchez
D. Reiser
M. Pérez-Ruiz
J. Martínez-Guanter

Author(s):  
Shelvi Nur Rahmawati
Eka Wahyu Hidayat
Husni Mubarok

Aksara Sunda (the Sundanese script) is one of Indonesia's regional scripts, belonging in particular to the Sundanese people. With the pace of today's technological development, regional languages are increasingly eroded over time. The Sundanese script is beginning to be forgotten: it is rarely used by Sundanese people in daily life, and understanding of their own regional language is declining. Regional-language preservation therefore needs to be developed in step with the times so that the script remains known and preserved; one approach is Sundanese script identification using a Convolutional Neural Network (CNN). A CNN is a deep learning method commonly used for processing image data. This study uses the Adam optimizer with 20, 50, 100, and 500 epochs. The highest accuracy, 98.03%, was obtained with 500 epochs and a learning rate of 0.1. With 100 epochs and a learning rate of 0.001, accuracy reached 96.71% on the training data and 92.02% on the testing data.
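The Adam optimizer used above combines momentum with RMSprop-style gradient scaling. A minimal single-parameter sketch in plain Python (the quadratic objective and iteration count are illustrative, not from the paper):

```python
def adam_step(w, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: bias-corrected moving averages of the
    gradient (m) and the squared gradient (v) scale the step."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)          # bias correction for the mean
    v_hat = v / (1 - b2 ** t)          # bias correction for the variance
    w = w - lr * m_hat / (v_hat ** 0.5 + eps)
    return w, m, v

# Minimize f(w) = (w - 5)^2, whose gradient is 2*(w - 5).
w, m, v = 0.0, 0.0, 0.0
for t in range(1, 20001):
    w, m, v = adam_step(w, 2 * (w - 5), m, v, t)
```

The momentum term smooths noisy gradients while the variance term keeps the effective step near the learning rate, which is why varying the learning rate (0.1 versus 0.001 above) changes how many epochs the network needs.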


2018, Vol. 18 (01), pp. 22-27
Author(s):  
Royani Darma Nurfita
Gunawan Ariyanto

Fingerprint recognition systems have been widely used in biometrics for various purposes in recent years. Fingerprints are used for recognition because their intricate patterns can identify a person and constitute a unique identity for every human; they are widely used for both verification and identification. The problem addressed in this study is that computers find object classification difficult, fingerprints included. The authors use deep learning with the Convolutional Neural Network (CNN) method to address this problem. The CNN performs the machine learning process on the computer; its stages are data input, preprocessing, and training. The CNN is implemented with the TensorFlow library in the Python programming language. The dataset comes from the website of a 2004 fingerprint verification competition, was captured with a CrossMatch "V300" optical sensor, and contains 80 fingerprint images. Training uses images of 24x24 pixels, and testing compares different numbers of epochs and learning rates, showing that a larger number of epochs and a smaller learning rate yield better training accuracy. In this study the training accuracy reached 100%.
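The trend the authors report, better accuracy with more epochs and a smaller learning rate, shows up even in plain gradient descent: too large a step overshoots the minimum and diverges, while a small step converges given enough iterations. A toy sketch (the objective and the two rates are illustrative):

```python
def gradient_descent(lr, steps, w=4.0):
    """Minimize f(w) = w^2 (gradient 2w) starting from w = 4."""
    for _ in range(steps):
        w -= lr * 2 * w
    return w

# The update is w_{t+1} = (1 - 2*lr) * w_t, so it converges when
# |1 - 2*lr| < 1 and blows up when |1 - 2*lr| > 1.
small = gradient_descent(0.01, 500)   # small rate: approaches 0
large = gradient_descent(1.1, 500)    # large rate: diverges
```

The same trade-off holds for the CNN: each epoch with a small learning rate makes only modest progress, so lowering the rate typically requires raising the epoch count, exactly the pairing observed in the experiments above.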


Memory management is an essential task for large-scale storage systems; on mobile platforms, insufficient memory and additional task overhead cause storage errors. Many existing systems have offered solutions to such issues, such as load balancing and load rebalancing. Unused applications installed on a mobile device, which the user rarely accesses, still occupy storage space on the device. In the proposed research work we describe dynamic resource allocation for mobile platforms using a deep learning approach. In real-world mobile systems, users install various kinds of applications on an ad-hoc basis. Such applications can affect the execution performance of the system as well as its space complexity, and can sometimes also degrade the performance of other running applications. To eliminate these issues, we developed an approach that allocates data-storage resources for the mobile platform at runtime. When the system is connected to a cloud data server, it stores the complete file system on a remote Virtual Machine (VM); whenever a single application is required, it is immediately installed from the remote server onto the local device. The proposed system is implemented with a deep-learning-based Convolutional Neural Network (CNN) in the TensorFlow environment, which reduces the time complexity of data storage and retrieval.


2021
Author(s):  
Ryan Santoso
Xupeng He
Marwa Alsinan
Hyung Kwak
Hussein Hoteit

Automatic fracture recognition from borehole images or outcrops is applicable to the construction of fractured reservoir models. Deep learning for fracture recognition is subject to uncertainty due to sparse and imbalanced training sets and random initialization. We present a new workflow to optimize a deep learning model under uncertainty using U-Net, considering both the epistemic and aleatoric uncertainty of the model. We propose a U-Net architecture with a dropout layer inserted after every "weighting" layer, and we vary the dropout probability to investigate its impact on the uncertainty response. We build the training set and assign a uniform distribution to each training parameter, such as the number of epochs, batch size, and learning rate. We then perform uncertainty quantification by running the model multiple times for each realization, capturing the aleatoric response. In this approach, which is based on Monte Carlo Dropout, the variance map and F1-scores are used to decide whether to craft additional augmentations or stop the process. This work demonstrates that uncertainty within deep learning models, caused by sparse and imbalanced training sets, leads to unstable predictions. The overall responses are accommodated in the form of aleatoric uncertainty. Our workflow uses the uncertainty response (variance map) as a measure to craft additional augmentations of the training set. High variance in certain features signals the need to add new augmented images containing those features, either through affine transformations (rotation, translation, and scaling) or by using similar images. The augmentation improves prediction accuracy, reduces prediction variance, and stabilizes the output. Architecture, number of epochs, batch size, and learning rate are optimized under a fixed but uncertain training set; we perform the optimization by searching for the global maximum of accuracy over multiple realizations.
Besides the quality of the training set, the learning rate is the heavy hitter in the optimization process. The selected learning rate controls the diffusion of information through the model; under imbalanced conditions, a fast learning rate causes the model to miss the main features. Another challenge in fracture recognition on a real outcrop is optimally picking the parent images used to generate the initial training set. We suggest picking images from multiple sides of the outcrop that show significant variation in the features; this avoids long iterations of the workflow. We introduce a new approach that addresses the uncertainties associated with both the training process and the physical problem. The proposed approach is general in concept and can be applied to various deep learning problems in geoscience.
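The Monte Carlo Dropout scheme described above keeps dropout active at prediction time and summarizes repeated stochastic passes with a mean prediction and a variance map. A minimal sketch of that aggregation step in plain Python (the four-value "probability map" stands in for the U-Net's per-pixel output and is purely illustrative):

```python
import random

def mc_dropout_predict(base_probs, n_passes=100, drop_p=0.5, seed=0):
    """Run n_passes stochastic forward passes in which each unit's
    output is dropped (zeroed) with probability drop_p, then return
    the per-pixel mean prediction and the variance map."""
    rng = random.Random(seed)
    n = len(base_probs)
    sums, sq_sums = [0.0] * n, [0.0] * n
    for _ in range(n_passes):
        for i, p in enumerate(base_probs):
            # Inverted dropout: kept activations are scaled by 1/(1 - drop_p).
            out = p / (1 - drop_p) if rng.random() >= drop_p else 0.0
            sums[i] += out
            sq_sums[i] += out * out
    means = [s / n_passes for s in sums]
    variances = [sq / n_passes - m * m for sq, m in zip(sq_sums, means)]
    return means, variances

# A 4-pixel fracture-probability map: variance is highest where the
# stochastic passes disagree most, flagging features to augment.
means, var_map = mc_dropout_predict([0.9, 0.1, 0.5, 0.0])
```

In the workflow above, pixels or features with high values in `var_map` are the ones that trigger additional augmented training images.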


2019, Vol. 110, pp. 225-231
Author(s):  
Huizhen Zhao
Fuxian Liu
Han Zhang
Zhibing Liang

2020, Vol. 14
Author(s):  
Yaqing Zhang
Jinling Chen
Jen Hong Tan
Yuxuan Chen
Yunyi Chen
...

Emotion is the human brain's reaction to objective things. In real life, human emotions are complex and changeable, so research into emotion recognition is of great significance for real-life applications. Recently, many deep learning and machine learning methods have been widely applied to emotion recognition based on EEG signals. However, traditional machine learning methods have a major disadvantage: the feature extraction process is usually cumbersome and relies heavily on human experts. End-to-end deep learning methods emerged as an effective way to address this disadvantage, with the help of raw signal features and time-frequency spectra. Here, we investigated the application of several deep learning models to EEG-based emotion recognition: deep neural networks (DNN), convolutional neural networks (CNN), long short-term memory (LSTM), and a hybrid of CNN and LSTM (CNN-LSTM). The experiments were carried out on the well-known DEAP dataset. Experimental results show that the CNN and CNN-LSTM models achieved high classification performance, with accuracies on raw data of 90.12% and 94.17%, respectively. The DNN model was not as accurate as the other models, but its training was fast. The LSTM model was not as stable as the CNN and CNN-LSTM models; moreover, with the same number of parameters, its training was much slower and it was difficult to reach convergence. Additional comparison experiments on parameters such as the number of epochs, learning rate, and dropout probability were also conducted. The comparison shows that the DNN model converged to its optimum with fewer epochs and a higher learning rate, whereas the CNN model needed more epochs to learn. As for dropout probability, reducing the parameters by about 50% each time was appropriate.
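Before models like those above see a continuous EEG recording, the signal is commonly cut into fixed-length, possibly overlapping windows that serve as individual training samples. A sketch of that windowing step in plain Python (the window and step lengths are illustrative, not values from the paper):

```python
def sliding_windows(signal, win_len, step):
    """Split a 1-D signal into fixed-length windows taken every
    `step` samples; trailing samples that do not fill a complete
    window are dropped."""
    return [signal[i:i + win_len]
            for i in range(0, len(signal) - win_len + 1, step)]

# 10 samples, windows of 4 with 50% overlap (step 2) -> 4 windows.
windows = sliding_windows(list(range(10)), win_len=4, step=2)
print(len(windows))  # → 4
```

Overlapping windows multiply the number of training samples from a fixed recording, which matters for data-hungry models such as the CNN-LSTM.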

