HABITAT SUITABILITY STUDY OF Dicksonia blumei (Kunze) T.Moore USING A REMOTE SENSING APPROACH IN THE BUKIT TAPAK FOREST AREA, BEDUGUL, BALI

2021 ◽  
Vol 24 (2) ◽  
pp. 93-103
Author(s):  
I Dewa Putu Darma ◽  
Rajif Iryadi ◽  
Sutomo

Dicksonia blumei (Kunze) T.Moore is a tree fern species prioritized for conservation, as mandated in CITES Appendix II. One part of its natural range is the Lesser Sunda Islands, where ten specimens of D. blumei have been recorded in Bali (Batukaru and Bedugul). This study aimed to obtain information on habitat suitability and recommended locations for the reintroduction of D. blumei at Bukit Tapak, Bedugul, Bali. Modeling was performed with the maximum entropy (Maxent) method. The data used in this study were topography, climate, and soil at the locations where D. blumei occurs in Bali. These data were then combined with occurrence data for Alsophila latebrosa, one of the host plants on which D. blumei grows in the wild. The model performed exceptionally well, with a training-data Area Under the Curve (AUC) of 0.997 and a test-data AUC of 0.967. The most dominant climatic variable was b10 (mean temperature of the warmest quarter), with a contribution of 25.8%. The habitat-suitability zone for D. blumei is also fairly extensive, covering approximately 15 km² of the Bedugul area (Tabanan and Buleleng Regencies). Detailed location points for reintroduction/restoration guidance were obtained by interpreting Pleiades imagery through statistical processing of a spectral library of the A. latebrosa canopy. Detection with the Pleiades image-interpretation approach achieved an accuracy of 88%. Combining the D. blumei habitat-suitability information with the distribution points of A. latebrosa yielded 28 locations in the southwest to northwest of Bukit Tapak that are predicted to be suitable for the reintroduction of D. blumei.
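The AUC values reported above measure how well suitability scores separate known presence points from background points. A minimal sketch of that computation, using invented scores (not the study's actual Maxent output):

```python
import numpy as np

# Hypothetical suitability scores for presence points (label 1) and random
# background points (label 0); the values are illustrative only.
presence = np.array([0.91, 0.88, 0.95, 0.80, 0.86])
background = np.array([0.10, 0.35, 0.22, 0.48, 0.15])

# AUC equals the probability that a randomly chosen presence point scores
# higher than a randomly chosen background point (Mann-Whitney formulation).
auc = (presence[:, None] > background[None, :]).mean()
print(auc)  # 1.0 here, since every presence score exceeds every background score
```

An AUC near 1.0, as in the study, means the model ranks nearly all presence locations above background locations.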

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Young Jae Kim ◽  
Jang Pyo Bae ◽  
Jun-Won Chung ◽  
Dong Kyun Park ◽  
Kwang Gi Kim ◽  
...  

Abstract: Colorectal cancer occurs in the gastrointestinal tract and is the third most common of the 27 major types of cancer in South Korea and worldwide. Colorectal polyps are known to increase the potential of developing colorectal cancer, so detected polyps need to be resected to reduce the risk of developing cancer. This research improved the performance of polyp classification through fine-tuning of a Network-in-Network (NIN) after applying a model pre-trained on the ImageNet database. Random shuffling was performed 20 times on 1000 colonoscopy images. Each set was divided into 800 training images and 200 test images, and an accuracy evaluation was performed on the 200 test images in each of the 20 experiments. Three comparison methods were constructed from AlexNet by transferring weights trained on three different state-of-the-art databases; a plain AlexNet-based method without transfer learning was also compared. The accuracy of the proposed method was higher with statistical significance than the accuracy of the four other state-of-the-art methods, and showed an 18.9% improvement over the plain AlexNet-based method. The area under the curve was approximately 0.930 ± 0.020, and the recall rate was 0.929 ± 0.029. An automatic algorithm can assist endoscopists in identifying polyps that are adenomatous by considering a high recall rate and accuracy. This system can enable the timely resection of polyps at an early stage.
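The Network-in-Network idea referenced above replaces large fully connected layers with 1×1 "micro-network" convolutions and global average pooling. A minimal PyTorch sketch of this architecture family (illustrative, not the authors' exact model):

```python
import torch
import torch.nn as nn

# Minimal NIN-style block: a spatial convolution followed by 1x1
# micro-network convolutions, ending in global average pooling instead of
# large fully connected layers.
class TinyNIN(nn.Module):
    def __init__(self, num_classes=2):  # polyp vs. non-polyp
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=1), nn.ReLU(),   # 1x1 micro-network
            nn.Conv2d(32, num_classes, kernel_size=1),
            nn.AdaptiveAvgPool2d(1),                        # global average pooling
        )

    def forward(self, x):
        return self.features(x).flatten(1)

model = TinyNIN()
logits = model(torch.randn(4, 3, 64, 64))  # a batch of 4 small RGB images
print(tuple(logits.shape))  # (4, 2): one logit pair per image
```

For transfer learning as in the paper, the convolutional features would be initialized from an ImageNet-pretrained model and only fine-tuned on the colonoscopy images.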


2021 ◽  
Author(s):  
Octavian Dumitru ◽  
Gottfried Schwarz ◽  
Mihai Datcu ◽  
Dongyang Ao ◽  
Zhongling Huang ◽  
...  

<p>During the last years, much progress has been made with machine learning algorithms. Among the typical application fields of machine learning are many technical and commercial applications as well as Earth science analyses, where most often indirect and distorted detector data have to be converted to well-calibrated scientific data that are a prerequisite for a correct understanding of the desired physical quantities and their relationships.</p><p>However, the provision of sufficient calibrated data is not enough for the testing, training, and routine processing of most machine learning applications. In principle, one also needs a clear strategy for the selection of necessary and useful training data and an easily understandable quality control of the finally desired parameters.</p><p>At first glance, one could guess that this problem could be solved by a careful selection of representative test data covering many typical cases as well as some counterexamples. Then these test data can be used for the training of the internal parameters of a machine learning application. On closer inspection, however, many researchers have found that a simple stacking up of plain examples is not the best choice for many scientific applications.</p><p>To get improved machine learning results, we concentrated on the analysis of satellite images depicting the Earth’s surface under various conditions such as the selected instrument type, spectral bands, and spatial resolution. In our case, such data are routinely provided by the freely accessible European Sentinel satellite products (e.g., Sentinel-1 and Sentinel-2). Our basic work then included investigations of how some additional processing steps – to be linked with the selected training data – can provide better machine learning results.</p><p>To this end, we analysed and compared three different approaches to identify machine-learning strategies for the joint selection and processing of training data for our Earth observation images:</p><ul><li>One can optimize the training data selection by adapting the data selection to the specific instrument, target, and application characteristics [1].</li> <li>As an alternative, one can dynamically generate new training parameters by Generative Adversarial Networks. This is comparable to the role of a sparring partner in boxing [2].</li> <li>One can also use a hybrid semi-supervised approach for Synthetic Aperture Radar images with limited labelled data. The method is split into: polarimetric scattering classification, topic modelling for scattering labels, unsupervised constraint learning, and supervised label prediction with constraints [3].</li> </ul><p>We applied these strategies in the ExtremeEarth sea-ice monitoring project (http://earthanalytics.eu/). As a result, we can demonstrate for which application cases these three strategies will provide a promising alternative to a simple conventional selection of available training data.</p><p>[1] C.O. Dumitru et al., “Understanding Satellite Images: A Data Mining Module for Sentinel Images”, Big Earth Data, 2020, 4(4), pp. 367-408.</p><p>[2] D. Ao et al., “Dialectical GAN for SAR Image Translation: From Sentinel-1 to TerraSAR-X”, Remote Sensing, 2018, 10(10), pp. 1-23.</p><p>[3] Z. Huang et al., "HDEC-TFA: An Unsupervised Learning Approach for Discovering Physical Scattering Properties of Single-Polarized SAR Images", IEEE Transactions on Geoscience and Remote Sensing, 2020, pp. 1-18.</p>


2021 ◽  
Vol 10 (1) ◽  
pp. 105
Author(s):  
I Gusti Ayu Purnami Indryaswari ◽  
Ida Bagus Made Mahendra

Many Indonesian people, especially in Bali, raise pigs as livestock. Pigs are susceptible to various types of diseases, and there have been many cases of pig deaths from disease that cause losses to breeders. Therefore, the authors developed an Android-based application that can predict the type of disease in pigs by applying the C4.5 algorithm. The C4.5 algorithm classifies data in order to obtain rules that can be used for prediction. In this study, 50 training data sets covering 8 types of pig diseases and 31 disease symptoms were inputted into the system, which processes the data so that the Android application can predict the type of disease in pigs. Testing with 15 test data sets produced an accuracy of 86.7%. The application features, built using the Kotlin programming language and the SQLite database, ran as expected during testing.
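The symptom-to-disease classification described above can be sketched with a decision tree. scikit-learn implements CART rather than C4.5 proper (C4.5 uses gain ratio), but `criterion="entropy"` gives it the same information-gain flavour; the symptom table below is invented for illustration, not the study's data:

```python
from sklearn.tree import DecisionTreeClassifier

# Toy symptom -> disease table in the spirit of the paper's 31-symptom /
# 8-disease data set. Each column is a binary symptom flag, e.g.
# [fever, loss_of_appetite, skin_lesions].
X_train = [
    [1, 1, 0],
    [1, 0, 1],
    [0, 1, 1],
    [0, 0, 0],
]
y_train = ["disease_A", "disease_B", "disease_B", "healthy"]

# Entropy-based splitting approximates C4.5's information-gain criterion.
clf = DecisionTreeClassifier(criterion="entropy", random_state=0)
clf.fit(X_train, y_train)

# Predict the disease for a pig showing fever and loss of appetite.
print(clf.predict([[1, 1, 0]])[0])  # disease_A
```

The learned rules can then be exported (e.g., via `sklearn.tree.export_text`) and embedded in a mobile application, which matches the rule-based prediction approach the paper describes.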


Author(s):  
Yanxiang Yu ◽  
◽  
Chicheng Xu ◽  
Siddharth Misra ◽  
Weichang Li ◽  
...  

Compressional and shear sonic traveltime logs (DTC and DTS, respectively) are crucial for subsurface characterization and seismic-well ties. However, these two logs are often missing or incomplete in many oil and gas wells. Therefore, many petrophysical and geophysical workflows include sonic log synthetization or pseudo-log generation based on multivariate regression or rock physics relations. Starting on March 1, 2020, and concluding on May 7, 2020, the SPWLA PDDA SIG hosted a contest aiming to predict the DTC and DTS logs from seven “easy-to-acquire” conventional logs using machine-learning methods (GitHub, 2020). In the contest, a total of 20,525 data points with half-foot resolution from three wells were collected to train regression models using machine-learning techniques. Each data point had seven features, consisting of the conventional “easy-to-acquire” logs: caliper, neutron porosity, gamma ray (GR), deep resistivity, medium resistivity, photoelectric factor, and bulk density, as well as two sonic logs (DTC and DTS) as the targets. A separate data set of 11,089 samples from a fourth well was then used as the blind test data set. The prediction performance of the model was evaluated using the root mean square error (RMSE) metric, shown in the equation below: RMSE = \sqrt{\frac{1}{2m}\sum_{i=1}^{m}\left[\left(DTC_{pred}^{i}-DTC_{true}^{i}\right)^{2}+\left(DTS_{pred}^{i}-DTS_{true}^{i}\right)^{2}\right]} In the benchmark model (Yu et al., 2020), we used a Random Forest regressor and applied minimal preprocessing to the training data set; an RMSE score of 17.93 was achieved on the test data set. The top five models from the contest, on average, beat the performance of our benchmark model by 27% in RMSE. In this paper, we review these five solutions, covering preprocessing techniques and different machine-learning models, including neural networks, long short-term memory (LSTM), and ensemble trees.
We found that data cleaning and clustering were critical for improving the performance in all models.
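The contest's joint RMSE metric over both target logs is straightforward to implement; the arrays below are illustrative placeholders, not actual log values:

```python
import numpy as np

# RMSE computed jointly over the predicted DTC and DTS logs, matching
# the contest's evaluation formula: sqrt( (1/2m) * sum of both squared
# error terms per depth sample ).
def joint_rmse(dtc_pred, dtc_true, dts_pred, dts_true):
    m = len(dtc_true)
    sq = (dtc_pred - dtc_true) ** 2 + (dts_pred - dts_true) ** 2
    return np.sqrt(sq.sum() / (2 * m))

dtc_true = np.array([60.0, 70.0]); dtc_pred = np.array([61.0, 69.0])
dts_true = np.array([100.0, 120.0]); dts_pred = np.array([102.0, 118.0])
print(joint_rmse(dtc_pred, dtc_true, dts_pred, dts_true))  # sqrt(2.5) ≈ 1.5811
```

Averaging the two logs' squared errors inside one square root yields a single leaderboard score, which is why improvements on either log reduce the metric.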


2016 ◽  
Vol 3 (3) ◽  
pp. 25-44 ◽  
Author(s):  
Omisore Olatunji Mumini ◽  
Fayemiwo Michael Adebisi ◽  
Ofoegbu Osita Edward ◽  
Adeniyi Shukurat Abidemi

Stock trading, which aims to predict the direction of future stock prices, is a dynamic business primarily based on human intuition. It involves analyzing non-linear fundamental and technical stock variables that are recorded periodically. This study presents the development of an ANN-based prediction model for forecasting closing prices in the stock markets. The major steps taken are identification of technical variables used for prediction of stock prices, collection and pre-processing of stock data, and formulation of the ANN-based predictive model. Stock data for the period between 2010 and 2014 were collected from the Nigerian Stock Exchange (NSE) and stored in a database. The data collected were split into training and test data, where the training data were used to learn the non-linear patterns that exist in the dataset, and the test data were used to validate the prediction accuracy of the model. Evaluation results obtained from WEKA show that discrepancies between actual and predicted values are insignificant.
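The train/validate workflow described above can be sketched with a small neural network regressor. The features and prices here are synthetic stand-ins (not NSE data), and the indicator names are hypothetical:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in data: rows are trading periods, columns are technical
# variables (e.g. moving average, momentum, RSI, volume change), and y is
# the next closing price.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = X @ np.array([0.5, -0.2, 0.1, 0.3]) + rng.normal(scale=0.01, size=200)

# Split into training data (to learn the patterns) and test data
# (to validate prediction accuracy), as in the study.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X_train, y_train)
pred = model.predict(X_test)
print(pred.shape)  # one predicted closing price per held-out period
```

Comparing `pred` against `y_test` (e.g., via mean absolute error) is the "discrepancy between actual and predicted values" the abstract refers to.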


2021 ◽  
Author(s):  
Louise Bloch ◽  
Christoph M. Friedrich

Abstract Background: The prediction of whether subjects with Mild Cognitive Impairment (MCI) will prospectively develop Alzheimer's Disease (AD) is important for the recruitment and monitoring of subjects for therapy studies. Machine Learning (ML) is suitable for improving early AD prediction. The etiology of AD is heterogeneous, which leads to noisy data sets. Additional noise is introduced by multicentric study designs and varying acquisition protocols. This article examines whether an automatic and fair data valuation method based on Shapley values can identify subjects with noisy data. Methods: An ML workflow was developed and trained for a subset of the Alzheimer's Disease Neuroimaging Initiative (ADNI) cohort. Validation was executed on an independent ADNI test data set and on the Australian Imaging, Biomarker and Lifestyle Flagship Study of Ageing (AIBL) cohort. The workflow included volumetric Magnetic Resonance Imaging (MRI) feature extraction, subject sample selection using data Shapley values, Random Forest (RF) and eXtreme Gradient Boosting (XGBoost) for model training, and Kernel SHapley Additive exPlanations (SHAP) values for model interpretation. This model interpretation enables clinically relevant explanation of individual predictions. Results: The XGBoost models that excluded 116 of the 467 subjects from the training data set based on their Logistic Regression (LR) data Shapley values outperformed the models trained on the entire training data set, which reached a mean classification accuracy of 58.54%, by 14.13% (8.27 percentage points) on the independent ADNI test data set. The XGBoost models trained on the entire training data set reached a mean accuracy of 60.35% for the AIBL data set. An improvement of 24.86% (15.00 percentage points) could be reached for the XGBoost models if the 72 subjects with the smallest RF data Shapley values were excluded from the training data set.
Conclusion: The data Shapley method was able to improve the classification accuracies for the test data sets. Noisy data were associated with the number of ApoE ϵ4 alleles and volumetric MRI measurements. Kernel SHAP showed that the black-box models learned biologically plausible associations.
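The core idea of data valuation above — score each training subject by its contribution to validation performance, then exclude the lowest-valued ones — can be sketched with a cheap leave-one-out proxy in place of full data Shapley values. Everything here is synthetic and illustrative, not the ADNI/AIBL workflow:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic binary classification data; point 5 is deliberately mislabeled
# to play the role of a "noisy subject".
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
y[5] = 1 - y[5]  # inject label noise

X_val = rng.normal(size=(100, 2))
y_val = (X_val[:, 0] + X_val[:, 1] > 0).astype(int)

def val_acc(idx):
    clf = LogisticRegression().fit(X[idx], y[idx])
    return clf.score(X_val, y_val)

all_idx = np.arange(len(X))
base = val_acc(all_idx)

# Leave-one-out value: how much validation accuracy drops when the point
# is removed (a crude stand-in for Monte Carlo data Shapley values).
values = np.array([base - val_acc(np.delete(all_idx, i)) for i in all_idx])

# Exclude the lowest-valued ~10% of points before the final training run.
keep = all_idx[values >= np.quantile(values, 0.1)]
print(len(keep), "of", len(X), "points kept")
```

True data Shapley values average a point's marginal contribution over many random subsets rather than one leave-one-out pass, but the filtering step afterwards is the same.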


Tomography ◽  
2022 ◽  
Vol 8 (1) ◽  
pp. 131-141
Author(s):  
Kanae Takahashi ◽  
Tomoyuki Fujioka ◽  
Jun Oyama ◽  
Mio Mori ◽  
Emi Yamaga ◽  
...  

Deep learning (DL) has recently become a remarkably powerful tool for image processing. However, the usefulness of DL in positron emission tomography (PET)/computed tomography (CT) for breast cancer (BC) has been insufficiently studied. This study investigated whether a DL model using PET maximum-intensity projection (MIP) images at multiple angles increases diagnostic accuracy for PET/CT image classification in BC. We retrospectively gathered 400 images of 200 BC and 200 non-BC patients as training data. For each image, we obtained PET MIP images at four different angles (0°, 30°, 60°, 90°) and built two DL models using Xception. One DL model diagnosed BC with only the 0-degree MIP, and the other used all four angles. After the training phase, our DL models analyzed test data including 50 BC and 50 non-BC patients. Five radiologists interpreted the same test data. Sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) were calculated. Our 4-degree model, 0-degree model, and the radiologists had sensitivities of 96%, 82%, and 80–98%, and specificities of 80%, 88%, and 76–92%, respectively. Our 4-degree model had equal or better diagnostic performance compared with that of the radiologists (AUC = 0.936 vs. 0.872–0.967, p = 0.036–0.405). A DL model similar to our 4-degree model may help radiologists in their diagnostic work in the future.
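A maximum-intensity projection collapses a 3D PET volume along one axis by keeping the highest voxel value along each ray; rotating the volume before projecting gives the different viewing angles (0°, 30°, ...) described above. A minimal sketch on a synthetic volume:

```python
import numpy as np

# Tiny synthetic PET volume with one "hot" lesion voxel.
volume = np.zeros((4, 4, 4))
volume[2, 1, 3] = 5.0

# MIP: maximum value along each ray through the volume. In practice the
# volume would be rotated (e.g., scipy.ndimage.rotate) before projecting
# to obtain the 30-, 60-, and 90-degree views.
mip = volume.max(axis=0)
print(mip.shape, mip.max())  # (4, 4) 5.0 — the lesion survives the projection
```

Because the hottest voxels dominate each projected pixel, high-uptake lesions remain visible in the 2D MIP regardless of depth, which is what makes MIPs convenient 2D inputs for an image classifier like Xception.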


2021 ◽  
Vol 14 (2) ◽  
pp. 127-135
Author(s):  
Fadhil Yusuf Rahadika ◽  
Novanto Yudistira ◽  
Yuita Arum Sari

During the COVID-19 pandemic, many offline activities have been turned into online activities via video meetings to prevent the spread of the COVID-19 virus. In online video meetings, some micro-interactions are missing compared to direct social interaction. Using machines to assist facial expression recognition in online video meetings is expected to increase understanding of the interactions among users. Many studies have shown that CNN-based neural networks are effective and accurate for image classification. In this study, several open facial expression datasets, totaling 342,497 training images, were used to train CNN-based neural networks. The best results were obtained using a ResNet-50 architecture with the Mish activation function and an Accuracy Booster Plus block, trained with the Ranger optimizer and Gradient Centralization for 60,000 steps with a batch size of 256. The best model achieved an accuracy of 0.5972 on AffectNet validation data, 0.8636 on FERPlus validation data, 0.8488 on FERPlus test data, and 0.8879 on RAF-DB test data. The proposed method outperformed plain ResNet in all test scenarios without transfer learning, and there is potential for better performance with a pre-trained model. The code is available at https://github.com/yusufrahadika-facial-expressions-essay.
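The Mish activation used above is defined as mish(x) = x · tanh(softplus(x)), a smooth, non-monotonic alternative to ReLU. A minimal sketch:

```python
import numpy as np

# Mish activation: x * tanh(softplus(x)), where softplus(x) = ln(1 + e^x).
def mish(x):
    return x * np.tanh(np.log1p(np.exp(x)))

print(mish(0.0))   # 0.0, since the leading factor x is zero
print(mish(-5.0) > -1.0)  # unlike ReLU, small negative values pass through
```

Its smoothness and small negative tail are the usual arguments for swapping it in for ReLU in image classifiers.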


2021 ◽  
Vol 8 (4) ◽  
pp. 787
Author(s):  
Moechammad Sarosa ◽  
Nailul Muna

<p class="Abstrak">Bencana alam merupakan suatu peristiwa yang dapat menyebabkan kerusakan dan menciptakan kekacuan. Bangunan yang runtuh dapat menyebabkan cidera dan kematian pada korban. Lokasi dan waktu kejadian bencana alam yang tidak dapat diprediksi oleh manusia berpotensi memakan korban yang tidak sedikit. Oleh karena itu, untuk mengurangi korban yang banyak, setelah kejadian bencana alam, pertama yang harus dilakukan yaitu menemukan dan menyelamatkan korban yang terjebak. Penanganan evakuasi yang cepat harus dilakukan tim SAR untuk membantu korban. Namun pada kenyataannya, tim SAR mengalami kendala selama proses evakuasi korban. Mulai dari sulitnya medan yang dijangkau hingga terbatasnya peralatan yang dibutuhkan. Pada penelitian ini sistem diimplementasikan untuk deteksi korban bencana alam yang bertujuan untuk membantu mengembangkan peralatan tim SAR untuk menemukan korban bencana alam yang berbasis pengolahan citra. Algoritma yang digunakan untuk mendeteksi ada atau tidaknya korban pada gambar adalah <em>You Only Look Once</em> (YOLO). Terdapat dua macam algoritma YOLO yang diimplementasikan pada sistem yaitu YOLOv3 dan YOLOv3 Tiny. Dari hasil pengujian yang telah dilakukan didapatkan <em>F1 Score</em> mencapai 95.3% saat menggunakan YOLOv3 dengan menggunakan 100 data latih dan 100 data uji.</p><p class="Abstrak"> </p><p class="Abstrak"><strong><em>Abstract</em></strong></p><p class="Abstrak"> </p><p class="Abstract"><em>Natural disasters are events that can cause damage and create havoc. Buildings that collapse and can cause injury and death to victims. Humans can not predict the location and timing of natural disasters. After the natural disaster, the first thing to do is find and save trapped victims. The handling of rapid evacuation must be done by the SAR team to help victims to reduce the amount of loss due to natural disasters. 
But in reality, the process of evacuating victims of natural disasters is still a lot of obstacles experienced by the SAR team. It was starting from the difficulty of the terrain that is reached to the limited equipment needed. In this study, a natural disaster victim detection system was designed using image processing that aims to help find victims in difficult or vulnerable locations when directly reached by humans. In this study, a detection system for victims of natural disasters was implemented which aims to help develop equipment for the SAR team to find victims of natural disasters based on image processing. The algorithm used is You Only Look Once (YOLO). In this study, two types of YOLO algorithms were compared, namely YOLOv3 and YOLOv3 Tiny. From the test results that have been obtained, the F1 Score reaches 95.3% when using YOLOv3 with 100 training data and 100 test data.</em></p>
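The F1 score reported above is the harmonic mean of detection precision and recall. A minimal computation from illustrative counts (not the paper's actual confusion matrix):

```python
# F1 from detection counts: tp = victims correctly detected,
# fp = false alarms, fn = victims missed.
def f1_score(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# e.g. 91 correct detections, 4 false alarms, 5 missed victims
print(round(f1_score(91, 4, 5), 3))  # 0.953
```

F1 is a natural metric here because both error types matter for SAR: false alarms waste rescue effort, while missed detections leave victims unfound.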


Repositor ◽  
2020 ◽  
Vol 2 (5) ◽  
pp. 675
Author(s):  
Muhammad Athaillah ◽  
Yufiz Azhar ◽  
Yuda Munarko

Hoax news classification is an application of text categorization. Hoax news must be classified because it can influence readers' actions and thinking patterns. The classification process in this research uses several stages: preprocessing, feature extraction, feature selection, and classification. This research compares two algorithms, Naïve Bayes and Multinomial Naïve Bayes, to determine which is more effective at classifying hoax news. The data consist of 100 hoax articles from turnbackhoax.id and 100 non-hoax articles from kompas.com and detik.com, divided into 140 training articles and 60 test articles. In the comparison, Naïve Bayes achieved an F1-score of 0.93 and Multinomial Naïve Bayes an F1-score of 0.92.
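The pipeline described above — bag-of-words features feeding a Multinomial Naive Bayes classifier — can be sketched in a few lines. The headlines are invented English stand-ins, not articles from the paper's corpus:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny illustrative training corpus with hoax / non-hoax labels.
train_texts = [
    "miracle cure doctors hate this trick",
    "celebrity secretly an alien says source",
    "government announces new budget plan",
    "local team wins the championship final",
]
train_labels = ["hoax", "hoax", "non-hoax", "non-hoax"]

# CountVectorizer handles the feature-extraction stage (word counts);
# MultinomialNB models class-conditional word frequencies.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_texts, train_labels)

print(model.predict(["doctors hate this miracle trick"])[0])  # hoax
```

In the full pipeline, Indonesian-specific preprocessing (stemming, stopword removal) and feature selection would sit between the raw text and the vectorizer.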

