2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Rhorom Priyatikanto ◽  
Lidia Mayangsari ◽  
Rudi A. Prihandoko ◽  
Agustinus G. Admiranto

Sky brightness measurement and monitoring are required to mitigate the negative effects of light pollution, a byproduct of modern civilization. Proper handling of large volumes of sky brightness data includes evaluating and classifying the data according to quality and characteristics so that further analysis and inference can be conducted properly. This study aims to develop a classification model based on the Random Forest algorithm and to evaluate its performance. Using sky brightness data from 1250 nights with minute temporal resolution, acquired at eight different stations in Indonesia, datasets consisting of 15 features were created to train and test the model. Those features were extracted from the observation time, the global statistics of nightly sky brightness, or the light curve characteristics. Among those features, 10 were found to be the most important for the classification task. The model was trained to classify the data into six classes (1: peculiar data, 2: overcast, 3: cloudy, 4: clear, 5: moonlit-cloudy, and 6: moonlit-clear) and, in testing, achieved high accuracy (92%) and scores (F-score = 84% and G-mean = 84%). Some misclassifications exist, but the classification results are considerably good, as indicated by the posterior distributions of sky brightness as a function of class. Data classified as class 4 have a sharp distribution with a typical full width at half maximum of 1.5 mag/arcsec², while the distributions of classes 2 and 3 are left-skewed, with the latter having a lighter tail. Due to moonlight, the distributions of class-5 and class-6 data are more smeared, i.e. have a larger spread. These results demonstrate that the established classification model is reasonably good and consistent.
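The pipeline the abstract describes — train a Random Forest on 15 nightly features, rank feature importances, classify into six sky classes — can be sketched as follows. This is a minimal illustration with scikit-learn on synthetic stand-in data, not the paper's dataset or exact features; the feature matrix and labels here are generated randomly just to exercise the workflow.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score

rng = np.random.default_rng(42)
n_nights = 1200

# Synthetic stand-ins for the 15 nightly features (time-of-observation,
# global statistics, light-curve shape); values are random, for illustration only.
X = rng.normal(size=(n_nights, 15))

# Six classes (1..6), loosely tied to two features so the model has signal to learn.
y = 1 + np.digitize(X[:, 0] + 0.5 * X[:, 1], [-1.5, -0.5, 0.5, 1.0, 1.5])

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)

pred = clf.predict(X_te)
acc = accuracy_score(y_te, pred)
f1 = f1_score(y_te, pred, average="macro")

# Rank features by impurity-based importance, analogous to
# selecting the 10 most informative of the 15 features.
top10 = np.argsort(clf.feature_importances_)[::-1][:10]
```

On real sky brightness data the per-class posterior distributions could then be inspected, as the authors do, to confirm that the classes are physically meaningful.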


PLoS ONE ◽  
2018 ◽  
Vol 13 (6) ◽  
pp. e0198281 ◽  
Author(s):  
Md Akter Hussain ◽  
Alauddin Bhuiyan ◽  
Chi D. Luu ◽  
R. Theodore Smith ◽  
Robyn H. Guymer ◽  
...  

2009 ◽  
Vol 9 (2) ◽  
pp. 220-226 ◽  
Author(s):  
Dan Gao ◽  
Yan-Xia Zhang ◽  
Yong-Heng Zhao

Author(s):  
Harits Ar Rosyid ◽  
Utomo Pujianto ◽  
Moch Rajendra Yudhistira

There are various ways to improve the quality of someone's education; one of them is reading. Reading broadens insight and knowledge of many kinds of things, but reading ability and comprehension differ from person to person. This can be a problem for readers when the reading material exceeds their comprehension ability. It is therefore necessary to determine the difficulty of reading material, here using Lexile Levels. A Lexile Level is a value that measures both the complexity of reading material and a person's reading ability. Reading material is thus classified based on its Lexile Level value. The k-means method first clusters the reading material into 2 clusters, easy and difficult; the material is then classified by reading difficulty using the Random Forest method. The k-means method was chosen because its computing process is simple and fast. The Random Forest algorithm builds several decision trees and then chooses the best tree. The results of this experiment indicate that the scenario using 2 clusters with SMOTE and GIFS preprocessing shows good results, with an accuracy of 76.03%, precision of 81.85%, and recall of 76.05%.
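The two-stage scheme described above — k-means to label texts as easy or difficult, then a Random Forest to learn that labeling — can be sketched as below. This is a minimal illustration with scikit-learn on synthetic stand-in features; the real study works on text readability features and additionally applies SMOTE and GIFS preprocessing, which are omitted here.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Stand-in readability features (e.g. sentence length, word frequency);
# two synthetic groups so the clustering step has structure to find.
X = np.vstack([
    rng.normal(0.0, 1.0, (300, 4)),
    rng.normal(3.0, 1.0, (300, 4)),
])

# Stage 1: k-means groups the material into 2 clusters ("easy" vs "difficult").
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Stage 2: a Random Forest learns to reproduce the cluster labels
# so that new, unseen material can be classified directly.
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
```

Using the cluster assignments as supervised labels is the key design choice here: it turns an unlabeled corpus into a training set for a classifier that generalizes to new texts.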


2019 ◽  
Author(s):  
Demetrius DiMucci ◽  
Mark Kon ◽  
Daniel Segrè

Machine learning is helping the interpretation of biological complexity by enabling the inference and classification of cellular, organismal and ecological phenotypes based on large datasets, e.g. from genomic, transcriptomic and metagenomic analyses. A number of available algorithms can help search these datasets to uncover patterns associated with specific traits, including disease-related attributes. While, in many instances, treating an algorithm as a black box is sufficient, it is interesting to pursue an enhanced understanding of how system variables end up contributing to a specific output, as an avenue towards new mechanistic insight. Here we address this challenge through a suite of algorithms, named BowSaw, which takes advantage of the structure of a trained random forest algorithm to identify combinations of variables (“rules”) frequently used for classification. We first apply BowSaw to a simulated dataset, and show that the algorithm can accurately recover the sets of variables used to generate the phenotypes through complex Boolean rules, even under challenging noise levels. We next apply our method to data from the integrative Human Microbiome Project and find previously unreported high-order combinations of microbial taxa putatively associated with Crohn’s disease. By leveraging the structure of trees within a random forest, BowSaw provides a new way of using decision trees to generate testable biological hypotheses.
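The core idea — mining a trained random forest for combinations of variables frequently used to classify positive samples — can be sketched with scikit-learn's tree-traversal API. This is not the BowSaw implementation, only an illustrative reconstruction of the general technique on synthetic Boolean data where the hidden rule is "feature 0 AND feature 2".

```python
import numpy as np
from collections import Counter
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

# Boolean features; the phenotype follows a hidden rule: x0 AND x2.
X = rng.integers(0, 2, size=(500, 6))
y = (X[:, 0] & X[:, 2]).astype(int)

forest = RandomForestClassifier(
    n_estimators=50, max_features=None, random_state=0
).fit(X, y)

# For each positive sample, record the set of features tested along its
# decision path in every tree; frequent sets are candidate "rules".
rule_counts = Counter()
pos = X[y == 1]
for tree in forest.estimators_:
    paths = tree.decision_path(pos)          # sparse node-indicator matrix
    feats = tree.tree_.feature               # feature per node (-2 at leaves)
    for i in range(pos.shape[0]):
        nodes = paths.indices[paths.indptr[i]:paths.indptr[i + 1]]
        used = frozenset(feats[n] for n in nodes if feats[n] >= 0)
        rule_counts[used] += 1

most_common_rule, _ = rule_counts.most_common(1)[0]
```

On this toy data the most frequent path-feature set recovers the generating pair {0, 2}, mirroring the paper's simulated-data validation of rule recovery under Boolean phenotypes.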

