Decision table
Recently Published Documents

TOTAL DOCUMENTS: 319 (five years: 40)
H-INDEX: 16 (five years: 2)

2021 · Vol 5 (4) · pp. 395
Author(s): Muhammad Aqil Haqeemi Azmi, Cik Feresa Mohd Foozy, Khairul Amin Mohamad Sukri, Nurul Azma Abdullah, Isredza Rahmi A. Hamid, ...

Distributed Denial of Service (DDoS) attacks are dangerous attacks that can disrupt a server, system, or application layer. A DDoS attack floods the target server with more Internet traffic than it can handle at one time, so the affected server may stop working altogether. Such attacks make the network security environment insecure, and the number of DDoS cases has increased in recent years; although DDoS attacks have been studied extensively, they continue to occur. This research therefore investigates a feature selection approach for detecting DDoS attacks with machine learning techniques. In this paper, features are selected from the UNSW-NB 15 dataset using Information Gain and a data reduction method. The selected features are then classified with the ANN, Naïve Bayes, and Decision Table algorithms. The results are evaluated with the Accuracy, Precision, True Positive, and False Positive metrics, and the data are classified into attack and normal classes. Good features were obtained from the experiments: to verify that the selected features are indeed good, the classification results were compared with past research that used the same UNSW-NB 15 dataset. In conclusion, the accuracy of the ANN, Naïve Bayes, and Decision Table classifiers increases with this feature selection approach compared to past research.
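A minimal sketch of the kind of pipeline this abstract describes, using scikit-learn on a UNSW-NB 15 CSV export; the file name, column names, choice of mutual information as the information-gain estimate, and the decision tree standing in for WEKA's Decision Table are assumptions for illustration, not the authors' exact procedure.

```python
# Hedged sketch: information-gain-style feature selection followed by three
# classifiers, roughly mirroring the pipeline described above.
import pandas as pd
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, precision_score

df = pd.read_csv("UNSW_NB15_training-set.csv")          # assumed file name
X = df.select_dtypes("number").drop(columns=["label"])  # assumed label column
y = df["label"]                                          # 1 = attack, 0 = normal

# Keep the k features with the highest estimated information gain.
selector = SelectKBest(mutual_info_classif, k=10).fit(X, y)
X_sel = selector.transform(X)

X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.3, random_state=42)

models = {
    "ANN": MLPClassifier(max_iter=500),
    "Naive Bayes": GaussianNB(),
    # scikit-learn has no Decision Table learner (that is a WEKA classifier),
    # so a decision tree stands in here purely for illustration.
    "Decision tree (stand-in)": DecisionTreeClassifier(),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(name, accuracy_score(y_te, pred), precision_score(y_te, pred))
```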


2021
Author(s): Vu Duc Thi, Nguyen Long Giang, Nguyen Ngoc Cuong, Pham Viet Anh

2021
Author(s): Yingjie Zhu, Bin Yang

Abstract: Hierarchically structured data are very common in data mining and other real-world tasks, and selecting the optimal scale combination from a multi-scale decision table is critical for such tasks. The existing models for computing the optimal scale combination, mainly the lattice model, the complement model, and the stepwise optimal scale selection model, are largely restricted to consistent multi-scale decision tables; no optimal scale selection model has been given for inconsistent ones. To address this, the paper first reviews the complement model and the lattice model proposed by Li and Hu. Second, based on the notion of positive region consistency of inconsistent multi-scale decision tables, it proposes positive-region-consistent complement and lattice models and gives the corresponding algorithms. Finally, numerical experiments verify that the proposed models behave on inconsistent multi-scale decision tables the same way the original complement and lattice models behave on consistent ones, and that for consistent multi-scale decision tables the positive-region-consistent models produce the same results. The positive-region-consistent lattice model is, however, more time-consuming and costly. The models proposed in this paper provide a new theoretical method for optimal scale combination selection in inconsistent multi-scale decision tables.
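The positive region consistency this abstract builds on can be made concrete. Below is a hedged sketch of computing the positive region of a single-scale decision table: objects whose condition-attribute equivalence class agrees on the decision attribute. The table and attribute names are hypothetical, and the paper's multi-scale construction is not reproduced here.

```python
# Hedged illustration: positive region of a decision table.
# An object belongs to POS(C, d) if every object with the same values on the
# condition attributes C has the same decision value d.
from collections import defaultdict

# Hypothetical decision table: rows are objects, 'd' is the decision attribute.
table = [
    {"a1": 1, "a2": "low",  "d": "yes"},
    {"a1": 1, "a2": "low",  "d": "yes"},
    {"a1": 2, "a2": "high", "d": "no"},
    {"a1": 2, "a2": "high", "d": "yes"},   # conflicts with the row above
    {"a1": 3, "a2": "low",  "d": "no"},
]
condition_attrs = ["a1", "a2"]

def positive_region(rows, cond, dec="d"):
    # Group object indices by their condition-attribute signature.
    classes = defaultdict(list)
    for i, row in enumerate(rows):
        classes[tuple(row[a] for a in cond)].append(i)
    # Keep the classes whose decision value is unique (consistent classes).
    pos = set()
    for members in classes.values():
        if len({rows[i][dec] for i in members}) == 1:
            pos.update(members)
    return pos

print(sorted(positive_region(table, condition_attrs)))   # -> [0, 1, 4]
```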


2021
Author(s): Ying Zeng, Yuan Chen, Zheming Yuan

Abstract
Background: Lysine succinylation is a type of protein post-translational modification that is widely involved in cell differentiation, cell metabolism, and other important physiological activities. Studying the molecular mechanism of succinylation in depth requires accurate identification of succinylation sites, and because experimental approaches are costly and time-consuming, there is a great demand for reliable computational methods. Feature extraction is a key step in building succinylation site prediction models, and effective new features improve predictive accuracy. Because false succinylation sites far outnumber true sites, traditional classifiers perform poorly, and designing a classifier that handles highly imbalanced datasets effectively has always been a challenge.
Results: We propose a new computational method, iSuc-ChiDT, to identify succinylation sites in proteins. In iSuc-ChiDT, chi-square statistical difference table encoding is developed to extract positional features; compared with binary encoding and physicochemical property encoding, it achieves the highest predictive accuracy with the fewest features. The chi-square decision table (ChiDT) classifier is designed for imbalanced pattern classification. With a training set of 4,748 true versus 50,551 false sites, independent tests showed that ChiDT significantly outperformed traditional classifiers (including random forest, artificial neural network, and relaxed variable kernel density estimator) in predictive accuracy while taking only 17 s. On an independent testing set of experimentally identified succinylation sites, iSuc-ChiDT achieved a sensitivity of 70.47%, a specificity of 66.27%, a Matthews correlation coefficient of 0.205, and a global accuracy index Q9 of 0.683, a significant improvement in sensitivity and overall accuracy over PSuccE, Success, SuccinSite, and other existing succinylation site predictors.
Conclusions: iSuc-ChiDT shows great promise in predicting succinylation sites and is expected to facilitate further experimental investigation of protein succinylation.
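A rough sketch of how chi-square statistics can score position-specific residue features around a candidate site, in the spirit of the chi-square statistical difference table encoding named above; the window length, the scoring details, and the toy sequences are assumptions, not the published iSuc-ChiDT encoding.

```python
# Hedged sketch: chi-square scores for (position, residue) features in
# fixed-length windows centred on candidate lysine sites.
from collections import Counter

# Toy 7-residue windows (centre = candidate K); real windows are longer.
pos_windows = ["AKLKSDE", "GGLKTTE", "AALKSDE"]   # true succinylation sites
neg_windows = ["PPLKRRE", "CCLKMME", "PALKRME"]   # non-sites

def chi_square_2x2(a, b, c, d):
    # Pearson chi-square for the 2x2 contingency table [[a, b], [c, d]].
    n = a + b + c + d
    denom = (a + b) * (c + d) * (a + c) * (b + d)
    return 0.0 if denom == 0 else n * (a * d - b * c) ** 2 / denom

def positional_chi_square(pos, neg):
    # Score each (window position, residue) feature by how differently it
    # occurs in positive versus negative windows.
    scores = {}
    n_pos, n_neg = len(pos), len(neg)
    for i in range(len(pos[0])):
        pos_counts = Counter(w[i] for w in pos)
        neg_counts = Counter(w[i] for w in neg)
        for r in set(pos_counts) | set(neg_counts):
            scores[(i, r)] = chi_square_2x2(
                pos_counts[r], n_pos - pos_counts[r],
                neg_counts[r], n_neg - neg_counts[r],
            )
    return scores

top = sorted(positional_chi_square(pos_windows, neg_windows).items(),
             key=lambda kv: -kv[1])[:5]
for feature, score in top:
    print(feature, round(score, 3))
```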


2021 · pp. 115-129
Author(s): Paul C. Jorgensen, Byron DeVries

2021 · Vol 0 (0)
Author(s): Biqing Wang

Abstract: Attribute reduction is a key issue in rough set research. To address the shortcomings of attribute reduction algorithms based on the discernibility matrix, an attribute reduction method based on sample extraction and priority is presented. First, equivalence classes are partitioned using quicksort to compute a compressed decision table. Second, important samples are extracted from the compressed decision table using the iterative self-organizing data analysis technique algorithm (ISODATA). Finally, attribute reduction of the sample decision table is carried out based on the concept of priority. Experimental results show that this method significantly reduces the overall execution time and improves reduction efficiency.
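As a concrete reading of the first step: sorting the objects by their condition-attribute values groups identical rows together, so each run of equal rows is one equivalence class and the table can be compressed to one representative per class. The sketch below illustrates that idea on a hypothetical table; it is not the paper's algorithm.

```python
# Hedged sketch: build equivalence classes by sorting rows on the condition
# attributes, then compress the decision table to one representative per class.
from itertools import groupby

# Hypothetical decision table: (condition attribute values, decision value).
rows = [
    ((1, "low"),  "yes"),
    ((2, "high"), "no"),
    ((1, "low"),  "yes"),
    ((3, "low"),  "no"),
    ((2, "high"), "no"),
]

# Sorting (an O(n log n) step, like quicksort) puts equal condition vectors
# next to each other, so each run is one equivalence class.
rows.sort(key=lambda r: r[0])

compressed = []
for cond, group in groupby(rows, key=lambda r: r[0]):
    decisions = {dec for _, dec in group}
    # One representative per class, plus its (possibly conflicting) decisions.
    compressed.append((cond, decisions))

print(compressed)
# [((1, 'low'), {'yes'}), ((2, 'high'), {'no'}), ((3, 'low'), {'no'})]
```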


2021 · Vol 3 (2) · pp. 137-144
Author(s): Joosten Joosten

Good software can only be used if it has been tested properly, so the testing phase matters: software needs to be tested before end users work with it. The animal hospital software in this study was built without validation or verification, so testing was needed. The study used the patient registration section of a veterinary hospital and tested it with three black-box testing methods, namely Equivalence Class Partitioning (ECP), Boundary Value Analysis (BVA), and Decision Table, together with the LOC approach. The results show that for ECP the percentage of invalid cases is greater than the valid ones, so the input value limits need to be revised, while for BVA the percentage of valid cases is higher than invalid. For the decision table, a shortened rule is created combining operating services and other services so that the inpatient status and down payment are produced automatically without having to be chosen again, and this rule is re-tested with the decision table by matching the estimated results of the two services.
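To make the three black-box techniques concrete, the sketch below derives example test cases for a hypothetical registration field (an animal's weight with an assumed valid range of 1 to 100 kg) and a tiny decision table mapping service type to inpatient status and down payment. The field, the range, and the rules are invented for illustration and are not taken from the study.

```python
# Hedged sketch: example black-box test cases for a hypothetical "weight"
# input (valid range assumed to be 1..100 kg) and a tiny decision table.

def accept_weight(kg):
    # System under test (assumed behaviour): accept weights in 1..100 kg.
    return 1 <= kg <= 100

# Equivalence Class Partitioning: one representative value per class.
ecp_cases = {"below range": 0, "valid": 50, "above range": 150}

# Boundary Value Analysis: values on and around each boundary.
bva_cases = [0, 1, 2, 99, 100, 101]

for name, kg in ecp_cases.items():
    print("ECP", name, kg, "->", accept_weight(kg))
for kg in bva_cases:
    print("BVA", kg, "->", accept_weight(kg))

# Decision table: conditions -> actions (hypothetical rules).
# (needs_operation, other_service) -> (inpatient, down_payment_required)
decision_table = {
    (True,  True):  (True,  True),
    (True,  False): (True,  True),
    (False, True):  (False, False),
    (False, False): (False, False),
}
print(decision_table[(True, False)])   # -> (True, True): inpatient, pay deposit
```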


2021 · Vol 179 (1) · pp. 75-92
Author(s): Yu-Ru Syau, Churn-Jung Liau, En-Bing Lin

We present a variable precision generalized rough set approach to characterizing incomplete decision tables. We show how to determine the discernibility threshold for a reflexive relational decision system in the variable precision generalized rough set model. We also point out some properties of positive regions and prove a necessary condition for weak consistency of an incomplete decision table. Two examples illustrate the results obtained in this paper.
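In a variable precision model, the strict requirement that a class be entirely contained in a decision class is relaxed to a majority-inclusion threshold. The sketch below illustrates that relaxation on a toy table; the threshold value and the data are assumptions for illustration, not the reflexive-relation construction used in the paper.

```python
# Hedged sketch: variable precision positive region on a toy decision table.
# A class counts toward the positive region if at least a fraction `beta` of
# its members share one decision value (beta = 1.0 recovers the classical case).
from collections import defaultdict

table = [
    {"a1": 1, "a2": "low",  "d": "yes"},
    {"a1": 1, "a2": "low",  "d": "yes"},
    {"a1": 1, "a2": "low",  "d": "no"},    # minority object in its class
    {"a1": 2, "a2": "high", "d": "no"},
    {"a1": 2, "a2": "high", "d": "no"},
]
cond = ["a1", "a2"]

def vp_positive_region(rows, cond, beta=0.6, dec="d"):
    classes = defaultdict(list)
    for i, row in enumerate(rows):
        classes[tuple(row[a] for a in cond)].append(i)
    pos = set()
    for members in classes.values():
        counts = defaultdict(int)
        for i in members:
            counts[rows[i][dec]] += 1
        # Majority inclusion: accept the class if some decision reaches beta.
        if max(counts.values()) / len(members) >= beta:
            pos.update(members)
    return pos

print(sorted(vp_positive_region(table, cond, beta=1.0)))  # classical: [3, 4]
print(sorted(vp_positive_region(table, cond, beta=0.6)))  # relaxed: [0, 1, 2, 3, 4]
```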


Electronics · 2021 · Vol 10 (2) · pp. 168
Author(s): Rashid Naseem, Zain Shaukat, Muhammad Irfan, Muhammad Arif Shah, Arshad Ahmad, ...

Software risk prediction is the most sensitive and crucial activity of the Software Development Life Cycle (SDLC) and may determine the success or failure of a project, so risk should be predicted early to make a software project successful. A model is proposed for predicting software requirement risks using a requirement risk dataset and machine learning techniques. In addition, multiple classifiers are compared to find the technique best suited to the model for the nature of the dataset: K-Nearest Neighbour (KNN), Average One Dependency Estimator (A1DE), Naïve Bayes (NB), Composite Hypercube on Iterated Random Projection (CHIRP), Decision Table (DT), Decision Table/Naïve Bayes Hybrid Classifier (DTNB), Credal Decision Trees (CDT), Cost-Sensitive Decision Forest (CS-Forest), J48 Decision Tree (J48), and Random Forest (RF). These techniques are evaluated using various metrics, including Correctly Classified Instances (CCI), Mean Absolute Error (MAE), Root Mean Square Error (RMSE), Relative Absolute Error (RAE), Root Relative Squared Error (RRSE), precision, recall, F-measure, Matthews Correlation Coefficient (MCC), Receiver Operating Characteristic area (ROC area), Precision-Recall Curve area (PRC area), and accuracy. The overall outcome of this study shows that, in terms of reducing error rates, CDT outperforms the other techniques, achieving 0.013 MAE, 0.089 RMSE, 4.498% RAE, and 23.741% RRSE. In terms of increasing accuracy, however, DT, DTNB, and CDT achieve better results.
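The error metrics quoted above are straightforward to compute from a classifier's predictions. Below is a hedged sketch of MAE, RMSE, RAE, and RRSE on dummy numbers; the values are made up and the WEKA classifiers named in the abstract are not re-implemented here.

```python
# Hedged sketch: the four error metrics quoted in the abstract, computed on
# dummy predicted/actual class values (not the study's data).
import math

actual    = [1, 0, 1, 1, 0, 1, 0, 0]
predicted = [1, 0, 1, 0, 0, 1, 0, 1]

n = len(actual)
mean_actual = sum(actual) / n

mae  = sum(abs(a - p) for a, p in zip(actual, predicted)) / n
rmse = math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)
# Relative errors compare against always predicting the mean of the actuals.
rae  = sum(abs(a - p) for a, p in zip(actual, predicted)) \
       / sum(abs(a - mean_actual) for a in actual)
rrse = math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted))
                 / sum((a - mean_actual) ** 2 for a in actual))

print(f"MAE={mae:.3f} RMSE={rmse:.3f} RAE={rae:.1%} RRSE={rrse:.1%}")
```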

