F-measure
Recently Published Documents


TOTAL DOCUMENTS

639
(FIVE YEARS 427)

H-INDEX

17
(FIVE YEARS 7)

2022 ◽  
Vol 13 (2) ◽  
pp. 1-21
Author(s):  
Bo Sun ◽  
Takeshi Takahashi ◽  
Tao Ban ◽  
Daisuke Inoue

To relieve the burden on security analysts, Android malware detection and malware family classification need to be automated. Many previous works use machine (or deep) learning to tackle these two important tasks, but as the number of mobile applications has grown in recent years, developing a solution that is both scalable and precise has become a new challenge for the security field. Accordingly, in this article, we propose a novel approach that not only improves the performance of both Android malware detection and family classification, but also reduces the running time of the analysis. Using large-scale datasets obtained from different sources, we demonstrate that our method achieves a high F-measure of 99.71% with a low FPR of 0.37%, while the computation time for processing a 300K-sample dataset drops to roughly 3.3 hours. In the classification evaluation, the F-measure, precision, and recall are 97.5%, 96.55%, and 98.64%, respectively, when classifying 28 malware families. Finally, we compare our method with previous studies on both detection and classification and observe that it performs better in terms of both effectiveness and efficiency.
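The headline numbers above are all derived from confusion-matrix counts. As a reference, here is a minimal Python sketch of how F-measure, precision, recall, and FPR relate; the counts used are hypothetical, not the paper's data:

```python
def detection_metrics(tp, fp, tn, fn):
    """Compute the reported metrics from raw confusion-matrix counts."""
    precision = tp / (tp + fp)          # fraction of flagged apps that are malware
    recall = tp / (tp + fn)             # fraction of malware that is flagged
    f_measure = 2 * precision * recall / (precision + recall)  # harmonic mean
    fpr = fp / (fp + tn)                # benign apps wrongly flagged
    return precision, recall, f_measure, fpr

# Hypothetical counts for illustration only (not the paper's data):
print(detection_metrics(tp=9860, fp=37, tn=9963, fn=140))
```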


2022 ◽  
Vol 31 (2) ◽  
pp. 1-37
Author(s):  
Jiachi Chen ◽  
Xin Xia ◽  
David Lo ◽  
John Grundy

The selfdestruct function is provided by Ethereum smart contracts to destroy a contract on the blockchain. However, it is a double-edged sword for developers. On the one hand, the selfdestruct function enables developers to remove smart contracts (SCs) from Ethereum and transfer their Ethers when emergencies happen, e.g., when a contract is attacked. On the other hand, it increases development complexity and opens an attack vector. To better understand why SC developers include or exclude the selfdestruct function in their contracts, we conducted an online survey to collect their feedback and summarize the key reasons. The feedback shows that 66.67% of developers will deploy an updated contract to Ethereum after destructing the old one. Based on this observation, we propose a method that finds self-destructed contracts (also called predecessor contracts) and their updated versions (successor contracts) by computing code similarity. By analyzing the differences between predecessor contracts and their successors, we found five reasons that led to the death of the contracts; two of them (i.e., Unmatched ERC20 Token and Limits of Permission) might affect the life span of contracts. We developed a tool named LifeScope to detect these problems. LifeScope reports no false positives or false negatives in detecting Unmatched ERC20 Token, and achieves an average F-measure of 77.89% and an average AUC of 0.8673 for Limits of Permission. Based on the feedback of developers who exclude selfdestruct functions, we propose suggestions to help developers use the selfdestruct function in Ethereum smart contracts better.
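The abstract does not reproduce the exact similarity metric used to pair predecessor and successor contracts. As one plausible stand-in, the sketch below computes Jaccard similarity over opcode n-grams; the n-gram size and acceptance threshold are assumptions for illustration only:

```python
def ngrams(opcodes, n=4):
    """Set of opcode n-grams, used as a fingerprint of a contract's bytecode."""
    return {tuple(opcodes[i:i + n]) for i in range(len(opcodes) - n + 1)}

def code_similarity(pred_opcodes, succ_opcodes, n=4):
    """Jaccard similarity of two opcode sequences; 1.0 means identical n-gram sets."""
    a, b = ngrams(pred_opcodes, n), ngrams(succ_opcodes, n)
    return len(a & b) / len(a | b) if a | b else 0.0

# A successor candidate could be accepted when similarity exceeds a threshold:
THRESHOLD = 0.9  # hypothetical value, not taken from the paper
```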


Sensors ◽  
2022 ◽  
Vol 22 (2) ◽  
pp. 634
Author(s):  
Yara Alghofaili ◽  
Murad A. Rassam

Recently, Internet of Things (IoT) technology has entered many aspects of life, such as transportation, healthcare, and even education. IoT technology achieves its goals through smart services: intelligent activities that allow devices to interact with the physical world and provide suitable services to users anytime and anywhere. However, the remarkable advancement of this technology has increased both the number and the variety of attacks. Attackers often exploit the heterogeneity of IoT systems to cause trust problems, manipulating device behavior to undermine the reliability of devices and the services they provide. Consequently, trust is one of the security challenges threatening IoT smart services. Trust management techniques have been widely used over the past few years to identify untrusted behavior and isolate untrusted objects. However, these techniques still have many limitations, such as ineffectiveness when dealing with large amounts of data and continuously changing behavior. Therefore, this paper proposes a model for trust management in IoT devices and services based on the simple multi-attribute rating technique (SMART) and the long short-term memory (LSTM) algorithm. SMART is used to calculate the trust value, while the LSTM identifies changes in behavior relative to the trust threshold. The effectiveness of the proposed model is evaluated using accuracy, loss rate, precision, recall, and F-measure on data samples of different sizes. Comparisons with existing deep learning and machine learning models show superior performance across different numbers of iterations: with 100 iterations, the proposed model achieved an accuracy of 99.87% and an F-measure of 99.76%.
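SMART generally scores an alternative as a weighted sum of normalized attribute ratings. A minimal Python sketch of the trust-value side of the model follows, with hypothetical attributes, weights, and threshold (the paper's exact attribute set is not given in the abstract):

```python
def smart_trust(attributes, weights):
    """SMART-style trust value: weighted sum of normalized attribute ratings.

    attributes: dict of attribute name -> rating in [0, 1]
    weights:    dict of attribute name -> importance weight (summing to 1)
    """
    return sum(weights[name] * rating for name, rating in attributes.items())

# Hypothetical attributes and weights for illustration only:
trust = smart_trust(
    {"honesty": 0.9, "cooperativeness": 0.8, "availability": 0.95},
    {"honesty": 0.5, "cooperativeness": 0.3, "availability": 0.2},
)
is_trusted = trust >= 0.7  # assumed threshold that the LSTM monitors over time
```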


Information ◽  
2022 ◽  
Vol 13 (1) ◽  
pp. 32
Author(s):  
Gang Sun ◽  
Hancheng Yu ◽  
Xiangtao Jiang ◽  
Mingkui Feng

Edge detection is one of the fundamental computer vision tasks. Recent CNN-based edge detection methods typically employ a weighted cross-entropy loss; their predictions are thick and require post-processing before the optimal dataset scale (ODS) F-measure can be calculated for evaluation. To achieve end-to-end training, we propose a non-maximum suppression (NMS) layer that produces sharp boundaries without post-processing, so the ODS F-measure can be computed directly on them; we then propose an ODS F-measure loss function to train the network. In addition, we propose an adaptive multi-level feature pyramid network (AFPN) to better fuse features at different levels and, to enrich the multi-scale features learned by AFPN, we introduce a pyramid context module (PCM) that uses dilated convolutions for multi-scale feature extraction. Experimental results indicate that the proposed AFPN achieves state-of-the-art performance on the BSDS500 dataset (ODS F-score of 0.837) and the NYUDv2 dataset (ODS F-score of 0.780).
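The exact ODS F-measure loss formulation is not reproduced in the abstract, but a common way to train directly on an F-measure is to make it "soft": hard true-positive counts are replaced with sums of predicted probabilities so the quantity becomes differentiable. A minimal PyTorch-style sketch of that general idea, not the paper's precise loss:

```python
import torch

def soft_f_measure_loss(pred, target, eps=1e-7):
    """Differentiable (soft) F-measure loss over per-pixel edge probabilities.

    pred:   predicted edge probabilities in [0, 1]
    target: binary ground-truth edge map
    """
    tp = (pred * target).sum()               # soft true positives
    precision = tp / (pred.sum() + eps)      # soft precision
    recall = tp / (target.sum() + eps)       # soft recall
    f = 2 * precision * recall / (precision + recall + eps)
    return 1.0 - f  # minimizing the loss maximizes the soft F-measure
```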


PLoS ONE ◽  
2022 ◽  
Vol 17 (1) ◽  
pp. e0261870
Author(s):  
Nozomi Eto ◽  
Junichi Yamazoe ◽  
Akiko Tsuji ◽  
Naohisa Wada ◽  
Noriaki Ikeda

Background: Forensic dentistry identifies deceased individuals by comparing postmortem dental charts, oral-cavity pictures, and dental X-ray images with antemortem records. However, conventional forensic dentistry methods are time-consuming and thus unable to rapidly identify large numbers of victims following a large-scale disaster. Objective: Our goal is to automate the dental filing process using intraoral scanner images. In this study, we generated and evaluated an artificial-intelligence-based algorithm that classified images of individual molar teeth into three categories: (1) full metallic crown (FMC); (2) partial metallic restoration (In); or (3) sound tooth, carious tooth, or non-metallic restoration (CNMR). Methods: A pre-trained model was created using oral-cavity pictures from patients. The algorithm was then generated through transfer learning, training on images acquired from cadavers by intraoral scanning. Cross-validation was performed to reduce bias. The ability of the model to classify molar teeth into the three categories (FMC, In, or CNMR) was evaluated using four criteria: precision, recall, F-measure, and overall accuracy. Results: The average value (variance) was 0.952 (0.000140) for recall, 0.957 (0.0000614) for precision, 0.952 (0.000145) for F-measure, and 0.952 (0.000142) for overall accuracy when classifying images of molar teeth acquired from cadavers by intraoral scanning. Conclusion: We have created an artificial-intelligence-based algorithm that analyzes intraoral scanner images and classifies molar teeth into one of three types (FMC, In, or CNMR) based on the presence or absence of metallic restorations, reaching an accuracy of about 95%. This algorithm is a first step toward an automated system that generates dental charts from intraoral scanner images; such a system would greatly increase the efficiency of personal identification in the event of a major disaster.
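Transfer learning of the kind described, freezing a pre-trained backbone and training a small classification head for the three tooth categories, can be sketched in Keras as follows. The backbone here (MobileNetV2 with ImageNet weights) is an illustrative stand-in; the paper pre-trains its own model on oral-cavity pictures:

```python
import tensorflow as tf

# Illustrative transfer-learning setup, assuming three output classes
# (FMC, In, CNMR); the paper's actual base model and hyperparameters may differ.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze pre-trained features, train only the new head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(3, activation="softmax"),  # FMC / In / CNMR
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```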


2022 ◽  
pp. 987-1003
Author(s):  
H. T. Basavaraju ◽  
V.N. Manjunath Aradhya ◽  
D. S. Guru ◽  
H. B. S. Harish

Text in an image or a video conveys more precise meaning and explains content more clearly than any other high-level or low-level feature. Text detection nevertheless remains a challenging research problem in computer vision: complex backgrounds and arbitrary text orientations make the task particularly difficult, and multilingual text exhibits a greater variety of geometrical shapes than text in a single language. In this article, a simple yet effective approach is presented for detecting arbitrarily oriented multilingual text in images and video. The proposed method employs the Laplacian of Gaussian to identify potential text information, and double-line structure analysis is applied to extract the true text candidates. The proposed method is evaluated on five datasets: Hua's, an arbitrarily oriented dataset, the multi-script robust reading competition (MRRC), MSRA, and a video dataset, using precision, recall, and F-measure as performance measures. The proposed method is also tested on real-time video, with promising and encouraging results.
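The Laplacian of Gaussian step can be illustrated with SciPy's gaussian_laplace filter; the sigma and threshold below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np
from scipy import ndimage

def log_text_candidates(gray_image, sigma=2.0, thresh=0.1):
    """Rough candidate-text mask from the Laplacian of Gaussian response.

    sigma and thresh are illustrative values; the paper's settings may differ.
    """
    response = ndimage.gaussian_laplace(gray_image.astype(float), sigma=sigma)
    # Text strokes produce strong positive/negative LoG responses:
    return np.abs(response) > thresh * np.abs(response).max()
```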


Entropy ◽  
2022 ◽  
Vol 24 (1) ◽  
pp. 77
Author(s):  
Seongju Kang ◽  
Jaegi Hwang ◽  
Kwangsue Chung

Object detection is a core task in computer vision, and various approaches based on deep neural networks (DNNs) have been proposed to detect a wide variety of objects. However, because DNNs are computation-intensive, they are difficult to apply on resource-constrained devices. Here, we propose an on-device object detection method using domain-specific models. In the proposed method, we define object-of-interest (OOI) groups that contain objects appearing frequently in specific domains. Compared with existing DNN models, the layers of the domain-specific models are shallower and narrower, reducing the number of trainable parameters and thus speeding up detection. To ensure a lightweight network design, we combine various network structures to obtain the best-performing lightweight detection model. The experimental results reveal that the proposed lightweight model is 21.7 MB in size, which is 91.35% and 36.98% smaller than YOLOv3-SPP and Tiny-YOLO, respectively. The F-measures achieved on the MS COCO 2017 dataset were 18.3%, 11.9%, and 20.3% higher than those of YOLOv3-SPP, Tiny-YOLO, and YOLO-Nano, respectively. These results demonstrate that the lightweight model achieves higher efficiency and better performance than conventional models on non-GPU devices, such as mobile devices and embedded boards.
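The claim that shallower, narrower layers reduce trainable parameters is easy to check directly. A small PyTorch sketch that counts the trainable parameters of a deliberately narrow convolutional block (the block itself is illustrative, not the paper's architecture):

```python
import torch.nn as nn

def count_parameters(model: nn.Module) -> int:
    """Number of trainable parameters, the quantity the domain-specific
    models reduce by using shallower and narrower layers."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# Illustration with a deliberately narrow backbone block:
narrow_block = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # few channels -> few parameters
    nn.BatchNorm2d(16),
    nn.ReLU(inplace=True),
)
print(count_parameters(narrow_block))
```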


2022 ◽  
Vol 11 (1) ◽  
pp. 0-0

Recommender systems aim to automatically provide users with personalized information in an overloaded search space. To deal with vagueness and imprecision in recommender systems (RS), several studies have proposed fuzzy-based approaches. Although these works include experimental evaluations, they were conducted in different recommendation scenarios, which makes a fair comparison between them difficult. Moreover, some of them cluster items and/or users before generating recommendations, and therefore require additional information, such as item attributes or trust relationships between users, that is not always available. In this paper, we propose using fuzzy set techniques to predict the rating of a target user for each unrated item. The approach uses the target user's history together with the ratings of similar users, which allows the target user to contribute to the recommendation process. Experimental results on several datasets are promising in terms of MAE (mean absolute error), RMSE (root mean square error), accuracy, precision, recall, and F-measure.
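MAE and RMSE, the two rating-prediction errors reported, can be computed as follows; the ratings in the example are hypothetical:

```python
import numpy as np

def mae(actual, predicted):
    """Mean absolute error between true and predicted ratings."""
    return np.mean(np.abs(np.asarray(actual) - np.asarray(predicted)))

def rmse(actual, predicted):
    """Root mean square error; penalizes large rating errors more than MAE."""
    return np.sqrt(np.mean((np.asarray(actual) - np.asarray(predicted)) ** 2))

# Hypothetical ratings for illustration:
true, pred = [4, 3, 5, 2], [3.8, 3.4, 4.5, 2.2]
print(mae(true, pred), rmse(true, pred))
```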


2021 ◽  
Vol 19 (6) ◽  
pp. 584-602
Author(s):  
Lucian Jose Gonçales ◽  
Kleinner Farias ◽  
Lucas Kupssinskü ◽  
Matheus Segalotto

EEG signals are a relevant indicator for measuring aspects related to human factors in software engineering, where they are used to train machine learning techniques for a wide range of applications, including classifying task difficulty and developers' level of experience. The EEG signal contains noise, such as abnormal readings, electrical interference, and eye movements, that is usually not of interest to the analysis and therefore degrades the precision of machine learning techniques. However, research in software engineering has not demonstrated the effectiveness of applying filters to EEG signals. The objective of this work is to analyze the effectiveness of EEG filters in the software engineering context. As the literature has not focused on classifying developers' code comprehension, this study analyzes the effectiveness of applying EEG filters when training a machine learning technique to classify developers' code comprehension. A random forest (RF) classifier was trained with filtered EEG signals to classify developers' code comprehension; another RF classifier was trained with unfiltered EEG data. Both models were trained using 10-fold cross-validation, and their effectiveness was measured with the F-measure metric. We used the t-test, Wilcoxon signed-rank test, and Mann-Whitney U test to analyze the difference in effectiveness (F-measure) between the classifier trained with filtered EEG and the one trained with unfiltered EEG. The tests indicate a significant difference after applying EEG filters when classifying developers' code comprehension with the random forest classifier. We conclude that the use of EEG filters significantly improves the effectiveness of classifying code comprehension with the random forest technique.
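A sketch of this evaluation pipeline, under assumed rather than reported parameters (band-pass cutoffs, forest size): filter the raw EEG, compute per-fold F-measures for a random forest under 10-fold cross-validation, then compare filtered against unfiltered scores with the paired tests the study lists:

```python
from scipy import signal, stats
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def bandpass(eeg, low=1.0, high=40.0, fs=256.0, order=4):
    """Zero-phase band-pass filter; the cutoffs are common EEG defaults,
    not necessarily those used in the study."""
    b, a = signal.butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return signal.filtfilt(b, a, eeg, axis=-1)

def fold_f_measures(features, labels):
    """Per-fold F-measures of a random forest under 10-fold cross-validation."""
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    return cross_val_score(clf, features, labels, cv=10, scoring="f1")

# Compare filtered vs. unfiltered scores with the tests the study names, e.g.:
# stats.ttest_rel(f_filtered, f_raw); stats.wilcoxon(f_filtered, f_raw);
# stats.mannwhitneyu(f_filtered, f_raw)
```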


2021 ◽  
Author(s):  
Aleksandar Kovačević ◽  
Jelena Slivka ◽  
Dragan Vidaković ◽  
Katarina-Glorija Grujić ◽  
Nikola Luburić ◽  
...  

Code smells are structures in code that often have a negative impact on its quality. Manually detecting code smells is challenging, so researchers have proposed many automatic code smell detectors, most of them based on code metrics and heuristics. However, these studies have several limitations, including evaluation on small-scale case studies and inconsistent experimental settings; furthermore, heuristic-based detectors suffer from limitations that hinder their adoption in practice. Researchers have therefore recently started experimenting with machine learning (ML) based code smell detection.

This paper compares the performance of multiple ML-based code smell detection models against multiple traditionally employed metric-based heuristics for detecting the God Class and Long Method code smells. We evaluate the effectiveness of different source code representations for machine learning: traditionally used code metrics and code embeddings (code2vec, code2seq, and CuBERT). We perform our experiments on the large-scale, manually labeled MLCQ dataset. We consider the binary classification problem: we classify code samples as smelly or non-smelly and use the F1-measure of the minority (smell) class as the measure of performance. In our experiments, the ML classifier trained using CuBERT source code embeddings achieved the best performance for both God Class detection (F-measure of 0.53) and Long Method detection (F-measure of 0.75). With the help of a domain expert, we performed an error analysis to discuss the advantages of the CuBERT approach.

To the best of our knowledge, this study is the first to evaluate the effectiveness of pre-trained neural source code embeddings for code smell detection. A secondary contribution is the systematic evaluation of multiple heuristic-based approaches on the same large-scale, manually labeled MLCQ dataset.
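Scoring by the minority-class F1, as the paper does, is a one-liner with scikit-learn; the labels below are hypothetical:

```python
from sklearn.metrics import f1_score

# With labels encoded as 1 = smelly (minority class), 0 = non-smelly,
# the paper's performance measure is the F1 of the positive class:
y_true = [0, 0, 1, 1, 0, 1]   # hypothetical ground truth
y_pred = [0, 1, 1, 0, 0, 1]   # hypothetical classifier output
print(f1_score(y_true, y_pred, pos_label=1))  # minority-class F-measure
```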

