Threats Classification

Author(s):  
Mouna Jouini ◽  
Latifa Ben Arfa Rabai

Information systems are frequently exposed to various types of threats, which can cause different kinds of damage and lead to significant financial losses. Information security damage can range from small losses to the destruction of an entire information system. The effects of threats vary considerably: some affect the confidentiality or integrity of data, while others affect the availability of a system. Currently, organizations struggle to understand what the threats to their information assets are and how to obtain the means to combat them, which continues to pose a challenge. To improve our understanding of security threats, we propose a security threat classification model that allows us to study the impact of a class of threats rather than the impact of an individual threat, since individual threats vary over time. This chapter deals with the threat classification problem and its motivation. It addresses different criteria for classifying information system security risks and reviews most threat classification models. We also present recent surveys on the costs of security breaches.



Author(s):  
Tosin Daniel Oyetoyan ◽  
Patrick Morrison

Security remains under-addressed in many organisations, as illustrated by the number of large-scale software security breaches. Preventing breaches can begin during software development if attention is paid to security during the software's design and implementation. One approach to security assurance during software development is to examine communications between developers as a means of studying the security concerns of the project. Prior research has investigated models for classifying project communication messages (e.g., issues or commits) as security related or not. A known problem is that these models are project-specific, limiting their use by other projects or organisations. We investigate whether we can build a generic classification model that can generalise across projects. We define a set of security keywords by extracting them from relevant security sources and divide them into four categories: asset, attack/threat, control/mitigation, and implicit. Using different combinations of these categories in the training dataset, we built a classification model and evaluated it on industrial, open-source, and research-based datasets containing over 45 different products. Our model, based on harvested security keywords as a feature set, shows average recall from 55 to 86%, minimum recall from 43 to 71%, maximum recall from 60 to 100%, an average f-score between 3.4 and 88%, an average g-measure of at least 66% across all datasets, and an average AUC of ROC from 69 to 89%. In addition, models that use externally sourced features outperformed models that use project-specific features on average by a margin of 26–44% in recall, 22–50% in g-measure, 0.4–28% in f-score, and 15–19% in AUC of ROC. Further, our results outperform a state-of-the-art prediction model for security bug reports in all cases.
Using sound statistical and effect-size tests, we find that (1) using harvested security keywords as features to train a text classification model improves the models and generalises significantly to other projects; (2) including such features in the training dataset before model construction improves classification models significantly; and (3) different security categories are predictors for different projects. Finally, we introduce new and promising approaches to constructing models that generalise across different independent projects.
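The keyword-category feature idea described above can be sketched in a few lines. This is a minimal illustration, not the paper's model: the keyword lists and the trivial any-hit baseline classifier are assumptions made for demonstration, while the four category names come from the abstract.

```python
import re

# Hypothetical examples of the four keyword categories named in the
# abstract (asset, attack/threat, control/mitigation, implicit); the
# actual harvested keyword sets are not reproduced here.
KEYWORDS = {
    "asset":    {"password", "credential", "token", "key"},
    "attack":   {"injection", "overflow", "xss", "spoof"},
    "control":  {"sanitize", "encrypt", "authenticate", "validate"},
    "implicit": {"leak", "expose", "bypass"},
}

def featurize(message: str) -> dict:
    """Count keyword hits per security category in one project message."""
    tokens = re.findall(r"[a-z]+", message.lower())
    return {cat: sum(t in words for t in tokens)
            for cat, words in KEYWORDS.items()}

def is_security_related(message: str) -> bool:
    """Trivial baseline stand-in for the trained model: flag a message
    if any category keyword appears at all."""
    return any(featurize(message).values())

print(featurize("Fix SQL injection by sanitize of user input"))
print(is_security_related("Bump version number"))
```

In practice the per-category counts would feed a trained text classifier rather than this any-hit rule; the point is that the features come from external security sources, not from the project itself.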


2021 ◽  
Vol 13 (11) ◽  
pp. 6376
Author(s):  
Junseo Bae ◽  
Sang-Guk Yum ◽  
Ji-Myong Kim

Given their highly visible nature, transportation infrastructure construction projects are often exposed to more unexpected events than other types of construction projects. Despite the importance of predicting financial losses caused by risk, it is still difficult to determine which risk factors are generally critical and when these risks tend to occur without benchmarkable references. Most existing methods are prediction-focused and project-type-specific, and they ignore the timing aspect of risk. This study filled these knowledge gaps by developing a neural-network-driven machine-learning classification model that categorizes causes of financial losses by insurance claim payout proportion and risk occurrence timing, drawing on 625 transportation infrastructure construction projects including bridges, roads, and tunnels. The developed network model showed acceptable classification accuracy of 74.1%, 69.4%, and 71.8% on the training, cross-validation, and test sets, respectively. This study is the first of its kind to provide benchmarkable classification references for economic damage trends in transportation infrastructure projects. The proposed holistic approach will help construction practitioners proactively consider the uncertainty of project management and the potential impact of natural hazards, together with risk occurrence timing trends. This study will also assist insurance companies in developing sustainable financial management plans for transportation infrastructure projects.


Electronics ◽  
2021 ◽  
Vol 10 (14) ◽  
pp. 1714
Author(s):  
Mohamed Marey ◽  
Hala Mostafa

In this work, we propose a general framework for designing signal classification algorithms over time-selective channels in wireless communications applications. We derive an upper bound on the maximum number of observation samples over which the channel response is essentially invariant. The proposed framework divides the received signal into blocks, each shorter than this bound. These blocks are then fed into a number of classifiers in parallel, and a final decision is made through a well-designed combiner and detector. As a case study, we apply the proposed framework to a space-time block-code classification problem by developing two combiners and detectors. Monte Carlo simulations show that the proposed framework achieves excellent classification performance over time-selective channels compared to conventional algorithms.
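The block-then-combine structure of the framework can be sketched as follows. This is a toy illustration under stated assumptions: the per-block classifier (labelling by the sign of the block mean) and the majority-vote combiner are placeholders, not the paper's two combiner/detector designs.

```python
from collections import Counter

def split_into_blocks(samples, max_block_len):
    """Divide the received signal so each block is shorter than the
    bound over which the channel response is essentially invariant."""
    return [samples[i:i + max_block_len]
            for i in range(0, len(samples), max_block_len)]

def toy_block_classifier(block):
    """Stand-in per-block classifier: label a block by its mean's sign."""
    mean = sum(block) / len(block)
    return "STBC-A" if mean >= 0 else "STBC-B"

def classify(samples, max_block_len):
    """Classify each block independently (conceptually in parallel),
    then fuse the per-block decisions with a majority-vote combiner."""
    votes = [toy_block_classifier(b)
             for b in split_into_blocks(samples, max_block_len)]
    return Counter(votes).most_common(1)[0][0]

signal = [0.4, 0.2, -0.1, 0.5, 0.3, -0.2, 0.6, 0.1, 0.2]
print(classify(signal, max_block_len=3))
```

The design choice to cap the block length is what lets each classifier assume a quasi-static channel even though the overall channel is time-selective.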


Author(s):  
Ritam Guha ◽  
Manosij Ghosh ◽  
Pawan Kumar Singh ◽  
Ram Sarkar ◽  
Mita Nasipuri

In any multi-script environment, handwritten script classification is an unavoidable prerequisite before document images are fed to their respective Optical Character Recognition (OCR) engines. Over the years, researchers have addressed this complex pattern classification problem by proposing various feature vectors, mostly of large dimensionality, which increases the computational complexity of the whole classification model. Feature Selection (FS) can serve as an intermediate step that reduces the size of the feature vectors by restricting them to the essential and relevant features. In the present work, we address this issue by introducing a new FS algorithm, called Hybrid Swarm and Gravitation-based FS (HSGFS). The algorithm is applied to three feature vectors recently introduced in the literature: Distance-Hough Transform (DHT), Histogram of Oriented Gradients (HOG), and Modified log-Gabor (MLG) filter transform. Three state-of-the-art classifiers, namely Multi-Layer Perceptron (MLP), K-Nearest Neighbour (KNN), and Support Vector Machine (SVM), are used to evaluate the optimal subset of features generated by the proposed FS model. Handwritten datasets at the block, text line, and word levels, covering 12 officially recognized Indic scripts, are prepared for experimentation. By utilizing only about 75–80% of the original feature vectors, an average improvement of 2–5% in classification accuracy is achieved on all three datasets. The proposed method also performs better than some popularly used FS models. The code used to implement HSGFS is available at the following GitHub link: https://github.com/Ritam-Guha/HSGFS.
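The idea of keeping only a high-scoring fraction of features (here, roughly the 75–80% figure from the abstract) can be shown with a minimal filter-style sketch. This is not the HSGFS algorithm: the class-separation score, the toy data, and the fixed keep fraction are all assumptions for illustration.

```python
def class_separation_scores(X, y):
    """Score each feature by |mean(class 0) - mean(class 1)|, a crude
    measure of how well it separates the two classes."""
    n_features = len(X[0])
    scores = []
    for j in range(n_features):
        c0 = [row[j] for row, label in zip(X, y) if label == 0]
        c1 = [row[j] for row, label in zip(X, y) if label == 1]
        scores.append(abs(sum(c0) / len(c0) - sum(c1) / len(c1)))
    return scores

def select_features(X, y, keep_fraction=0.75):
    """Keep the highest-scoring fraction of feature indices."""
    scores = class_separation_scores(X, y)
    k = max(1, int(len(scores) * keep_fraction))
    ranked = sorted(range(len(scores)), key=lambda j: -scores[j])
    return sorted(ranked[:k])

X = [[1.0, 5.0, 0.1, 3.0],
     [1.1, 5.2, 0.2, 3.1],
     [9.0, 5.1, 0.1, 7.0],
     [9.2, 4.9, 0.2, 7.2]]
y = [0, 0, 1, 1]
print(select_features(X, y))  # → [0, 1, 3]
```

HSGFS itself is a hybrid swarm/gravitation search over feature subsets, i.e., a wrapper method guided by classifier accuracy rather than this simple per-feature score; the sketch only conveys the input/output shape of an FS step.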


2020 ◽  
Vol 2020 ◽  
pp. 1-6
Author(s):  
Jian-ye Yuan ◽  
Xin-yuan Nan ◽  
Cheng-rong Li ◽  
Le-le Sun

Given the urgency of garbage classification, a 23-layer convolutional neural network (CNN) model is designed in this paper, with an emphasis on real-time garbage classification, to address the low accuracy of garbage classification and recycling and the difficulty of manual recycling. Firstly, depthwise separable convolution is used to reduce the number of parameters (Params) of the model. Then, an attention mechanism is used to improve the accuracy of the garbage classification model. Finally, model fine-tuning is used to further improve its performance. We compared the model with classic image classification models, including AlexNet, VGG16, and ResNet18, and with lightweight classification models, including MobileNetV2 and ShuffleNetV2, and found that our model, GAF_dense, has a higher accuracy rate and fewer Params and FLOPs. To further check the performance of the model, we tested it on the CIFAR-10 dataset and found that the accuracy of GAF_dense is 0.018 and 0.03 higher than ResNet18 and ShuffleNetV2, respectively. On the ImageNet dataset, the accuracy of GAF_dense is 0.225 and 0.146 higher than ResNet18 and ShuffleNetV2, respectively. Therefore, the garbage classification model proposed in this paper is suitable for garbage classification and other classification tasks that protect the ecological environment, and it can be applied to tasks in fields such as environmental science, children's education, and environmental protection.
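The parameter savings from depthwise separable convolution mentioned above follow from simple arithmetic (bias terms omitted for clarity; the 3×3, 64→128-channel layer is an illustrative example, not a layer from the paper's network):

```python
def standard_conv_params(k, c_in, c_out):
    """Params of a standard k*k convolution: one k*k*c_in filter
    per output channel."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Depthwise separable split: one k*k filter per input channel,
    then a 1x1 pointwise convolution to mix channels."""
    depthwise = k * k * c_in
    pointwise = c_in * c_out
    return depthwise + pointwise

k, c_in, c_out = 3, 64, 128
std = standard_conv_params(k, c_in, c_out)
sep = depthwise_separable_params(k, c_in, c_out)
print(std, sep, round(std / sep, 1))  # → 73728 8768 8.4
```

For a 3×3 kernel the separable form needs roughly 8–9× fewer parameters, which is why it is a common first step when shrinking a CNN for real-time use.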


2014 ◽  
Vol 631-632 ◽  
pp. 49-52
Author(s):  
Yan Li ◽  
Jia Jia Hou ◽  
Xiao Qing Liu

The variable precision rough set (VPRS) model based on a dominance relation extends the traditional rough set model and can handle preference-ordered information flexibly. This paper focuses on the maintenance of approximations in dominance-based VPRS when the objects in an information system vary over time. Incremental updating principles are given for inserting or deleting an object, and experimental evaluations validate the effectiveness of the proposed method.
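One common formulation of a dominance-based VPRS lower approximation, and the locality that makes incremental updating attractive, can be sketched as follows. This is an illustrative reading, not the paper's exact definitions or update rules: objects are tuples of criterion values, and the precision threshold beta and the toy universe are assumptions.

```python
def dominates(a, b):
    """a dominates b if a is at least as good on every criterion."""
    return all(x >= y for x, y in zip(a, b))

def dominating_set(x, universe):
    """All objects that dominate x (including x itself)."""
    return {y for y in universe if dominates(y, x)}

def vprs_lower(universe, good, beta):
    """Objects whose dominating set lies in the 'good' upward union
    with precision at least beta (beta = 1 recovers the crisp case)."""
    lower = set()
    for x in universe:
        dom = dominating_set(x, universe)
        if len(dom & good) / len(dom) >= beta:
            lower.add(x)
    return lower

def affected_by_insert(universe, new_obj):
    """Only objects the new object dominates gain it in their
    dominating sets, so only their membership needs re-checking;
    this locality is what an incremental update exploits."""
    return {y for y in universe if dominates(new_obj, y)}

universe = {(3, 2), (2, 2), (1, 1)}
good = {(3, 2), (2, 2)}          # upward union of preferred classes
print(sorted(vprs_lower(universe, good, beta=0.8)))  # → [(2, 2), (3, 2)]
```

Recomputing approximations from scratch on every change costs a full pass over all dominating sets; restricting attention to the comparable objects is the basic saving behind incremental maintenance.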


2021 ◽  
Vol 1 (2) ◽  
Author(s):  
Siti Nurjanah Ramadhany ◽  
Ade Eviyanti

Technology grows ever more sophisticated, and over time competition in business domains such as e-commerce has had a positive impact on entrepreneurs, who advance their companies by creating online websites that make offering and selling products to the public easy. By accessing the website of PT. Daya Berkah Sentosa Nusantara, buyers do not have to come directly to the company and can send offers according to their needs. The purpose of this study is to help solve the supply and sales problems that arise in the company. The method used in this study is the Waterfall method, with data collected through observation, interviews, and literature study. The intended result of this research is a website that allows the company to expand its marketing reach and lets buyers view products online.


Author(s):  
Mikko T. Siponen

The question of whether ethical theories appealing to human morality can serve as a means of protection against information system security breaches has been recognized by several authors. Existing views concerning the role of ethics in information systems security fall into two categories: (1) expressions about the use of human morality and (2) arguments claiming that the use of ethics is useless or, at best, extremely restricted. However, the former views are general statements lacking concrete guidance, while the latter viewpoint is based on cultural relativism and can thus be classified as descriptivism. This paper claims that the use of ethical theories and human morality is useful for security, particularly given that Hare's Overriding thesis has validity, though it has its limitations, too. This paper further argues that descriptivism (including the doctrine of cultural relativism) leads to several problems and contradictions and has detrimental effects on our well-being (and security). Therefore, an alternative approach to using ethics to minimize security breaches, based on non-descriptive theories, is proposed. The use of non-descriptivism is demonstrated using Rawls' concept of the "veil of ignorance." The limitations of non-descriptivism, and of appealing to human morality in a general sense, are also discussed. Finally, suggestions for future research directions are outlined.


Sensors ◽  
2019 ◽  
Vol 19 (7) ◽  
pp. 1664 ◽  
Author(s):  
Haiping Huang ◽  
Linkang Hu ◽  
Fu Xiao ◽  
Anming Du ◽  
Ning Ye ◽  
...  

With the continuous increase in security risks and the limitations of traditional approaches, it is necessary to design a universal and trustworthy identity authentication system for intelligent Internet of Things (IoT) applications such as intelligent entrance guards. EEG (electroencephalography) signals have gained the confidence of researchers due to their uniqueness, stability, and universality. However, the limited usability of the experimental paradigm and unsatisfactory classification accuracy have so far prevented EEG-based identity authentication systems from becoming commonplace in IoT scenarios. To address these problems, an audiovisual presentation paradigm is proposed for recording the EEG signals of subjects. In the pre-processing stage, reference-electrode, ensemble-averaging, and independent component analysis methods are used to remove artifacts. In the feature extraction stage, adaptive feature selection and bagging ensemble learning algorithms are used to establish the optimal classification model. The experimental results show that our proposal achieves the best classification accuracy compared with other paradigms and typical EEG-based authentication methods, and a test evaluation on a login scenario further demonstrates that the proposed system is feasible, effective, and reliable.
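The ensemble-averaging step used in the pre-processing stage has a simple core: averaging several time-locked trials attenuates zero-mean noise while preserving the stimulus-locked response. The numbers below are synthetic, not real EEG data.

```python
def ensemble_average(trials):
    """Average several equally long, time-locked EEG trials sample by
    sample; independent zero-mean noise shrinks as trials are added."""
    n = len(trials)
    return [sum(samples) / n for samples in zip(*trials)]

# Three synthetic trials = the same underlying response plus noise.
trials = [
    [1.0, 2.2, 0.9],
    [1.2, 1.8, 1.1],
    [0.8, 2.0, 1.0],
]
print(ensemble_average(trials))  # close to the underlying [1, 2, 1]
```

With N independent trials the noise standard deviation drops by a factor of sqrt(N), which is why paradigms that allow many repeated presentations yield cleaner features for the later classification stage.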

