Analysis review on feature-based and word-rule based techniques in text steganography

2020 ◽  
Vol 9 (2) ◽  
pp. 764-770
Author(s):  
Farah Qasim Ahmed Alyousuf ◽  
Roshidi Din

This paper presents several techniques used in text steganography in terms of feature-based and word-rule based approaches. Additionally, it analyses the performance and the metric evaluation of the techniques used in text steganography. This paper aims to identify the main techniques of text steganography, feature-based and word-rule based, and to recognize the various techniques used with them. The analysis shows that the primary technique used in text steganography was the feature-based technique, owing to its simplicity and security. Meanwhile, the common parameter metrics utilized in text steganography were security, capacity, robustness, and embedding time. Future efforts are suggested to focus on the methods used in text steganography.

Author(s):  
Roshidi Din ◽  
Sunariya Utama ◽  
Aida Mustapha

Steganography is one of the categories of information hiding; it is implemented to conceal a hidden message so that it cannot be recognized by human vision. This paper focuses on steganography in the text domain, namely text steganography. Text steganography consists of two groups of techniques: word-rule based and feature-based. This paper analyses these two categories of text steganography in terms of effectiveness and security, because effectiveness is critical for determining whether a technique has the appropriate quality, while security reflects how well the technique protects the hidden message. The main goal of this paper is to review the evaluation of text steganography in terms of effectiveness and security as developed by previous research efforts. It is anticipated that this paper will identify the performance of text steganography based on effectiveness and security measurement.
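To make the feature-based category concrete, here is a minimal sketch of one textbook feature-based technique: embedding payload bits as invisible zero-width Unicode characters appended to words, so the visible cover text is unchanged. This is an illustrative example only, not a method from either surveyed paper; the character choices and one-bit-per-word capacity are assumptions of the sketch.

```python
# Feature-based text steganography sketch: hide bits as zero-width
# characters after successive words of the cover text.
ZW0 = "\u200b"  # zero-width space      -> encodes bit 0
ZW1 = "\u200c"  # zero-width non-joiner -> encodes bit 1

def embed(cover: str, bits: str) -> str:
    """Append one invisible character per payload bit after each word."""
    words = cover.split(" ")
    if len(bits) > len(words):
        raise ValueError("cover text too short for payload")
    out = []
    for i, w in enumerate(words):
        if i < len(bits):
            w += ZW0 if bits[i] == "0" else ZW1
        out.append(w)
    return " ".join(out)

def extract(stego: str) -> str:
    """Recover the bit string from the invisible characters."""
    return "".join("0" if c == ZW0 else "1" for c in stego if c in (ZW0, ZW1))
```

The sketch also illustrates the metrics discussed above: capacity is one bit per word, and security rests on the marks being invisible to human vision, though a byte-level scan defeats it.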


2021 ◽  
Vol 297 ◽  
pp. 01072
Author(s):  
Rajae Bensoltane ◽  
Taher Zaki

Aspect category detection (ACD) is a task of aspect-based sentiment analysis (ABSA) that aims to identify the discussed category in a given review or sentence from a predefined list of categories. ABSA tasks have been widely studied in English; however, studies in other low-resource languages such as Arabic are still limited. Moreover, most of the existing Arabic ABSA work is based on rule-based or feature-based machine learning models, which require the tedious task of feature engineering and the use of external resources such as lexicons. Therefore, the aim of this paper is to overcome these shortcomings by handling the ACD task using a deep learning method based on a bidirectional gated recurrent unit model. Additionally, we examine the impact of using different vector representation models on the performance of the proposed model. The experimental results show that our model significantly outperforms the baseline and related-work models, improving the F1-score by more than 7%.


2020 ◽  
pp. 1-28
Author(s):  
Tirthankar Ghosal ◽  
Vignesh Edithal ◽  
Asif Ekbal ◽  
Pushpak Bhattacharyya ◽  
Srinivasa Satya Sameer Kumar Chivukula ◽  
...  

Abstract Detecting whether a document contains sufficient new information to be deemed novel is of immense significance in this age of data duplication. Existing techniques for document-level novelty detection mostly operate at the lexical level and are unable to address semantic-level redundancy. These techniques usually rely on handcrafted features extracted from the documents in a rule-based or traditional feature-based machine learning setup. Here, we present an effective approach based on a neural attention mechanism to detect document-level novelty without any manual feature engineering. We contend that the simple alignment of texts between the source and target document(s) can identify the state of novelty of a target document. Our deep neural architecture elicits inference knowledge from a large-scale natural language inference dataset, which proves crucial to the novelty detection task. Our approach is effective and outperforms the standard baselines and recent work on document-level novelty detection by a margin of ~3% in terms of accuracy.
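The alignment intuition, a target document is redundant when each of its sentences aligns closely with something in the source documents, can be sketched with a crude stand-in for the paper's learned attention. This sketch uses bag-of-words cosine similarity instead of neural alignment; the scoring and the example sentences are illustrative assumptions, not the authors' method.

```python
# Alignment-based novelty scoring sketch: a target is novel to the
# extent its sentences find no close match in the source document(s).
from collections import Counter
import math

def vec(sentence):
    return Counter(sentence.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def novelty_score(target_sents, source_sents):
    """1 - mean best alignment: high when target sentences have no
    close counterpart anywhere in the sources."""
    svecs = [vec(s) for s in source_sents]
    best = [max(cosine(vec(t), s) for s in svecs) for t in target_sents]
    return 1.0 - sum(best) / len(best)

source = ["the central bank raised interest rates today",
          "inflation remains above the official target"]
redundant = ["the central bank raised interest rates today"]
novel = ["a new vaccine trial reported strong results"]
```

A lexical aligner like this is exactly what the paper argues is insufficient: it misses paraphrases, which is why the authors replace the similarity function with attention trained on natural language inference data.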


Legal Theory ◽  
2005 ◽  
Vol 11 (1) ◽  
pp. 1-26 ◽  
Author(s):  
Grant Lamond

The doctrine of precedent is one of the most distinctive features of the modern common law. Understanding the operation of precedent is important for our theorizing about the nature of law, since any adequate theory must be compatible with the practice. In this paper I will explore the conventional view of precedent endorsed by practitioners and many legal philosophers alike. I will argue that for all its attractions, it provides a distorted view of the nature of precedent. The distortion grows out of the basic assumption that precedents create rules, and thus that the common law can be understood as a form of rule-based decision-making. Instead, the common law is a form of case-by-case decision-making, and the doctrine of precedent constrains this decision-making by requiring later courts to treat earlier cases as correctly decided. The relevance of earlier cases is not well understood in terms of rules—they are better understood as a special type of reason.


2012 ◽  
Vol 459 ◽  
pp. 518-522
Author(s):  
Min Ma

A significant portion of Chinese characters are phonograms, whose phonetic part can be used to infer the overall sound. Phonetic degree is an inherent problem in this inference, because a low phonetic degree implies little phonetic dependence between the phonogram and its phonetic components. Solving the phonetic degree problem requires associating each phonogram with its acoustic features. This paper introduces acoustic feature-based clustering, a classifying model that divides the common phonograms by defining a new similarity measure over their sounds. This allows phonetic degree to be evaluated more reasonably. We demonstrate that the clustering outperforms traditional empirical estimation by being more accurate and expressive. Acoustic feature-based clustering yields a phonetic degree of 48.6%, lower than the empirical claim of around 75%. As a clustering classifier, our model is competitive, with a much clearer boundary on the phonogram dataset.
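The measurement idea can be sketched as follows: pair each phonogram's pronunciation with that of its phonetic component as acoustic feature vectors, and count the fraction of pairs whose sounds cluster together (fall within a similarity threshold). The feature vectors and threshold below are invented for illustration; the paper's actual acoustic features and clustering procedure are not reproduced here.

```python
# Phonetic-degree estimation sketch over (phonogram, phonetic component)
# acoustic feature pairs, e.g. crude [onset, rime, tone] encodings.

def dist(a, b):
    """Euclidean distance between two acoustic feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

pairs = [
    ((3.0, 1.0, 2.0), (3.0, 1.0, 2.0)),  # identical reading  -> phonetic
    ((3.0, 1.0, 4.0), (3.0, 1.0, 2.0)),  # only tone differs  -> phonetic
    ((7.0, 5.0, 1.0), (2.0, 1.0, 3.0)),  # unrelated reading  -> not
    ((1.0, 6.0, 2.0), (5.0, 2.0, 2.0)),  # unrelated reading  -> not
]

def phonetic_degree(pairs, threshold=2.5):
    """Fraction of phonograms whose sound clusters with (is acoustically
    close to) the sound of their phonetic component."""
    close = sum(1 for p, c in pairs if dist(p, c) <= threshold)
    return close / len(pairs)
```

On this toy data the degree comes out at 50%, in the spirit of the paper's 48.6% estimate being well below the 75% empirical claim, though the numbers here are of course fabricated for the example.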


2013 ◽  
Vol 75 (9) ◽  
pp. 664-669 ◽  
Author(s):  
Scott Woody ◽  
Ed Himelblau

We present a collection of analogies that are intended to help students better understand the foreign and often nuanced vocabulary of the genetics curriculum. Why is it called the “wild type”? What is the difference between a locus, a gene, and an allele? What is the functional (versus a rule-based) distinction between dominant and recessive alleles? It is our hope that by using these analogies, teachers at all levels of the K–16 curriculum can appeal to the common experience and common sense of their students, to lay a solid foundation for mastery of genetics and, thereby, to enhance understanding of evolutionary principles.


1996 ◽  
Vol 32 (2) ◽  
pp. 239-289 ◽  
Author(s):  
Akinbiyi Akinlabi

Underlying free (floating) features occur crosslinguistically. These features sometimes function as morphemes. Such features, like segmental morphemes, often refer to specific edges of the stem; hence they are ‘featural affixes’. They become associated with the base in order to be prosodically licensed. We propose to account for the association of such features through a family of alignment constraints called ‘featural alignment’, a featural version of McCarthy & Prince's Align (MCat, MCat). Under featural alignment, an edge is defined for a feature based on a possible licensor, which may be a root node or a mora. We argue that misalignment takes place under pressure from feature co-occurrence constraints. Thus a featural suffix may be realized elsewhere in the stem, surfacing as a featural infix or even as a featural prefix. This constraint-based approach is preferred to rule-based approaches since it does not require the variety of additional assumptions that rule-based approaches need to account for the same phenomenon, including structure preservation, prespecification, extratonality, and filters.


2021 ◽  
Vol 8 (5) ◽  
pp. 805-812
Author(s):  
Mohammed Imran Basheer Ahmed ◽  
Atta-ur Rahman ◽  
Mehwash Farooqui ◽  
Fatimah Alamoudi ◽  
Raghad Baageel ◽  
...  

This research addresses the problem of COVID-19, which has turned into a global pandemic. Despite the development of several successful vaccines, the pandemic has not yet been overcome. Several studies have been proposed in the literature in this regard; the present study is unique in its dynamic nature, adapting the rules through a reconfigurable fuzzy membership function. Based on a patient's symptoms (fever, dry cough, etc.) and history related to travel, diseases/medications, and interactions with confirmed patients, the proposed dynamic fuzzy rule-based system (FRBS) identifies the presence or absence of the disease. This can greatly help healthcare professionals as well as laymen with disease identification. The main motivation of this paper is to reduce the pressure on health services caused by frequent test requests, since patients can perform the assessment at any time without making a reservation. The main finding is that there is a relationship between the disease and its symptoms, in which some symptoms, such as severe difficulty breathing, cough, and sore throat, indicate the probability of the presence of the disease. Using the common symptoms, we developed membership functions and generated a model to distinguish between infected and non-infected people with the help of collected survey data. The model gave an accuracy of 88.78%, precision of 72.22%, sensitivity of 68.42%, specificity of 93.67%, and an F1-score of 69.28%.
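The mechanics of such a fuzzy rule-based system can be sketched with triangular membership functions and Mamdani-style inference (AND as min, rule aggregation as max). The membership breakpoints, the 0-10 symptom scale, and the two rules below are invented for illustration; they are not the paper's tuned membership functions or rule base.

```python
# Mamdani-style fuzzy rule sketch for symptom-based screening.

def tri(x, a, b, c):
    """Triangular membership function: 0 outside (a, c), peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def infer(fever, cough, breathing):
    """Two toy rules on 0-10 symptom severities:
    R1: high fever AND strong cough  -> likely infected
    R2: severe breathing difficulty  -> likely infected
    AND = min, aggregation across rules = max."""
    high_fever = tri(fever, 4, 10, 16)      # right shoulder peaks at 10
    strong_cough = tri(cough, 3, 8, 13)
    severe_breath = tri(breathing, 5, 10, 15)
    r1 = min(high_fever, strong_cough)
    r2 = severe_breath
    return max(r1, r2)  # degree of "likely infected" in [0, 1]
```

The "dynamic" aspect described in the abstract would correspond to reconfiguring the (a, b, c) breakpoints of each membership function as new symptom data arrives, rather than keeping them fixed as in this sketch.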


ARHE ◽  
2021 ◽  
Vol 27 (34) ◽  
pp. 61-83
Author(s):  
KATARINA MAKSIMOVIĆ

The goal of this paper is to introduce the reader to the distinction between intensional and extensional as a distinction between different approaches to meaning. We will argue that, despite the common belief, intensional aspects of mathematical notions can be, and in fact have been, successfully described in mathematics. One notion particularly interesting for us is that of deduction as depicted in general proof theory. Our considerations result in defending a) the importance of a rule-based semantical approach and b) the position according to which non-reductive and somewhat circular explanations play an essential role in describing intensionality in mathematics.

