Evaluation Review on Effectiveness and Security Performances of Text Steganography Technique

Author(s):  
Roshidi Din ◽  
Sunariya Utama ◽  
Aida Mustapha

Steganography is a branch of information hiding that conceals a hidden message so that it cannot be detected by human vision. This paper focuses on steganography in the text domain, namely text steganography, whose techniques fall into two groups: word-rule-based and feature-based. The paper analyses these two categories against effectiveness and security criteria: effectiveness is critical for judging whether a technique attains the required quality, while security reflects how well the technique protects the hidden message. The main goal of this paper is to review the effectiveness and security evaluations of text steganography developed in previous research efforts. It is anticipated that this review will characterise the performance of text steganography techniques under both measures.
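As context for the feature-based family mentioned in this abstract, the sketch below illustrates one common feature-based idea: hiding message bits in zero-width Unicode characters appended to a cover sentence. It is a minimal example for orientation only, not one of the specific techniques evaluated in the reviewed work; the `embed` and `extract` helper names are hypothetical.

```python
# Illustrative feature-based text steganography: message bits are carried by
# invisible zero-width characters appended to the cover text.
ZERO_WIDTH_0 = "\u200b"  # zero-width space encodes bit 0
ZERO_WIDTH_1 = "\u200c"  # zero-width non-joiner encodes bit 1

def embed(cover_text: str, message: str) -> str:
    """Append the message as invisible zero-width characters to the cover text."""
    bits = "".join(format(byte, "08b") for byte in message.encode("utf-8"))
    hidden = "".join(ZERO_WIDTH_1 if b == "1" else ZERO_WIDTH_0 for b in bits)
    return cover_text + hidden

def extract(stego_text: str) -> str:
    """Recover the hidden message from the zero-width characters."""
    bits = "".join(
        "1" if ch == ZERO_WIDTH_1 else "0"
        for ch in stego_text
        if ch in (ZERO_WIDTH_0, ZERO_WIDTH_1)
    )
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")

stego = embed("An ordinary looking sentence.", "secret")
assert extract(stego) == "secret"
```

The stego text renders identically to the cover text in most viewers, which is why such feature-based carriers are attractive despite their fragility to text normalisation.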

2020 ◽  
Vol 9 (2) ◽  
pp. 764-770
Author(s):  
Farah Qasim Ahmed Alyousuf ◽  
Roshidi Din

This paper presents several techniques used in text steganography, grouped into feature-based and word-rule-based approaches, and analyses their performance and the evaluation metrics applied to them. The aim is to identify these two main families of text steganography techniques and to recognise the various methods used within each. The analysis finds that the feature-based technique is the most widely used in text steganography owing to its simplicity and security, while the most common evaluation metrics are security, capacity, robustness, and embedding time. Future efforts are suggested to focus on the specific methods used in text steganography.
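Of the metrics listed above, capacity and embedding time are the most mechanical to measure. The sketch below computes both for an arbitrary embedding function; the helper names and the bits-per-character definition of capacity are illustrative assumptions, not the exact definitions used in the surveyed papers.

```python
# Illustrative measurement of two common text-steganography metrics:
# capacity (hidden bits per cover character) and embedding time.
import time

def capacity_bits_per_char(cover_text: str, message: str) -> float:
    """Hidden payload size in bits divided by cover text length in characters."""
    hidden_bits = len(message.encode("utf-8")) * 8
    return hidden_bits / max(len(cover_text), 1)

def embedding_time_seconds(embed_fn, cover_text: str, message: str) -> float:
    """Wall-clock time taken by one call to embed_fn(cover_text, message)."""
    start = time.perf_counter()
    embed_fn(cover_text, message)
    return time.perf_counter() - start
```

Security and robustness, by contrast, usually require attack simulations or statistical detectability tests rather than a single formula.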


2021 ◽  
Vol 297 ◽  
pp. 01072
Author(s):  
Rajae Bensoltane ◽  
Taher Zaki

Aspect category detection (ACD) is a task of aspect-based sentiment analysis (ABSA) that aims to identify the category discussed in a given review or sentence, drawn from a predefined list of categories. ABSA tasks have been widely studied in English; however, studies in other low-resource languages such as Arabic are still limited. Moreover, most of the existing Arabic ABSA work relies on rule-based or feature-based machine learning models, which require tedious feature engineering and external resources such as lexicons. Therefore, the aim of this paper is to overcome these shortcomings by handling the ACD task with a deep learning method based on a bidirectional gated recurrent unit (BiGRU) model. Additionally, we examine the impact of different vector representation models on the performance of the proposed model. The experimental results show that our model significantly outperforms the baseline and related-work models, improving the F1-score by more than 7%.
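For readers unfamiliar with the architecture named above, the following is a minimal PyTorch sketch of a bidirectional GRU sentence classifier. It assumes token IDs from some tokenizer and randomly initialised embeddings; it is not the authors' exact model, vocabulary, or hyperparameters.

```python
# Minimal bidirectional GRU classifier for aspect category detection.
import torch
import torch.nn as nn

class BiGRUCategoryClassifier(nn.Module):
    def __init__(self, vocab_size: int, embed_dim: int = 300,
                 hidden_dim: int = 128, num_categories: int = 12):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.bigru = nn.GRU(embed_dim, hidden_dim,
                            batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_categories)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        embedded = self.embedding(token_ids)      # (batch, seq_len, embed_dim)
        _, hidden = self.bigru(embedded)          # hidden: (2, batch, hidden_dim)
        # Concatenate the final forward and backward hidden states.
        sentence_repr = torch.cat([hidden[0], hidden[1]], dim=-1)
        return self.classifier(sentence_repr)     # category logits

# Example: a batch of two padded sentences of length 10.
logits = BiGRUCategoryClassifier(vocab_size=20000)(torch.randint(1, 20000, (2, 10)))
```

Swapping the embedding layer for pre-trained word vectors is the usual way to test the effect of different vector representations, as the abstract discusses.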


Fast track article for IS&T International Symposium on Electronic Imaging 2021: Human Vision and Electronic Imaging proceedings.


2020 ◽  
pp. 1-28
Author(s):  
Tirthankar Ghosal ◽  
Vignesh Edithal ◽  
Asif Ekbal ◽  
Pushpak Bhattacharyya ◽  
Srinivasa Satya Sameer Kumar Chivukula ◽  
...  

Abstract Detecting whether a document contains enough new information to be deemed novel is of immense significance in this age of data duplication. Existing techniques for document-level novelty detection mostly operate at the lexical level and are unable to address semantic-level redundancy. They usually rely on handcrafted features extracted from the documents in a rule-based or traditional feature-based machine learning setup. Here, we present an effective approach based on a neural attention mechanism that detects document-level novelty without any manual feature engineering. We contend that simple alignment of text between the source and target document(s) can identify the novelty of a target document. Our deep neural architecture elicits inference knowledge from a large-scale natural language inference dataset, which proves crucial to the novelty detection task. Our approach is effective and outperforms standard baselines and recent work on document-level novelty detection by a margin of roughly 3% in accuracy.
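The inference knowledge described above can be approximated with an off-the-shelf natural language inference model. The sketch below is not the authors' architecture: it uses the Hugging Face transformers library with the publicly available roberta-large-mnli checkpoint (an assumption; any MNLI-trained model would serve) to estimate how strongly a source passage entails a target sentence, treating high entailment as low novelty.

```python
# Hedged sketch: scoring redundancy of a target sentence against a source
# passage with a pre-trained NLI model; higher entailment = less novel.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "roberta-large-mnli"  # assumption: any MNLI-trained model works
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)

def entailment_probability(source: str, target: str) -> float:
    """Probability that `source` entails `target`."""
    inputs = tokenizer(source, target, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits, dim=-1)[0]
    label2id = {k.lower(): v for k, v in model.config.label2id.items()}
    return probs[label2id["entailment"]].item()

# A target strongly entailed by the source carries little new information.
score = entailment_probability(
    "The company reported record profits in 2019.",
    "Profits reached an all-time high in 2019.",
)
```

A document-level system would aggregate such sentence-pair scores over all source/target alignments before deciding whether the target is novel.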


2010 ◽  
Vol 2 (2) ◽  
pp. 68-77 ◽  
Author(s):  
Xinbo Gao ◽  
Cheng Deng ◽  
Xuelong Li ◽  
Dacheng Tao

Author(s):  
Muhammad Qasim Memon ◽  
Haiyang Yu ◽  
Khurram Gulzar Rana ◽  
Muhammad Azeem ◽  
Cai Yongquan ◽  
...  

1996 ◽  
Vol 32 (2) ◽  
pp. 239-289 ◽  
Author(s):  
Akinbiyi Akinlabi

Underlying free (floating) features occur crosslinguistically. These features sometimes function as morphemes. Such features, like segmental morphemes, often refer to specific edges of the stem; hence they are 'featural affixes'. They become associated with the base in order to be prosodically licensed. We propose to account for the association of such features through a family of alignment constraints called 'featural alignment', a featural version of McCarthy & Prince's Align (MCat, MCat). Under featural alignment, an edge is defined for a feature based on a possible licensor, which may be a root node or a mora. We argue that misalignment takes place under pressure from feature co-occurrence constraints. Thus a featural suffix may be realized elsewhere in the stem, surfacing as a featural infix or even as a featural prefix. This constraint-based approach is preferred to rule-based approaches since it does not require the variety of additional assumptions that rule-based approaches need in order to account for the same phenomenon, including structure preservation, prespecification, extratonality and filters.


2020 ◽  
Author(s):  
Huiping Shi ◽  
Xiaobing Zhou

Abstract Background: Identifying the causes of disease is one of the central concerns of biomedical research, and extracting relational information from large volumes of biomedical text has important applications in this area. At present, most biomedical work relies on manual screening or on rule-based or feature-based pipeline models to obtain the relevant features. These methods require considerable time to design task-specific rules or features, and features that do not comply with the designed rules cannot be filtered out reliably. Results: The model achieves micro-F1 scores of 0.802 and 0.876 on the ChemProt and DDI data sets, respectively. The resources used in this project are available at https://github.com/HunterHeidy/DDICPI-. Conclusions: The experiments show that, without using BERT itself, good results can be obtained by learning from BERT's core ideas.
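As a rough illustration of "learning from BERT's core ideas without BERT", the sketch below builds a small relation classifier around multi-head self-attention trained from scratch. The architecture, dimensions, and relation count are assumptions for illustration and do not reproduce the model evaluated above.

```python
# Hedged sketch: a lightweight relation classifier that borrows BERT's core
# idea (multi-head self-attention) without a pre-trained BERT checkpoint.
import torch
import torch.nn as nn

class AttentionRelationClassifier(nn.Module):
    def __init__(self, vocab_size: int, dim: int = 256, heads: int = 4,
                 num_relations: int = 5):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, dim, padding_idx=0)
        encoder_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                                   batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.classifier = nn.Linear(dim, num_relations)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        hidden = self.encoder(self.embedding(token_ids))  # (batch, seq, dim)
        pooled = hidden.mean(dim=1)                        # average pooling
        return self.classifier(pooled)                     # relation logits

# Example: classify the relation expressed in two padded sentences of length 40.
logits = AttentionRelationClassifier(vocab_size=30000)(torch.randint(1, 30000, (2, 40)))
```

In practice, biomedical relation extraction also marks the two candidate entities in the input (for example with special tokens) so that the classifier knows which pair the relation label refers to.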

