bounding model
Recently Published Documents

TOTAL DOCUMENTS: 6 (last five years: 4)
H-INDEX: 2 (last five years: 1)

2021, pp. 1-20
Author(s): V. Srilakshmi, K. Anuradha, C. Shoba Bindu

Incremental learning is an effective text categorization method for learning from large-scale and continuously accumulating data. The major challenge in incremental learning is improving accuracy, as text documents consist of numerous terms. In this research, an incremental text categorization method is developed using the proposed Spider Grasshopper Crow Optimization Algorithm-based Deep Belief Network (SGrC-based DBN) to provide optimal text categorization results. The proposed method comprises five stages: pre-processing, feature extraction, feature selection, text categorization and incremental learning. Initially, the database is pre-processed and fed into a vector space model for feature extraction. Once the features are extracted, feature selection is carried out based on mutual information. Text categorization is then performed using the proposed SGrC-based DBN, which is developed by integrating spider monkey optimization (SMO) with the Grasshopper Crow Optimization Algorithm (GCOA). Finally, incremental text categorization is performed based on a hybrid weight-bounding model that combines SGrC with range degree; in particular, the optimal weights of the range-degree model are selected using SGrC. The proposed method is evaluated on data from the Reuters and 20 Newsgroups databases, and the comparative analysis is based on the performance metrics precision, recall and accuracy. Compared with existing incremental text categorization methods, the proposed SGrC algorithm obtained a maximum accuracy of 0.9626, a maximum precision of 0.9681 and a maximum recall of 0.9600.
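To make the pipeline stages concrete, here is a minimal sketch assuming scikit-learn and the 20 Newsgroups corpus named in the abstract. The paper's SGrC-optimized DBN and its weight-bounding update are not reproduced; a plain multilayer perceptron stands in as a hypothetical substitute, so the printed scores are illustrative only.

```python
# Sketch of the abstract's first four stages: pre-processing + vector space
# model, mutual-information feature selection, then categorization.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import precision_score, recall_score, accuracy_score

cats = ["sci.space", "rec.autos"]            # small subset keeps the demo fast
train = fetch_20newsgroups(subset="train", categories=cats)
test = fetch_20newsgroups(subset="test", categories=cats)

# Stages 1-2: pre-process and map documents into a vector space model.
vsm = TfidfVectorizer(stop_words="english", max_features=5000)
X_train = vsm.fit_transform(train.data)
X_test = vsm.transform(test.data)

# Stage 3: keep the features sharing the most mutual information with labels.
selector = SelectKBest(mutual_info_classif, k=500)
X_train = selector.fit_transform(X_train, train.target)
X_test = selector.transform(X_test)

# Stage 4: categorize (MLP as a stand-in for the SGrC-optimized DBN).
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
clf.fit(X_train.toarray(), train.target)
pred = clf.predict(X_test.toarray())

print("precision:", precision_score(test.target, pred))
print("recall:   ", recall_score(test.target, pred))
print("accuracy: ", accuracy_score(test.target, pred))
# Stage 5 (incremental learning) is omitted; a crude analogue would feed new
# batches through clf.partial_fit, without the paper's weight-bounding model.
```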


2021
Author(s): Wen-Yang Lin, Jie-Teng Wang

BACKGROUND: Spontaneous reporting systems (SRS) are increasingly established to collect adverse drug events and foster research on ADR detection and analysis. SRS data contain personal information, so their publication requires anonymization to prevent the disclosure of individual privacy. We previously proposed a privacy model called MS(k, θ*)-bounding and the associated MS-Anonymization algorithm to anonymize SRS data. In the real world, SRS data are usually released periodically, e.g., FAERS, to accommodate newly collected adverse drug events. Different anonymized releases of SRS data available to an attacker may thwart our single-release method, MS(k, θ*)-bounding.

OBJECTIVE: We investigate the privacy threat posed by periodic releases of SRS data and propose anonymization methods that prevent the disclosure of personal privacy information while maintaining the utility of the published data.

METHODS: We identify potential attacks on periodic releases of SRS data, namely BFL-attacks, which are mainly caused by follow-up cases. We present a new privacy model called PPMS(k, θ*)-bounding and propose the associated PPMS-Anonymization algorithm along with two improvements, PPMS+-Anonymization and PPMS++-Anonymization. Empirical evaluations were performed using 32 selected quarterly FAERS datasets, from 2004Q1 to 2011Q4. The three versions of PPMS-Anonymization were compared against MS-Anonymization on several aspects: data distortion, measured by Normalized Information Loss (NIS); privacy risk of the anonymized data, measured by Dangerous Identity Ratio (DIR) and Dangerous Sensitivity Ratio (DSR); and data utility, measured by the bias of signal counts and strengths (PRR).

RESULTS: The results show that our new method can prevent privacy disclosure for periodic releases of SRS data with a reasonable sacrifice of data utility and an acceptable deviation in the strength of ADR signals. The best version, PPMS++-Anonymization, achieves nearly the same quality as MS-Anonymization in both privacy protection and data utility.

CONCLUSIONS: The proposed PPMS(k, θ*)-bounding model and PPMS-Anonymization algorithm effectively anonymize SRS datasets in the periodic data publishing scenario, preventing the disclosure of personal sensitive information across a series of releases caused by BFL-attacks while maintaining data utility for ADR signal detection.
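To make the utility measure concrete: PRR (proportional reporting ratio) is a standard disproportionality statistic over a 2×2 contingency table, and the "bias of signal strength" above compares PRR before and after anonymization. A minimal sketch with made-up counts (not FAERS data):

```python
# PRR signal strength for a drug-event pair; an anonymized release preserves
# utility if its PRR values stay close to those of the original data.

def prr(a: int, b: int, c: int, d: int) -> float:
    """PRR = [a / (a + b)] / [c / (c + d)]
    a: reports with the target drug AND the target event
    b: reports with the target drug, without the event
    c: reports with other drugs AND the target event
    d: reports with other drugs, without the event
    """
    return (a / (a + b)) / (c / (c + d))

original = prr(a=20, b=380, c=80, d=9520)     # hypothetical counts
anonymized = prr(a=18, b=382, c=82, d=9518)   # counts shift after anonymization
bias = abs(anonymized - original) / original  # relative deviation in strength
print(f"PRR before: {original:.3f}, after: {anonymized:.3f}, bias: {bias:.1%}")
```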


2020, Vol. 16 (3), pp. 347-368
Author(s): V. Srilakshmi, K. Anuradha, C. Shoba Bindu

Purpose: This paper aims to model a technique that categorizes texts from huge documents. The progression of internet technologies has raised the count of accessible documents, so the documents available online have become countless. Text documents comprise research articles, journal papers, newspapers, technical reports and blogs. These large documents are useful and valuable for real-time applications, and they are used in several retrieval methods. Text classification plays a vital role in information retrieval technologies and is considered an active field for processing massive applications. The aim of text classification is to categorize large-sized documents into different categories on the basis of their contents. Numerous text-related tasks, such as user profiling, sentiment analysis and spam identification, are treated as supervised learning problems and addressed with a text classifier.

Design/methodology/approach: At first, the input documents are pre-processed using stop-word removal and stemming, so that the input is made effective and suitable for feature extraction. In the feature extraction process, features are extracted using the vector space model (VSM); feature selection then retains the most relevant features for text categorization. Once the features are selected, text categorization proceeds using a deep belief network (DBN). The DBN is trained with the proposed grasshopper crow optimization algorithm (GCOA), an integration of the grasshopper optimization algorithm (GOA) and the crow search algorithm (CSA). Moreover, a hybrid weight-bounding model is devised using the proposed GCOA and range degree. Thus, the proposed GCOA + DBN is used to classify the text documents; a reference sketch of the CSA component follows this abstract.

Findings: The performance of the proposed technique is evaluated using accuracy, precision and recall, and compared with existing techniques such as naive Bayes, k-nearest neighbors, support vector machine, deep convolutional neural network (DCNN) and Stochastic Gradient-CAViaR + DCNN. The proposed GCOA + DBN shows improved performance, with values of 0.959, 0.959 and 0.96 for precision, recall and accuracy, respectively.

Originality/value: This paper proposes a technique that categorizes texts from massive-sized documents. The findings show that the proposed GCOA-based DBN effectively classifies text documents.
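The GCOA hybrid itself is not given in runnable form here. As a point of reference, below is a minimal sketch of the standard crow search algorithm half of the hybrid, minimizing a placeholder objective in place of a DBN training loss; the grasshopper update and the paper's exact interleaving of the two algorithms are omitted, and all parameters are illustrative.

```python
# Standard CSA position/memory update: each crow follows another crow's
# remembered best position, unless that crow "notices" and the follower
# relocates randomly instead.
import numpy as np

rng = np.random.default_rng(0)

def fitness(x):                      # placeholder objective (sphere function)
    return np.sum(x ** 2, axis=-1)

n, dim, iters = 20, 5, 100           # flock size, dimensions, iterations
fl, ap = 2.0, 0.1                    # flight length, awareness probability
pos = rng.uniform(-5, 5, (n, dim))   # candidate solutions (e.g., DBN weights)
mem = pos.copy()                     # each crow's best position found so far

for _ in range(iters):
    follow = rng.integers(0, n, n)               # each crow picks a crow to follow
    aware = rng.random(n) < ap                   # followed crow notices and flees
    step = fl * rng.random((n, 1)) * (mem[follow] - pos)
    new_pos = np.where(aware[:, None],
                       rng.uniform(-5, 5, (n, dim)),   # random relocation
                       pos + step)                     # move toward the memory
    better = fitness(new_pos) < fitness(mem)
    pos = new_pos
    mem[better] = new_pos[better]                # update per-crow memories

best = mem[np.argmin(fitness(mem))]
print("best fitness:", fitness(best))
```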


Author(s): Olivier Bronchain, Julien M. Hendrickx, Clément Massart, Alex Olshevsky, François-Xavier Standaert

1999, Vol. 121 (4), pp. 433-439
Author(s): D. E. Cox, G. P. Gibbs, R. L. Clark, J. S. Vipperman

This work addresses the design and application of robust controllers for structural acoustic control; both simulation and experimental results are presented. H∞ and μ-synthesis design methods were used to design feedback controllers that minimize the power radiated from a panel while avoiding instability due to unmodeled dynamics. Specifically, high-order structural modes that couple strongly to the actuator-sensor path were poorly modeled. This model error was analytically bounded with an uncertainty model, which allowed controllers to be designed without artificial limits on control effort. It is found that robust control methods provide the control designer with physically meaningful parameters with which to tune control designs, and that they can be very useful in determining limits of performance. However, experimental results also showed poor robustness for control designs with ad hoc uncertainty models. The importance of quantifying and bounding model errors is discussed.
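The key step in the abstract, analytically bounding model error with an uncertainty model, can be illustrated numerically. The hypothetical sketch below compares a nominal model against a "true" plant containing one extra unmodeled high-frequency mode and computes the resulting multiplicative error bound; the mode frequencies, dampings and coupling gain are invented for illustration, not taken from the paper.

```python
# Bounding unmodeled high-order dynamics with a multiplicative uncertainty:
# G_true = G_nom * (1 + Delta), where |Delta(jw)| <= l(w) for all w.
import numpy as np

def mode(w, wn, zeta):
    """Frequency response of a single lightly damped structural mode."""
    s = 1j * w
    return wn**2 / (s**2 + 2 * zeta * wn * s + wn**2)

w = np.logspace(0, 4, 2000)                # frequency grid, rad/s
G_nom = mode(w, wn=100.0, zeta=0.02)       # nominal: one modeled mode
G_true = G_nom + 0.3 * mode(w, wn=2500.0, zeta=0.01)  # plus unmodeled mode

l = np.abs((G_true - G_nom) / G_nom)       # relative (multiplicative) error
print(f"peak relative error {l.max():.1f} at {w[np.argmax(l)]:.0f} rad/s")
# A robust design would choose an uncertainty weight W(s) with |W(jw)| >= l(w)
# and then require the closed loop to satisfy ||W*T||_inf < 1, which is what
# lets control effort stay unrestricted where the model is trusted.
```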

