K-Anonymization-Based Temporal Attack Risk Detection Using Machine Learning Paradigms

Author(s):  
P. Geetha ◽  
Chandrakant Naikodi ◽  
L. Suresh

Online applications collect huge amounts of personal data, and protecting the privacy of that data raises major challenges. Hence, k-anonymization for privacy-preserving data publishing has emerged as an active research field. The published data contains personal information that can be analyzed and converted into useful information. In this paper, quasi-identifier (QI) data publishing with data preservation through the k-anonymization process is proposed. Moreover, risks such as temporal attacks that re-identify QI information from a previous release are evaluated using the k-anonymity model. The major objective of this paper is the development of independent and ensemble classifiers for finding efficient QIs that avoid temporal attacks. Classifiers such as Naïve Bayes, Support Vector Machine, and Multilayer Perceptron are used as base classifiers, and an ensemble model built on these base classifiers is also used. The experimental results demonstrate that the proposed classification approach is an effective k-anonymity tool for enhancing sequential release.
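
The abstract does not include code; a minimal sketch of the kind of ensemble it describes (Naïve Bayes, SVM, and MLP base classifiers combined by voting) could look like the following, using scikit-learn. The feature matrix of encoded quasi-identifier attributes and the attack-risk labels are hypothetical placeholders, not data from the paper.

```python
# Sketch only: the three base classifiers named in the abstract (Naive Bayes,
# SVM, MLP) combined by soft voting with scikit-learn. `X_qi` (encoded
# quasi-identifier attributes) and `y_risk` (temporal-attack risk labels)
# are random placeholders, not the paper's data.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X_qi = rng.random((500, 6))          # placeholder: 6 numeric QI features
y_risk = rng.integers(0, 2, 500)     # placeholder: 1 = vulnerable to temporal attack

ensemble = VotingClassifier(
    estimators=[
        ("nb", GaussianNB()),
        ("svm", SVC(probability=True)),
        ("mlp", MLPClassifier(max_iter=500)),
    ],
    voting="soft",
)
print(cross_val_score(ensemble, X_qi, y_risk, cv=5).mean())
```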

2017 ◽  
Vol 2017 ◽  
pp. 1-11 ◽  
Author(s):  
Xiaoqiang Li ◽  
Yi Zhang ◽  
Dong Liao

Human action recognition based on 3D skeletons has become an active research field in recent years with the development of commodity depth sensors. Most published methods analyze entire 3D depth sequences, construct mid-level part representations, or use trajectory descriptors of spatio-temporal interest points to recognize human activities. Unlike previous work, a novel and simple action representation is proposed in this paper, which models an action as a sequence of inconsecutive and discriminative skeleton poses, named key skeleton poses. The pairwise relative positions of skeleton joints are used as features of the skeleton poses, which are mined with the aid of the latent support vector machine (latent SVM). The advantage of our method is its robustness to intraclass variation such as noise and large nonlinear temporal deformation of human actions. We evaluate the proposed approach on three benchmark action datasets captured by Kinect devices: the MSR Action 3D dataset, the UTKinect Action dataset, and the Florence 3D Action dataset. The detailed experimental results demonstrate that the proposed approach achieves performance superior to state-of-the-art skeleton-based action recognition methods.
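
The pairwise relative-position feature described above is straightforward to compute; the sketch below shows one plausible NumPy implementation for a single pose. The joint count, array shapes, and lack of normalization are assumptions for illustration, and the latent-SVM mining step is not shown.

```python
# Sketch: pairwise relative positions of skeleton joints for one pose.
# `joints` is a hypothetical (num_joints, 3) array of 3D joint coordinates;
# the paper's exact normalization and latent-SVM mining are not reproduced.
import numpy as np

def pairwise_relative_positions(joints: np.ndarray) -> np.ndarray:
    """Return the flattened vector of joint_i - joint_j for all i < j."""
    n = joints.shape[0]
    idx_i, idx_j = np.triu_indices(n, k=1)
    return (joints[idx_i] - joints[idx_j]).ravel()

pose = np.random.rand(20, 3)               # e.g. a 20-joint Kinect skeleton
feature = pairwise_relative_positions(pose)
print(feature.shape)                        # (20*19/2) * 3 = 570 values
```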


2014 ◽  
Vol 17 - 2014 - Special... ◽
Author(s):  
Yahya Slimani ◽  
Mohamed Amir Essegir ◽  
Mouhamadou Lamine Samb ◽  
Fodé Camara ◽  
Samba Ndiaye

Feature selection for classification is a very active research field in data mining and optimization. Its combinatorial nature requires the development of specific techniques (such as filters, wrappers, genetic algorithms, and so on) or hybrid approaches combining several optimization methods. In this context, support vector machine recursive feature elimination (SVM-RFE) stands out as one of the most effective methods. However, SVM-RFE is a greedy method that can only hope to find a good feature combination for classification. To overcome this limitation, we propose an alternative approach that combines SVM-RFE with local search operators based on operations research and artificial intelligence.
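
The greedy SVM-RFE baseline that the authors build on is available in scikit-learn; the sketch below shows only that baseline, not the proposed local-search refinement, and uses a synthetic dataset as a placeholder.

```python
# Sketch of the greedy SVM-RFE baseline the paper starts from; the proposed
# local-search operators are not reproduced. Dataset is synthetic.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=50, n_informative=8,
                           random_state=0)
# A linear SVM exposes `coef_`, which RFE uses to rank and eliminate features.
selector = RFE(estimator=SVC(kernel="linear"), n_features_to_select=8, step=1)
selector.fit(X, y)
print(selector.support_.nonzero()[0])   # indices of the retained features
```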


Molecules ◽  
2021 ◽  
Vol 26 (11) ◽  
pp. 3192
Author(s):  
Nicolas Giacoletto ◽  
Frédéric Dumur

Over the past several decades, photopolymerization has become an active research field, and ongoing efforts to develop new photoinitiating systems are supported by the different applications in which this polymerization technique is involved, including dentistry, 3D and 4D printing, adhesives, and laser writing. In the search for new structures, bis-chalcones, which combine two chalcone moieties within a single structure, were identified as promising photosensitizers to initiate both the free-radical polymerization of acrylates and the cationic polymerization of epoxides. In this review, an overview of the different bis-chalcones reported to date is provided. In parallel with the mechanistic investigations aiming at elucidating the polymerization mechanisms, bis-chalcone-based photoinitiating systems have been used for different applications, which are detailed in this review.


2021 ◽  
Vol 11 (12) ◽  
pp. 3164-3173
Author(s):  
R. Indhumathi ◽  
S. Sathiya Devi

Data sharing is essential in present-day biomedical research. A large quantity of medical information is gathered for different analysis and study objectives, and because such collections are large, anonymity is essential. Thus, it is quite important to preserve privacy and prevent leakage of patients' sensitive information. Anonymization methods such as generalization, suppression, and perturbation have been proposed to overcome information leakage, but they degrade the utility of the collected data: during data sanitization, utility is automatically diminished. Privacy-preserving data publishing thus faces the main drawback of maintaining a trade-off between privacy and data utility. To address this issue, an efficient algorithm called Anonymization based on Improved Bucketization (AIB) is proposed, which increases the utility of published data while maintaining privacy. The bucketization technique is used in this paper with the intervention of a clustering method. The proposed work is divided into four stages: (i) vertical and horizontal partitioning, (ii) assigning a sensitive index to attributes in the cluster, (iii) verifying each cluster against a privacy threshold, and (iv) examining quasi-identifiers (QIs) for privacy breaches. To increase the utility of published data, the threshold value is determined based on the distribution of elements in each attribute, and the anonymization method is applied only to the specific QI elements. As a result, data utility is improved. Finally, the evaluation results validated the design of the paper and demonstrated that our design is effective in improving data utility.
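
The abstract describes the AIB stages only at a high level; the sketch below illustrates the general idea of grouping records into quasi-identifier buckets and checking each bucket against a privacy threshold derived from the sensitive-value distribution. The grouping key, threshold rule, and records are assumptions for demonstration, not the paper's algorithm.

```python
# Illustration only: bucket records by quasi-identifiers and flag buckets
# whose sensitive-value distribution exceeds a privacy threshold. The key,
# threshold rule, and data are hypothetical, not the AIB algorithm itself.
from collections import Counter, defaultdict

records = [
    {"age_group": "20-30", "zip3": "600", "disease": "flu"},
    {"age_group": "20-30", "zip3": "600", "disease": "flu"},
    {"age_group": "20-30", "zip3": "600", "disease": "cancer"},
    {"age_group": "30-40", "zip3": "601", "disease": "flu"},
]

def bucket_key(r):
    return (r["age_group"], r["zip3"])            # quasi-identifier bucket

buckets = defaultdict(list)
for r in records:
    buckets[bucket_key(r)].append(r)

THRESHOLD = 0.8   # max allowed share of one sensitive value per bucket
for key, rows in buckets.items():
    freq = Counter(r["disease"] for r in rows)
    top_share = max(freq.values()) / len(rows)
    print(key, "breach" if top_share > THRESHOLD else "ok")
```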


Sensors ◽  
2018 ◽  
Vol 18 (12) ◽  
pp. 4132 ◽  
Author(s):  
Ku Ku Abd. Rahim ◽  
I. Elamvazuthi ◽  
Lila Izhar ◽  
Genci Capi

Recent research shows increasing interest in analyzing human gait using various wearable sensors, a task known as Human Activity Recognition (HAR). Sensors such as accelerometers and gyroscopes are widely used in HAR, and wearable sensors have attracted high interest in numerous applications such as rehabilitation, computer games, animation, filmmaking, and biomechanics. In this paper, the classification of human daily activities using ensemble methods is discussed, based on data acquired from smartphone inertial sensors involving about 30 subjects performing six different activities: walking, walking upstairs, walking downstairs, sitting, standing, and lying. Activity recognition involved three stages, namely data signal processing (filtering and segmentation), feature extraction, and classification. Five types of ensemble classifiers are utilized: Bagging, AdaBoost, Rotation Forest, Ensembles of Nested Dichotomies (END), and Random Subspace, each employing Support Vector Machine (SVM) and Random Forest (RF) as base learners. The data classification is evaluated with the holdout and 10-fold cross-validation methods. The performance for each daily activity was measured in terms of precision, recall, F-measure, and the receiver operating characteristic (ROC) curve, and also by comparing the overall classification accuracy of the different ensemble classifiers and base learners. It was observed that, overall, SVM produced a better accuracy rate of 99.22% compared to 97.91% for RF, based on the Random Subspace ensemble classifier.
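
Two of the ensembles named above can be sketched with scikit-learn as follows, using SVM and RF base learners under 10-fold cross-validation. The feature matrix and labels are random placeholders for the extracted smartphone-sensor features, and the random-subspace variant is approximated here via BaggingClassifier with feature subsampling (scikit-learn >= 1.2 parameter names assumed).

```python
# Sketch: Bagging with an SVM base learner, and a random-subspace style
# ensemble (feature subsampling, no sample bootstrap) with a Random Forest
# base learner, both scored by 10-fold cross-validation. `X`, `y` are
# placeholders, not the smartphone HAR dataset.
import numpy as np
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((600, 561))            # placeholder feature matrix
y = rng.integers(0, 6, 600)           # six activity labels

models = {
    "bagged_svm": BaggingClassifier(estimator=SVC(), n_estimators=10),
    "subspace_rf": BaggingClassifier(estimator=RandomForestClassifier(),
                                     n_estimators=10, max_features=0.5,
                                     bootstrap=False),
}
for name, model in models.items():
    print(name, cross_val_score(model, X, y, cv=10).mean())
```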


2018 ◽  
Vol 34 (10) ◽  
pp. 885-890 ◽  
Author(s):  
Bertrand Jordan

Senescent cells are involved in many age-related diseases, and the effect of their elimination by “senolytic” drugs is an active research field. A recent paper describes a convenient murine model of induced senescence and uses it to convincingly demonstrate the positive effects of senolytics on performance and lifespan. Clinical studies have already been initiated; this approach holds promise to eventually improve human “healthspan”.


F1000Research ◽  
2021 ◽  
Vol 10 ◽  
pp. 1129
Author(s):  
Marvin Martens ◽  
Rob Stierum ◽  
Emma L. Schymanski ◽  
Chris T. Evelo ◽  
Reza Aalizadeh ◽  
...  

Toxicology has been an active research field for many decades, with academic, industrial and government involvement. Modern omics and computational approaches are changing the field, from merely disease-specific observational models into target-specific predictive models. Traditionally, toxicology has strong links with other fields such as biology, chemistry, pharmacology and medicine. With the rise of synthetic and new engineered materials, alongside ongoing prioritisation needs in chemical risk assessment for existing chemicals, early predictive evaluations are becoming of utmost importance for both scientific and regulatory purposes. ELIXIR is an intergovernmental organisation that brings together life science resources from across Europe. To coordinate the linkage of various life science efforts around modern predictive toxicology, the establishment of a new ELIXIR Community is seen as instrumental. In the past few years, joint efforts, building on incidental overlap, have been piloted in the context of ELIXIR. For example, the EU-ToxRisk, diXa, HeCaToS, transQST, and nanotoxicology communities have worked with the ELIXIR TeSS, Bioschemas, and Compute Platforms and activities. In 2018, a core group of interested parties wrote a proposal outlining a sketch of what this new ELIXIR Toxicology Community would look like. A recent workshop (held September 30th to October 1st, 2020) extended this into an ELIXIR Toxicology roadmap and a shortlist of limited-investment, high-gain collaborations to give body to this new community. This Whitepaper outlines the results of these efforts and defines our vision of the ELIXIR Toxicology Community and how it complements other ELIXIR activities.


Author(s):  
Elena Morotti ◽  
Davide Evangelista ◽  
Elena Loli Piccolomini

Deep learning is producing tools that are of great interest for inverse imaging applications. In this work, we consider a medical image reconstruction task from subsampled measurements, an active research field where convolutional neural networks have already revealed their great potential. However, the commonly used architectures are very deep and hence prone to overfitting and unfeasible for clinical use. Inspired by ideas from the green-AI literature, we propose a shallow neural network to perform efficient learned post-processing on images roughly reconstructed by the filtered backprojection algorithm. The results obtained on images from the training set and on unseen images, using both the non-expensive network and the widely used, very deep ResUNet, show that the proposed network computes images of comparable or higher quality in about one fourth of the time.
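
A shallow convolutional post-processor of the kind described can be sketched in a few lines of PyTorch; the layer count, channel widths, and residual skip below are illustrative assumptions, not the authors' architecture.

```python
# Sketch of a shallow residual CNN that refines an FBP-reconstructed image.
# Depth, channel widths, and the residual connection are assumptions for
# illustration, not the network proposed in the paper.
import torch
import torch.nn as nn

class ShallowPostProcessor(nn.Module):
    def __init__(self, channels: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, fbp_image: torch.Tensor) -> torch.Tensor:
        # Learn a correction added to the rough FBP reconstruction.
        return fbp_image + self.body(fbp_image)

fbp_batch = torch.randn(4, 1, 256, 256)     # placeholder FBP reconstructions
refined = ShallowPostProcessor()(fbp_batch)
print(refined.shape)                         # torch.Size([4, 1, 256, 256])
```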


2019 ◽  
Vol 07 (02) ◽  
pp. 1950001
Author(s):  
THABANG MOKOALELI-MOKOTELI ◽  
SHAUN RAMSUMAR ◽  
HIMA VADAPALLI

The success of investors in obtaining huge financial rewards from the stock market depends on their ability to predict the direction of the stock market index. The purpose of this study is to evaluate the efficacy of several ensemble prediction models (Boosted, RUS-Boosted, Subspace Disc, Bagged, and Subspace KNN) in predicting the daily direction of the Johannesburg Stock Exchange (JSE) All-Share index, compared to other commonly used machine learning techniques including support vector machines (SVM), logistic regression and k-nearest neighbor (KNN). The findings of this study show that, among all ensemble models, the Boosted algorithm is the best performer, followed by RUS-Boosted. When compared to the other techniques, the ensemble technique (represented by Boosted) outperformed them, followed by KNN, logistic regression and SVM, respectively. These findings suggest that investors should include ensemble models among their index prediction models if they want to make large profits in the stock markets. However, not all investors can benefit from this, as the models may suffer from alpha decay as more and more investors use them, implying that successful algorithms have a limited shelf life.
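
The comparison described above can be sketched as follows: a boosted ensemble set against SVM, logistic regression, and KNN on a daily up/down direction task. The features and labels are random placeholders, not JSE All-Share data, and AdaBoost stands in for the boosted model from the study.

```python
# Sketch: comparing a boosted ensemble with SVM, logistic regression and KNN
# on a synthetic up/down index-direction task. Placeholder data only.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.random((1000, 10))           # placeholder technical indicators
y = rng.integers(0, 2, 1000)         # 1 = index closed up, 0 = down

models = {
    "boosted": AdaBoostClassifier(n_estimators=100),
    "svm": SVC(),
    "logreg": LogisticRegression(max_iter=1000),
    "knn": KNeighborsClassifier(n_neighbors=5),
}
for name, model in models.items():
    print(name, cross_val_score(model, X, y, cv=5).mean())
```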


2021 ◽  
Vol 21 (S1) ◽  
Author(s):  
Jie Su ◽  
Yi Cao ◽  
Yuehui Chen ◽  
Yahui Liu ◽  
Jinming Song

Background: Protection of the privacy of data published in the health care field is an important research area. The Health Insurance Portability and Accountability Act (HIPAA) in the USA is the current legislation for privacy protection. However, the Institute of Medicine Committee on Health Research and the Privacy of Health Information recently concluded that HIPAA cannot adequately safeguard privacy, while at the same time it prevents researchers from using medical data for effective research. Therefore, more effective privacy protection methods are urgently needed to ensure the security of released medical data. Methods: Privacy protection methods based on clustering are methods and algorithms that ensure the published data remains both useful and protected. In this paper, we first analyzed the importance of the key attributes of medical data in the social network. According to attribute function and the main objective of privacy protection, the attribute information was divided into three categories. We then proposed an algorithm based on greedy clustering to group the data points according to the attributes and the connectivity information of the nodes in the published social network. Finally, we analyzed the loss of information during the clustering procedure and evaluated the proposed approach with respect to classification accuracy and information loss rates on a medical dataset. Results: The associated social network of a medical dataset was analyzed for privacy preservation. We evaluated the values of generalization loss and structure loss for different values of k and a, i.e. k = {3, 6, 9, 12, 15, 18, 21, 24, 27, 30} and a = {0, 0.2, 0.4, 0.6, 0.8, 1}. The experimental results showed that the generalization loss of our approach approached the optimum when a = 1 and k = 21, and the structure loss approached the optimum when a = 0.4 and k = 3. Conclusion: We showed the importance of both the attributes and the structure of the released health data in privacy preservation. Our method achieved better privacy preservation results in social networks by optimizing generalization loss and structure loss. The proposed loss evaluation method obtains a balance between data availability and the risk of privacy leakage.
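
The abstract does not spell out the clustering or loss definitions, so the sketch below is only a schematic of the general idea: greedily forming clusters of at least k nodes while trading off an attribute-based distance against a structural distance with a weight a. The distance functions, adjacency representation, and data are assumptions for illustration.

```python
# Illustration only: greedy grouping of nodes into clusters of size >= k,
# ranking candidates by a weighted mix of attribute and structural distance.
# The distances and weight `a` are placeholders, not the paper's losses.
import numpy as np

def greedy_k_clusters(attr, adj, k, a):
    unassigned = set(range(len(attr)))
    clusters = []
    while len(unassigned) >= k:
        seed = unassigned.pop()
        # combined distance: (1 - a) * attribute distance + a * structural distance
        dist = {j: (1 - a) * np.linalg.norm(attr[seed] - attr[j])
                   + a * np.abs(adj[seed] - adj[j]).sum()
                for j in unassigned}
        members = sorted(dist, key=dist.get)[: k - 1]
        for j in members:
            unassigned.remove(j)
        clusters.append([seed] + members)
    if clusters and unassigned:
        clusters[-1].extend(unassigned)   # fold leftovers into the last cluster
    return clusters

attr = np.random.rand(30, 4)                       # placeholder attribute vectors
adj = (np.random.rand(30, 30) > 0.8).astype(int)   # placeholder adjacency matrix
print([len(c) for c in greedy_k_clusters(attr, adj, k=6, a=0.4)])
```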

