A protection method of trained CNN model with a secret key from unauthorized access

Author(s):  
AprilPyone Maungmaung ◽  
Hitoshi Kiya

In this paper, we propose a novel method for protecting convolutional neural network models with a secret key set so that unauthorized users without the correct key set cannot access trained models. The method protects a model not only from copyright infringement but also from unauthorized use of its functionality, without any noticeable overhead. We introduce three block-wise transformations with a secret key set to generate learnable transformed images: pixel shuffling, negative/positive transformation, and format-preserving Feistel-based encryption. Protected models are trained on transformed images. The results of experiments with the CIFAR and ImageNet datasets show that the performance of a protected model was close to that of non-protected models when the key set was correct, while the accuracy dropped severely when an incorrect key set was given. The protected model was also demonstrated to be robust against various attacks. Compared with the state-of-the-art model protection with passports, the proposed method adds no extra layers to the network, and therefore there is no overhead during training and inference.
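As a rough illustration of how such a keyed transformation can work, the following is a minimal sketch of block-wise pixel shuffling, one of the three transformations, in which a secret integer key seeds a fixed permutation applied inside every block. The block size, the use of NumPy, and the way the key seeds the permutation are assumptions for illustration, not the paper's exact parameterization.

```python
import numpy as np

def blockwise_pixel_shuffle(image: np.ndarray, key: int, block_size: int = 4) -> np.ndarray:
    """Shuffle pixels inside each block with a permutation derived from a secret key.

    `image` is H x W x C with H and W divisible by `block_size`; the same keyed
    permutation is applied to every block, so only holders of the key can
    reproduce the transformed inputs the protected model was trained on.
    """
    h, w, c = image.shape
    perm = np.random.default_rng(key).permutation(block_size * block_size)
    out = image.copy()
    for y in range(0, h, block_size):
        for x in range(0, w, block_size):
            block = out[y:y + block_size, x:x + block_size].reshape(-1, c)
            out[y:y + block_size, x:x + block_size] = block[perm].reshape(block_size, block_size, c)
    return out
```

Training then proceeds on the transformed images; at inference, inputs transformed with the correct key match the training distribution, while an incorrect key produces mismatched inputs and degraded accuracy, which is the access-control property the paper reports.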

Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 1938
Author(s):  
Linling Qiu ◽  
Han Li ◽  
Meihong Wang ◽  
Xiaoli Wang

With its increasing incidence, cancer has become one of the main causes of worldwide mortality. In this work, we propose a novel attention-based neural network model named Gated Graph ATtention network (GGAT) for cancer prediction, in which a gating mechanism (GM) is introduced to work with the attention mechanism (AM) and break through the limitation of 1-hop neighbourhood reasoning in previous work. In this way, GGAT can fully mine the potential correlation between related samples, helping to improve cancer prediction accuracy. Additionally, to simplify the datasets, we propose a hybrid feature selection algorithm to strictly select gene features, which significantly reduces training time without affecting prediction accuracy. To the best of our knowledge, the proposed GGAT achieves state-of-the-art results on the cancer prediction task on LIHC, LUAD, and KIRC compared to other traditional machine learning methods and neural network models, and improves accuracy by 1% to 2% on the Cora dataset compared to state-of-the-art graph neural network methods.
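A minimal sketch of the general idea of combining graph attention with a gate over the aggregated neighbour message is given below, in PyTorch. The module name, the sigmoid gating form, and the dense adjacency input are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedGraphAttentionLayer(nn.Module):
    """Attention over neighbours whose aggregated message is gated before being
    combined with the node's own representation (illustrative sketch)."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim, bias=False)
        self.attn = nn.Linear(2 * out_dim, 1, bias=False)     # attention scorer
        self.gate = nn.Linear(2 * out_dim, out_dim)           # gating mechanism

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (N, in_dim) node features; adj: (N, N) adjacency with self-loops (1 = edge).
        h = self.proj(x)
        n = h.size(0)
        pairs = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                           h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        scores = F.leaky_relu(self.attn(pairs)).squeeze(-1)
        scores = scores.masked_fill(adj == 0, float("-inf"))
        alpha = torch.softmax(scores, dim=-1)                 # attention weights per neighbour
        msg = alpha @ h                                       # aggregated neighbour message
        g = torch.sigmoid(self.gate(torch.cat([h, msg], dim=-1)))
        return F.elu(g * msg + (1.0 - g) * h)                 # gate decides how much to take from neighbours
```

Stacking such layers lets information flow beyond the immediate 1-hop neighbourhood, while the gate controls how much of the neighbour signal each node absorbs.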


2021 ◽  
Vol 1074 (1) ◽  
pp. 012025
Author(s):  
A Poornima ◽  
M Shyamala Devi ◽  
M Sumithra ◽  
Mullaguri Venkata Bharath ◽  
Swathi ◽  
...  

Author(s):  
Robert J. O’Shea ◽  
Amy Rose Sharkey ◽  
Gary J. R. Cook ◽  
Vicky Goh

Abstract
Objectives: To perform a systematic review of design and reporting of imaging studies applying convolutional neural network models for radiological cancer diagnosis.
Methods: A comprehensive search of PUBMED, EMBASE, MEDLINE and SCOPUS was performed for published studies applying convolutional neural network models to radiological cancer diagnosis from January 1, 2016, to August 1, 2020. Two independent reviewers measured compliance with the Checklist for Artificial Intelligence in Medical Imaging (CLAIM). Compliance was defined as the proportion of applicable CLAIM items satisfied.
Results: One hundred eighty-six of 655 screened studies were included. Many studies did not meet the criteria for current design and reporting guidelines. Twenty-seven percent of studies documented eligibility criteria for their data (50/186, 95% CI 21–34%), 31% reported demographics for their study population (58/186, 95% CI 25–39%) and 49% of studies assessed model performance on test data partitions (91/186, 95% CI 42–57%). Median CLAIM compliance was 0.40 (IQR 0.33–0.49). Compliance correlated positively with publication year (ρ = 0.15, p = .04) and journal H-index (ρ = 0.27, p < .001). Clinical journals demonstrated higher mean compliance than technical journals (0.44 vs. 0.37, p < .001).
Conclusions: Our findings highlight opportunities for improved design and reporting of convolutional neural network research for radiological cancer diagnosis.
Key Points:
• Imaging studies applying convolutional neural networks (CNNs) for cancer diagnosis frequently omit key clinical information including eligibility criteria and population demographics.
• Fewer than half of imaging studies assessed model performance on explicitly unobserved test data partitions.
• Design and reporting standards have improved in CNN research for radiological cancer diagnosis, though many opportunities remain for further progress.
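For readers who want to reproduce the style of summary statistics reported above, the short sketch below shows how a compliance proportion with an approximate 95% confidence interval and a Spearman rank correlation can be computed with SciPy. The correlation inputs are placeholders, not the study data, and the normal-approximation interval may differ slightly from the exact method the authors used.

```python
import numpy as np
from scipy import stats

# Example figure from the abstract: 50 of 186 studies documented eligibility criteria.
satisfied, total = 50, 186
p_hat = satisfied / total
se = np.sqrt(p_hat * (1 - p_hat) / total)
ci_low, ci_high = p_hat - 1.96 * se, p_hat + 1.96 * se      # ~0.21-0.33, close to the reported 21-34%

# Spearman correlation between per-study compliance and publication year (placeholder values).
compliance = np.array([0.35, 0.42, 0.40, 0.51, 0.47])
pub_year = np.array([2016, 2017, 2018, 2019, 2020])
rho, p_value = stats.spearmanr(compliance, pub_year)
print(f"proportion {p_hat:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f}), Spearman rho={rho:.2f}, p={p_value:.3f}")
```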


2021 ◽  
pp. 188-198

Innovations in advanced information technologies have led to the rapid delivery and sharing of multimedia data such as images and videos. Digital steganography offers the ability to secure communication and is imperative for the internet, and image steganography is essential for preserving confidential information in security applications. The secret message is embedded within the pixels of a cover image; in this work, embedding is performed with the S-UNIWARD and WOW steganography algorithms. Hidden messages are revealed using steganalysis. Research interest in steganalysis spans both conventional approaches and recent technological fields. This paper devises convolutional neural network models for steganalysis. The convolutional neural network (CNN) is one of the most frequently used deep learning techniques and is used here to extract image features and perform classification. We compare steganalysis results with AlexNet and SRNet on the same dataset, and steganalytic error rates are compared across different payloads.
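A minimal sketch of a CNN steganalysis classifier is shown below in PyTorch, using a fixed high-pass (KV) residual filter in front of a small learnable convolutional stack, a common design in steganalysis networks. The layer widths and the choice of front-end filter are illustrative assumptions, not the architecture evaluated in the paper.

```python
import torch
import torch.nn as nn

# Fixed KV high-pass kernel often used to expose stego noise residuals.
KV_FILTER = torch.tensor([[-1.,  2.,  -2.,  2., -1.],
                          [ 2., -6.,   8., -6.,  2.],
                          [-2.,  8., -12.,  8., -2.],
                          [ 2., -6.,   8., -6.,  2.],
                          [-1.,  2.,  -2.,  2., -1.]]) / 12.0

class SteganalysisCNN(nn.Module):
    """Binary cover/stego classifier: fixed residual filter + small conv stack."""

    def __init__(self):
        super().__init__()
        self.hpf = nn.Conv2d(1, 1, kernel_size=5, padding=2, bias=False)
        self.hpf.weight.data = KV_FILTER.view(1, 1, 5, 5)
        self.hpf.weight.requires_grad = False                # keep the residual filter fixed
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 5, padding=2), nn.ReLU(), nn.AvgPool2d(2),
            nn.Conv2d(8, 16, 5, padding=2), nn.ReLU(), nn.AvgPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 2)                    # cover vs. stego

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.hpf(x)                                       # noise residual emphasises embedding artefacts
        x = self.features(x).flatten(1)
        return self.classifier(x)
```

Error rates at different payloads can then be obtained by training and evaluating such a classifier on cover/stego pairs generated with S-UNIWARD or WOW at each payload.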


2021 ◽  
Author(s):  
Muhammad Shahroz Nadeem ◽  
Sibt Hussain ◽  
Fatih Kurugollu

This paper addresses the textual image deblurring problem. We propose a new loss function and provide an empirical evaluation of design choices, based on which a memory-friendly CNN model is proposed that performs better than the state-of-the-art CNN method.
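As a rough illustration only, the sketch below shows a lightweight residual CNN for deblurring trained with a plain L1 reconstruction loss; the paper's proposed loss function and architecture are not reproduced here, so every name and layer width is a placeholder.

```python
import torch
import torch.nn as nn

class TinyDeblurCNN(nn.Module):
    """Small residual CNN that predicts a correction to a blurred text image
    (illustrative placeholder, not the paper's model)."""

    def __init__(self, channels: int = 1, width: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, channels, 3, padding=1),
        )

    def forward(self, blurred: torch.Tensor) -> torch.Tensor:
        return blurred + self.body(blurred)                   # residual prediction of the sharp image

# One illustrative training step with a plain L1 loss (a placeholder, not the proposed loss).
model = TinyDeblurCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
blurred, sharp = torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)
loss = nn.functional.l1_loss(model(blurred), sharp)
loss.backward()
optimizer.step()
```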

