Accuracy improvement of Thai food image recognition using deep convolutional neural networks

Author(s):  
Chakkrit Termritthikun ◽  
Surachet Kanprachar

2018 ◽
Vol 22 (S4) ◽  
pp. 9371-9383 ◽  
Author(s):  
Xiaoning Zhu ◽  
Qingyue Meng ◽  
Bojian Ding ◽  
Lize Gu ◽  
Yixian Yang

2019 ◽  
Vol 34 (3) ◽  
pp. 207-215 ◽  
Author(s):  
Cheol-Hee Lee ◽  
Yoon-Ju Jeong ◽  
Taeho Kim ◽  
Jae-Hyeon Park ◽  
Seongbin Bak ◽  
...  

2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Yongjin Hu ◽  
Jin Tian ◽  
Jun Ma

Network traffic classification technologies could be used by attackers to implement network monitoring and then launch traffic analysis attacks or website fingerprinting attacks. To prevent such attacks, a novel way to generate adversarial samples of network traffic from the defender's perspective is proposed. By adding perturbation to normal network traffic, a kind of adversarial network traffic is formed that causes misclassification when attackers perform network traffic classification with deep convolutional neural networks (CNNs) as the classification model. The paper transfers the concept of adversarial samples from image recognition to the field of network traffic classification and chooses several different methods to generate adversarial samples of network traffic. In the experiments, the LeNet-5 CNN is selected as the classification model used by attackers, and the VGG16 CNN is selected to test the transferability of the generated adversarial network traffic; the results demonstrate the effectiveness of the adversarial network traffic samples.
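
The abstract does not name the specific generation methods used; as one illustrative possibility, the sketch below applies an FGSM-style sign-of-gradient perturbation to traffic sessions rendered as 28x28 grayscale "images" and fed to a LeNet-5-style CNN. The traffic-to-image encoding, model shape, function names, and epsilon value are assumptions for illustration, not the paper's implementation.

```python
# Hypothetical sketch: perturbing "traffic images" so an attacker's LeNet-5-style CNN misclassifies them.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LeNet5(nn.Module):
    """LeNet-5-style classifier over 1x28x28 traffic images (assumed encoding of session bytes)."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 6, kernel_size=5, padding=2)
        self.conv2 = nn.Conv2d(6, 16, kernel_size=5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, num_classes)

    def forward(self, x):
        x = F.max_pool2d(torch.relu(self.conv1(x)), 2)
        x = F.max_pool2d(torch.relu(self.conv2(x)), 2)
        x = x.flatten(1)
        x = torch.relu(self.fc1(x))
        x = torch.relu(self.fc2(x))
        return self.fc3(x)

def fgsm_perturb(model, traffic, labels, epsilon=0.03):
    """Add a sign-of-gradient perturbation that pushes the classifier toward misclassification."""
    traffic = traffic.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(traffic), labels)
    loss.backward()
    adv = traffic + epsilon * traffic.grad.sign()
    return adv.clamp(0.0, 1.0).detach()  # keep the perturbed bytes in a valid normalized range

# Usage with stand-in data (random tensors replace real normalized traffic bytes).
model = LeNet5()
x = torch.rand(8, 1, 28, 28)              # assumed image encoding of 8 traffic sessions
y = torch.randint(0, 10, (8,))            # true traffic classes
x_adv = fgsm_perturb(model, x, y)
print((model(x_adv).argmax(1) != y).float().mean())  # fraction of sessions now misclassified
```

The same adversarial batch could then be fed to a separately trained VGG16-style model to probe the transferability discussed in the abstract.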


Author(s):  
Yang He ◽  
Guoliang Kang ◽  
Xuanyi Dong ◽  
Yanwei Fu ◽  
Yi Yang

This paper proposes a Soft Filter Pruning (SFP) method to accelerate the inference of deep Convolutional Neural Networks (CNNs). Specifically, SFP allows pruned filters to be updated when the model continues training after pruning. SFP has two advantages over previous works: (1) Larger model capacity. Updating previously pruned filters gives the approach a larger optimization space than fixing the filters to zero, so the network trained by this method has a larger capacity to learn from the training data. (2) Less dependence on the pre-trained model. This larger capacity enables SFP to train from scratch and prune the model simultaneously, whereas previous filter pruning methods must start from a pre-trained model to guarantee their performance. Empirically, SFP trained from scratch outperforms previous filter pruning methods. Moreover, the approach is shown to be effective for many advanced CNN architectures. Notably, on ILSVRC-2012, SFP reduces more than 42% of the FLOPs of ResNet-101 with even a 0.2% top-5 accuracy improvement, advancing the state of the art. Code is publicly available on GitHub: https://github.com/he-y/soft-filter-pruning
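
A minimal PyTorch sketch of the soft-pruning idea described above: at the end of every epoch the lowest-L2-norm filters of each convolutional layer are set to zero but remain trainable, so they may be updated (and possibly recover) in later epochs. The helper names, pruning rate, and optimizer settings are illustrative assumptions, not the authors' released code.

```python
# Soft Filter Pruning sketch: zero the smallest-norm filters each epoch, keep them trainable.
import torch
import torch.nn as nn

def soft_prune(model: nn.Module, prune_rate: float = 0.3) -> None:
    """Set the lowest-L2-norm filters of every Conv2d layer to zero, in place."""
    with torch.no_grad():
        for module in model.modules():
            if isinstance(module, nn.Conv2d):
                w = module.weight                      # shape: (out_ch, in_ch, k, k)
                norms = w.flatten(1).norm(p=2, dim=1)  # one L2 norm per output filter
                n_prune = int(prune_rate * w.size(0))
                if n_prune == 0:
                    continue
                idx = norms.argsort()[:n_prune]        # filters with the smallest norms
                w[idx] = 0.0                           # soft-pruned: zeroed, but still trainable

def train_with_sfp(model, loader, epochs=100, prune_rate=0.3, lr=0.1):
    """Illustrative loop: train normally, then re-select and zero filters after each epoch."""
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9, weight_decay=1e-4)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()                                 # previously zeroed filters receive updates again
        soft_prune(model, prune_rate)                  # different filters may be selected each epoch
    soft_prune(model, prune_rate)                      # final zeroing before filters are physically removed
```

After the last epoch, the zeroed filters (and their dependent channels) can be physically removed to obtain the FLOP reduction at inference time that the abstract reports.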

