fast classification
Recently Published Documents


TOTAL DOCUMENTS: 138 (FIVE YEARS: 40)

H-INDEX: 16 (FIVE YEARS: 3)

2022 ◽  
pp. 100036
Author(s):  
Syed Afaq Ali Shah ◽  
Hao Luo ◽  
Putu Dita Pickupana ◽  
Alexander Ekeze ◽  
Ferdous Sohel ◽  
...  

2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Nana Liu

Today's e-commerce is booming, yet product categorization is still handled poorly, especially when multiple tasks must be served at once. In this paper, we propose a multitask learning model for e-commerce that trains a CNN in parallel with a BiLSTM optimized by an attention mechanism. The results show that the fast classification task can be performed using only 10% of the total number of products, and the experiments show that the accuracy of w-item2vec for product classification approaches 50% with only 10% of the training data. Both models significantly outperform the other compared models in terms of classification accuracy.
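
Below is a minimal PyTorch sketch of the parallel CNN + attention-weighted BiLSTM architecture the abstract describes. The vocabulary size, embedding dimension, filter counts, and number of classes are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CnnBiLstmAttention(nn.Module):
    """CNN branch and attention-weighted BiLSTM branch run in parallel."""
    def __init__(self, vocab_size=30000, emb_dim=128, num_classes=20):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # CNN branch: 1-D convolution over the token embeddings
        self.conv = nn.Conv1d(emb_dim, 128, kernel_size=3, padding=1)
        # BiLSTM branch with a simple additive attention layer
        self.bilstm = nn.LSTM(emb_dim, 64, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(128, 1)
        self.fc = nn.Linear(128 + 128, num_classes)

    def forward(self, tokens):                       # tokens: (batch, seq_len)
        x = self.embed(tokens)                       # (batch, seq_len, emb_dim)
        # CNN branch: global max-pool over time
        c = F.relu(self.conv(x.transpose(1, 2)))     # (batch, 128, seq_len)
        c = c.max(dim=2).values                      # (batch, 128)
        # BiLSTM branch: attention-weighted sum of hidden states
        h, _ = self.bilstm(x)                        # (batch, seq_len, 128)
        w = torch.softmax(self.attn(h).squeeze(-1), dim=1)
        a = (h * w.unsqueeze(-1)).sum(dim=1)         # (batch, 128)
        return self.fc(torch.cat([c, a], dim=1))     # class logits
```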


2021 ◽  
Vol 2094 (3) ◽  
pp. 032055
Author(s):  
Y A Izotov ◽  
A A Velichko ◽  
P P Boriskov

Abstract The paper presents a method for forming the reservoir of the LogNNet neural network using a linear congruential pseudo-random number generator. The method made it possible to reduce the MNIST handwritten digit recognition time on the low-memory Arduino Uno board to 0.28 s for the LogNNet 784:20:10 configuration, with a classification accuracy of ~82%. It was found that computing with integers increases the speed of the algorithm by more than 2 times compared with the variant using the real type when generating the chaotic time series. The developed method can be used to accelerate computation on edge devices in the Internet of Things, for example in mobile medical devices, autonomous vehicle control systems, and bionic suit control.
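
A minimal sketch of the reservoir construction described above: a linear congruential generator fills the 20 x 785 weight matrix of the LogNNet 784:20:10 configuration using a pure integer update. The LCG constants and the weight scaling are assumptions; the authors' exact parameters are not reproduced.

```python
import numpy as np

def lcg_reservoir(rows=20, cols=785, a=1664525, c=1013904223, m=2**32, seed=1):
    """Fill a rows x cols reservoir with LCG values scaled to [-0.5, 0.5)."""
    w = np.empty(rows * cols, dtype=np.float32)
    x = seed
    for i in range(rows * cols):
        x = (a * x + c) % m          # integer-only update: fast on small MCUs
        w[i] = x / m - 0.5           # map to a small symmetric range
    return w.reshape(rows, cols)

W = lcg_reservoir()                  # 20 reservoir neurons, 784 inputs + bias
hidden = np.tanh(W @ np.append(np.random.rand(784), 1.0))  # example projection
```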


Author(s):  
Michel Andre L. Vinagreiro ◽  
Edson C. Kitani ◽  
Armando Antonio M. Lagana ◽  
Leopoldo R. Yoshioka

Computer vision plays a crucial role in Advanced Driver Assistance Systems (ADAS). Most computer vision systems are based on deep convolutional neural network (CNN) architectures, but running a CNN demands considerable computational resources, so methods to speed up computation have become a relevant research issue. Even though several works on architecture reduction can be found in the literature, they have not yet achieved satisfactory results for embedded real-time system applications. This paper presents an alternative approach based on the Multilinear Feature Space (MFS) method, resorting to transfer learning from large CNN architectures. The proposed method uses CNNs to generate feature maps, although it does not work as a complexity reduction approach. After the training process, the generated feature maps are used to create a vector feature space, onto which any new sample is projected for classification. Our method, named AMFC, uses transfer learning from a pre-trained CNN to reduce the classification time of a new sample image with minimal accuracy loss. We use the VGG-16 model as the base CNN architecture for the experiments; however, the method works with any similar CNN model. Using the well-known Vehicle Image Database and the German Traffic Sign Recognition Benchmark, we compared the classification time of the original VGG-16 model with that of the AMFC method, and our method is, on average, 17 times faster. The fast classification time reduces the computational and memory demands of embedded applications that require a large CNN architecture.
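
The following sketch illustrates the transfer-learning idea in outline: VGG-16 convolutional feature maps are flattened into vectors, a linear subspace is fitted on training features, and new samples are classified by projection. Plain PCA with a nearest-centroid rule stands in for the paper's Multilinear Feature Space construction, and the dataset variables (X_train, y_train, X_new) are hypothetical placeholders.

```python
import torch
import torchvision.models as models
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestCentroid

# Pre-trained VGG-16, convolutional layers only (no fully connected head)
vgg = models.vgg16(weights="IMAGENET1K_V1").features.eval()

def feature_vector(batch):                      # batch: (N, 3, 224, 224) tensor
    with torch.no_grad():
        fmap = vgg(batch)                       # (N, 512, 7, 7) feature maps
    return fmap.flatten(1).numpy()              # (N, 25088) feature vectors

# Fit the projection and classifier on training images, then classify new
# samples in the reduced space (X_train, y_train, X_new assumed available):
# pca = PCA(n_components=128).fit(feature_vector(X_train))
# clf = NearestCentroid().fit(pca.transform(feature_vector(X_train)), y_train)
# y_pred = clf.predict(pca.transform(feature_vector(X_new)))
```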


2021 ◽  
Author(s):  
Mohamed Amir Alaa Belmekki ◽  
Stephen McLaughlin ◽  
Abderrahim Halimi

2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Yuxian Huang ◽  
Geng Yang ◽  
Yahong Xu ◽  
Hao Zhou

In the big data era, massive, high-dimensional data is produced at all times, increasing the difficulty of analyzing and protecting it. In this paper, in order to achieve both dimensionality reduction and privacy protection, principal component analysis (PCA) and differential privacy (DP) are combined to handle such data, and a support vector machine (SVM) is used to measure the utility of the processed data. Specifically, we introduce differential privacy mechanisms at different stages of the PCA-SVM algorithm and obtain the algorithms DPPCA-SVM and PCADP-SVM. Both algorithms satisfy (ε, 0)-DP while achieving fast classification. In addition, we evaluate the performance of the two algorithms in terms of noise expectation and classification accuracy, through both theoretical proofs and experimental verification. To verify the performance of DPPCA-SVM, we also compare it with other algorithms. Results show that DPPCA-SVM provides excellent utility on different data sets despite guaranteeing stricter privacy.
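
A minimal sketch of one way such a pipeline can be arranged: Laplace noise is added to the empirical covariance before the eigendecomposition, and an SVM is then trained in the reduced space. The noise scale below is illustrative only and does not reproduce the calibration required for the (ε, 0)-DP guarantee proved in the paper.

```python
import numpy as np
from sklearn.svm import LinearSVC

def dp_pca_svm(X, y, k=10, noise_scale=0.1, rng=np.random.default_rng(0)):
    """Noisy PCA (covariance perturbation) followed by a linear SVM."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / len(X)
    # Perturb the covariance with symmetric Laplace noise (illustrative scale)
    noise = rng.laplace(scale=noise_scale, size=cov.shape)
    cov_dp = cov + (noise + noise.T) / 2
    # Project onto the top-k eigenvectors of the noisy covariance
    vals, vecs = np.linalg.eigh(cov_dp)
    proj = vecs[:, np.argsort(vals)[::-1][:k]]
    Z = Xc @ proj
    return LinearSVC().fit(Z, y), proj          # classifier + projection matrix
```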


2021 ◽  
Vol 5 (2(61)) ◽  
pp. 6-8
Author(s):  
Olena Hryshchenko ◽  
Vadym Yaremenko

The object of this research is fast classification methods for text data classification problems. The need for the study stems from the rapid growth of textual data in both digital and printed form: such volumes can no longer be processed fully by human effort and must be handled by software. A large number of classification approaches have been developed. The study applies the following methods to a set of text data in order to classify it into categories: the Bloom filter, the naive Bayes classifier, and neural networks. Each method has both advantages and disadvantages, and this paper illustrates the strengths and weaknesses of each on a specific example. The algorithms were compared with one another in terms of speed and efficiency, that is, the accuracy of assigning a text to a particular class. Each method was evaluated on the same data sets while varying the amount of training and test data as well as the number of classification groups. The data set used contains the following classes: world, business, sports, and science and technology. Under real classification conditions the number of categories is much larger than considered here and may include subcategories. In the course of the study, each method was analyzed with different parameter values to obtain the best result. Analyzing the results obtained, the best text classification results were achieved with the neural network.
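
As an illustration of one of the compared baselines, the sketch below trains a naive Bayes classifier over bag-of-words features for the four categories mentioned above; the training texts are toy placeholders, not the study's data set.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy training corpus with one example per class (placeholder data)
texts = ["stocks rally on strong earnings", "team wins the championship",
         "new chip doubles battery life", "leaders meet at climate summit"]
labels = ["business", "sports", "science/tech", "world"]

clf = make_pipeline(CountVectorizer(), MultinomialNB()).fit(texts, labels)
print(clf.predict(["stocks rally after earnings report"]))  # 'business' on this toy data
```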


2021 ◽  
Author(s):  
Michel Andre L. Vinagreiro ◽  
Edson C. Kitani ◽  
Armando Antonio M. Lagana ◽  
Leopoldo R. Yoshioka

Computer vision plays a crucial role in ADAS security and navigation. Since most systems are based on deep CNN architectures, the computational resources needed to run a CNN algorithm are demanding; therefore, methods to speed up computation have become a relevant research issue. Even though several works on acceleration techniques can be found in the literature, they have not yet achieved satisfactory results for embedded real-time system applications. This paper presents an alternative approach based on the Multilinear Feature Space (MFS) method, resorting to transfer learning from large CNN architectures. The proposed method uses CNNs to generate feature maps, although it does not work as a complexity reduction approach. When the training process ends, the generated maps are used to create a vector feature space, onto which any new sample is projected in order to classify it. Our method, named MFS-CNN, uses transfer learning from a pre-trained CNN to reduce the classification time of a new sample image with minimal loss in accuracy. We use the VGG-16 model as the base CNN architecture for the experiments; however, the method works with any similar CNN model. Using the well-known Vehicle Image Database and the German Traffic Sign Recognition Benchmark, we compared the classification time of the original VGG-16 model with that of the MFS-CNN method, and our method is, on average, 17 times faster. The fast classification time reduces the computational and memory demands of embedded applications that require a large CNN architecture.
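
As a complement to the sketch after the earlier MFS abstract, the following snippet times a full VGG-16 forward pass against a reduced pipeline that stops at the convolutional feature maps and applies a single projection matrix. The random projection and the 43-class assumption (as in GTSRB) are illustrative; the measured ratio will not match the paper's reported factor of about 17.

```python
import time
import torch
import torchvision.models as models

vgg_full = models.vgg16(weights="IMAGENET1K_V1").eval()
x = torch.rand(1, 3, 224, 224)           # dummy input image

t0 = time.perf_counter()
with torch.no_grad():
    vgg_full(x)                          # full model including the FC head
t_full = time.perf_counter() - t0

# Reduced pipeline: conv features, then one (25088 -> num_classes) projection
# standing in for the MFS projection plus class decision.
proj = torch.rand(25088, 43)             # 43 classes assumed (GTSRB)
t0 = time.perf_counter()
with torch.no_grad():
    feats = vgg_full.features(x).flatten(1)
    (feats @ proj).argmax(dim=1)
t_fast = time.perf_counter() - t0
print(f"full: {t_full:.3f}s  reduced: {t_fast:.3f}s")
```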

