code vector
Recently Published Documents


TOTAL DOCUMENTS

21
(FIVE YEARS 5)

H-INDEX

9
(FIVE YEARS 1)

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Shengzi Sun ◽  
Binghui Guo ◽  
Zhilong Mi ◽  
Zhiming Zheng

Abstract
Cross-modal retrieval has become a popular topic, since multimodal data are heterogeneous and the similarities between different forms of information deserve attention. Traditional single-modal methods reconstruct the original information but fail to consider the semantic similarity between different data. In this work, a cross-modal semantic autoencoder with embedding consensus (CSAEC) is proposed, mapping the original data to a low-dimensional shared space that retains semantic information. To capture the similarity between the modalities, an autoencoder is used to associate the feature projection with the semantic code vector. In addition, regularization and sparsity constraints are applied to the low-dimensional matrices to balance the reconstruction errors. The high-dimensional data are transformed into semantic code vectors, and the different models are constrained by parameters to achieve denoising. Experiments on four multi-modal data sets show that query results are improved and effective cross-modal retrieval is achieved. CSAEC can also be applied to related areas of computing and networking such as deep learning and subspace learning. The model overcomes the obstacles of traditional methods by using deep learning to convert multi-modal data into abstract representations, achieving better accuracy and recognition results.
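As a rough illustration of the idea described in this abstract (not the authors' implementation), the sketch below pairs one autoencoder per modality with a consensus term that pulls matched samples toward the same low-dimensional semantic code, plus an L1 sparsity penalty. All layer sizes, loss weights, and names here are illustrative assumptions.

```python
# Hypothetical sketch of a shared-semantic-code autoencoder for two modalities.
# Architecture and loss weights are assumptions, not the paper's values.
import torch
import torch.nn as nn

class ModalityAutoencoder(nn.Module):
    """Encodes one modality into a low-dimensional semantic code and decodes it back."""
    def __init__(self, input_dim, code_dim):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 256), nn.ReLU(),
                                     nn.Linear(256, code_dim))
        self.decoder = nn.Sequential(nn.Linear(code_dim, 256), nn.ReLU(),
                                     nn.Linear(256, input_dim))

    def forward(self, x):
        code = self.encoder(x)
        recon = self.decoder(code)
        return code, recon

def consensus_loss(img_feat, txt_feat, img_ae, txt_ae, sparsity=1e-3, consensus=1.0):
    """Reconstruction + consensus (shared code) + sparsity terms for paired samples."""
    img_code, img_recon = img_ae(img_feat)
    txt_code, txt_recon = txt_ae(txt_feat)
    recon = nn.functional.mse_loss(img_recon, img_feat) + \
            nn.functional.mse_loss(txt_recon, txt_feat)
    # Paired image/text samples should map to similar semantic codes.
    agree = nn.functional.mse_loss(img_code, txt_code)
    # L1 penalty keeps the low-dimensional codes sparse.
    sparse = img_code.abs().mean() + txt_code.abs().mean()
    return recon + consensus * agree + sparsity * sparse
```

At retrieval time, queries from either modality would be encoded into the shared code space and matched by nearest-neighbour search over the codes.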


Author(s):  
Nikita A. Pchelin ◽  
◽  
Mohammed A. Y. Damdam ◽  
Ali S.A. Al-Mesri ◽  
Aleksandr A. Brynza ◽  
...  

The use of noise-tolerant coding in modern communication systems remains the only means of increasing the energy efficiency of such systems. This gain grows when the receiver of the communication system is able to correct errors of high multiplicity. At the same time, existing algebraic and iterative decoding procedures do not give a noticeable effect toward this goal and lead to large time costs and an exponential increase in the complexity of the decoder processor. The reason is the passive role of the receiver, which, when processing each code vector, merely records what happened in the communication channel and, as a rule, tries to identify the error vector by compiling and then solving a system of linear equations. Permutation decoding systems are an exception: by selecting and using reliable symbols from the received vector, they simulate the operation of the transmitter and compare the (almost error-free) result of such re-encoding with the received combination [1, 2]. As the influence of destructive factors grows, such methods become ineffective. A natural question arises: can modern neural network techniques improve code vector recognition at acceptable machine time cost, and thereby increase the energy characteristics of communication systems?
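As a rough illustration of the re-encoding idea mentioned above (selecting reliable symbols and simulating the transmitter), the sketch below performs order-0 reliability-based re-encoding on a toy (7,4) Hamming code. The generator matrix, LLR sign convention, and function names are assumptions for illustration; this is not the decoder studied in the paper.

```python
# Reliability-based re-encoding sketch: pick the most reliable, linearly
# independent received positions, hard-decide them, and re-encode a candidate
# codeword from those positions alone.
import numpy as np

# Toy systematic Hamming(7,4) generator matrix (illustrative).
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)

def reencode_decode(llr, G):
    """Order-0 re-encoding from the most reliable information set."""
    k, n = G.shape
    hard = (llr < 0).astype(np.uint8)      # hard decisions (positive LLR -> bit 0)
    order = np.argsort(-np.abs(llr))       # positions sorted by reliability
    Gp = G[:, order].copy()                # generator in reliability order
    basis, row = [], 0
    # Gaussian elimination over GF(2): greedily pick reliable independent columns.
    for col in range(n):
        pivot = next((r for r in range(row, k) if Gp[r, col]), None)
        if pivot is None:
            continue
        Gp[[row, pivot]] = Gp[[pivot, row]]
        for r in range(k):
            if r != row and Gp[r, col]:
                Gp[r] ^= Gp[row]
        basis.append(col)
        row += 1
        if row == k:
            break
    # The chosen columns now form an identity, so the candidate codeword agrees
    # with the hard decisions on the reliable information set.
    info = hard[order][basis]
    cand_perm = info @ Gp % 2
    cand = np.empty(n, dtype=np.uint8)
    cand[order] = cand_perm
    return cand, hard
```

Comparing the returned candidate with the hard-decision vector shows which of the less reliable received positions the re-encoding step has overruled.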


2019 ◽  
Vol 11 (6) ◽  
pp. 619 ◽  
Author(s):  
Chengming Zhang ◽  
Yingjuan Han ◽  
Feng Li ◽  
Shuai Gao ◽  
Dejuan Song ◽  
...  

When the spatial distribution of winter wheat is extracted from high-resolution remote sensing imagery using convolutional neural networks (CNN), field-edge results are usually rough, lowering overall accuracy. This study proposed a new per-pixel classification model combining a CNN and a Bayesian model (CNN-Bayesian model) for improved extraction accuracy. In this model, a feature extractor generates a feature vector for each pixel, an encoder transforms the feature vector of each pixel into a category-code vector, and a two-level classifier uses the difference between elements of the category-probability vector as the confidence value to perform per-pixel classification. The first level determines the category of a pixel with high confidence, and the second level is an improved Bayesian model used to determine the category of low-confidence pixels. The CNN-Bayesian model was trained and tested on Gaofen 2 satellite images. Compared to existing models, our approach improved overall accuracy: the overall accuracies of SegNet, DeepLab, VGG-Ex, and CNN-Bayesian were 0.791, 0.852, 0.892, and 0.946, respectively. Thus, this approach can produce superior results when the spatial distribution of winter wheat is extracted from satellite imagery.
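A minimal sketch of the two-level decision rule described above, using the gap between the two largest class probabilities as the per-pixel confidence. The threshold and the simple prior-reweighting fallback are illustrative assumptions standing in for the paper's improved Bayesian model.

```python
# Two-level per-pixel classification: confident pixels take the CNN argmax,
# low-confidence pixels fall back to a simple Bayesian-style re-weighting.
import numpy as np

def two_level_classify(prob_map, prior, threshold=0.3):
    """prob_map: (H, W, C) per-pixel class probabilities from the CNN.
    prior: (C,) class prior used by the illustrative fallback.
    Returns an (H, W) label map."""
    top2 = np.sort(prob_map, axis=-1)[..., -2:]   # two largest probabilities
    confidence = top2[..., 1] - top2[..., 0]      # gap between them
    labels = prob_map.argmax(axis=-1)             # level 1: confident pixels
    low_conf = confidence < threshold
    # Level 2 (illustrative): re-weight CNN probabilities by a class prior
    # for low-confidence pixels and take the posterior argmax.
    posterior = prob_map[low_conf] * prior
    labels[low_conf] = posterior.argmax(axis=-1)
    return labels
```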


2015 ◽  
Vol 22 (4) ◽  
pp. 494-498 ◽  
Author(s):  
Shahrouz Khalili ◽  
Osvaldo Simeone ◽  
Alexander M. Haimovich
