vector dimension
Recently Published Documents

Total documents: 16 (five years: 3) · H-index: 4 (five years: 0)

Author(s): V. V. Pichkur, D. A. Mazur, V. V. Sobchuk

The paper analyzes the controllability of a linear discrete system whose state vector dimension changes. We give necessary and sufficient conditions for controllability and design a control that guarantees steering such a system to an arbitrary final state. This provides functional stability of technological processes described by linear discrete systems with changing state vector dimension.
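
The flavor of such controllability conditions can be illustrated with the classical Kalman rank test for a fixed-dimension linear discrete system `x[k+1] = A x[k] + B u[k]` (a minimal sketch only; the paper's conditions generalize this to a state vector whose dimension changes between steps):

```python
import numpy as np

def is_controllable(A, B):
    """Kalman rank test: the system is controllable iff the
    controllability matrix [B, AB, ..., A^(n-1) B] has full rank n."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    C = np.hstack(blocks)
    return np.linalg.matrix_rank(C) == n

# Toy example (not from the paper): a discrete double integrator.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.array([[0.0],
              [1.0]])
print(is_controllable(A, B))  # True: input reaches both states through A
```

When the test passes, a control sequence steering the system to any final state can be computed from the same controllability matrix, which is the fixed-dimension analogue of the construction the paper carries out.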


2020, Vol 2020, pp. 1-10
Author(s): Chaochen Wang, Yuming Bo, Changhui Jiang

Global Positioning System (GPS) and strap-down inertial navigation system (SINS) are recognized as highly complementary and are widely employed in the community. The GPS provides precise navigation solutions without divergence, but its signals can be blocked and attenuated. The SINS is a totally self-contained navigation system that is hardly disturbed. A GPS/SINS integration system can therefore utilize the advantages of both and provide more reliable navigation solutions. According to the data fusion strategy, GPS/SINS integrated systems are divided into three modes: loose, tight, and ultratight integration (LI, TI, and UTC). In the loose integration mode, the position and velocity differences between the GPS and SINS compose the measurement vector, whose dimension has nothing to do with the number of available satellites. In the tight and ultratight modes, however, the differences of pseudoranges and pseudorange rates between the GPS and SINS compose the measurement vector, whose dimension increases with the number of available satellites. In addition, compared with the loose integration mode, clock bias and drift are included in the integration state model. These two characteristics magnify the computation load of the tight and ultratight modes. In this paper, a new efficient filter model was proposed and evaluated. Two schemes were included in the design to reduce the computation load. Firstly, differences between pseudorange measurements were formed, by which clock bias and drift were excluded from the integration state model; this step reduces the dimension of the state vector. Secondly, the integration filter was divided into two subfilters, a pseudorange subfilter and a pseudorange rate subfilter, with a federated filter estimating the state errors optimally. In this second step, the two subfilters can run in parallel, and the measurement vector is divided into two subvectors of lower dimension. A simulation implemented in MATLAB was conducted to evaluate the performance of the new efficient integration method in UTC. The results showed that the method reduces the computation load while leaving the navigation solutions almost unchanged.
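
The clock-elimination idea behind the first scheme can be sketched with toy numbers (hypothetical values, not the paper's filter): the receiver clock bias enters every pseudorange identically, so differencing against one reference satellite cancels it exactly, leaving only measurement noise and removing clock bias and drift from the state vector.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical geometric ranges to 4 satellites (meters).
true_ranges = np.array([2.1e7, 2.3e7, 2.0e7, 2.5e7])
clock_bias = 1.5e5                      # receiver clock bias, in meters
noise = rng.normal(0.0, 1.0, size=4)    # pseudorange measurement noise

# The clock bias is common to every channel.
pseudoranges = true_ranges + clock_bias + noise

# Single-difference against satellite 0: the clock term cancels exactly.
sd = pseudoranges[1:] - pseudoranges[0]
expected = true_ranges[1:] - true_ranges[0]
print(np.abs(sd - expected).max())  # only meter-level noise remains
```

The differenced measurements no longer depend on the clock states, which is why they can be dropped from the integration state model, shrinking the state vector before the federated subfilters run.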


2020, Vol 7 (1), pp. 140
Author(s): Dian Chusnul Hidayati, Said Al Faraby, Adiwijaya Adiwijaya

Hadith is the second source of Islamic law after the Al-Quran, making it important to study. However, there are some difficulties in learning hadith, such as determining which hadith belongs to the topic of suggestions, prohibitions, or information. This obstructs the hadith learning process, especially for Muslims. It is therefore necessary to classify hadiths into the topics of suggestions, prohibitions, information, and combinations of the three, which is called a multi-label topic. The classification can be done with K-Nearest Neighbor (KNN), one of the best methods in the Vector Space Model and the simplest yet quite effective one. However, KNN struggles with high vector dimensions, which leads to long classification times. For that reason, this research classifies Sahih Bukhari's hadiths into the topics of suggestions, prohibitions, and information using the Latent Semantic Analysis - K-Nearest Neighbor (LSA-KNN) method. The Binary Relevance method is also employed to process the multi-label data. The results show that LSA-KNN achieves a performance of 90.28% with a computation time of 19 minutes 21 seconds, while KNN achieves 90.23% in 37 minutes 6 seconds, which means that LSA-KNN performs better than KNN.
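
The LSA-KNN pipeline can be sketched in miniature (a toy corpus with invented counts, not the hadith data): a truncated SVD projects the term-document matrix into a low-dimensional latent space, and cosine-similarity KNN then votes in that reduced space, which is what cuts the classification time.

```python
import numpy as np

def fit_lsa(X, k):
    """Truncated SVD of a documents-x-terms matrix: keep the k
    strongest latent 'concepts' (rows of V^T)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:k]

def knn_predict(train_Z, train_y, query_z, k=3):
    """Majority vote among the k cosine-nearest training documents."""
    a = train_Z / (np.linalg.norm(train_Z, axis=1, keepdims=True) + 1e-12)
    b = query_z / (np.linalg.norm(query_z) + 1e-12)
    sims = a @ b
    top = np.argsort(sims)[::-1][:k]
    labels, counts = np.unique(train_y[top], return_counts=True)
    return labels[np.argmax(counts)]

# Toy term-document counts: two topics with mostly disjoint vocabularies.
X = np.array([[3, 2, 0, 0],
              [2, 3, 1, 0],
              [0, 0, 3, 2],
              [0, 1, 2, 3]], dtype=float)
y = np.array([0, 0, 1, 1])

Vk = fit_lsa(X, 2)            # LSA: 4 terms -> 2 latent dimensions
Z = X @ Vk.T                  # project training documents
q = np.array([2.0, 2.0, 0.0, 0.0]) @ Vk.T   # project a query document
print(knn_predict(Z, y, q, k=3))  # 0: the query shares topic-0 vocabulary
```

For multi-label data, Binary Relevance would simply run one such binary classifier per topic (suggestion / prohibition / information) over the same latent vectors.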


2018, Vol 16 (05), pp. 1850021
Author(s): Yanbu Guo, Bingyi Wang, Weihua Li, Bei Yang

Protein secondary structure prediction (PSSP) is an important research field in bioinformatics. The features of a protein sequence can be represented as a matrix with an amino-acid residue (time-step) dimension and a feature vector dimension. Common approaches to predicting secondary structure focus only on the amino-acid residue dimension, but the feature vector dimension may also contain useful information for PSSP. To integrate the information on both dimensions of the matrix, we propose a hybrid deep learning framework, the two-dimensional convolutional bidirectional recurrent neural network (2C-BRNN), for improving the accuracy of 8-class secondary structure prediction. The proposed hybrid framework extracts discriminative local interactions between amino-acid residues with two-dimensional convolutional neural networks (2DCNNs), and then captures long-range interactions between amino-acid residues with bidirectional gated recurrent units (BGRUs) or bidirectional long short-term memory (BLSTM). Specifically, the proposed 2C-BRNN framework consists of four models: 2DConv-BGRUs, 2DCNN-BGRUs, 2DConv-BLSTM and 2DCNN-BLSTM. The 2DConv- models contain only two-dimensional (2D) convolution operations, while the 2DCNN- models contain both 2D convolutional and pooling operations. Experiments are conducted on four public datasets. The experimental results show that the proposed 2DConv-BLSTM model performs significantly better than the benchmark models. Furthermore, the experiments demonstrate that the proposed models extract more meaningful features from the protein matrix, and that the feature vector dimension is indeed useful for PSSP. The code and datasets are available at https://github.com/guoyanb/JBCB2018/ .
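
The point about convolving over both dimensions can be made with a plain numpy sketch (illustrative shapes only, not the authors' 2C-BRNN): a 2D kernel sliding over a (residues × features) matrix mixes information along the residue dimension *and* the feature vector dimension, whereas a 1D convolution would slide over residues only.

```python
import numpy as np

def conv2d_valid(X, K):
    """Valid-mode 2D cross-correlation of matrix X with kernel K."""
    n, m = X.shape
    kh, kw = K.shape
    out = np.empty((n - kh + 1, m - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(X[i:i + kh, j:j + kw] * K)
    return out

rng = np.random.default_rng(1)
X = rng.normal(size=(10, 21))  # 10 residues x 21 hypothetical feature columns
K = rng.normal(size=(3, 3))    # kernel spans 3 residues AND 3 feature channels
F = conv2d_valid(X, K)
print(F.shape)  # (8, 19): both dimensions are convolved over
```

In the full framework, stacks of such 2D feature maps (with or without pooling, distinguishing the 2DConv- and 2DCNN- variants) would then feed the bidirectional recurrent layers along the residue axis.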


2018, Vol 18 (03), pp. 1850014
Author(s): Soukaina Benchaou, M'Barek Nasri, Ouafae El Melhaoui

Handwritten and printed character recognition is an interesting area in image processing and pattern recognition. It consists of several phases: preprocessing, feature extraction and classification. The feature extraction phase is carried out by different techniques: zoning, profile projection, and improved Freeman coding. A high-dimensional feature vector can increase the error rate and the training time. To solve this problem, we present in this paper a new attribute selection method based on the evolution strategy, in order to reduce the feature vector dimension and to improve the recognition rate. The proposed model has been applied to recognize numerals; it obtained better results and showed more robustness than the system without feature selection.
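
A minimal sketch of evolution-strategy feature selection, assuming a simple (1+1)-ES on a binary keep/drop mask with a toy fitness (class separability minus a per-feature cost); the paper's exact operators and recognition-rate fitness are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(2)

def fitness(mask, X, y):
    """Toy fitness: between-class mean distance on the kept features,
    minus a small cost per kept feature (pressure to shrink the vector)."""
    if mask.sum() == 0:
        return -np.inf
    Xs = X[:, mask]
    mu0, mu1 = Xs[y == 0].mean(axis=0), Xs[y == 1].mean(axis=0)
    return np.linalg.norm(mu0 - mu1) - 0.05 * mask.sum()

def select_features(X, y, generations=200, flip_rate=0.1):
    """(1+1)-ES: mutate the mask by random bit flips, keep the child
    whenever it is at least as fit as the parent."""
    d = X.shape[1]
    mask = rng.random(d) < 0.5
    best = fitness(mask, X, y)
    for _ in range(generations):
        child = mask ^ (rng.random(d) < flip_rate)
        f = fitness(child, X, y)
        if f >= best:
            mask, best = child, f
    return mask

# Toy data: only the first 3 of 10 features actually separate the classes.
X = rng.normal(size=(100, 10))
y = (X[:, :3].sum(axis=1) > 0).astype(int)
X[:, :3] += y[:, None] * 2.0
mask = select_features(X, y)
print(mask.sum(), "features kept")
```

A classifier trained on `X[:, mask]` then sees a shorter feature vector, which is exactly the reduction the selection system provides before recognition.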


2017, Vol 2017, pp. 1-17
Author(s): Hongyin Xiang, Jinsha Yuan, Sizu Hou

Pixel-based pixel-value-ordering (PPVO) has been used in reversible data hiding to achieve large embedding capacity and high-fidelity marked images. The original PPVO introduced an effective prediction strategy that works in a pixel-by-pixel manner. This paper extends PPVO and proposes an obtuse angle prediction (OAP) scheme, in which each pixel is predicted from context pixels with a better distribution. Moreover, to evaluate prediction power, a mathematical model is constructed and three factors, the context vector dimension, the maximum prediction angle, and the current pixel location, are analyzed in detail. Experimental results show that the proposed OAP approach achieves higher PSNR values than PPVO and some other state-of-the-art methods, especially at moderate and large payload sizes.
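
The pixel-by-pixel prediction idea can be sketched with a toy example (a common PVO-style predictor, not the paper's obtuse-angle scheme; PPVO/OAP refine which neighbors form the context and how it is shaped):

```python
import numpy as np

def predict_from_context(context):
    """Predict the current pixel from its sorted context pixels.
    Here we use the context maximum, a typical PVO-style choice."""
    c = np.sort(np.asarray(context))
    return c[-1]

# Hypothetical 3x3 patch; we predict the center pixel from its 4-neighbors.
img = np.array([[52, 53, 54],
                [53, 55, 56],
                [54, 56, 90]])
ctx = [img[0, 1], img[1, 0], img[1, 2], img[2, 1]]  # context vector (dim 4)
pred = predict_from_context(ctx)
err = int(img[1, 1]) - int(pred)
print(pred, err)  # 56, -1
```

In reversible data hiding, small prediction errors like this `-1` are the ones expanded or shifted to carry payload bits, so a context with better distribution (the OAP contribution) directly improves capacity and PSNR.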


2015, Vol 15 (12), pp. 7039-7048
Author(s): A. J. Turner, D. J. Jacob

Abstract. Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.
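
Method (1), grid coarsening, is the simplest of the three and can be sketched directly (toy numbers; in the paper the state vector is a native-resolution emission grid): a fixed aggregation matrix averages adjacent native-resolution elements, reducing the state dimension at the cost of imposing prior relationships within each block.

```python
import numpy as np

def coarsen(x, block):
    """Merge adjacent state vector elements by block-averaging.
    Returns the reduced state and the aggregation matrix W."""
    n = x.size
    m = n // block
    W = np.zeros((m, n))
    for i in range(m):
        W[i, i * block:(i + 1) * block] = 1.0 / block
    return W @ x, W

x_native = np.arange(8, dtype=float)       # native-resolution state (n = 8)
x_coarse, W = coarsen(x_native, block=2)   # reduced state (n = 4)
print(x_coarse)  # [0.5 2.5 4.5 6.5]
```

Scanning the block size (and comparing against PCA clustering or the GMM/RBF projection) is what lets the aggregation error be traded against the smoothing error to pick the state vector dimension.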



