Printed text documents watermarking based on vertical word shift and word fragments brightness changing

Author(s):  
Dmitry Olegovich Obydenkov ◽  
Alexander Evgenevich Frolov ◽  
Yury Vitalievich Markin ◽  
Stanislav Alexandrovich Fomin ◽  
Boris Vladimirovich Kondrat’ev

This paper describes the results of developing methods for marking text documents represented as raster images. An important feature of the algorithms is the ability to wipe the current document mark and embed another one. The study concerns structural marking algorithms based on vertical word shifts and brightness changes in certain areas of the words. Segmentation tools are used to obtain the document layout, BCH codes for error correction, a likelihood-maximization method for label extraction, and a neural network for recovering perturbed words. Testing has demonstrated the practical applicability of the algorithms under printing and scanning.
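As a rough illustration of the vertical-word-shift idea this abstract describes, the toy sketch below embeds one bit per word by shifting the word's pixel block up or down, and recovers the bit by testing which shift best explains the marked image (a crude stand-in for the paper's likelihood-maximization extraction; none of this reproduces the authors' actual implementation, BCH coding, or brightness-based channel).

```python
import numpy as np

def embed_bit_by_shift(word_img: np.ndarray, bit: int, shift: int = 1) -> np.ndarray:
    """Encode one bit by shifting a word's pixel block up (bit=0) or down (bit=1).

    `word_img` is a 2-D grayscale crop of a single word on a white (255)
    background; rolling moves the word within its bounding box.
    """
    direction = -shift if bit == 0 else shift
    return np.roll(word_img, direction, axis=0)

def extract_bit_by_shift(marked: np.ndarray, original: np.ndarray) -> int:
    """Recover the bit by checking which candidate shift of the original
    best matches the marked image (smallest absolute pixel difference)."""
    up = np.roll(original, -1, axis=0)
    down = np.roll(original, 1, axis=0)
    err_up = np.abs(marked.astype(int) - up.astype(int)).sum()
    err_down = np.abs(marked.astype(int) - down.astype(int)).sum()
    return 0 if err_up < err_down else 1
```

In the real setting the extraction must cope with print-and-scan distortion, which is why the paper resorts to likelihood maximization and BCH error correction rather than exact matching.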

2019 ◽  
Vol 8 (3) ◽  
pp. 6634-6643

Opinion mining and sentiment analysis are valuable for extracting useful subjective information from text documents. Predicting customer opinion on Amazon products has several benefits, such as reducing customer churn, agent monitoring, handling multiple customers, tracking overall customer satisfaction, quick escalation, and upselling opportunities. However, sentiment analysis is a challenging task for researchers: extracting user sentiment from large datasets is difficult because of their unstructured nature, slang, misspellings, and abbreviations. To address this problem, a new system is developed in this research study. The proposed system comprises four major phases: data collection, pre-processing, keyword extraction, and classification. Initially, the input data were collected from the Amazon customer review dataset. After collecting the data, pre-processing was carried out to enhance the quality of the collected data. The pre-processing phase comprises three steps: lemmatization, review spam detection, and removal of stop words and URLs. Then, an effective topic-modelling approach, Latent Dirichlet Allocation (LDA), along with modified Possibilistic Fuzzy C-Means (PFCM), was applied to extract the keywords and to identify the relevant topics. The extracted keywords were classified into three classes (positive, negative, and neutral) by applying an effective machine learning classifier: a Convolutional Neural Network (CNN). The experimental outcome showed that the proposed system improved sentiment-analysis accuracy by 6-20% relative to existing systems.
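The pre-processing phase described above (stop-word and URL removal) can be sketched in a few lines; this is a minimal illustrative version with a tiny stop list, omitting the lemmatization and review-spam-detection steps the pipeline also includes.

```python
import re

# A tiny illustrative stop list, not the full one a real system would use.
STOP_WORDS = {"the", "a", "an", "is", "are", "was", "were",
              "to", "of", "and", "in", "it", "this"}

def clean_review(text: str) -> list[str]:
    """Strip URLs, lower-case, tokenize, and drop stop words."""
    text = re.sub(r"https?://\S+", "", text)       # remove URLs
    tokens = re.findall(r"[a-z']+", text.lower())  # crude tokenizer
    return [t for t in tokens if t not in STOP_WORDS]
```

The cleaned token lists would then feed the LDA/PFCM keyword-extraction stage before classification.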


Micromachines ◽  
2021 ◽  
Vol 12 (8) ◽  
pp. 879
Author(s):  
Ruiquan He ◽  
Haihua Hu ◽  
Chunru Xiong ◽  
Guojun Han

Multilevel-per-cell technology and continued process scaling significantly improve the storage density of NAND flash memory, but also bring a challenge: data reliability degrades due to serious noise. To ensure data reliability, many noise-mitigation technologies have been proposed; however, each mitigates only one of the noise sources of the NAND flash memory channel. In this paper, we consider all the main noise sources and present a novel neural-network-assisted error correction (ANNAEC) scheme to increase the reliability of multi-level cell (MLC) NAND flash memory. To avoid using retention time as an input parameter of the neural network, we propose a relative log-likelihood ratio (LLR) to estimate the actual LLR. We then transform bit detection into a clustering problem and employ a neural network to learn the error characteristics of the NAND flash memory channel, so that the trained network is optimized for bit-error detection. Simulation results show that the proposed scheme significantly improves bit-error detection performance and increases the endurance of NAND flash memory.
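For context on the LLR quantity the abstract revolves around, the sketch below computes a textbook LLR under a simplified Gaussian threshold-voltage model; this is only a stand-in for the flash channel, and the paper's "relative LLR" (which avoids retention time as an explicit input) is more involved.

```python
import math

def gaussian_llr(y: float, mu0: float, mu1: float, sigma: float) -> float:
    """LLR of bit 0 vs bit 1 for a read voltage y, assuming the two bit
    values produce Gaussian threshold voltages N(mu0, sigma) and N(mu1, sigma).

    LLR = log P(y | bit=0) / P(y | bit=1); positive values favour bit 0.
    The shared Gaussian normalisation constant cancels in the ratio.
    """
    return ((y - mu1) ** 2 - (y - mu0) ** 2) / (2 * sigma ** 2)
```

An LDPC or BCH decoder downstream would consume these LLRs as soft inputs, which is why estimating them well matters for endurance.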


2021 ◽  
pp. 1-11
Author(s):  
Yuan Zou ◽  
Daoli Yang ◽  
Yuchen Pan

Gross domestic product (GDP) is the most widely used tool for measuring the overall state of a country’s economic activity within a specified period of time. More accurate GDP forecasting, based on standardized procedures with known samples available, helps guide the decision making of governments, enterprises, and individuals. This study aims to enhance GDP forecasting accuracy given a sample of historical data. To this end, the study incorporates an artificial neural network (ANN) into a grey Markov chain model to correct the residual error, thus developing a novel hybrid model called the grey Markov chain with ANN error correction (abbreviated GMCM_ANN), which combines the advantages of the three components for nonlinear forecasting with limited sample sizes. The new model was tested on historical data comprising the original GDP series of the United States, Japan, China, and India from 2000 to 2019, and also provides predictions of the four countries’ GDP up to 2022. Four models, including the autoregressive integrated moving average model, the back-propagation neural network, the traditional GM(1,1), and the grey Markov chain model, serve as benchmarks for comparing predictive accuracy and scope of application. The results are satisfactory and indicate the superior forecasting performance of the proposed approach in terms of accuracy and universality.
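The GM(1,1) grey model that the hybrid builds on is compact enough to sketch. Below is a plain least-squares GM(1,1) forecaster; the Markov-chain state transition and the ANN residual correction that distinguish the paper's GMCM_ANN are deliberately not reproduced here.

```python
import numpy as np

def gm11_forecast(x0: np.ndarray, steps: int = 1) -> np.ndarray:
    """Classic GM(1,1): fit x0(k) + a*z(k) = b on the accumulated series,
    then forecast via the exponential time-response function."""
    n = len(x0)
    x1 = np.cumsum(x0)                                # accumulated (1-AGO) series
    z = 0.5 * (x1[1:] + x1[:-1])                      # background values
    B = np.column_stack([-z, np.ones(n - 1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]  # developing coeff, grey input
    k = np.arange(n + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a # time-response function
    x0_hat = np.diff(x1_hat, prepend=0.0)             # inverse accumulation
    return x0_hat[n:]                                 # out-of-sample forecasts
```

Because GM(1,1) is exact only for near-exponential series, residual-correction layers (Markov chain, ANN) are what let the hybrid track nonlinear GDP dynamics.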


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Shaobo Lu

Based on the BP neural network and the ARIMA model, this paper predicts the nonlinear residual of GDP and adds the predicted values of the two models to obtain the model’s final predicted value. First, the focus is on the ARMA model for univariate time series. However, since real-world forecasts are often affected by many factors, the ARIMAX model for multivariate time series is then introduced. In the prediction process, the network structure and the various parameters of the neural network are not determined in a systematic way, so the operation of the neural network is affected by many factors. Each forecasting method has its scope of application, as well as weaknesses arising from the characteristics of its own model. Second, this paper proposes an effective combination method based on the characteristics of GDP and builds a BP neural network price-prediction model with an improved algorithm. Research on combining GDP prediction models has mostly focused on weighted forms; this article proposes another combination, namely error correction. Based on the price characteristics, we determine an appropriate number of hidden-layer nodes and build a BP neural network price-prediction model using the improved algorithm. Validation on examples shows that the error-corrected GDP forecasting model outperforms the weighted GDP forecasting model, indicating that error correction is also an effective way to combine forecasting methods. The BP neural network forecasts have lower errors: for monthly prices, the relative prediction error is about 2.5%. Compared with the ARIMA model, in daily price prediction the relative error of the BP neural network is 1.5%, lower than the ARIMA model’s 2%.
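The error-correction combination described above, base forecast plus a second model's forecast of the base model's residuals, can be sketched generically. Here a simple AR(1) stands in for the paper's BP network as the residual model; the function names and interface are illustrative, not from the paper.

```python
import numpy as np

def error_corrected_forecast(y: np.ndarray, base_fit: np.ndarray,
                             base_next: float) -> float:
    """Adjust a base model's next-step forecast by forecasting its residuals.

    y         -- observed series
    base_fit  -- base model's in-sample fitted values (same length as y)
    base_next -- base model's forecast for the next step
    """
    resid = y - base_fit
    # AR(1) coefficient on the residuals: phi = <r_t, r_{t-1}> / <r_{t-1}, r_{t-1}>
    phi = np.dot(resid[1:], resid[:-1]) / np.dot(resid[:-1], resid[:-1])
    return base_next + phi * resid[-1]
```

Swapping the AR(1) line for a trained network gives the ARIMA-plus-BP shape of the paper's combination; the point of the scheme is that the second model only has to learn the structure the first one missed.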


2021 ◽  
Vol 3 (4) ◽  
pp. 922-945
Author(s):  
Shaw-Hwa Lo ◽  
Yiqiao Yin

Text classification is a fundamental language task in Natural Language Processing. A variety of sequential models are capable of making good predictions, yet there is a lack of connection between language semantics and prediction results. This paper proposes a novel influence score (I-score), a greedy search algorithm called the Backward Dropping Algorithm (BDA), and a novel feature engineering technique called the “dagger technique”. First, the paper proposes using the novel influence score (I-score) to detect and search for the important language semantics in text documents that are useful for making good predictions in text classification tasks. Next, a greedy search algorithm, the Backward Dropping Algorithm, is proposed to handle long-term dependencies in the dataset. Moreover, the paper proposes a novel feature engineering technique called the “dagger technique” that fully preserves the relationship between the explanatory variable and the response variable. The proposed techniques can be generalized to feed-forward Artificial Neural Networks (ANNs), Convolutional Neural Networks (CNNs), and indeed any neural network. In a real-world application on the Internet Movie Database (IMDB), the proposed methods improve prediction performance, achieving an 81% error reduction compared to popular peer methods that do not implement the I-score and the “dagger technique”.
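The greedy shape of a backward-dropping search is easy to illustrate: repeatedly remove the single feature whose removal most improves a set-quality score, stopping when no removal helps. This is a generic sketch; the paper's BDA specifically uses the I-score as the quality measure, which is not reproduced here.

```python
def backward_dropping(features, score):
    """Greedily drop features while dropping still raises score(subset)."""
    current = list(features)
    best = score(current)
    while len(current) > 1:
        # Evaluate every one-feature removal and keep the best candidate.
        trials = [(score([x for x in current if x != f]), f) for f in current]
        s, f = max(trials)
        if s <= best:            # no removal improves the score: stop
            break
        best = s
        current = [x for x in current if x != f]
    return current
```

Each iteration costs one score evaluation per remaining feature, so the search is quadratic in the feature count, cheap compared to exhaustive subset search.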


2020 ◽  
Vol 2 (1) ◽  
Author(s):  
Savvas Varsamopoulos ◽  
Koen Bertels ◽  
Carmen G. Almudever

Abstract There has been a rise in decoding quantum error correction codes with neural network–based decoders, due to the good decoding performance achieved and adaptability to any noise model. However, the main challenge is scalability to larger code distances due to an exponential increase of the error syndrome space. Note that successfully decoding the surface code under realistic noise assumptions will limit the size of the code to less than 100 qubits with current neural network–based decoders. Such a problem can be tackled by a distributed way of decoding, similar to the renormalization group (RG) decoders. In this paper, we introduce a decoding algorithm that combines the concept of RG decoding and neural network–based decoders. We tested the decoding performance under depolarizing noise with noiseless error syndrome measurements for the rotated surface code and compared against the blossom algorithm and a neural network–based decoder. We show that a similar level of decoding performance can be achieved between all tested decoders while providing a solution to the scalability issues of neural network–based decoders.
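The core inference task all the decoders above share, mapping a measured error syndrome to a correction, is shown here at the smallest possible scale: a lookup-table decoder for the 3-bit repetition code. Surface-code decoders (blossom, RG, and the neural decoders discussed) solve this same problem, but the syndrome space grows exponentially with code distance, which is exactly the scalability issue the paper targets.

```python
def repetition_decode(syndrome):
    """Decode the 3-bit repetition code from its two parity checks.

    syndrome = (b0 XOR b1, b1 XOR b2) uniquely identifies a single
    flipped bit; (0, 0) means no correction is needed.
    Returns the index of the bit to flip, or None.
    """
    table = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}
    return table[syndrome]
```

A neural decoder replaces this explicit table with a learned function, and an RG-style decomposition keeps each learned piece small, the combination the paper explores.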


2020 ◽  
Vol 9 (2) ◽  
pp. 74
Author(s):  
Eric Hsueh-Chan Lu ◽  
Jing-Mei Ciou

With the rapid development of surveying and spatial information technologies, more and more attention has been given to positioning. In outdoor environments, people can easily obtain positioning services through global navigation satellite systems (GNSS). In indoor environments, the GNSS signal is often lost, while other positioning approaches, such as dead reckoning and wireless signals, face accumulated errors and signal interference. Therefore, this research uses images to realize a positioning service. The main idea is to build a model relating indoor scene images to their coordinate information and to determine position by image feature matching. Based on the PoseNet architecture, images at various sizes are fed into a 23-layer convolutional neural network trained end-to-end for location identification, regressing the camera’s three-dimensional position vector. The experimental data were taken from an underground parking lot and the Palace Museum. Preliminary results show that the proposed method improves indoor positioning accuracy by about 20% to 30%. In addition, this paper discusses other architectures, field sizes, camera parameters, and error corrections for this neural network system. Preliminary results also show that the proposed angle-error-correction method improves positioning by about 20%.
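For readers unfamiliar with PoseNet-style training, the loss typically combines a Euclidean position error with a weighted quaternion orientation error. The sketch below is the textbook form; the weighting constant and function interface are illustrative assumptions, not values from this paper.

```python
import math

def pose_loss(pred_xyz, true_xyz, pred_q, true_q, beta: float = 100.0) -> float:
    """PoseNet-style loss: position error + beta * orientation error.

    beta balances metres against unit-quaternion distance; 100 is a
    common ballpark starting point, tuned per scene in practice.
    """
    pos_err = math.dist(pred_xyz, true_xyz)
    # Normalise the predicted quaternion before comparing.
    norm = math.sqrt(sum(c * c for c in pred_q))
    q = [c / norm for c in pred_q]
    ori_err = math.sqrt(sum((a - b) ** 2 for a, b in zip(q, true_q)))
    return pos_err + beta * ori_err
```

The angle-error correction the paper reports acts on exactly this orientation term, which is why it translates into a positioning gain.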

