Handwriting-Assistant

Author(s):  
Yanling Bu ◽  
Lei Xie ◽  
Yafeng Yin ◽  
Chuyu Wang ◽  
Jingyi Ning ◽  
...  

Pen-based handwriting has become one of the major human-computer interaction methods. Traditional approaches either require writing on a specific supporting device such as a touch screen, or limit the use of the pen to pure rotation or translation. In this paper, we propose Handwriting-Assistant, which captures the free handwriting of ordinary pens on regular planes with mm-level accuracy. By attaching an inertial measurement unit (IMU) to the pen tail, we can infer handwriting on a notebook, blackboard, or other plane. In particular, we build a generalized writing model that comprehensively correlates the rotation and translation of the IMU with the displacement of the pen tip, so that we can infer the tip trace accurately. Further, to display the effective handwriting during a continuous writing process, we use a principal component analysis (PCA) based method to detect the candidate writing plane, and then exploit the distance variation of each segment relative to that plane to distinguish on-plane strokes. Moreover, our solution applies to other rigid bodies, enabling smart devices with embedded IMUs to act as handwriting tools. Experimental results show that our approach captures handwriting with high accuracy: the average tracking error is 1.84 mm for letters of about 2 cm × 1 cm, and the average character recognition rate of recovered single letters reaches 98.2% of the ground truth recorded by a touch screen.
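
The abstract does not give implementation details, but the PCA-based writing-plane detection and the on-plane stroke test it describes can be illustrated with a short sketch. The Python snippet below is a minimal illustration under assumed conventions (points in millimetres, a hand-picked distance threshold, synthetic data): it fits a plane to 3-D tip-trace points via PCA and labels each trajectory segment as on-plane or off-plane.

import numpy as np

def fit_plane_pca(points):
    # Fit a plane to 3-D tip-trace points via PCA: the plane passes through
    # the centroid and its normal is the direction of least variance.
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]
    return centroid, normal / np.linalg.norm(normal)

def on_plane_strokes(segments, centroid, normal, dist_thresh=1.5):
    # Label each trajectory segment as on-plane (pen down) or off-plane.
    # dist_thresh is in the same unit as the points (mm here) and is an
    # assumed value, not taken from the paper.
    labels = []
    for seg in segments:
        d = np.abs((seg - centroid) @ normal).mean()  # mean point-to-plane distance
        labels.append(bool(d < dist_thresh))
    return labels

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    xy = rng.uniform(0, 20, size=(200, 2))
    flat = np.c_[xy, rng.normal(0, 0.3, 200)]        # strokes near the plane z = 0
    lift = np.c_[xy[:50], rng.uniform(5, 15, 50)]    # pen lifted off the plane
    centroid, normal = fit_plane_pca(flat)
    print(on_plane_strokes([flat[:100], lift, flat[100:]], centroid, normal))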


Author(s):  
Abhisek Sethy ◽  
Prashanta Kumar Patra

Offline handwritten recognition of Odia characters has received attention in the last few years. Although recent research shows a large body of work on other languages, comparatively little has been carried out on Odia character recognition. Most Odia characters are round in shape and similar in orientation and size, which increases the ambiguity among characters. This chapter harnesses the rectangular histogram of oriented gradients (R-HOG) for feature extraction, along with principal component analysis (PCA). This gradient-based approach produces relevant features for each character in the proposed model and helps to achieve a high recognition rate. Simulation results show that the SVM classifier performs better than the quadratic classifier (QC): SVM achieves a recognition rate of 98.8% and QC achieves 96.8%. In addition, the authors perform 10-fold cross-validation to make the system more robust.
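
As an illustration of the R-HOG plus PCA plus SVM pipeline described above (not the authors' exact configuration: cell size, block size, number of principal components, and SVM parameters are assumptions, and the data are synthetic stand-ins), a minimal Python sketch using scikit-image and scikit-learn could look like this.

import numpy as np
from skimage.feature import hog
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def rhog_features(img, cell=(8, 8), block=(2, 2), bins=9):
    # HOG computed over rectangular cells (R-HOG style descriptor).
    return hog(img, orientations=bins, pixels_per_cell=cell,
               cells_per_block=block, feature_vector=True)

rng = np.random.default_rng(1)
X_img = rng.random((200, 32, 32))     # stand-ins for preprocessed character images
y = rng.integers(0, 10, 200)          # 10 pretend character classes

X = np.array([rhog_features(im) for im in X_img])

# PCA keeps the leading components; an RBF-kernel SVM does the classification.
model = make_pipeline(PCA(n_components=40), SVC(kernel="rbf", C=10))
model.fit(X[:150], y[:150])
print("held-out accuracy:", model.score(X[150:], y[150:]))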


Handwritten character recognition is a challenging area of research in which small improvements can always be achieved, because the irregularity of writing and the variation in shape and orientation across writers affect the recognition rate. In this paper we address the complexity of Odia handwritten character recognition and resolve it with principal component analysis (PCA). We adopt a model that establishes the importance of symmetric axis chords in the recognition of unconstrained handwritten characters. These symmetric axis chords are drawn both row-wise and column-wise between the end points of the character. In addition, we compute statistical features, namely the Euclidean distance and the Hamilton distance from the midpoint of each symmetric chord to the nearest pixel of the character. We also report the angular values from the centroid of the image to the character pixels. The model then applies PCA to the feature set and performs dimensionality reduction, producing what we term the key feature set. A series of experiments was carried out to implement the proposed technique, using standard handwritten databases obtained from various research institutes. Simulation analysis shows that a radial basis function neural network (RBFNN) with a Gaussian kernel achieves a high recognition rate, and a comparison among the classifiers is also reported.
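
A rough sketch of the feature extraction stage described above, under assumed conventions (binary character images, fixed-length padding of the chord features, a 36-bin angle histogram). Only the chord-midpoint distances, the centroid-angle statistics, and the PCA step are shown; the RBFNN classifier is omitted.

import numpy as np
from sklearn.decomposition import PCA

def chord_and_centroid_features(img, n_bins=36):
    # Row/column chord midpoints, their Euclidean distance to the nearest
    # character pixel, and a histogram of angles from the centroid to the
    # character pixels (an illustrative approximation of the described features).
    ys, xs = np.nonzero(img)
    pts = np.c_[ys, xs].astype(float)
    feats = []
    # Row-wise chords: leftmost to rightmost pixel in each occupied row.
    for r in np.unique(ys):
        cols = xs[ys == r]
        mid = np.array([r, (cols.min() + cols.max()) / 2.0])
        feats.append(np.min(np.linalg.norm(pts - mid, axis=1)))
    # Column-wise chords, analogously.
    for c in np.unique(xs):
        rows = ys[xs == c]
        mid = np.array([(rows.min() + rows.max()) / 2.0, c])
        feats.append(np.min(np.linalg.norm(pts - mid, axis=1)))
    # Angular values measured from the centroid, summarised as a histogram.
    cy, cx = pts.mean(axis=0)
    angles = np.arctan2(ys - cy, xs - cx)
    hist, _ = np.histogram(angles, bins=n_bins, range=(-np.pi, np.pi))
    return np.r_[np.pad(np.asarray(feats), (0, 64))[:64], hist]

rng = np.random.default_rng(2)
imgs = rng.random((100, 32, 32)) > 0.7            # stand-in binary characters
X = np.array([chord_and_centroid_features(im) for im in imgs])
X_key = PCA(n_components=20).fit_transform(X)     # "key feature set"
print(X_key.shape)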


2011 ◽  
Vol 2 (4) ◽  
pp. 48-82 ◽  
Author(s):  
Ahmed M. Zeki ◽  
Mohamad S. Zakaria ◽  
Choong-Yeun Liong

The cursive nature of Arabic writing is the main challenge to Arabic Optical Character Recognition developers. Many methods to segment Arabic words into characters have been proposed. This paper provides a comprehensive review of the methods proposed by researchers to segment Arabic characters. The segmentation methods are categorized into nine groups based on the techniques used, and the advantages and drawbacks of each are presented and discussed. Most researchers did not report the segmentation accuracy of their methods; instead, they reported the overall recognition rate, which does not reflect the influence of each sub-stage on the final result. The training and testing data used were also not large enough for the results to generalize. The field of Arabic Character Recognition (ACR) needs a standard set of test documents in both image and character formats, together with ground truth and a set of performance evaluation tools, which would enable the performance of different algorithms to be compared. As each method has its strengths, a hybrid segmentation approach is promising. The paper concludes that there is still no perfect segmentation method for ACR and that much opportunity for research remains in this area.


Author(s):  
MYUNG-CHEOL ROH ◽  
SEONG-WHAN LEE

The human face is one of the most common and useful keys to a person's identity. Although a number of face recognition algorithms have been proposed, many researchers believe that the technology must be improved further to overcome the instability caused by variable illumination, expressions, poses, and accessories. To analyze these face recognition algorithms, it is indispensable to collect as much varied data as possible. Face databases such as CMU PIE (USA), FERET (USA), AR Face DB (USA), and XM2VTS (UK) are the representative ones in common use. However, many databases do not provide adequately annotated information on pose angle, illumination angle, illumination color, and ground truth. Most do not include a sufficiently large number of images and video data taken under various environments. Furthermore, the faces in these databases have characteristics different from those of Asians. We have therefore designed and constructed a Korean Face Database (KFDB), which includes not only images but also video clips, ground-truth facial feature point locations, and descriptions of subjects and environmental conditions, so that it can be used for general purposes. In this paper, we present the KFDB, which contains image and video data for 1,920 subjects and was constructed over three years (sessions). We also present recognition results from correlation matching (CM) and principal component analysis (PCA), used as baseline algorithms on CMU PIE and KFDB, to show how the recognition rate changes with image-capture conditions.
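
The two baselines named above, correlation matching (CM) and PCA, can be sketched as follows. The data are random stand-ins for flattened face crops, and the PCA dimensionality and the 1-NN matching rule are assumptions rather than the paper's exact settings.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(3)
# Stand-in gallery/probe sets: flattened 32x32 face crops, 20 subjects.
X_train = rng.random((200, 1024)); y_train = rng.integers(0, 20, 200)
X_probe = rng.random((50, 1024));  y_probe = rng.integers(0, 20, 50)

# Baseline 1: correlation matching (CM) -- nearest gallery image by
# normalised cross-correlation of the raw pixels.
def cm_predict(gallery, labels, probes):
    g = gallery - gallery.mean(1, keepdims=True)
    g /= np.linalg.norm(g, axis=1, keepdims=True)
    p = probes - probes.mean(1, keepdims=True)
    p /= np.linalg.norm(p, axis=1, keepdims=True)
    return labels[(p @ g.T).argmax(axis=1)]

# Baseline 2: PCA (eigenfaces) followed by a 1-NN classifier.
pca = PCA(n_components=50).fit(X_train)
knn = KNeighborsClassifier(n_neighbors=1).fit(pca.transform(X_train), y_train)

print("CM accuracy :", (cm_predict(X_train, y_train, X_probe) == y_probe).mean())
print("PCA accuracy:", (knn.predict(pca.transform(X_probe)) == y_probe).mean())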


Author(s):  
Manish M. Kayasth ◽  
Bharat C. Patel

The entire character recognition system is logically divided into sections such as scanning, pre-processing, classification, processing, and post-processing. In the targeted system, the scanned image first passes through the pre-processing modules, then feature extraction and classification, in order to achieve a high recognition rate. This paper focuses mainly on feature extraction and classification techniques, the methodologies that play the most important role in identifying offline handwritten characters, specifically in the Gujarati language. Feature extraction provides methods by which characters can be identified uniquely and with a high degree of accuracy, and it helps to capture the shape contained in the pattern. Several techniques are available for feature extraction and classification; however, the selection of an appropriate technique for a given input determines the accuracy of recognition.
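
The paper does not fix a particular feature extraction technique, so the sketch below uses zoning (per-zone foreground density), one common choice for offline handwritten character recognition, feeding an SVM classifier. All parameters and the synthetic data are illustrative assumptions.

import numpy as np
from sklearn.svm import SVC

def zoning_features(img, zones=(4, 4)):
    # Split the binary character image into zones and use the
    # foreground-pixel density of each zone as the feature vector.
    h, w = img.shape
    zh, zw = h // zones[0], w // zones[1]
    return np.array([img[r*zh:(r+1)*zh, c*zw:(c+1)*zw].mean()
                     for r in range(zones[0]) for c in range(zones[1])])

rng = np.random.default_rng(4)
imgs = rng.random((120, 32, 32)) > 0.6       # stand-in binarised characters
y = rng.integers(0, 12, 120)
X = np.array([zoning_features(im) for im in imgs])
clf = SVC(kernel="rbf").fit(X[:90], y[:90])
print("accuracy:", clf.score(X[90:], y[90:]))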


2019 ◽  
Vol 13 (2) ◽  
pp. 136-141 ◽  
Author(s):  
Abhisek Sethy ◽  
Prashanta Kumar Patra ◽  
Deepak Ranjan Nayak

Background: In the past decades, handwritten character recognition has received considerable attention from researchers across the globe because of its wide range of applications in daily life. From the literature, it has been observed that there are limited studies on several handwritten Indian scripts, and Odia is one of them. We reviewed some of the patents relating to handwritten character recognition. Methods: This paper deals with the development of an automatic recognition system for offline handwritten Odia characters. Prior to feature extraction, preprocessing is performed on the character images. For feature extraction, the gray-level co-occurrence matrix (GLCM) is first computed from all the sub-bands of the two-dimensional discrete wavelet transform (2D DWT); feature descriptors such as energy, entropy, correlation, homogeneity, and contrast are then calculated from the GLCMs and termed the primary feature vector. To further reduce the feature space and generate more relevant features, principal component analysis (PCA) is employed. Because of their several salient features, random forest (RF) and K-nearest neighbor (K-NN) classifiers have become significant choices for pattern classification tasks; therefore, both RF and K-NN are applied separately in this study to classify the character images. Results: All experiments were performed on a system with Windows 8 (64-bit) and an Intel(R) i7-4770 CPU @ 3.40 GHz. Simulations were conducted in MATLAB 2014a on a standard database, the NIT Rourkela Odia Database. Conclusion: The proposed system has been validated on a standard database. The simulation results, based on a 10-fold cross-validation scenario, demonstrate that the proposed system achieves better accuracy than existing methods while requiring the fewest features. The recognition rates using the RF and K-NN classifiers are 94.6% and 96.4%, respectively.
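
A compact sketch of the feature pipeline described above (2-D DWT sub-bands, GLCM descriptors with a manually computed entropy, PCA, then RF and K-NN). The wavelet, quantisation level, PCA dimensionality, and classifier parameters are assumptions, the data are synthetic, and scikit-image >= 0.19 is assumed for graycomatrix/graycoprops.

import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier

def glcm_dwt_features(img, levels=16):
    # Primary feature vector: GLCM descriptors from each 2-D DWT sub-band.
    cA, (cH, cV, cD) = pywt.dwt2(img, "haar")
    feats = []
    for band in (cA, cH, cV, cD):
        # Quantise the sub-band to `levels` grey levels before building the GLCM.
        q = np.uint8((band - band.min()) / (np.ptp(band) + 1e-9) * (levels - 1))
        glcm = graycomatrix(q, distances=[1], angles=[0],
                            levels=levels, symmetric=True, normed=True)
        p = glcm[:, :, 0, 0]
        entropy = -np.sum(p * np.log2(p + 1e-12))   # not provided by graycoprops
        feats += [graycoprops(glcm, prop)[0, 0]
                  for prop in ("energy", "correlation", "homogeneity", "contrast")]
        feats.append(entropy)
    return np.array(feats)

rng = np.random.default_rng(5)
imgs = rng.random((100, 32, 32))                  # stand-in character images
y = rng.integers(0, 10, 100)
X = PCA(n_components=10).fit_transform(np.array([glcm_dwt_features(im) for im in imgs]))

rf = RandomForestClassifier(n_estimators=100).fit(X[:80], y[:80])
knn = KNeighborsClassifier(n_neighbors=3).fit(X[:80], y[:80])
print("RF :", rf.score(X[80:], y[80:]), "| K-NN:", knn.score(X[80:], y[80:]))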


Author(s):  
SHENG-LIN CHOU ◽  
WEN-HSIANG TSAI

The problem of handwritten Chinese character recognition is solved by matching character stroke segments using an iteration scheme. Length and orientation similarity properties, together with coordinate overlapping ratios, are used to define a measure of similarity between any two stroke segments. The initial similarity measures between the stroke segments of the input and template characters are used to set up a match network which includes all the match relationships between the input and template stroke segments. Based on the concept of at-most one-to-one mapping, an iteration scheme is employed to adjust the match relationships, using the contextual information implicitly contained in the match network, so that the match relationships converge to a stable state. From the final match relationships, matched stroke-segment pairs are determined by a mutually-best match strategy, and the degree of similarity between the input character and each template character is evaluated accordingly. Structural information about Chinese characters is also used in the evaluation process. Experimental results show that the proposed approach is effective: for recognition of Chinese characters written by a specific person, the recognition rate is about 96%, and if the top three ranked candidates are counted, the rate rises to 99.6%.
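
A simplified sketch of the similarity measure and the mutually-best pairing is given below. The weights on length similarity, orientation similarity, and coordinate overlap are assumed values, and the iterative relaxation over the match network is omitted for brevity.

import numpy as np

def segment_similarity(s1, s2, w_len=0.4, w_ang=0.4, w_ovl=0.2):
    # Similarity between two stroke segments given as endpoint pairs
    # ((x1, y1), (x2, y2)), combining length similarity, orientation
    # similarity, and x/y coordinate overlapping ratios.
    def props(s):
        (x1, y1), (x2, y2) = s
        v = np.array([x2 - x1, y2 - y1], float)
        return np.linalg.norm(v), np.arctan2(v[1], v[0]) % np.pi
    def overlap(a1, a2, b1, b2):
        lo, hi = max(min(a1, a2), min(b1, b2)), min(max(a1, a2), max(b1, b2))
        span = max(a1, a2, b1, b2) - min(a1, a2, b1, b2)
        return max(hi - lo, 0.0) / (span + 1e-9)
    l1, a1 = props(s1); l2, a2 = props(s2)
    len_sim = min(l1, l2) / (max(l1, l2) + 1e-9)
    dang = abs(a1 - a2); dang = min(dang, np.pi - dang)
    ang_sim = max(1.0 - dang / (np.pi / 2), 0.0)
    ovl = 0.5 * (overlap(s1[0][0], s1[1][0], s2[0][0], s2[1][0]) +
                 overlap(s1[0][1], s1[1][1], s2[0][1], s2[1][1]))
    return w_len * len_sim + w_ang * ang_sim + w_ovl * ovl

# Initial match network: similarity of every input segment to every
# template segment; mutually-best pairs are kept as matches.
inp = [((0, 0), (10, 0)), ((0, 0), (0, 12))]
tpl = [((1, 1), (11, 2)), ((0, 0), (1, 11)), ((5, 5), (9, 9))]
M = np.array([[segment_similarity(a, b) for b in tpl] for a in inp])
matches = [(i, j) for i, j in enumerate(M.argmax(1)) if M[:, j].argmax() == i]
print(np.round(M, 2), matches)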


Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 555
Author(s):  
Jui-Sheng Chou ◽  
Chia-Hsuan Liu

Sand theft and illegal mining in river dredging areas have been a problem in recent decades. Increasing the use of artificial intelligence in dredging areas, building automated monitoring systems, and reducing human involvement can therefore effectively deter crime and lighten the workload of security guards. In this investigation, a smart dredging construction site system was developed using automated techniques arranged to suit various areas. The aim in the initial period of the smart dredging construction was to automate the audit work at the control point, which manages trucks in river dredging areas. Images of dump trucks entering the control point were captured using monitoring equipment in the construction area. The obtained images and the deep learning technique YOLOv3 were used to detect the positions of vehicle license plates. Cropped images of the license plates were then used as input to an image classification model, C-CNN-L3, to identify the number of characters on each plate. Based on the classification result, each license plate image was passed to the corresponding text recognition model, R-CNN-L3, to recognize the characters on the plate. Finally, the models of each stage were integrated into a real-time truck license plate recognition (TLPR) system; the single-character recognition rate was 97.59%, the overall recognition rate was 93.73%, and the speed was 0.3271 s per image. The TLPR system reduces the labor and time spent identifying license plates, effectively reducing the probability of crime and increasing the transparency, automation, and efficiency of frontline personnel's work. The TLPR system is the first step toward automated management of trucks at the control point, and the ongoing development of system functions can advance dredging operations toward the goal of a smart construction site. By providing a vehicle license plate recognition system intended to support intelligent and highly efficient management by dredging-related departments, this paper contributes an objective approach to the TLPR problem.
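
The three-stage flow described above can be sketched as a pipeline of stubs. The model interfaces (detect_plate, count_characters, recognise_characters) are hypothetical placeholders for the trained YOLOv3, C-CNN-L3, and R-CNN-L3 models and are not part of any published API.

from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class PlateBox:
    x: int; y: int; w: int; h: int

def detect_plate(frame) -> List[PlateBox]:      # YOLOv3 stage (stub)
    return [PlateBox(100, 200, 160, 40)]

def count_characters(plate_crop) -> int:        # C-CNN-L3 stage (stub)
    return 7

def recognise_characters(plate_crop, n) -> str: # R-CNN-L3 stage (stub)
    return "ABC-1234"[:n + 1]

def tlpr_pipeline(frame):
    # Three-stage truck license plate recognition flow: detect the plate,
    # classify the character count, then route to the matching recogniser.
    results = []
    for box in detect_plate(frame):
        crop = frame[box.y:box.y + box.h, box.x:box.x + box.w]
        n = count_characters(crop)
        results.append(recognise_characters(crop, n))
    return results

if __name__ == "__main__":
    frame = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in camera frame
    print(tlpr_pipeline(frame))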

