character recognition
Recently Published Documents


TOTAL DOCUMENTS: 5191 (five years: 1255)

H-INDEX: 64 (five years: 7)

Author(s):  
Karanrat Thammarak ◽  
Prateep Kongkla ◽  
Yaowarat Sirisathitkul ◽  
Sarun Intakosum

Optical character recognition (OCR) is a technology for digitizing paper-based documents. This research studies the extraction of characters from Thai vehicle registration certificates using the Google Cloud Vision API and Tesseract OCR, and compares the recognition performance of the two. The test set comprised 84 color image files covering three image sizes/resolutions and five image characteristics. For comparison across image types, greyscale and binary images were converted from the color images. Furthermore, three pre-processing techniques, sharpening, contrast adjustment, and brightness adjustment, were applied to enhance image quality before running the two OCR engines. Recognition performance was evaluated in terms of accuracy and readability. The results showed that the Google Cloud Vision API works well for the Thai vehicle registration certificate, with an accuracy of 84.43%, whereas Tesseract OCR achieved an accuracy of 47.02%. The highest accuracy came from the color image at 1024×768 px and 300 dpi, using sharpening and brightness adjustment as pre-processing techniques. In terms of readability, the Google Cloud Vision API also outperformed Tesseract. The proposed conditions support the implementation of a Thai vehicle registration certificate recognition system.
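The abstract does not state how character-level accuracy was computed. A common convention, sketched here purely as an assumption rather than the authors' actual metric, is edit-distance-based accuracy of the OCR output against a ground-truth transcription:

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance between two strings (insertions, deletions, substitutions)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def char_accuracy(ground_truth: str, ocr_output: str) -> float:
    """Character-level accuracy: 1 - (edit distance / reference length), floored at 0."""
    if not ground_truth:
        return 1.0 if not ocr_output else 0.0
    return max(0.0, 1.0 - levenshtein(ground_truth, ocr_output) / len(ground_truth))
```

With this metric, a 4-character field with one wrong character scores 0.75; averaging over all fields gives a document-level percentage comparable to the 84.43% vs 47.02% figures above.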


Author(s):  
Rifiana Arief ◽  
Achmad Benny Mutiara ◽  
Tubagus Maulana Kusuma ◽  
Hustinawaty Hustinawaty

<p>This research proposed automated hierarchical classification of scanned documents whose content contains unstructured text and special patterns (specific, short strings), using a convolutional neural network (CNN) and the regular expression method (REM). The research data consist of digital correspondence documents in PDF image format from pusat data teknologi dan informasi (technology and information data center). The document hierarchy covers the type of letter, type of manuscript letter, origin of letter, and subject of letter. The research method consists of preprocessing, classification, and storage to a database. Preprocessing covers text extraction using Tesseract optical character recognition (OCR) and formation of word document vectors with Word2Vec. Hierarchical classification uses a CNN to classify 5 types of letters and regular expressions to classify 4 types of manuscript letter, 15 origins of letter, and 25 subjects of letter. The classified documents are stored in a Hive database in a Hadoop big data architecture. The dataset comprises 5200 documents: 4000 for training, 1000 for testing, and 200 for classification prediction. In the trial on the 200 new documents, 188 were classified correctly and 12 incorrectly, giving an automated hierarchical classification accuracy of 94%. Content-based search of the classified scanned documents can be developed next.</p>
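The regular-expression stage matches short, specific strings in the OCR text against known patterns. A minimal sketch of the idea follows; the pattern strings and class labels are hypothetical placeholders, not the authors' actual rules:

```python
import re

# Hypothetical pattern-to-label table for the manuscript-letter level.
# Real rules would cover all 4 manuscript types, 15 origins, and 25 subjects.
MANUSCRIPT_PATTERNS = {
    "decree":     re.compile(r"\bSurat Keputusan\b", re.IGNORECASE),
    "invitation": re.compile(r"\bUndangan\b", re.IGNORECASE),
    "memo":       re.compile(r"\bNota Dinas\b", re.IGNORECASE),
}

def classify_manuscript(ocr_text: str, default: str = "unknown") -> str:
    """Return the first class whose pattern occurs in the OCR text."""
    for label, pattern in MANUSCRIPT_PATTERNS.items():
        if pattern.search(ocr_text):
            return label
    return default
```

One lookup table per hierarchy level keeps the rule set auditable, which is a practical reason to use regex for the short-string levels and reserve the CNN for the free-text letter-type level.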


Author(s):  
N. Shobha Rani ◽  
Manohar N. ◽  
Hariprasad M. ◽  
Pushpa B. R.

<p>Automated reading of handwritten Kannada documents is highly challenging due to the presence of vowels, consonants, and their modifiers. The variable nature of handwriting styles aggravates the complexity of machine reading of handwritten vowels and consonants. In this paper, our investigation is directed towards the design of a deep convolutional network with capsule and routing layers to efficiently recognize Kannada handwritten characters. The capsule network architecture consists of an input layer, two convolution layers, primary capsule and routing capsule layers, followed by a tri-level dense convolution layer and an output layer. For experimentation, training data of about 7769 samples covering 49 classes were collected from more than 100 users. Test samples for all 49 classes were collected separately from 3 to 5 users, creating a total of 245 samples of novel patterns. Performance evaluation shows a loss of 0.66% in the classification process; for 43 classes a precision of 100% is achieved with an accuracy of 99%, and for the remaining 6 classes an average accuracy of 95% is achieved with an average precision of 89%.</p>


Author(s):  
Kannuru Padmaja

Abstract: In this paper, we present the implementation of Devanagari handwritten character recognition using deep learning. Handwritten character recognition is gaining importance due to its major contribution to automation systems. Devanagari is one of the many scripts used in India; it consists of 12 vowels and 36 consonants. Here we implement a deep learning model to recognize the characters. The character recognition pipeline has five main steps: pre-processing, segmentation, feature extraction, prediction, and post-processing. The model uses a convolutional neural network for training and image processing techniques for character recognition, and reports the accuracy of recognition. Keywords: convolutional neural network, character recognition, Devanagari script, deep learning.
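The feature-extraction core of such a model is the convolution layer. A minimal NumPy sketch of one 2D convolution with a ReLU activation, shown as a generic illustration rather than the paper's actual network, is:

```python
import numpy as np

def conv2d_relu(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid 2D cross-correlation (no padding, stride 1) followed by ReLU."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Element-wise product of the kernel with the current window, summed.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return np.maximum(out, 0.0)  # ReLU: clamp negatives to zero
```

A real Devanagari recognizer stacks several such layers (with learned kernels) before a dense softmax layer over the 48 character classes.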


Author(s):  
Rajat Subhra Bhowmick ◽  
Isha Ganguli ◽  
Jayanta Paul ◽  
Jaya Sil

In today’s era of digitization, social media platforms play a significant role in networking and in influencing the perception of the general population. Social network sites have recently been used to carry out harmful attacks, intentional or not, against individuals, including political and theological figures, intellectuals, sports and movie stars, and other prominent dignitaries. The circulation of such content across the general population inevitably contributes to socio-economic and socio-political turmoil, and even physical violence in society. By classifying the derogatory content of a social media post, this research work helps to eradicate and discourage the propagation of such hate campaigns. Social networking posts today often combine meme images with textual remarks and comments, which poses new challenges and opportunities to the research community in identifying such attacks. This article proposes a multimodal deep learning framework that utilizes ensembles of computer vision and natural language processing techniques to train an encapsulated transformer network for the classification problem. The framework utilizes fine-tuned state-of-the-art deep learning models (e.g., BERT, Electra) for multilingual text analysis, along with face recognition and an optical character recognition model for meme picture comprehension. For the study, a new Facebook meme-post dataset was created with recorded baseline results. The subject of the dataset and the context of the work are geared toward multilingual Indian society. The findings demonstrate the efficacy of the proposed method in identifying social media meme posts featuring derogatory content about a famous/recognized individual.


2022 ◽  
Vol 12 (2) ◽  
pp. 853
Author(s):  
Cheng-Jian Lin ◽  
Yu-Cheng Liu ◽  
Chin-Ling Lee

In this study, an automatic receipt recognition system (ARRS) is developed. First, a receipt is scanned and converted into a high-resolution image. Receipt characters are automatically placed into two categories according to the receipt characteristics: printed and handwritten characters. Images of receipts with these characters are preprocessed separately. For handwritten characters, template matching and the fixed features of the receipts are used for text positioning, projection is applied for character segmentation, and a convolutional neural network is used for character recognition. For printed characters, a modified You Only Look Once (version 4) model (YOLOv4-s) executes precise text positioning and character recognition. The proposed YOLOv4-s model reduces downsampling, thereby enhancing small-object recognition. Finally, the system produces recognition results in a tax declaration format, which can be uploaded to a tax declaration system. Experimental results revealed that the recognition accuracy of the proposed system was 80.93% for handwritten characters. Moreover, the YOLOv4-s model had a 99.39% accuracy rate for printed characters; only 33 characters were misjudged. The recognition accuracy of the YOLOv4-s model was higher than that of the traditional YOLOv4 model by 20.57%. Therefore, the proposed ARRS can considerably improve the efficiency of tax declaration, reduce labor costs, and simplify operating procedures.
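Projection-based character segmentation, as used here for the handwritten fields, can be sketched with NumPy: sum the ink pixels along each column of a binarized text line and cut wherever the profile drops to zero. This is a generic illustration of the technique, not the paper's exact procedure:

```python
import numpy as np

def segment_by_projection(binary: np.ndarray):
    """Split a binarized text-line image (1 = ink, 0 = background) into
    (start_col, end_col) character slices at columns containing no ink."""
    profile = binary.sum(axis=0)           # vertical projection profile
    segments, start = [], None
    for col, ink in enumerate(profile):
        if ink > 0 and start is None:
            start = col                    # a character run begins
        elif ink == 0 and start is not None:
            segments.append((start, col))  # the run ends before this column
            start = None
    if start is not None:                  # run touches the right edge
        segments.append((start, len(profile)))
    return segments
```

Each returned slice can then be cropped and fed to the recognition CNN; touching characters would need a more elaborate split rule than a zero-ink gap.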


2022 ◽  
Vol 16 (1) ◽  
pp. 54
Author(s):  
Imam Husni Al amin ◽  
Awan Aprilino

Currently, vehicle number plate detection systems generally still use manual methods, which take considerable time and human effort; an automatic system is needed because the ever-increasing number of vehicles burdens human labor. In addition, existing methods for number plate detection still have low accuracy because they depend on the characteristics of the object being detected. This study develops a YOLO-based automatic vehicle number plate detection system. A pretrained YOLOv3 model was trained on a dataset of 700 images. The number plate text is then extracted using the Tesseract Optical Character Recognition (OCR) library, and the results are stored in a database. The system is web- and API-based so that it can be used online and cross-platform. The test results show that the automatic number plate detection system reaches 100% accuracy with sufficient lighting and a threshold of 0.5, and the Tesseract-based text extraction reaches 92.32%, with the system successfully recognizing all characters on car and motorcycle license plates consisting of 7-8 alphanumeric characters.
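Two small post-processing steps implied by the abstract can be sketched in a few lines: dropping detections below the 0.5 confidence threshold, and validating the OCR output as a 7-8 character alphanumeric plate. The detection tuple layout and helper names below are assumptions for illustration, not the authors' actual API:

```python
import re

PLATE_RE = re.compile(r"[A-Z0-9]{7,8}")

def filter_detections(detections, threshold=0.5):
    """Keep detections with confidence >= threshold.
    Each detection is assumed to be a (confidence, bbox) pair."""
    return [d for d in detections if d[0] >= threshold]

def clean_plate_text(raw):
    """Strip non-alphanumerics from OCR output, uppercase it, and
    accept only plates of exactly 7-8 alphanumeric characters."""
    text = re.sub(r"[^A-Za-z0-9]", "", raw).upper()
    return text if PLATE_RE.fullmatch(text) else None
```

Rejecting malformed strings at this stage is a cheap way to catch OCR misreads before the result is written to the database.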


2022 ◽  
Vol 11 (1) ◽  
pp. 45
Author(s):  
Xuanming Fu ◽  
Zhengfeng Yang ◽  
Zhenbing Zeng ◽  
Yidan Zhang ◽  
Qianting Zhou

Deep learning techniques have been successfully applied in handwriting recognition. Oracle bone inscriptions (OBI) are the earliest hieroglyphs in China and valuable resources for studying the etymology of Chinese characters. OBI are of important historical and cultural value in China; thus, textual research surrounding the characters of OBI is a huge challenge for archaeologists. In this work, we built a dataset named OBI-100, which contains 100 classes of oracle bone inscriptions collected from two OBI dictionaries. The dataset includes more than 128,000 character samples related to the natural environment, humans, animals, plants, etc. In addition, we propose improved models based on three typical deep convolutional network structures to recognize the OBI-100 dataset. By modifying the parameters, adjusting the network structures, and adopting optimization strategies, we demonstrate experimentally that these models perform fairly well in OBI recognition. For the 100-category OBI classification task, the optimal model achieves an accuracy of 99.5%, which shows competitive performance compared with other state-of-the-art approaches. We hope that this work can provide a valuable tool for character recognition of OBI.


2022 ◽  
Vol 20 (8) ◽  
pp. 3080
Author(s):  
A. A. Komkov ◽  
V. P. Mazaev ◽  
S. V. Ryazanova ◽  
D. N. Samochatov ◽  
E. V. Koshkina ◽  
...  

RuPatient health information system (HIS) is a computer program with a doctor-patient web user interface that includes algorithms for recognizing medical record text and entering it into the corresponding fields of the system.

Aim. To evaluate the effectiveness of RuPatient HIS in actual clinical practice.

Material and methods. The study involved 10 cardiologists and intensivists of the department of cardiology and cardiovascular intensive care unit of the L. A. Vorokhobov City Clinical Hospital 67. We analyzed images (scanned copies, photos) of discharge reports from patients admitted to the relevant departments in 2021. The following fields of medical documentation were recognized: Name, Complaints, Anamnesis of life and illness, Examination, Recommendations. The correctness and accuracy of recognition of the entered information were analyzed. We compared the recognition quality of RuPatient HIS and a popular optical character recognition application (FineReader for Mac).

Results. The study included 77 pages of discharge reports from 50 patients (men, 52%) from various hospitals in Russia. The mean age of patients was 57.7±7.9 years. The number of reports with correctly recognized fields in the various categories using the program algorithms was distributed as follows: Name, 14 (28%); Diagnosis, 13 (26%); Complaints, 40 (80%); Anamnesis, 14 (28%); Examination, 24 (48%); Recommendations, 46 (92%). Data that did not fit a category were also recognized and entered in the comments field. The number of recognized words was 549±174.9 vs 522.4±215.6 (p=0.5), critical errors in words 2.1±1.6 vs 4.4±2.8 (p<0.001), and non-critical errors 10.3±4.3 vs 5.6±3.3 (p<0.001) for RuPatient HIS and the desktop optical character recognition application, respectively.

Conclusion. The developed RuPatient HIS, which includes a module for recognizing medical records and entering data into the corresponding fields, significantly increases document management efficiency, with high-quality optical character recognition based on neural network technologies and automation of the filling process.


Author(s):  
Armand Christopher Luna ◽  
Christian Trajano ◽  
John Paul So ◽  
Nicole John Pascua ◽  
Abraham Magpantay ◽  
...  
