Character Segmentation in Asian Collector's Seal Imprints: An Attempt to Retrieval Based on Ancient Character Typeface

2021
HistoInformatics
Author(s):
Kangying Li
Biligsaikhan Batjargal
Akira Maeda

Collector's seals provide important clues about the ownership of a book. They contain much information pertaining to the essential elements of ancient materials and also show the details of possession, its relation to the book, the identity of the collectors and their social status and wealth, amongst others. Asian collectors have typically used artistic ancient characters rather than modern ones to make their seals. In addition to the owner's name, several other words are used to express more profound meanings. A system that automatically recognizes these characters can help enthusiasts and professionals better understand the background information of these seals. However, there is a lack of training data and labelled images, as samples of some seals are scarce and most of them are degraded images. It is necessary to find new ways to make full use of such scarce data. While these data are available online, they do not contain information on the characters' position. The goal of this research is to assist in obtaining more labelled data through user interaction and provide retrieval tools that use only standard character typefaces extracted from font files. In this paper, a character segmentation method is proposed to predict the candidate characters' area without any labelled training data that contain character coordinate information. A retrieval-based recognition system that focuses on a single character is also proposed to support seal retrieval and matching. The experimental results demonstrate that the proposed character segmentation method performs well on Asian collector's seals, with 85% of the test data being correctly segmented.
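
To illustrate the retrieval idea described above, the following sketch renders reference glyphs from a standard character font file and ranks them against a segmented character crop using normalized cross-correlation. The function names, the font path, and the similarity measure are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: match a segmented seal-character crop against glyphs
# rendered from a standard character font file (paths and parameters are
# illustrative assumptions, not the paper's method).
import numpy as np
from PIL import Image, ImageDraw, ImageFont

def render_glyph(char, font_path, size=64):
    """Render a single reference character from a TrueType/OpenType font."""
    font = ImageFont.truetype(font_path, size)
    canvas = Image.new("L", (size, size), color=255)
    ImageDraw.Draw(canvas).text((0, 0), char, font=font, fill=0)
    return np.asarray(canvas, dtype=np.float32) / 255.0

def similarity(query, glyph):
    """Normalized cross-correlation between a candidate crop and a glyph."""
    a = query - query.mean()
    b = glyph - glyph.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-8
    return float((a * b).sum() / denom)

def retrieve(crop, charset, font_path, top_k=5):
    """Rank reference characters by similarity to a segmented crop (uint8 grayscale)."""
    resized = Image.fromarray(crop).resize((64, 64))
    query = np.asarray(resized, dtype=np.float32) / 255.0
    scores = [(c, similarity(query, render_glyph(c, font_path))) for c in charset]
    return sorted(scores, key=lambda s: s[1], reverse=True)[:top_k]
```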

2012
Vol 546-547
pp. 1345-1350
Author(s):
Lian Huan Li

Character segmentation is a key step in image text recognition. This paper presents a text tilt correction algorithm that tracks the characteristic rectangular contour to extract the tilt angle and uses a line-scan method based on the number of transitions to determine the bottom boundary of the characters. To meet real-time and reliability requirements, an improved secondary single-character segmentation algorithm based on the vertical projection method is adopted.
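
A minimal sketch of vertical-projection segmentation on a binarized text line is shown below; the threshold parameters and the foreground convention are assumptions, not the paper's settings.

```python
# Minimal sketch of vertical-projection character segmentation on a binarized
# text line (foreground pixels = 1); thresholds are illustrative assumptions.
import numpy as np

def segment_by_vertical_projection(binary_line, min_ink=1, min_width=3):
    """Split a binarized text line into (start, end) column ranges, one per character."""
    column_ink = binary_line.sum(axis=0)          # ink count per column
    in_char, start, segments = False, 0, []
    for x, ink in enumerate(column_ink):
        if ink > min_ink and not in_char:         # entering a character region
            in_char, start = True, x
        elif ink <= min_ink and in_char:          # leaving a character region
            in_char = False
            if x - start >= min_width:
                segments.append((start, x))
    if in_char and binary_line.shape[1] - start >= min_width:
        segments.append((start, binary_line.shape[1]))
    return segments
```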


2004
Vol 43 (05)
pp. 171-176
Author(s):
T. Behr
F. Grünwald
W. H. Knapp
L. Trümper
C. von Schilling
...

Summary: This guideline is a prerequisite for quality management in the treatment of non-Hodgkin lymphomas using radioimmunotherapy. It is based on an interdisciplinary consensus and contains background information and definitions as well as specified indications and detailed contraindications of treatment. Essential topics are the requirements for institutions performing the therapy. For instance, the presence of an expert in medical physics, close cooperation with all colleagues involved in the treatment of lymphomas, and a certificate of instruction in radiochemical labelling and quality control are required. Furthermore, it is specified which patient data have to be available prior to therapy and how the treatment has to be carried out technically. Here, quality control and documentation of labelling are of greatest importance. After treatment, clinical quality control is mandatory (work-up of therapy data and follow-up of patients). Essential elements of follow-up are specified in detail. The complete treatment, including after-care, has to be carried out in close cooperation with those colleagues (haematology-oncology) who, in general, propose radioimmunotherapy taking the course of the disease into consideration.


Author(s):
Xinyi Li
Liqiong Chang
Fangfang Song
Ju Wang
Xiaojiang Chen
...

This paper focuses on a fundamental question in Wi-Fi-based gesture recognition: "Can we use the knowledge learned from some users to perform gesture recognition for others?" This problem is also known as cross-target recognition. It arises in many practical deployments of Wi-Fi-based gesture recognition where it is prohibitively expensive to collect training data from every single user. We present CrossGR, a low-cost cross-target gesture recognition system. As a departure from existing approaches, CrossGR does not require prior knowledge (such as who is currently performing a gesture) of the target user. Instead, CrossGR employs a deep neural network to extract user-agnostic but gesture-related Wi-Fi signal characteristics to perform gesture recognition. To provide sufficient training data for building an effective deep learning model, CrossGR employs a generative adversarial network to automatically generate synthetic training data from a small set of real-world examples collected from a small number of users. This strategy allows CrossGR to minimize user involvement and the associated cost of collecting training examples for building an accurate gesture recognition system. We evaluate CrossGR by applying it to perform gesture recognition across 10 users and 15 gestures. Experimental results show that CrossGR achieves an accuracy of over 82.6% (up to 99.75%). We demonstrate that CrossGR delivers comparable recognition accuracy while using an order of magnitude fewer training samples collected from end users than state-of-the-art recognition systems.
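
The data-augmentation step can be sketched with a small generative adversarial network that learns to synthesize Wi-Fi feature vectors from a handful of real examples. The feature dimension, network sizes, and training loop below are assumptions for illustration, not the CrossGR architecture.

```python
# Hedged sketch of GAN-based augmentation of Wi-Fi signal features.
# Feature dimension, network sizes, and hyperparameters are assumptions,
# not the CrossGR implementation.
import torch
import torch.nn as nn

FEATURE_DIM = 256   # assumed length of a flattened Wi-Fi signal feature
NOISE_DIM = 64

generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 128), nn.ReLU(),
    nn.Linear(128, FEATURE_DIM), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(FEATURE_DIM, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

bce = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_batch):
    """One adversarial update on a batch of real feature vectors (n, FEATURE_DIM)."""
    n = real_batch.size(0)
    # Discriminator update: real examples labelled 1, generated ones labelled 0.
    fake = generator(torch.randn(n, NOISE_DIM)).detach()
    loss_d = (bce(discriminator(real_batch), torch.ones(n, 1))
              + bce(discriminator(fake), torch.zeros(n, 1)))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()
    # Generator update: try to make the discriminator label fakes as real.
    fake = generator(torch.randn(n, NOISE_DIM))
    loss_g = bce(discriminator(fake), torch.ones(n, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```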


Informatics
2018
Vol 5 (3)
pp. 38
Author(s):
Martin Jänicke
Bernhard Sick
Sven Tomforde

Personal wearables such as smartphones or smartwatches are increasingly used in everyday life. Frequently, activity recognition is performed on these devices to estimate the current user status and trigger automated actions according to the user's needs. In this article, we focus on the creation of a self-adaptive activity recognition system based on inertial measurement units (IMUs) that incorporates new sensors at runtime. Starting with a classifier based on Gaussian mixture models (GMMs), the density model is adapted to new sensor data fully autonomously by exploiting the marginalization property of normal distributions. To create a classifier from that, label inference is performed, based either on the initial classifier or on the training data. For evaluation, we used more than 10 h of annotated activity data from the publicly available PAMAP2 benchmark dataset. Using these data, we showed the feasibility of our approach and performed 9720 experiments to obtain robust numbers. One approach performed reasonably well, leading to a system improvement on average, with an increase in the F-score of 0.0053, while the other shows clear drawbacks due to a high loss of information during label inference. Furthermore, a comparison with state-of-the-art techniques shows the necessity of further experiments in this area.
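
The marginalization property referred to above can be sketched as follows: the marginal of a multivariate normal over a subset of sensor dimensions is obtained by selecting the corresponding entries of the mean and covariance, applied component-wise to a GMM. The function names and indexing below are illustrative, not the article's code.

```python
# Sketch of the marginalization property of normal distributions, used to
# restrict a Gaussian density to an observed subset of sensor dimensions.
# Dimension indices are illustrative assumptions.
import numpy as np

def marginalize_gaussian(mean, cov, keep_idx):
    """Marginal of a multivariate normal over the dimensions in keep_idx."""
    keep_idx = np.asarray(keep_idx)
    mu = mean[keep_idx]
    sigma = cov[np.ix_(keep_idx, keep_idx)]
    return mu, sigma

def marginalize_gmm(weights, means, covs, keep_idx):
    """Apply marginalization component-wise; the mixture weights are unchanged."""
    parts = [marginalize_gaussian(m, c, keep_idx) for m, c in zip(means, covs)]
    return weights, [p[0] for p in parts], [p[1] for p in parts]
```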


2022
Vol 12 (2)
pp. 853
Author(s):
Cheng-Jian Lin
Yu-Cheng Liu
Chin-Ling Lee

In this study, an automatic receipt recognition system (ARRS) is developed. First, a receipt is scanned and converted into a high-resolution image. Receipt characters are automatically placed into two categories according to the receipt characteristics: printed and handwritten characters. Images of receipts with these characters are preprocessed separately. For handwritten characters, template matching and the fixed features of the receipts are used for text positioning, and projection is applied for character segmentation; a convolutional neural network is then used for character recognition. For printed characters, a modified You Only Look Once (version 4) model (YOLOv4-s) performs precise text positioning and character recognition. The proposed YOLOv4-s model reduces downsampling, thereby enhancing small-object recognition. Finally, the system produces recognition results in a tax declaration format, which can be uploaded to a tax declaration system. Experimental results revealed that the recognition accuracy of the proposed system was 80.93% for handwritten characters. Moreover, the YOLOv4-s model had a 99.39% accuracy rate for printed characters; only 33 characters were misjudged. The recognition accuracy of the YOLOv4-s model was higher than that of the traditional YOLOv4 model by 20.57%. Therefore, the proposed ARRS can considerably improve the efficiency of tax declaration, reduce labor costs, and simplify operating procedures.
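
A compact sketch of a convolutional character classifier for the handwritten branch is given below; the layer sizes, input resolution, and class count are assumptions, not the architecture used in the paper.

```python
# Illustrative sketch of a small CNN character classifier; sizes and class
# count are assumptions, not the paper's network.
import torch.nn as nn

class CharCNN(nn.Module):
    def __init__(self, num_classes=10):          # e.g. digits on a receipt
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32x32 -> 16x16
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):                         # x: (N, 1, 32, 32) grayscale crops
        return self.classifier(self.features(x).flatten(1))
```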


Author(s):
Yue Li
Wei Wang

Artificial intelligence (AI) driving is an emerging technology that frees the driver from driving. Some techniques for automated driving have been developed; however, most can only recognize traffic signs by broad group, such as triangular signs for warnings and circular signs for prohibitions, but cannot tell the exact meaning of every sign. In this paper, a framework for a traffic sign recognition system is proposed. This system consists of two phases. The segmentation method, fuzzy c-means (FCM), is used to detect the traffic sign, whereas the content-based image retrieval (CBIR) method is used to match detected signs against a database to find the exact meaning of every sign.
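
The fuzzy c-means step can be sketched as the standard alternating update of memberships and cluster centers over pixel color features; the cluster count, fuzzifier, and iteration budget below are assumptions, not the paper's settings.

```python
# Minimal fuzzy c-means sketch for color-based traffic-sign segmentation;
# cluster count, fuzzifier m, and iteration budget are illustrative assumptions.
import numpy as np

def fuzzy_c_means(pixels, n_clusters=3, m=2.0, n_iter=50, eps=1e-9):
    """pixels: (N, D) color features; returns membership matrix and cluster centers."""
    rng = np.random.default_rng(0)
    u = rng.random((len(pixels), n_clusters))
    u /= u.sum(axis=1, keepdims=True)                 # memberships sum to 1 per pixel
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ pixels) / (um.sum(axis=0)[:, None] + eps)
        dist = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2) + eps
        ratio = dist[:, :, None] / dist[:, None, :]   # d_ik / d_ij for all cluster pairs
        u = 1.0 / (ratio ** (2.0 / (m - 1.0))).sum(axis=2)
    return u, centers
```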


2018
Vol 7 (2.5)
pp. 77
Author(s):
Anis Farihan Mat Raffei
Rohayanti Hassan
Shahreen Kasim
Hishamudin Asmuni
Asraful Syifaa’ Ahmad
...

The quality of eye image data becomes degraded, particularly when the image is taken in a non-cooperative acquisition environment, such as under visible-wavelength illumination. Consequently, this environmental condition may lead to noisy eye images and incorrect localization of the limbic and pupillary boundaries, and eventually degrade the performance of the iris recognition system. Hence, this study compared several segmentation methods to address the abovementioned issues. The results show that the circular Hough transform is the best segmentation method, with the best overall accuracy, error rate, and decidability index, and is more tolerant to 'noise' such as reflections.
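
A hedged sketch of circular-Hough-based localization of the pupillary and limbic boundaries with OpenCV is shown below; the radius ranges, accumulator parameters, and preprocessing are illustrative assumptions, not the study's settings.

```python
# Sketch of circular Hough transform for iris boundary localization;
# parameter values are illustrative assumptions.
import cv2
import numpy as np

def locate_iris_boundaries(gray_eye):
    """gray_eye: 8-bit grayscale eye image; returns (pupil, limbus) circles as (x, y, r) or None."""
    blurred = cv2.medianBlur(gray_eye, 5)                 # suppress specular reflections
    pupil = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                             param1=100, param2=30, minRadius=15, maxRadius=60)
    limbus = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                              param1=100, param2=30, minRadius=60, maxRadius=140)
    take = lambda c: tuple(np.round(c[0][0]).astype(int)) if c is not None else None
    return take(pupil), take(limbus)
```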

