Basic Research on Ancient Bai Character Recognition Based on Mobile APP

2021 ◽  
Vol 2021 ◽  
pp. 1-7
Author(s):  
Zeqing Zhang ◽  
Cuihua Lee ◽  
Zuodong Gao ◽  
Xiaofan Li

The Bai nationality has a long history and its own language. Because fewer and fewer people can read the Bai language, the literature and culture of the Bai nationality are being lost rapidly. To enable people who do not understand Bai characters to read the ancient books of the Bai nationality, this paper investigates a high-precision single-character recognition model for Bai characters. First, with the help of Bai culture enthusiasts and related scholars, we constructed a data set of Bai characters; because building it requires expert knowledge, the data set is limited in size. As a result, data-hungry deep learning models cannot reach an ideal accuracy on it. To address this issue, we propose to use a Chinese data set, which also belongs to the Sino-Tibetan language family, to improve the recognition accuracy of Bai characters through transfer learning. We propose four transfer learning approaches: Direct Knowledge Transfer (DKT), Indirect Knowledge Transfer (IKT), Self-coding Knowledge Transfer (SCKT), and Self-supervised Knowledge Transfer (SSKT). Experiments show that our approaches greatly improve the recognition accuracy of Bai characters.
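A minimal sketch of the direct-knowledge-transfer idea described above, assuming a PyTorch setup: pretrain a CNN on the large Chinese character set, then swap the classifier head and fine-tune on the small Bai set. The backbone choice, class counts, and learning rates are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of Direct Knowledge Transfer (DKT): pretrain on Chinese characters,
# then fine-tune on the small Bai character set. Class counts are assumptions.
import torch
import torch.nn as nn
from torchvision import models

NUM_CHINESE_CLASSES = 3755   # assumption: size of the Chinese pretraining set
NUM_BAI_CLASSES = 400        # assumption: size of the Bai character set

# 1) Pretrain a backbone on the large Chinese dataset (training loop omitted).
model = models.resnet18(num_classes=NUM_CHINESE_CLASSES)
# ... train on Chinese character images here ...

# 2) Transfer: keep the convolutional features, swap the classifier head.
model.fc = nn.Linear(model.fc.in_features, NUM_BAI_CLASSES)

# 3) Fine-tune on the small Bai dataset with a lower learning rate for the
#    transferred features so they are not destroyed.
optimizer = torch.optim.Adam([
    {"params": (p for n, p in model.named_parameters() if not n.startswith("fc")),
     "lr": 1e-4},
    {"params": model.fc.parameters(), "lr": 1e-3},
])
criterion = nn.CrossEntropyLoss()

def fine_tune_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One fine-tuning step on a batch of Bai character images."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```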

Author(s):  
Prosenjit Mukherjee ◽  
Shibaprasad Sen ◽  
Kaushik Roy ◽  
Ram Sarkar

This paper explores online handwritten Bangla character recognition using a stroke-based approach: the component strokes of a character sample are recognized first, and the characters are then constructed from the recognized strokes. In the current experiment, strokes are recognized by both supervised and unsupervised approaches. To estimate the features, the images of all component strokes are superimposed and a mean structure is generated from the superimposed image. The Euclidean distances between the pixel points of a stroke sample and the mean stroke structure are taken as features. For the unsupervised approach, the K-means clustering algorithm is used, whereas six popular classifiers are compared for the supervised approach. The proposed feature vector was evaluated on a 10,000-character database and achieved stroke recognition accuracies of 90.69% in the unsupervised setting (K-means clustering) and 97.22% in the supervised setting (multilayer perceptron (MLP) classifier). The paper also discusses the merits and demerits of the unsupervised and supervised classification approaches.
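A hedged sketch of the distance-to-mean-stroke feature and the two classification routes described above. The image sizes, class counts, and random stand-in data are illustrative assumptions; the paper's exact superimposition scheme may differ.

```python
# Sketch of the feature above: superimpose stroke samples, take the mean
# structure, and use pixel-wise Euclidean distances to it as features.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier

def mean_stroke_structure(stroke_images: np.ndarray) -> np.ndarray:
    """stroke_images: (n_samples, H, W) stroke images -> superimposed mean."""
    return stroke_images.mean(axis=0)

def distance_features(sample: np.ndarray, mean_structure: np.ndarray) -> np.ndarray:
    """Per-pixel distance between a sample and the mean stroke structure."""
    return np.abs(sample - mean_structure).ravel()

# Hypothetical data: 1000 stroke samples of 32x32 pixels, 50 stroke classes.
rng = np.random.default_rng(0)
X_img = rng.random((1000, 32, 32))
y = rng.integers(0, 50, size=1000)

mean_img = mean_stroke_structure(X_img)
X = np.stack([distance_features(img, mean_img) for img in X_img])

# Unsupervised route: cluster strokes with K-means.
clusters = KMeans(n_clusters=50, n_init=10).fit_predict(X)
# Supervised route: MLP classifier (reported as the best-performing one).
clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=200).fit(X, y)
```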


Sensors ◽  
2021 ◽  
Vol 21 (12) ◽  
pp. 4172
Author(s):  
Frank Loh ◽  
Fabian Poignée ◽  
Florian Wamser ◽  
Ferdinand Leidinger ◽  
Tobias Hoßfeld

Streaming video is responsible for the bulk of today's Internet traffic. For this reason, Internet providers and network operators try to predict and assess the streaming quality experienced by end users. Current monitoring solutions are based on a variety of machine learning approaches; the challenge for providers and operators is that existing approaches require large amounts of data. In this work, the most relevant quality of experience metrics, i.e., the initial playback delay, the video streaming quality, video quality changes, and video rebuffering events, are examined using a voluminous data set of more than 13,000 YouTube video streaming runs collected with the native YouTube mobile app. Three machine learning models are developed and compared to estimate playback behavior based on uplink request information. The main focus is on developing a lightweight approach that uses as few features and as little data as possible while maintaining state-of-the-art performance.
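A minimal sketch of the lightweight idea above: predict a playback event from a handful of uplink-request features. The feature set, label, and random stand-in data are hypothetical, not the paper's exact pipeline.

```python
# Sketch: estimate a playback event (e.g., rebuffering) from a few
# uplink request features. Features and labels here are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
# Assumed features per streaming run: request count, mean inter-request time,
# mean request size, total uplink bytes.
X = rng.random((5000, 4))
y = rng.integers(0, 2, size=5000)  # 1 = rebuffering occurred (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=50).fit(X_tr, y_tr)
print(f"held-out accuracy: {model.score(X_te, y_te):.3f}")
```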


2021 ◽  
Vol 18 (1) ◽  
pp. 172988142097836
Author(s):  
Cristian Vilar ◽  
Silvia Krug ◽  
Benny Thörnberg

3D object recognition has been a cutting-edge research topic since the popularization of depth cameras. These cameras enhance the perception of the environment and are therefore particularly suitable for autonomous robot navigation applications. Advanced deep learning approaches for 3D object recognition rely on complex algorithms and demand powerful hardware resources. However, autonomous robots and powered wheelchairs have limited resources, which hinders real-time implementation of these algorithms. We propose instead a 3D voxel-based extension of the 2D histogram of oriented gradients (3DVHOG) as a handcrafted object descriptor for 3D object recognition, combined with a pose normalization method for rotational invariance and a supervised object classifier. The experimental goal is to reduce the overall complexity and the system hardware requirements, and thus enable a feasible real-time hardware implementation. This article compares the 3DVHOG object recognition rates with those of other 3D recognition approaches, using the ModelNet10 object data set as a reference. We analyze the recognition accuracy for 3DVHOG using a variety of voxel grid selections, different numbers of neurons (Nh) in the single-hidden-layer feedforward neural network, and feature dimensionality reduction using principal component analysis. The experimental results show that the 3DVHOG descriptor achieves a recognition accuracy of 84.91% with a total processing time of 21.4 ms. Although its recognition accuracy is lower, it is close to current state-of-the-art deep learning approaches while enabling real-time performance.
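A rough sketch of the descriptor-plus-classifier pipeline above, under loose assumptions: a HOG-style orientation histogram computed over a voxel grid, PCA for dimensionality reduction, and a single-hidden-layer network. A real 3DVHOG uses per-cell histograms and pose normalization; grid sizes, bin counts, and Nh here are illustrative only.

```python
# Rough 3DVHOG-like sketch: gradient orientation histogram over a voxel grid,
# PCA reduction, single-hidden-layer classifier. All sizes are assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

def voxel_hog(voxels: np.ndarray, bins: int = 9) -> np.ndarray:
    """voxels: (D, H, W) occupancy grid -> global orientation histogram."""
    gz, gy, gx = np.gradient(voxels.astype(float))
    mag = np.sqrt(gx**2 + gy**2 + gz**2)
    azimuth = np.arctan2(gy, gx)  # gradient azimuth as orientation proxy
    hist, _ = np.histogram(azimuth, bins=bins, range=(-np.pi, np.pi),
                           weights=mag)
    return hist / (hist.sum() + 1e-9)

# Hypothetical data: 200 objects on a 32^3 grid, 10 classes (as in ModelNet10).
rng = np.random.default_rng(1)
grids = rng.random((200, 32, 32, 32)) > 0.9
y = rng.integers(0, 10, size=200)

X = np.stack([voxel_hog(g) for g in grids])
X_pca = PCA(n_components=5).fit_transform(X)          # dimensionality reduction
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300).fit(X_pca, y)
```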


2019 ◽  
Vol 2019 (1) ◽  
pp. 360-368
Author(s):  
Mekides Assefa Abebe ◽  
Jon Yngve Hardeberg

Various whiteboard image degradations greatly reduce the legibility of pen-stroke content as well as the overall quality of the images. Consequently, researchers have addressed the problem through different image enhancement techniques. Most state-of-the-art approaches apply common image processing techniques such as background-foreground segmentation, text extraction, contrast and color enhancement, and white balancing. However, such conventional enhancement methods are incapable of recovering severely degraded pen-stroke content and produce artifacts in the presence of complex pen-stroke illustrations. To overcome these problems, the authors propose a deep learning based solution. They contribute a new whiteboard image data set and adapt two deep convolutional neural network architectures for whiteboard image quality enhancement. Their evaluations of the trained models demonstrate superior performance over the conventional methods.
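A hedged sketch in the spirit of the approach above: an image-to-image CNN trained to map degraded whiteboard photos to cleaned versions. The tiny residual architecture and L1 loss here are illustrative assumptions, not the authors' actual networks.

```python
# Sketch of an image-to-image enhancement CNN (architecture is illustrative).
import torch
import torch.nn as nn

class EnhanceNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Predict a residual correction and add it to the degraded input.
        return torch.clamp(x + self.body(x), 0.0, 1.0)

net = EnhanceNet()
degraded = torch.rand(1, 3, 256, 256)   # placeholder whiteboard photo
clean = torch.rand(1, 3, 256, 256)      # placeholder ground truth
loss = nn.functional.l1_loss(net(degraded), clean)  # reconstruction loss
loss.backward()
```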


Author(s):  
Wendy J. Schiller ◽  
Charles Stewart III

From 1789 to 1913, U.S. senators were not directly elected by the people—instead the Constitution mandated that they be chosen by state legislators. This radically changed in 1913, when the Seventeenth Amendment to the Constitution was ratified, giving the public a direct vote. This book investigates the electoral connections among constituents, state legislators, political parties, and U.S. senators during the age of indirect elections. The book finds that even though parties controlled the partisan affiliation of the winning candidate for Senate, they had much less control over the universe of candidates who competed for votes in Senate elections and the parties did not always succeed in resolving internal conflict among their rank and file. Party politics, money, and personal ambition dominated the election process, in a system originally designed to insulate the Senate from public pressure. The book uses an original data set of all the roll call votes cast by state legislators for U.S. senators from 1871 to 1913 and all state legislators who served during this time. Newspaper and biographical accounts uncover vivid stories of the political maneuvering, corruption, and partisanship—played out by elite political actors, from elected officials, to party machine bosses, to wealthy business owners—that dominated the indirect Senate elections process. The book raises important questions about the effectiveness of Constitutional reforms, such as the Seventeenth Amendment, that promised to produce a more responsive and accountable government.


2018 ◽  
pp. 169-180
Author(s):  
Nikolai A. Zhirov

On September 21-23, the I.A. Bunin Yelets State University, supported by the Russian Foundation for Basic Research (RFBR), held the All-Russian scientific conference 'In the time of change: Revolt, insurrection, and revolution in the Russian periphery in the 17th - early 20th centuries'. Scientists from various Russian regions participated in its work. The conference organizers focused on social conflicts in the Russian periphery. The first series of reports addressed the Age of Rebellions in Russian history and considered the role and place of the service class people in anti-government revolts. Some scholars stressed the effect of official state policy on the revolutionary mood of the people. Several reports examined the jurisdictions and activities of the general police in the 19th - early 20th century and those of the Provisional Government militia. Other reports analyzed the participation of persons of non-peasant origin in the revolutionary events, the effect of those events on the mood and behavior of local people, and the ways of resolving conflicts between the authorities and society. The most numerous series of reports was devoted to social conflicts in the Russian village at the turn of the 20th century; these studies examined the forms and methods of peasants' struggle against the extortionate cost of emancipation and offered a periodization of peasant uprisings. The researchers stressed that the peasants remained politically unmotivated; analysis of their relations with the authorities shows that they were predominantly conservative and not prone to incitement against the monarchy. Questions of source studies and of the methodology of studying the revolution and the preceding period were also raised. Most researchers used interdisciplinary methods popular in modern humanities and historical science.


2018 ◽  
Author(s):  
Peter De Wolf ◽  
Zhuangqun Huang ◽  
Bede Pittenger

Methods are available to measure conductivity, charge, surface potential, carrier density, piezoelectric response, and other electrical properties with nanometer-scale resolution. One of these methods, scanning microwave impedance microscopy (sMIM), has gained interest due to its capability to measure the full impedance (capacitive and resistive parts) with high sensitivity and high spatial resolution. This paper introduces a novel data-cube approach that combines sMIM imaging and sMIM point spectroscopy, producing an integrated and complete 3D data set. This approach replaces the subjective practice of guessing locations of interest (for single-point spectroscopy) with a big-data approach yielding higher-dimensional data that can be sliced along any axis or plane and is conducive to principal component analysis or other machine learning approaches to data reduction. The data-cube approach is also applicable to other AFM-based electrical characterization modes.
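A minimal sketch of the data-cube idea above, assuming a generic 3D array with two spatial axes and one spectroscopy axis: any plane can be sliced out, and the per-pixel spectra can be fed to PCA for data reduction. The dimensions and axis meanings are illustrative assumptions.

```python
# Sketch of data-cube slicing and PCA-based reduction (dimensions assumed).
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical cube: 128x128 scan positions, 64 spectroscopy points each.
cube = np.random.default_rng(7).random((128, 128, 64))

image_at_point_10 = cube[:, :, 10]    # spatial slice at one spectroscopy point
spectrum_at_pixel = cube[40, 25, :]   # full point spectrum at one location

# PCA over all per-pixel spectra: each pixel becomes a 64-dim observation.
spectra = cube.reshape(-1, cube.shape[-1])          # (128*128, 64)
components = PCA(n_components=3).fit_transform(spectra)
component_maps = components.reshape(128, 128, 3)    # spatial component maps
```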


2020 ◽  
Vol 17 (3) ◽  
pp. 299-305 ◽  
Author(s):  
Riaz Ahmad ◽  
Saeeda Naz ◽  
Muhammad Afzal ◽  
Sheikh Rashid ◽  
Marcus Liwicki ◽  
...  

This paper presents a deep learning benchmark on a complex dataset known as KFUPM Handwritten Arabic TexT (KHATT), which consists of complex patterns of handwritten Arabic text-lines. The paper makes three main contributions: (1) pre-processing, (2) a deep learning based approach, and (3) data augmentation. The pre-processing step includes pruning extra white space and de-skewing the skewed text-lines. We deploy a deep learning approach based on Multi-Dimensional Long Short-Term Memory (MDLSTM) networks and Connectionist Temporal Classification (CTC). The MDLSTM has the advantage of scanning the Arabic text-lines in all directions (horizontal and vertical) to cover dots, diacritics, strokes, and fine inflections. Combining data augmentation with the deep learning approach yields a clear improvement, raising Character Recognition (CR) accuracy to 80.02% from a baseline of 75.08%.
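MDLSTM layers are not available in stock PyTorch, so the hedged sketch below substitutes a bidirectional LSTM over image columns to illustrate the same CTC training objective described above. The character-set size, line dimensions, and placeholder data are assumptions, not the paper's configuration.

```python
# Sketch of sequence transcription with CTC (BLSTM stands in for MDLSTM).
import torch
import torch.nn as nn

NUM_CLASSES = 60   # assumption: Arabic character set size + CTC blank (index 0)

class LineRecognizer(nn.Module):
    def __init__(self, height: int = 32, hidden: int = 128):
        super().__init__()
        self.lstm = nn.LSTM(height, hidden, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * hidden, NUM_CLASSES)

    def forward(self, lines: torch.Tensor) -> torch.Tensor:
        # lines: (batch, width, height) -- each image column is a timestep.
        out, _ = self.lstm(lines)
        return self.fc(out).log_softmax(-1)   # (batch, width, classes)

model = LineRecognizer()
ctc = nn.CTCLoss(blank=0)
lines = torch.rand(4, 200, 32)                    # placeholder text-line images
targets = torch.randint(1, NUM_CLASSES, (4, 30))  # placeholder label sequences
log_probs = model(lines).permute(1, 0, 2)         # CTC expects (T, N, C)
loss = ctc(log_probs, targets,
           input_lengths=torch.full((4,), 200, dtype=torch.long),
           target_lengths=torch.full((4,), 30, dtype=torch.long))
loss.backward()
```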


2020 ◽  
Vol 4 (4) ◽  
pp. 281-290
Author(s):  
Tingzhu Chen ◽  
Yaoyao Qian ◽  
Jingyu Pei ◽  
Shaoteng Wu ◽  
Jiang Wu ◽  
...  

Oracle bone script recognition (OBSR) has been a fundamental problem in research on oracle bone scripts for decades. Despite being intensively studied, existing OBSR methods are still limited in recognition accuracy, speed, and robustness. Furthermore, their dependency on expert knowledge hinders the adoption of OBSR systems by the general public and discourages social outreach of research outputs. To address these issues, this study proposes an encoding-based OBSR system that applies image pre-processing techniques to encode oracle images into small matrices and recognizes oracle characters in the encoding space. We tested our method on a collection of oracle bones from the Yin Ruins in XiaoTun village and achieved a high accuracy of 99% with recognition times in the millisecond range.
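A hedged sketch of the encoding idea above: reduce each character image to a small matrix and recognize by nearest neighbor in that encoding space. The 8x8 code size, Hamming-distance matching, and stand-in data are illustrative assumptions; the paper's actual encoding may differ.

```python
# Sketch: encode character images as small binary matrices, then recognize
# by nearest-neighbor matching in the encoding space. Sizes are assumptions.
import numpy as np

def encode(image: np.ndarray, code_size: int = 8) -> np.ndarray:
    """Downsample a binary character image (H, W) to a small code matrix."""
    H, W = image.shape
    bh, bw = H // code_size, W // code_size
    blocks = image[:bh * code_size, :bw * code_size] \
        .reshape(code_size, bh, code_size, bw)
    return (blocks.mean(axis=(1, 3)) > 0.5).astype(np.uint8)

rng = np.random.default_rng(3)
reference_images = rng.random((100, 64, 64)) > 0.5   # one per known character
codes = np.stack([encode(img.astype(float)) for img in reference_images])

def recognize(image: np.ndarray) -> int:
    """Return the index of the closest reference code (Hamming distance)."""
    code = encode(image)
    distances = np.abs(codes.astype(int) - code.astype(int)).sum(axis=(1, 2))
    return int(distances.argmin())
```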

