Writer Identification in Old Handwritten Music Scores

Author(s):  
Alicia Fornés ◽  
Josep Lladós ◽  
Gemma Sánchez ◽  
Horst Bunke

Writer identification in handwritten text documents is an active area of study, whereas identifying the writer of graphical documents is still a challenge. The main objective of this work is the identification of the writer in old music scores, as an example of graphical documents. The proposed writer identification framework combines three different approaches. The first is based on two symbol recognition methods that are robust to hand-drawn distortions. The second generates music lines and extracts information about the slant, the width of the writing, connected components, contours, and fractals. The third generates music texture images and computes textural features. The high identification rates obtained demonstrate the suitability of the proposed ensemble architecture. To the best of our knowledge, this work is the first contribution on writer identification from images containing graphical languages.
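
The abstract does not spell out how the outputs of the three approaches are fused; as a rough, hedged illustration of one simple possibility (not the authors' actual combination rule; all names and scores below are hypothetical), the sketch normalises per-writer scores from each approach and sums them.

```python
import numpy as np

def fuse_writer_scores(score_lists, weights=None):
    """Combine per-writer scores from several independent approaches.

    score_lists: list of 1-D arrays, one per approach, each holding a
    similarity score for every candidate writer (higher = more likely).
    Scores are min-max normalised per approach before a weighted sum.
    """
    weights = weights or [1.0] * len(score_lists)
    fused = np.zeros(len(score_lists[0]), dtype=float)
    for scores, w in zip(score_lists, weights):
        scores = np.asarray(scores, dtype=float)
        rng = scores.max() - scores.min()
        normalised = (scores - scores.min()) / rng if rng > 0 else np.zeros_like(scores)
        fused += w * normalised
    return int(np.argmax(fused)), fused

# Toy example: three approaches scoring five candidate writers.
symbol_scores  = [0.2, 0.9, 0.4, 0.1, 0.3]   # symbol-recognition based
line_scores    = [0.3, 0.7, 0.5, 0.2, 0.1]   # slant/width/contour features
texture_scores = [0.1, 0.8, 0.6, 0.2, 0.2]   # textural features
best, fused = fuse_writer_scores([symbol_scores, line_scores, texture_scores])
print("predicted writer:", best)
```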

Segmentation is the division of an entity into smaller parts and is one of the components of a character recognition system. In text documents, segmentation separates lines, words, and characters. Character recognition is a process that allows computers to recognize written or printed characters, such as numbers or letters, and to convert them into a form the computer can use. The accuracy of an OCR system is evaluated by taking the output of an OCR run on an image and comparing it to the original version of the same text. The main aim of this paper is to study various text line segmentation methods, namely projection profiles and the weighted bucket method. The proposed method applies the horizontal projection profile and connected component methods to handwritten Kannada documents. These methods are evaluated experimentally, and their accuracy and results are compared.
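
As a minimal sketch of the horizontal projection profile idea (a generic illustration, not the paper's full pipeline, and omitting the connected-component stage and any Kannada-specific handling), the following counts ink pixels per row of a binarised page and treats each run of non-empty rows as a text line.

```python
import numpy as np

def segment_lines_by_projection(binary_img):
    """Split a binarised page (1 = ink, 0 = background) into text lines
    using its horizontal projection profile (ink count per row)."""
    profile = binary_img.sum(axis=1)          # one value per row
    is_text_row = profile > 0                 # rows containing any ink
    lines, start = [], None
    for y, text in enumerate(is_text_row):
        if text and start is None:
            start = y                          # a line begins
        elif not text and start is not None:
            lines.append((start, y))           # a line ends
            start = None
    if start is not None:
        lines.append((start, len(is_text_row)))
    return lines

# Toy page: two "lines" of ink separated by blank rows.
page = np.zeros((10, 20), dtype=int)
page[1:3, 2:18] = 1
page[6:9, 1:15] = 1
print(segment_lines_by_projection(page))      # [(1, 3), (6, 9)]
```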


Author(s):  
S. S. Popov ◽  
G. N. Shilova ◽  
A. O. Khotylev

The report presents the results of comprehensive studies of loess-like formations that are common within the drainage basins of the Ay and Yuruzan rivers (South Urals). The loess complexes are associated with the third fluvial terrace. The lithological composition and the structural and textural features indicate that the loess was formed as part of alluvial fans beneath the third fluvial terrace. The obtained palynological data indicate that the deposits formed in the Middle Pleistocene, during the Odintsovo interglacial and the Moscow glaciation.


2022 ◽  
pp. 226-239
Author(s):  
Onur Ugurlu ◽  
Nusin Akram ◽  
Vahid Khalilpour Akram

The new generation of fast, small, and energy-efficient devices that can connect to the internet is already used for different purposes in healthcare, smart homes, smart cities, industrial automation, and entertainment. One of the main requirements in all kinds of cyber-physical systems is a reliable communication platform. In a wired or wireless network, losing certain special nodes may disconnect the communication paths between other nodes. Generally, these nodes, which are called critical nodes, have many undesired effects on the network. The authors focus on three different problems. The first is finding the nodes whose removal minimizes the pairwise connectivity of the residual network. The second is finding the nodes whose removal maximizes the number of connected components. Finally, the third is finding the nodes whose removal minimizes the size of the largest connected component. All three problems are NP-complete, and the authors provide a brief survey of existing approximation algorithms for them.
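
Since all three problems are NP-complete, exact solutions are only feasible for tiny instances; as a reference-point sketch (plain Python, hypothetical helper names, exhaustive search rather than any of the surveyed approximation algorithms), the code below finds the k nodes whose removal minimises pairwise connectivity, the first of the three objectives.

```python
from itertools import combinations

def components(nodes, edges):
    """Connected components of an undirected graph given as node/edge lists."""
    adj = {v: set() for v in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, comps = set(), []
    for v in nodes:
        if v in seen:
            continue
        stack, comp = [v], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u] - comp)
        seen |= comp
        comps.append(comp)
    return comps

def pairwise_connectivity(nodes, edges):
    """Number of node pairs that remain connected."""
    return sum(len(c) * (len(c) - 1) // 2 for c in components(nodes, edges))

def best_k_critical_nodes(nodes, edges, k):
    """Exhaustively pick k nodes whose removal minimises pairwise connectivity.
    Exponential in k: a reference for tiny graphs only, since the problem is NP-complete."""
    best, best_val = None, float("inf")
    for removed in combinations(nodes, k):
        rem = set(removed)
        rest = [v for v in nodes if v not in rem]
        kept_edges = [(u, v) for u, v in edges if u not in rem and v not in rem]
        val = pairwise_connectivity(rest, kept_edges)
        if val < best_val:
            best, best_val = removed, val
    return best, best_val

# Toy graph: two triangles joined through node 3.
nodes = [0, 1, 2, 3, 4, 5]
edges = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (4, 5), (3, 5)]
print(best_k_critical_nodes(nodes, edges, 1))   # removing node 2 (or 3) splits the graph
```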


2019 ◽  
Vol 9 (19) ◽  
pp. 4186
Author(s):  
Xueding Wang ◽  
Xinmai Yang ◽  
Xose Luis Dean-Ben

Biomedical photoacoustic (or optoacoustic) tomography (PAT), or more generally, photoacoustic imaging (PAI), has been an active area of study and development in the last two decades [...]


2020 ◽  
Vol 13 (2) ◽  
pp. 155-194
Author(s):  
Shalini Puri ◽  
Satya Prakash Singh

This article proposes a bi-level image classification system that classifies printed and handwritten English documents into mutually exclusive predefined categories. The proposed system follows the steps of preprocessing, segmentation, feature extraction, and SVM-based character classification at level 1, and word association and fuzzy-matching-based document classification at level 2. The system architecture and its modular structure are described in terms of the various task stages and their functionalities. Further, a case study on document classification shows the internal score computations of words and keywords with fuzzy matching. Experiments on the proposed system illustrate that it achieves promising results in a time-efficient manner, with better accuracy and lower computation time for printed documents than for handwritten ones. Finally, the performance of the proposed system is compared with existing systems, and it is observed that it outperforms many of them.
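
As a rough illustration of the level-2 idea of fuzzy-matching recognised words against category keywords (hypothetical names, keyword lists, and scoring; the paper's actual score computation may differ), the sketch below rates each category by the best fuzzy match of every word against that category's keyword list.

```python
from difflib import SequenceMatcher

def classify_document(words, category_keywords, threshold=0.8):
    """Score each category by fuzzy-matching recognised words to its keywords
    and return the best-scoring category."""
    scores = {}
    for category, keywords in category_keywords.items():
        score = 0.0
        for word in words:
            # best fuzzy match of this word against the category's keywords
            best = max(SequenceMatcher(None, word.lower(), kw.lower()).ratio()
                       for kw in keywords)
            if best >= threshold:          # tolerate OCR/handwriting errors
                score += best
        scores[category] = score
    return max(scores, key=scores.get), scores

# Toy run: words as they might come out of character classification.
words = ["involce", "payment", "amouut", "due"]
categories = {
    "finance": ["invoice", "payment", "amount", "balance"],
    "medical": ["patient", "diagnosis", "dosage"],
}
print(classify_document(words, categories))    # ('finance', ...)
```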


2019 ◽  
Vol 66 (264) ◽  
pp. 842
Author(s):  
Leo Pessini

In the first part of this article, the author highlights Van Rensselaer Potter's pioneering spirit and evaluates his work through two of his disciples, Gerald M. Lower and Peter J. Whitehouse. In the second part, he presents the development of bioethics as reflected in the three editions of the Encyclopedia of Bioethics (1978, 1995 and 2004) and, on the basis of the most recent edition and in cooperation with its chief editor, Stephen Post, explores some of the current challenges in this area of study. Finally, in the third and last part, he summarizes the latest developments since the world congresses of Sydney (2004) and Beijing (2006).


2021 ◽  
Vol 8 ◽  
Author(s):  
Phillip Quin ◽  
Dac Dang Khoa Nguyen ◽  
Thanh Long Vu ◽  
Alen Alempijevic ◽  
Gavin Paul

Many robot exploration algorithms used to explore office, home, or outdoor environments rely on the concept of frontier cells. Frontier cells define the border between known and unknown space. Frontier-based exploration is the process of repeatedly detecting frontiers and moving towards them until there are no more frontiers and therefore no more unknown regions. The faster frontier cells can be detected, the more efficient exploration becomes. This paper proposes several algorithms for detecting frontiers. The first, called Naïve Active Area (NaïveAA) frontier detection, achieves frontier detection in constant time by evaluating only the cells in the active area covered by newly taken scans. The second, called Expanding-Wavefront Frontier Detection (EWFD), uses the frontiers from the previous timestep as a starting point for searching for frontiers in newly discovered space. The third, called Frontier-Tracing Frontier Detection (FTFD), uses the frontiers from the previous timestep together with the endpoints of the scan to determine the frontiers at the current timestep. The algorithms are compared to state-of-the-art algorithms such as Naïve, WFD, and WFD-INC. NaïveAA is shown to operate in constant time and is therefore suitable as a basic benchmark for frontier detection algorithms. EWFD and FTFD are found to be significantly faster than the other algorithms.
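
As a minimal sketch of the frontier-cell definition these algorithms build on (a naive full-grid scan, not NaïveAA, EWFD, or FTFD themselves; grid values and names are assumed for illustration), the code below marks every free cell of an occupancy grid that touches unknown space.

```python
import numpy as np

FREE, OCCUPIED, UNKNOWN = 0, 1, -1

def find_frontier_cells(grid):
    """Return the frontier cells of an occupancy grid: free cells with at
    least one 4-connected unknown neighbour. This is the full-grid scan that
    incremental detectors try to avoid repeating at every timestep."""
    frontiers = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != FREE:
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr, nc] == UNKNOWN:
                    frontiers.append((r, c))
                    break
    return frontiers

# Toy map: a small explored pocket surrounded by unknown space.
grid = np.full((5, 5), UNKNOWN)
grid[1:4, 1:4] = FREE
grid[2, 2] = OCCUPIED
print(find_frontier_cells(grid))   # the free cells on the pocket's border
```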


2016 ◽  
Vol 2 ◽  
pp. e39 ◽  
Author(s):  
Ofer Biller ◽  
Irina Rabaev ◽  
Klara Kedem ◽  
Its’hak Dinstein ◽  
Jihad J. El-Sana

Common tasks in document analysis, such as binarization and line extraction, are still considered difficult for highly degraded text documents. Having reliable fundamental information about the characters of the document, such as the distribution of character dimensions and stroke width, can significantly improve the performance of these tasks. We introduce a novel perspective on the image data that maps the evolution of connected components as the grayscale threshold changes. These maps reveal significant information about the sets of elements in the document, such as characters, noise, stains, and words. This information is further employed to improve a state-of-the-art binarization algorithm and to automatically perform character size estimation, line extraction, stroke width estimation, and feature distribution analysis, all of which are hard tasks for highly degraded documents.
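
As a rough, hypothetical sketch of the underlying idea of tracking connected components across grayscale thresholds (not the authors' actual evolution maps; function names and the toy data are assumptions), the code below binarises an image at a sweep of thresholds and records how the number and sizes of components change. Components that persist over a wide threshold range tend to be characters, while short-lived ones tend to be noise.

```python
import numpy as np
from scipy import ndimage

def component_evolution(gray, thresholds):
    """For each threshold, binarise the grayscale image (ink = darker than
    threshold) and record the number and sizes of connected components."""
    evolution = []
    for t in thresholds:
        binary = gray < t
        labels, n = ndimage.label(binary)
        sizes = np.bincount(labels.ravel())[1:]   # drop the background bin
        evolution.append((t, n, sorted(sizes, reverse=True)))
    return evolution

# Toy "page": two dark blobs on a light, noisy background.
rng = np.random.default_rng(0)
gray = rng.integers(200, 256, size=(40, 40)).astype(np.uint8)
gray[5:15, 5:15] = 40
gray[22:30, 20:35] = 60
for t, n, sizes in component_evolution(gray, [50, 100, 150, 220]):
    print(f"threshold {t}: {n} components, largest {sizes[:2]}")
```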

