Two-step skeletization of binary images based on the Zhang-Suen model and the producing mask

Author(s):  
J. Ma ◽  
V. Yu. Tsviatkou ◽  
V. K. Kanapelka

The aim of this work is to limit excessive thinning and to increase the resistance to contour noise of skeletons obtained from binary images of arbitrary shape, while maintaining a high skeletonization rate. A skeleton is a set of thin lines whose relative position, size and shape convey information about the size, shape and spatial orientation of the corresponding homogeneous region of the image. To ensure resistance to contour noise, skeletonization algorithms are built from several steps. The Zhang-Suen algorithm is widely known for its high-quality skeletons and average performance; its disadvantages are the blurring of diagonal lines two pixels thick and the complete disappearance of 2×2-pixel patterns. To overcome these defects, this paper proposes a mathematical model that compensates the Zhang-Suen algorithm, together with a producing mask and two logical conditions for evaluating its elements.
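
For reference, the deletion rules of the baseline Zhang-Suen algorithm that the proposed model compensates can be sketched as follows. This is a minimal NumPy sketch of the standard two-sub-iteration conditions only, not of the paper's producing mask:

```python
import numpy as np

def zhang_suen_pass(img, step):
    """One Zhang-Suen sub-iteration on a 0/1 NumPy array.

    step=0 applies the first set of edge conditions, step=1 the second.
    Deletions are decided on the input snapshot (parallel removal) and
    written to a copy; call both steps alternately until no change.
    """
    out = img.copy()
    rows, cols = img.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            if img[r, c] != 1:
                continue
            # 8-neighbourhood p2..p9, clockwise from the pixel above
            p2, p3, p4 = img[r-1, c], img[r-1, c+1], img[r, c+1]
            p5, p6, p7 = img[r+1, c+1], img[r+1, c], img[r+1, c-1]
            p8, p9 = img[r, c-1], img[r-1, c-1]
            nbrs = [p2, p3, p4, p5, p6, p7, p8, p9]
            B = sum(nbrs)                                   # non-zero neighbours
            A = sum((nbrs[i] == 0 and nbrs[(i + 1) % 8] == 1)
                    for i in range(8))                      # 0 -> 1 transitions
            if not (2 <= B <= 6 and A == 1):
                continue
            if step == 0 and p2 * p4 * p6 == 0 and p4 * p6 * p8 == 0:
                out[r, c] = 0
            elif step == 1 and p2 * p4 * p8 == 0 and p2 * p6 * p8 == 0:
                out[r, c] = 0
    return out
```

Running both sub-iterations to a fixed point on an isolated 2×2 block of ones removes all four pixels in the first pass, which is precisely the disappearance defect the producing mask is intended to correct.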

Informatics ◽  
2020 ◽  
Vol 17 (2) ◽  
pp. 25-35
Author(s):  
J. Ma ◽  
V. Yu. Tsviatkou ◽  
V. K. Kanapelka

This paper is focused on the skeletonization of binary images. Skeletonization represents a binary image as a set of thin lines whose relative position, size and shape adequately describe the size, shape and spatial orientation of the corresponding image regions. There are many skeletonization methods. Iterative parallel algorithms provide high-quality skeletons and can be implemented with one or more sub-iterations. In each iteration, redundant pixels whose neighborhoods meet certain conditions are removed layer by layer along the contour, until only the skeleton remains. Many single-sub-iteration algorithms suffer from broken connectivity and the formation of excess skeleton fragments. The highest-quality skeletons are formed by the well-known single-iteration OPTA algorithm, which is based on 18 binary masks, but it is sensitive to contour noise and has high computational complexity. The Zhang-Suen (ZS) two-sub-iteration algorithm, based on 6 logical conditions, is widely used because of its relative simplicity, but it blurs diagonal lines two pixels thick and loses squares of 2×2 pixels. Moreover, neither algorithm achieves unit pixel thickness of the skeleton lines (many non-node pixels have more than two neighbors). A mathematical model and the OPCA (One-Pass Combination Algorithm), based on a combination and simplification of the single-iteration OPTA and the two-iteration ZS, are proposed for constructing extremely thin, connected skeletons of binary images with low computational complexity. The model and algorithm also accelerate skeletonization, improve recoverability of the original image from its skeleton, and reduce the redundancy of the bonds between skeleton elements.
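
The thinness criterion mentioned above (non-node pixels should keep at most two 8-neighbours) can be checked mechanically. Below is a minimal NumPy sketch of such a check, offered as a hypothetical quality metric rather than the authors' exact measure:

```python
import numpy as np

def redundant_bond_pixels(skel):
    """Count skeleton pixels that keep more than two 8-neighbours.

    In a perfectly thin skeleton only branch (node) pixels should have
    more than two neighbours, so a large count signals redundant bonds
    such as those left by OPTA and ZS. Hypothetical metric, not the
    authors' exact criterion.
    """
    skel = (np.asarray(skel) > 0).astype(np.uint8)
    padded = np.pad(skel, 1)
    # Sum the 8-neighbourhood of every pixel via shifted copies
    shifted = [np.roll(np.roll(padded, dr, axis=0), dc, axis=1)
               for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]
    nbr_count = np.sum(shifted, axis=0)[1:-1, 1:-1]
    return int(np.count_nonzero((skel == 1) & (nbr_count > 2)))
```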


Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 5235
Author(s):  
Jiri Nemecek ◽  
Martin Polasek

Among other things, passive methods based on the processing of images of feature points or beacons captured by an image sensor are used to measure the relative position of objects. Usually at least two cameras have to be used to obtain the required information, or the cameras are combined with other sensors working on different physical principles. This paper describes the principle of passively measuring three position coordinates of an optical beacon using a simultaneous method and presents the results of the corresponding experimental tests. The beacon is an artificial geometric structure consisting of several semiconductor light sources. The sources are suitably arranged to allow passive measurement, all from a single camera, of the beacon's distance and of two position angles, the azimuth and the elevation. The mathematical model of the method consists of working equations that contain the measured coordinates, the geometric parameters of the beacon, and the geometric parameters of the beacon image captured by the camera. All the results of these experimental tests are presented.
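
The paper's working equations are not reproduced here, but the kind of single-camera relation such a method rests on can be sketched with a pinhole-camera model. The focal length, the known separation of two beacon sources, and their image coordinates are assumed inputs:

```python
import math

def beacon_position(f_px, baseline_m, baseline_px, cx, cy, u, v):
    """Rough single-camera estimate of a beacon's distance, azimuth and
    elevation from its image (hypothetical pinhole-camera sketch, not
    the paper's working equations).

    f_px        camera focal length in pixels
    baseline_m  known physical separation of two beacon sources, metres
    baseline_px separation of their images, pixels
    (cx, cy)    principal point; (u, v) beacon centre in the image
    """
    distance = f_px * baseline_m / baseline_px    # similar triangles
    azimuth = math.atan2(u - cx, f_px)            # horizontal position angle
    elevation = math.atan2(cy - v, f_px)          # vertical angle (image y grows downward)
    return distance, math.degrees(azimuth), math.degrees(elevation)
```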


2012 ◽  
Vol 576 ◽  
pp. 41-45
Author(s):  
A.K.M. Nurul Amin ◽  
M.A. Mahmud ◽  
M.D. Arif

The majority of semiconductor devices are fabricated on silicon wafers. Manufacturing high-quality silicon wafers involves numerous machining processes, including end milling. In order to end mill silicon to a nanometric surface finish, it is crucial to determine the effect of the machining parameters that govern the transition from brittle to ductile cutting mode. This paper therefore presents a novel experimental technique to study the effects of machining parameters in high-speed end milling of silicon. The application of compressed air to blow away the chips formed is also investigated. The ranges of machining parameters that enable the transition from brittle to ductile mode cutting, as well as the attainment of high surface quality and integrity, are identified. A mathematical model of the response parameter, the average surface roughness (Ra), is subsequently developed in terms of the machining parameters using response surface methodology (RSM). Analysis of Variance (ANOVA) showed the model to be significant at the 95% confidence level. The experimental results show that the developed mathematical model can effectively describe the performance indicators within the controlled limits of the factors considered.
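
A second-order response-surface fit of the kind used for the Ra model can be sketched as follows. The three coded factors (for example cutting speed, feed and depth of cut) are assumed for illustration and are not necessarily the paper's factor set:

```python
import numpy as np

def fit_quadratic_rsm(X, ra):
    """Least-squares fit of a second-order response-surface model
    Ra = b0 + sum(bi*xi) + sum(bii*xi^2) + sum(bij*xi*xj).

    X  : (n_runs, 3) array of coded machining factors (assumed set:
         cutting speed, feed, depth of cut)
    ra : (n_runs,) measured average surface roughness values
    Returns the coefficient vector; term significance would be judged
    with ANOVA, as in the paper.
    """
    x1, x2, x3 = X[:, 0], X[:, 1], X[:, 2]
    design = np.column_stack([np.ones_like(x1),
                              x1, x2, x3,                 # linear terms
                              x1**2, x2**2, x3**2,        # quadratic terms
                              x1*x2, x1*x3, x2*x3])       # interaction terms
    coeffs, *_ = np.linalg.lstsq(design, ra, rcond=None)
    return coeffs
```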


2021 ◽  
pp. 107-120
Author(s):  
Viktor Medennikov

The article substantiates the need to re-evaluate the role of human capital in the development of society in the digital age. Since high-quality education is the main direction of the formation of human capital in any country, the importance of creating an information space for scientific and educational institutions is demonstrated. A methodology for assessing the level of human capital on the basis of scientific and educational information resources is proposed. The author presents the results of calculations obtained with this method, using agricultural educational institutions as an example, and a mathematical model for assessing the impact of human capital on the socio-economic situation of the regions.


Author(s):  
Prabhakar Telagarapu ◽  
B. Jagdishwar Rao ◽  
J. Venkata Suman ◽  
K. Chiranjeevi

The objective of this paper is to visualize and analyze video. A video is a sequence of image frames. In this work, an algorithm is developed to analyze a single frame and is then applied to all frames in the video. Unwanted objects are expected in a video frame; they can be suppressed by converting the colour frames to grayscale and applying a thresholding algorithm to the image. The threshold can be set depending on the object to be detected, and the grayscale image is converted to binary during thresholding. To reduce noise, improve the robustness of the system, and reduce the error rate in detection and tracking, morphological processing of the binary images is used: it removes the small unwanted objects present in a frame. A blob analysis technique developed for the resulting binary image facilitates pedestrian and car detection; processing each blob's relative size and location distinguishes pedestrians from cars. The thresholding, morphological and blob processing is applied to all frames in the video, and finally the original video is displayed with the cars tagged.
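
A per-frame sketch of this pipeline, assuming OpenCV and illustrative threshold and blob-area values (not the paper's settings), might look like this:

```python
import cv2

def tag_moving_objects(frame, thresh=200, min_car_area=5000, min_ped_area=800):
    """Threshold, clean with morphology, then classify blobs by size.
    The threshold and area limits are illustrative assumptions."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    # Morphological opening removes small unwanted objects
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    cleaned = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    # Blob analysis: connected components with size and location statistics
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(cleaned)
    for i in range(1, n):                        # label 0 is the background
        x, y, w, h, area = stats[i]
        if area >= min_car_area:
            tag, colour = "car", (0, 0, 255)
        elif area >= min_ped_area:
            tag, colour = "pedestrian", (0, 255, 0)
        else:
            continue                             # noise blob, ignore
        cv2.rectangle(frame, (x, y), (x + w, y + h), colour, 2)
        cv2.putText(frame, tag, (x, y - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, colour, 1)
    return frame
```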


Author(s):  
Saif alZahir ◽  
Syed M. Naqvi

In this paper, the authors present a binary image compression scheme that can be used for either lossless or lossy compression requirements. The scheme contains five new contributions. The lossless component partitions the input image into a number of non-overlapping rectangles using a new line-by-line method. The upper-left and lower-right vertices of each rectangle are identified, and their coordinates are efficiently encoded using three methods of representation and compression. The lossy component, on the other hand, provides higher compression through two techniques. 1) It reduces the number of rectangles taken from the input image using mathematical regression models; these models guarantee image quality, so that the rectangle reduction does not produce visible distortion in the image. The models were obtained through subjective tests and regression analysis on a large set of binary images. 2) Further compression gain is achieved by discarding isolated pixels and 1-pixel rectangles from the image. Simulation results show that the proposed schemes provide significant improvements over previously published work for both the lossy and the lossless components.
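
The lossless stage can be illustrated with a simplified greedy line-by-line partition into all-ones rectangles. The sketch below shows the general idea only; it is not the authors' exact partitioning or coordinate encoding:

```python
import numpy as np

def partition_rectangles(img):
    """Greedy line-by-line partition of a 0/1 image into non-overlapping
    all-ones rectangles, returned as (top, left, bottom, right) corner
    tuples. Simplified sketch, not the paper's method."""
    img = (np.asarray(img) > 0).astype(np.uint8)
    covered = np.zeros_like(img, dtype=bool)
    rects = []
    rows, cols = img.shape
    for r in range(rows):
        c = 0
        while c < cols:
            if img[r, c] == 1 and not covered[r, c]:
                left = c
                while c < cols and img[r, c] == 1 and not covered[r, c]:
                    c += 1                        # extend the run to the right
                right = c - 1
                bottom = r
                # grow the run downwards while the full width stays ones
                while (bottom + 1 < rows
                       and img[bottom + 1, left:right + 1].all()
                       and not covered[bottom + 1, left:right + 1].any()):
                    bottom += 1
                covered[r:bottom + 1, left:right + 1] = True
                rects.append((r, left, bottom, right))
            else:
                c += 1
    return rects
```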


2005 ◽  
Vol 69 (4) ◽  
pp. 387-397 ◽  
Author(s):  
Massimo Migliori ◽  
Domenico Gabriele ◽  
Bruno de Cindio ◽  
Claudio M. Pollini

Author(s):  
SATOSHI SUZUKI ◽  
NAONORI UEDA ◽  
JACK SKLANSKY

A thinning method for binary images is proposed which converts digital binary images into line patterns. The proposed method suppresses shape distortion as well as false feature points, thereby producing more natural line patterns than existing methods. In addition, this method guarantees that the produced line patterns are one pixel in width everywhere. In this method, an input binary image is transformed into a graph in which 1-pixels correspond to nodes and neighboring nodes are connected by edges. Next, nodes unnecessary for preserving the topology of the input image and the edges connecting them are deleted symmetrically. Then, edges that do not contribute to the preservation of the topology of the input image are deleted. The advantages of this graph-based thinning method are confirmed by applying it to ideal line patterns and geographical maps.
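
The graph construction that this method starts from can be sketched with NetworkX; the paper's symmetric node and edge deletion rules are not reproduced here:

```python
import networkx as nx
import numpy as np

def pixel_graph(img):
    """Build the pixel-adjacency graph used as the starting point:
    every 1-pixel becomes a node and 8-adjacent 1-pixels are joined by
    an edge. Topology-preserving deletions would then operate on this
    graph."""
    img = (np.asarray(img) > 0)
    g = nx.Graph()
    rows, cols = img.shape
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1)]   # half of the 8-neighbourhood
    for r in range(rows):
        for c in range(cols):
            if not img[r, c]:
                continue
            g.add_node((r, c))
            for dr, dc in offsets:                     # half-set avoids duplicate edges
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and img[nr, nc]:
                    g.add_edge((r, c), (nr, nc))
    return g
```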


2005 ◽  
Vol 05 (01) ◽  
pp. 67-87 ◽  
Author(s):  
HAIPING LU ◽  
YUN Q. SHI ◽  
ALEX C. KOT ◽  
LIHUI CHEN

Digital watermarking has been proposed for the protection of digital media. This paper presents two watermarking algorithms for binary images. Both algorithms involve a blurring preprocessing step and a biased binarization. After the blurring, the first algorithm embeds a watermark by modifying the DC components of the Discrete Cosine Transform (DCT), followed by a biased binarization; the second embeds a watermark by directly biasing the binarization threshold of the blurred image, controlled by a loop. Experimental results demonstrate the imperceptibility and robustness of both algorithms.
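
A rough sketch of the first algorithm's embedding path (blur, per-block DC adjustment, biased binarization) is given below. The block size, embedding strength and bias value are illustrative assumptions, not the paper's parameters:

```python
import cv2
import numpy as np

def embed_bits_dct(binary_img, bits, block=8, strength=4.0, bias=0.45):
    """Blur a 0/1 image, embed one bit per block by nudging the DCT DC
    coefficient, then re-binarize with a biased threshold.
    Illustrative sketch only."""
    img = binary_img.astype(np.float32) * 255.0
    blurred = cv2.GaussianBlur(img, (3, 3), 0)           # blurring preprocessing
    h, w = blurred.shape
    out = blurred.copy()
    idx = 0
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            if idx >= len(bits):
                break
            patch = cv2.dct(np.ascontiguousarray(blurred[r:r + block, c:c + block]))
            patch[0, 0] += strength if bits[idx] else -strength   # shift the DC component
            out[r:r + block, c:c + block] = cv2.idct(patch)
            idx += 1
    # biased binarization: threshold placed below the mid-grey level
    return (out > 255.0 * bias).astype(np.uint8)
```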


2011 ◽  
Vol 103 ◽  
pp. 658-666
Author(s):  
Hideaki Kawano ◽  
Hideaki Orii ◽  
Hiroshi Maeda

In this paper, a method is proposed that locates the signboard region in a picture and extracts the characters inside the signboard. We usually take notes so as not to forget what we want to remember, but this task is often too troublesome. Our aim is to develop a new input interface for entering text from a picture. Most signboards consist of nearly monochromatic regions. On the basis of this observation, image segmentation using color information is applied, and binary images are then obtained by thresholding each segmented region. Each binary image is enclosed by its smallest circumscribed square, and the signboard region is identified from the distribution and area of the white pixels inside the square. Experimental results confirmed the effectiveness of the proposed method.
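
A simplified sketch of the segmentation stage, assuming OpenCV k-means colour clustering and illustrative parameters rather than the paper's, might look like this:

```python
import cv2
import numpy as np

def candidate_signboard_regions(image, k=6, min_fill=0.6):
    """Cluster colours with k-means, build a binary mask per cluster,
    take each mask's bounding box and keep the densely filled ones as
    candidate signboard regions. k and min_fill are illustrative."""
    pixels = image.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, _ = cv2.kmeans(pixels, k, None, criteria, 3,
                              cv2.KMEANS_PP_CENTERS)
    labels = labels.reshape(image.shape[:2])
    regions = []
    for i in range(k):
        mask = (labels == i).astype(np.uint8)
        x, y, w, h = cv2.boundingRect(mask)      # bounding box of the cluster's pixels
        if w == 0 or h == 0:
            continue
        fill = mask[y:y + h, x:x + w].mean()     # share of white pixels inside the box
        if fill >= min_fill:                     # nearly monochromatic rectangle
            regions.append((x, y, w, h))
    return regions
```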

