FAST IMPLEMENTATION OF MORPHOLOGICAL OPERATIONS USING BINARY IMAGE BLOCK DECOMPOSITION

2004 ◽  
Vol 04 (02) ◽  
pp. 183-202
Author(s):  
BASILIOS GATOS ◽  
STAVROS J. PERANTONIS ◽  
NIKOS PAPAMARKOS ◽  
IOANNIS ANDREADIS

Morphological transformations are commonly used to perform a variety of image processing tasks. However, morphological operations are time-consuming procedures, since they involve ordering and min/max computations on the numbers resulting from the interaction of the image with structuring elements. This paper presents a new method that can be used to speed up basic morphological operations for binary images. To achieve this, the binary images are first decomposed into a set of non-overlapping rectangular blocks of foreground pixels with predefined maximum dimensions. Off-line dilations and erosions of all rectangular blocks are then computed in advance and stored in suitable look-up array tables. Using the look-up tables, the results of the morphological operations on the rectangular blocks are obtained directly: first, all image blocks are replaced by their look-up array tables, and the morphological operations are then applied only to the limited number of remaining pixels. Experimental results reveal that, starting from a block-represented binary image, morphological operations can be executed with different types of structuring elements in significantly less CPU time. Using the block representation, we are able to perform dilation 16 times faster than non-fast implementations and 10 times faster than an alternative fast implementation based on contour processing. Significant acceleration is also recorded when this approach is used for repeated application of dilation (for 10 iterations, dilation using the block representation is over 20 times faster than non-fast implementations and over four times faster than the fast contour-based approach).
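The reason per-block results can be precomputed is that, for a rectangular structuring element, dilating a solid rectangular block simply grows its bounds. A minimal sketch of this idea (the row-run decomposition and function names here are illustrative simplifications, not the paper's exact block decomposition or look-up table layout):

```python
import numpy as np

def decompose_rows(img):
    """Decompose a binary image into non-overlapping rectangles.
    Simplified sketch: each rectangle is a maximal horizontal run of
    foreground pixels (1 pixel tall); the paper merges runs into larger
    blocks with predefined maximum dimensions."""
    blocks = []
    for y, row in enumerate(img):
        x = 0
        while x < len(row):
            if row[x]:
                x0 = x
                while x < len(row) and row[x]:
                    x += 1
                blocks.append((y, y, x0, x - 1))  # (top, bottom, left, right)
            else:
                x += 1
    return blocks

def dilate_blocks(blocks, shape, a, b):
    """Dilate with a (2a+1) x (2b+1) rectangular structuring element.
    Dilating a solid rectangle just grows its bounds, so the per-block
    result can be precomputed (the paper stores such results in
    look-up array tables)."""
    out = np.zeros(shape, dtype=np.uint8)
    h, w = shape
    for top, bottom, left, right in blocks:
        out[max(0, top - a):min(h, bottom + a + 1),
            max(0, left - b):min(w, right + b + 1)] = 1
    return out

img = np.zeros((7, 7), dtype=np.uint8)
img[2:5, 2:5] = 1                      # a 3x3 foreground square
dilated = dilate_blocks(decompose_rows(img), img.shape, 1, 1)
print(dilated.sum())                   # 5x5 square -> 25 foreground pixels
```

A full implementation would merge runs into larger blocks up to the predefined maximum dimensions and use the stored look-up tables to handle arbitrary structuring elements.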

2021 ◽  
Author(s):  
Juvet Karnel Sadié ◽  
Stéphane Gael Raymond Ekodeck ◽  
Rene Ndoundam

Abstract: We propose a steganographic scheme based on permutations which improves the capacity of embedding information in a series of p host binary images. Given a host binary image block of size m × n bits and an embedding technique T that can hide Q(m, n) bits of data in the image, T can hide p × Q(m, n) bits of data in p images. Our scheme improves the capacity of embedding information in the p images so that, instead of p × Q(m, n) bits, it can hide p × log2(p) + p × Q(m, n) bits. Experimental results show that our model achieves a higher hiding capacity.
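The extra p × log2(p) bits can come from the ordering of the p host images themselves: a permutation of p items encodes log2(p!) ≈ p × log2(p) bits. A hedged sketch of such an encoding via the factorial number system (the function names are our own; this illustrates the principle, not the paper's exact scheme):

```python
from math import factorial

def int_to_perm(k, p):
    """Encode an integer k (0 <= k < p!) as a permutation of range(p)
    via the factorial number system (Lehmer code). Transmitting the p
    host images in this order carries log2(p!) extra bits on top of
    whatever each image hides individually."""
    items = list(range(p))
    perm = []
    for i in range(p, 0, -1):
        f = factorial(i - 1)
        perm.append(items.pop(k // f))
        k %= f
    return perm

def perm_to_int(perm):
    """Recover the hidden integer from the received image ordering."""
    items = sorted(perm)
    k = 0
    for i, x in enumerate(perm):
        j = items.index(x)
        k += j * factorial(len(perm) - 1 - i)
        items.pop(j)
    return k

print(int_to_perm(4, 3))                          # [2, 0, 1]
print(perm_to_int(int_to_perm(1000, 7)) == 1000)  # round-trips
```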


2009 ◽  
Vol 45 (3) ◽  
pp. 533-567 ◽  
Author(s):  
INKIE CHUNG

This paper provides a Distributed Morphology analysis of the paradoxical interaction of the two cases of verbal suppletion in Korean, and argues that the two suppletion types are characterized by two different types of morphological operations. The two roots found with short-form negation and honorification suggest different morphological structures: [[Neg-V] Hon] for al- ‘know’, molu- ‘not.know’, a-si- ‘know-hon’, molu-si- (not *an(i) a-si-) ‘neg know-hon’; and [Neg [V-Hon]] for iss- ‘exist’, eps- ‘not.exist’, kyey-si- ‘exist-hon’, an(i) kyey-si- (not *eps-(u)-si-) ‘neg exist-hon’. Predicate repetition constructions support the [[Neg-V] Hon] structure. In this structure, however, the negative suppletion (analyzed as fusion of negation and the root) is blocked by the honorific suffix structurally more peripheral to the root. C-command is the only requirement for context allomorphy in Distributed Morphology (Halle & Marantz 1993). Since the [+hon] feature c-commands the root, the root can show honorific suppletive allomorphy in the first cycle with negation intervening between the root and [+hon]. Negation fusion occurs in the second cycle after vocabulary insertion of the root. Fusion, then, should refer to vocabulary items, not abstract features, and will be interleaved with vocabulary insertion. If the output of the root is /kyey/ due to the honorific feature, negative suppletion will not apply and the correct form an(i) kyey-si- will be derived. Therefore, both of the distinct morphological operations for suppletion, i.e., fusion and contextual allomorphy, are necessary. The revised formulation of fusion shows that certain morphological operations follow vocabulary insertion. This derivational approach to the suppletion interaction provides support for separation of phonological and nonphonological features and for late insertion of phonological features.


Author(s):  
Prabhakar Telagarapu ◽  
B. Jagdishwar Rao ◽  
J. Venkata Suman ◽  
K. Chiranjeevi

The objective of this paper is to visualize and analyze video. Videos are sequences of image frames. In this work, an algorithm is developed to analyze one frame and is then applied to all frames in a video. Unwanted objects in a video frame can be removed by converting colour frames to grayscale and applying a thresholding algorithm, where the threshold is set depending on the object to be detected; the grayscale image is converted to binary during thresholding. To reduce noise, improve the robustness of the system, and reduce the error rate in the detection and tracking process, morphological processing for binary images is used: it removes small unwanted objects present in a frame. A blob analysis technique applied to the extracted binary image facilitates pedestrian and car detection; processing each blob's relative size and location makes it possible to distinguish pedestrians from cars. The thresholding, morphological and blob processes are applied to all frames in the video, and finally the original video is displayed with the cars tagged.
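The thresholding and morphological cleanup steps above can be sketched as follows with a 3×3 structuring element (the frame data and threshold value are illustrative assumptions; blob analysis and tagging are not shown):

```python
import numpy as np

def threshold(gray, t):
    """Convert a grayscale frame to a binary image; t is chosen
    depending on the object to be detected."""
    return (gray > t).astype(np.uint8)

def erode(img):
    """3x3 binary erosion implemented with shifted views."""
    p = np.pad(img, 1)
    out = np.ones_like(img)
    h, w = img.shape
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out &= p[dy:dy + h, dx:dx + w]
    return out

def dilate(img):
    """3x3 binary dilation, the dual of erode()."""
    p = np.pad(img, 1)
    out = np.zeros_like(img)
    h, w = img.shape
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out |= p[dy:dy + h, dx:dx + w]
    return out

# Opening (erosion then dilation) removes small unwanted objects
# before blob analysis.
gray = np.zeros((8, 8))
gray[1:5, 1:5] = 200       # one large object
gray[6, 6] = 200           # single-pixel noise
binary = threshold(gray, 128)
opened = dilate(erode(binary))
print(opened.sum())        # noise pixel removed; 4x4 object survives -> 16
```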


2021 ◽  
Vol 15 ◽  
Author(s):  
Grégoire Python ◽  
Pauline Pellet Cheneval ◽  
Caroline Bonnans ◽  
Marina Laganaro

Background: Although both phonological and semantic cues can facilitate word retrieval in aphasia, it remains unclear whether their respective effectiveness varies according to the underlying anomic profile.

Aim: The aim of the present facilitation study is to compare the effect of phonological and semantic cues on picture naming accuracy and speed in different types of anomia.

Methods: In this within-subject design study, 15 persons with aphasia following brain damage underwent picture naming paradigms with semantic cues (categorically or associatively related) and phonological cues (initial phoneme presented auditorily, visually or both).

Results: At the group level, semantic cueing was as effective as phonological cueing in significantly speeding up picture naming. However, while phonological cues were effective regardless of the anomic profile, semantic cueing effects varied depending on the type of anomia. Participants with mixed anomia showed facilitation after both semantic categorical and associative cues, but individuals with lexical-phonological anomia only after categorical cues. Crucially, semantic cues were ineffective for participants with lexical-semantic anomia. These disparities were confirmed by categorical semantic facilitation decreasing when semantic/omission errors prevailed in the anomic profile, but increasing alongside phonological errors.

Conclusion: The effectiveness of phonological vs. semantic cues seems related to the underlying anomic profile: phonological cues benefit any type of anomia, but semantic cues only lexical-phonological or mixed anomia.


Author(s):  
Saif alZahir ◽  
Syed M. Naqvi

In this paper, the authors present a binary image compression scheme that can be used for either lossless or lossy compression requirements. This scheme contains five new contributions. The lossless component of the scheme partitions the input image into a number of non-overlapping rectangles using a new line-by-line method. The upper-left and lower-right vertices of each rectangle are identified, and their coordinates are efficiently encoded using three methods of representation and compression. The lossy component, on the other hand, provides higher compression through two techniques. 1) It reduces the number of rectangles in the input image using our mathematical regression models. These models guarantee image quality, so that the rectangle reduction does not produce visual distortion in the image; they were obtained through subjective tests and regression analysis on a large set of binary images. 2) Further compression gain is achieved by discarding isolated pixels and 1-pixel rectangles from the image. Simulation results show that the proposed schemes provide significant improvements over previously published work for both the lossy and the lossless components.
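The lossless partition step can be sketched with a simple greedy line-by-line scan (an illustration of the idea only; the paper's own partition method and three vertex-coding methods are more elaborate):

```python
def partition_rectangles(img):
    """Greedy line-by-line partition of a binary image into
    non-overlapping foreground rectangles, each coded by its upper-left
    and lower-right vertices."""
    img = [row[:] for row in img]          # work on a copy
    h, w = len(img), len(img[0])
    rects = []
    for y in range(h):
        for x in range(w):
            if img[y][x]:
                # grow right along the current line
                x2 = x
                while x2 + 1 < w and img[y][x2 + 1]:
                    x2 += 1
                # grow down while the full row span stays foreground
                y2 = y
                while y2 + 1 < h and all(img[y2 + 1][c] for c in range(x, x2 + 1)):
                    y2 += 1
                for r in range(y, y2 + 1):  # mark pixels as covered
                    for c in range(x, x2 + 1):
                        img[r][c] = 0
                rects.append(((y, x), (y2, x2)))
    return rects

img = [[0, 1, 1, 0],
       [0, 1, 1, 0],
       [1, 1, 1, 0]]
print(partition_rectangles(img))   # [((0, 1), (2, 2)), ((2, 0), (2, 0))]
```

Note how the second rectangle is a 1-pixel rectangle: the lossy component would discard exactly such rectangles for extra compression gain.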


Author(s):  
SATOSHI SUZUKI ◽  
NAONORI UEDA ◽  
JACK SKLANSKY

A thinning method for binary images is proposed which converts digital binary images into line patterns. The proposed method suppresses shape distortion as well as false feature points, thereby producing more natural line patterns than existing methods. In addition, this method guarantees that the produced line patterns are one pixel in width everywhere. In this method, an input binary image is transformed into a graph in which 1-pixels correspond to nodes and neighboring nodes are connected by edges. Next, nodes unnecessary for preserving the topology of the input image and the edges connecting them are deleted symmetrically. Then, edges that do not contribute to the preservation of the topology of the input image are deleted. The advantages of this graph-based thinning method are confirmed by applying it to ideal line patterns and geographical maps.
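The first transformation step, in which 1-pixels become graph nodes and 8-neighbouring nodes are joined by edges, can be sketched as follows (only this step is shown; the subsequent symmetric deletion of topology-preserving-redundant nodes and edges is not):

```python
def image_to_graph(img):
    """Transform a binary image into a graph: 1-pixels become nodes,
    and 8-neighbouring nodes are connected by edges."""
    h, w = len(img), len(img[0])
    nodes = {(y, x) for y in range(h) for x in range(w) if img[y][x]}
    edges = set()
    for (y, x) in nodes:
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                n = (y + dy, x + dx)
                if n != (y, x) and n in nodes:
                    edges.add(frozenset(((y, x), n)))
    return nodes, edges

img = [[1, 1, 0],
       [0, 1, 0]]
nodes, edges = image_to_graph(img)
print(len(nodes), len(edges))   # 3 nodes, 3 edges (one diagonal)
```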


2005 ◽  
Vol 05 (01) ◽  
pp. 67-87 ◽  
Author(s):  
HAIPING LU ◽  
YUN Q. SHI ◽  
ALEX C. KOT ◽  
LIHUI CHEN

Digital watermarking has been proposed for the protection of digital media. This paper presents two watermarking algorithms for binary images. Both algorithms involve a blurring preprocessing step and a biased binarization. After the blurring, the first algorithm embeds a watermark by modifying the DC components of the Discrete Cosine Transform (DCT), followed by a biased binarization; the second embeds a watermark by directly biasing the binarization threshold of the blurred image, controlled by a loop. Experimental results show the imperceptibility and robustness of both algorithms.
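A rough sketch of the second algorithm's pipeline, a mean blur followed by a bit-biased binarization (the blur kernel, threshold and bias values are illustrative assumptions; the paper's feedback loop controlling the bias is not shown):

```python
import numpy as np

def blur3x3(img):
    """3x3 mean blur used as the preprocessing step."""
    p = np.pad(img.astype(float), 1, mode='edge')
    h, w = img.shape
    return sum(p[dy:dy + h, dx:dx + w]
               for dy in range(3) for dx in range(3)) / 9.0

def biased_binarize(blurred, t, bit, delta=20):
    """Binarize the blurred image with the threshold biased up or down
    by the embedded bit. Blurring creates intermediate gray values near
    region boundaries, which is what gives the bias room to act."""
    bias = delta if bit else -delta
    return (blurred > t + bias).astype(np.uint8)

img = np.zeros((5, 5))
img[1:4, 1:4] = 255                    # a 3x3 white square
blurred = blur3x3(img)
# Biasing the threshold down (bit 0) keeps more boundary pixels white
# than biasing it up (bit 1):
print(biased_binarize(blurred, 128, 0).sum(),
      biased_binarize(blurred, 128, 1).sum())
```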


2011 ◽  
Vol 103 ◽  
pp. 658-666
Author(s):  
Hideaki Kawano ◽  
Hideaki Orii ◽  
Hiroshi Maeda

In this paper, a method is proposed which locates the signboard region in a picture and extracts the characters inside it. We usually take notes so as not to forget what we want to remember, but this task is often too troublesome. Our aim is to develop a new input interface for entering text from a picture. Most signboards are composed of almost monochromatic regions. On the basis of this observation, image segmentation using colour information is applied, and binary images are then obtained by thresholding each segmented region. Each binary image is enclosed by its smallest circumscribed square, and the signboard region is identified according to the distribution and area of the white pixels inside the square. Experimental results confirm the effectiveness of the proposed method.
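The circumscribed-square step can be sketched as follows (an illustrative reconstruction; the actual decision rule on the white-pixel distribution is not detailed above, so only the square and the area ratio are computed):

```python
def bounding_square(binary):
    """Smallest circumscribed square of the white pixels in a binary
    image, plus the white-pixel area ratio inside it, which can feed
    the signboard-region decision."""
    pts = [(y, x) for y, row in enumerate(binary)
           for x, v in enumerate(row) if v]
    y0 = min(p[0] for p in pts); y1 = max(p[0] for p in pts)
    x0 = min(p[1] for p in pts); x1 = max(p[1] for p in pts)
    side = max(y1 - y0, x1 - x0) + 1       # smallest enclosing square
    ratio = len(pts) / (side * side)       # white-pixel area ratio
    return (y0, x0, side), ratio

img = [[0, 0, 0, 0],
       [0, 1, 1, 0],
       [0, 1, 1, 0],
       [0, 0, 0, 0]]
print(bounding_square(img))   # ((1, 1, 2), 1.0)
```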


Informatics ◽  
2020 ◽  
Vol 17 (2) ◽  
pp. 25-35
Author(s):  
J. Ma ◽  
V. Yu. Tsviatkou ◽  
V. K. Kanapelka

This paper is focused on the skeletonization of binary images. Skeletonization represents a binary image as a set of thin lines whose relative position, size and shape adequately describe the size, shape and orientation in space of the corresponding image areas. Many skeletonization methods exist. Iterative parallel algorithms provide high-quality skeletons and can be implemented using one or more sub-iterations: in each iteration, redundant pixels whose neighborhoods meet certain conditions are removed layer by layer along the contour, until only the skeleton remains. Many single-sub-iteration algorithms suffer from broken connectivity and the formation of excess skeleton fragments. The highest-quality skeletons are formed by the well-known single-pass OPTA algorithm, which is based on 18 binary masks, but it is sensitive to contour noise and has high computational complexity. The two-sub-iteration algorithm of Zhang and Suen (ZS), based on 6 logical conditions, is widely used due to its relative simplicity, but it blurs diagonal lines with a thickness of 2 pixels and loses squares of size 2×2 pixels. Moreover, neither of the algorithms mentioned above achieves unit pixel thickness of the skeleton lines (many non-node pixels have more than two neighbors). A mathematical model and the OPCA (One-Pass Combination Algorithm) algorithm, based on a combination and simplification of the single-pass OPTA and the two-pass ZS, are proposed for constructing extremely thin connected skeletons of binary images with low computational complexity. The model and algorithm also accelerate skeletonization, enhance recoverability of the original image from the skeleton, and reduce the redundancy of the bonds between skeleton elements.
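For reference, the two-sub-iteration ZS scheme discussed above can be sketched as follows (this is the standard Zhang-Suen algorithm, shown for comparison; it is not the proposed OPCA algorithm):

```python
def zhang_suen(img):
    """Two-sub-iteration Zhang-Suen thinning. img is a list of lists
    of 0/1 with a zero border; returns the skeleton as a new image."""
    img = [row[:] for row in img]
    h, w = len(img), len(img[0])

    def neighbours(y, x):
        # P2..P9, clockwise starting from the north neighbour
        return [img[y-1][x], img[y-1][x+1], img[y][x+1], img[y+1][x+1],
                img[y+1][x], img[y+1][x-1], img[y][x-1], img[y-1][x-1]]

    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_delete = []
            for y in range(1, h - 1):
                for x in range(1, w - 1):
                    if not img[y][x]:
                        continue
                    P = neighbours(y, x)
                    B = sum(P)                               # nonzero neighbours
                    A = sum(P[i] == 0 and P[(i + 1) % 8] == 1
                            for i in range(8))               # 0->1 transitions
                    if step == 0:
                        c1 = P[0] * P[2] * P[4] == 0         # P2*P4*P6
                        c2 = P[2] * P[4] * P[6] == 0         # P4*P6*P8
                    else:
                        c1 = P[0] * P[2] * P[6] == 0         # P2*P4*P8
                        c2 = P[0] * P[4] * P[6] == 0         # P2*P6*P8
                    if 2 <= B <= 6 and A == 1 and c1 and c2:
                        to_delete.append((y, x))
            for y, x in to_delete:                           # parallel deletion
                img[y][x] = 0
                changed = True
    return img

img = [[0] * 7,
       [0, 1, 1, 1, 1, 1, 0],
       [0, 1, 1, 1, 1, 1, 0],
       [0, 1, 1, 1, 1, 1, 0],
       [0] * 7]
skel = zhang_suen(img)
print(sum(map(sum, skel)))   # the 3x5 bar thins to a short horizontal line
```

Running this on thick diagonal lines or 2×2 squares reproduces the ZS weaknesses listed above, which is what motivates combining it with OPTA.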

