Parallel region growing of half-tone images based on selected average brightness of the area along the growth route

Doklady BGUIR ◽  
2021 ◽  
Vol 19 (6) ◽  
pp. 83-91
Author(s):  
V. Yu. Tsviatkou

The problem of parallel brightness-based segmentation of halftone images, intended for implementation on programmable logic integrated circuits, is considered. Segmentation divides an image into regions of pixels with approximately equal brightness and is computationally expensive because the value of each pixel is checked repeatedly for possible joining to an adjacent region. To speed up segmentation, parallel region-growing algorithms have been developed in which processing starts from the neighborhoods of preselected initial growth pixels. The condition for joining an adjacent pixel to a region takes the region's average brightness into account in order to limit the variance of its pixel values; therefore, the average brightness must be recalculated every time a new pixel is added, which leads to high time complexity. In some parallel algorithms, the sample mean is computed in a small window, which slightly reduces the time complexity when the window size matches the segment sizes. To reduce the time complexity significantly, the article proposes a model of parallel region growing based on a simplified joining condition that uses the sample mean brightness along the growth route: the sequence of pixels connecting the region's boundary pixel to the initial growth pixel, through which the boundary pixel was attached to the region. A substantial decrease in the time complexity of the proposed model in comparison with known models is achieved at the cost of a slight increase in spatial complexity.
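The baseline the abstract describes, region growing with a join condition against the region's running mean, can be sketched as follows. This is an illustrative BFS version with a hypothetical tolerance parameter, not the paper's route-mean variant (which replaces the global running mean with a mean taken along the growth route to each boundary pixel):

```python
from collections import deque

def grow_region(img, seed, tol=5):
    """Grow a region from `seed`, joining 4-neighbours whose brightness
    differs from the region's running mean by at most `tol`.
    Note the mean is recalculated on every addition, which is the
    cost the paper's route-mean condition is designed to avoid."""
    h, w = len(img), len(img[0])
    visited = [[False] * w for _ in range(h)]
    region, total = [], 0
    q = deque([seed])
    visited[seed[0]][seed[1]] = True
    while q:
        r, c = q.popleft()
        region.append((r, c))
        total += img[r][c]
        mean = total / len(region)          # running mean of the region
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < h and 0 <= nc < w and not visited[nr][nc]
                    and abs(img[nr][nc] - mean) <= tol):
                visited[nr][nc] = True
                q.append((nr, nc))
    return region
```

On a small test image with a dark and a bright area, a seed in the dark corner collects only the dark pixels.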

1995 ◽  
Vol 05 (02) ◽  
pp. 179-190 ◽  
Author(s):  
WENTONG CAI ◽  
DAVID B. SKILLICORN

The Bird-Meertens formalism is an approach to software development and computation based on datatype theories. In this paper we build new operators for the theory of lists that compute generalized recurrences and show that they have logarithmic parallel time complexity. As many applications can be cast as forms of recurrences, this allows a large range of parallel algorithms to be derived within the Bird-Meertens formalism. We illustrate this by deriving a parallel solution to the maximum segment sum problem.
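The key idea behind logarithmic-time recurrences is that a first-order linear recurrence x_i = a_i·x_{i-1} + b_i can be rephrased as a scan over affine maps with an associative composition, which a parallel prefix can then evaluate in O(log n) depth. A minimal sketch (sequential here, with hypothetical names; the associativity of `combine` is what licenses the parallel evaluation):

```python
def combine(p, q):
    """Compose affine maps: applying p then q, i.e. x -> a2*(a1*x + b1) + b2.
    This operation is associative, so a parallel prefix applies."""
    a1, b1 = p
    a2, b2 = q
    return (a1 * a2, a2 * b1 + b2)

def recurrence(coeffs, x0):
    """Solve x_i = a_i * x_{i-1} + b_i by scanning affine-map pairs.
    The sequential loop below mirrors what a logarithmic-depth
    parallel prefix over `combine` would compute."""
    out = []
    acc = (1, 0)  # identity affine map
    for a, b in coeffs:
        acc = combine(acc, (a, b))
        out.append(acc[0] * x0 + acc[1])
    return out
```

For example, x_i = 2·x_{i-1} + 1 starting from x_0 = 0 yields 1, 3, 7, ...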


Author(s):  
Amit Khan ◽  
Dipankar Majumdar

In the last few decades, a large and diverse body of work has addressed the de-noising of binary images through the evolution of classical techniques, principally analytical approaches. Although these schemes work well, their principal drawback is that the noise characteristics must be known beforehand; in addition, the time complexity of the analytical methods puts them beyond practical applicability. Consequently, most recent works rely on heuristic techniques that concede approximate solutions rather than optimal ones. In this chapter, the authors propose a solution using an iterative neural network that applies iterative spatial filtering with a critically varied computation-window size. By varying the window size, the authors show a marked acceleration of the filtering approach (i.e., better-quality filtration in fewer iterations).
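The general pattern of iterative spatial filtering with a varied window can be sketched with a simple majority (median) filter on a binary image. The window schedule below is hypothetical, and this is a plain filter rather than the chapter's neural-network formulation:

```python
def majority_pass(img, k):
    """One pass of a (2k+1)x(2k+1) majority filter on a binary image:
    each output pixel becomes the majority value of its window."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            vals = [img[nr][nc]
                    for nr in range(max(0, r - k), min(h, r + k + 1))
                    for nc in range(max(0, c - k), min(w, c + k + 1))]
            out[r][c] = 1 if 2 * sum(vals) > len(vals) else 0
    return out

def denoise(img, window_schedule=(2, 1, 1)):
    """Iterative de-noising in which the window half-size is varied
    between iterations (illustrative schedule) rather than held fixed."""
    for k in window_schedule:
        img = majority_pass(img, k)
    return img
```

A single salt pixel in a dark image is removed by the first, larger-window pass.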


2006 ◽  
Vol 16 (04) ◽  
pp. 429-440 ◽  
Author(s):  
PRASANTA K. JANA ◽  
BHABANI P. SINHA

Wang and Sahni [4] reported two parallel algorithms for N-point prefix computation on an N-processor OTIS-Mesh optoelectronic computer. The overall time complexity of their first algorithm, for both SIMD and MIMD models, was shown to be (8N^(1/4) - 1) electronic moves and 2 OTIS moves; their second algorithm reduced this to (7N^(1/4) - 1) electronic moves and 2 OTIS moves. We present here an improved parallel algorithm for N-point prefix computation on an N-processor OTIS-Mesh, which needs (5.5N^(1/4) + 3) electronic moves and 2 OTIS moves. Our algorithm follows the general theme of the parallel prefix algorithm proposed in [4], but uses data distribution and local prefix computation similar to that of [1].


Author(s):  
Kishor D. Bhalerao ◽  
James Critchley ◽  
Denny Oetomo ◽  
Roy Featherstone ◽  
Oussama Khatib

This paper presents a new parallel algorithm for the operational space dynamics of unconstrained serial manipulators, which outperforms contemporary sequential and parallel algorithms in the presence of two or more processors. The method employs a hybrid divide-and-conquer algorithm (DCA) multibody methodology that brings together the best features of the DCA and fast sequential techniques. It achieves logarithmic time complexity O(log(n)) in the number of degrees of freedom n for computing the operational space inertia Λ_e of a serial manipulator, given O(n) processors. The paper also addresses the efficient sequential and parallel computation of the dynamically consistent generalized inverse J̄_e of the task Jacobian, the associated null-space projection matrix N_e, and the joint actuator forces τ_null that affect only the manipulator posture. The sequential algorithms for computing J̄_e, N_e, and τ_null are of O(n), O(n²), and O(n) computational complexity, respectively, while the corresponding parallel algorithms are of O(log(n)), O(n), and O(log(n)) time complexity given O(n) processors.


2011 ◽  
Vol 480-481 ◽  
pp. 1298-1301
Author(s):  
Juan Wang

This paper proposes a method to measure destructive pavement area based on region growing and topological relations. Firstly, the smeary area and the calibration-plate area are separated according to the topological relations in the image. Then a closed boundary is obtained using region-growing and boundary-tracking algorithms, and the pixels inside this boundary are counted. Finally, the physical area is computed from the ratio between the measured area and the calibration plate. Experiments demonstrate that the method has low time complexity and is highly robust.


2019 ◽  
Vol 35 (1) ◽  
pp. 21-37
Author(s):  
Trường Huy Nguyễn

In this paper, we introduce two practically efficient algorithms, one sequential and one parallel, for computing the length of a longest common subsequence of two strings using an automata technique. For two input strings of lengths m and n with m ≤ n, the parallel algorithm uses k processors (k ≤ m) and runs in O(n) time in the worst case, where k is an upper estimate of the length of a longest common subsequence of the two strings. These results are based on the Knapsack Shaking approach proposed by P. T. Huy et al. in 2002. Experimental results show that, for an alphabet of size 256, our sequential and parallel algorithms are about 65.85 and 3.41m times faster, respectively, than the standard dynamic programming algorithm proposed by Wagner and Fischer in 1974.
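The Wagner–Fischer-style dynamic program that serves as the comparison baseline can be sketched directly; this is the standard O(mn)-time LCS-length recurrence (with a rolling row for O(n) space), not the paper's automata-based algorithm:

```python
def lcs_length(s, t):
    """Length of a longest common subsequence of s and t via the
    classic DP recurrence:
      L[i][j] = L[i-1][j-1] + 1            if s[i] == t[j]
              = max(L[i-1][j], L[i][j-1])  otherwise.
    Only the previous row is kept, so space is O(len(t))."""
    prev = [0] * (len(t) + 1)
    for ch in s:
        curr = [0]
        for j, cj in enumerate(t, 1):
            curr.append(prev[j - 1] + 1 if ch == cj
                        else max(prev[j], curr[j - 1]))
        prev = curr
    return prev[-1]
```

For the textbook pair "ABCBDAB" / "BDCABA" the LCS length is 4 (e.g. "BCBA").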


Author(s):  
WOJCIECH RYTTER ◽  
AHMED SAOUDI

We investigate the parallel complexity of recognition problems for context-free and regular array (image) sets. We show that the sequential time complexity of recognizing an n × n image is O(n^5), and that the space required for these recognition problems is O(n^5). We prove that there are O(log² n)-time parallel algorithms with BM(n^4) and n² · BM(n) processors for the recognition of context-free and regular array sets, respectively, where BM(n) is the number of processors sufficient to multiply two boolean n × n matrices in logarithmic time. We also develop a methodology for processing images using composition systems.
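The primitive behind the BM(n) processor bound is boolean matrix multiplication, where each output entry is an independent OR-reduction and can therefore be computed in logarithmic parallel depth. A minimal sequential sketch of the operation itself:

```python
def bool_matmul(A, B):
    """Boolean matrix product: C[i][j] = OR over k of (A[i][k] AND B[k][j]).
    Each C[i][j] is an independent OR-reduction over n terms, which is
    why n^3 processors suffice to compute C in O(log n) parallel time."""
    n = len(A)
    return [[int(any(A[i][k] and B[k][j] for k in range(n)))
             for j in range(n)]
            for i in range(n)]
```

For example, multiplying by the identity leaves a boolean matrix unchanged.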


2013 ◽  
Vol 401-403 ◽  
pp. 1859-1863
Author(s):  
Qing Yang ◽  
Jun Liu ◽  
Huan Wang ◽  
Wen Li Zhou ◽  
Hua Yu

Understanding traffic per unit time at cell granularity in a cellular data network can greatly help mobile operators improve the network's performance, and is important for network design and resource optimization. In this paper, we describe three methods to count the traffic per unit time per cell, and we compare their results through the deviation distribution of the traffic and a time-complexity analysis. Our work is distinguished from other related work by its use of big data: around 1.4 billion records covering 20 thousand cells. Generally, we expect this paper to deliver important insights into cellular data network resource optimization.

