A Connected Component Labelling algorithm for a multi-pixel per clock cycle video stream

2021 ◽  
Author(s):  
Marcin Kowalczyk ◽  
Tomasz Kryjak

This work describes the hardware implementation of a connected component labelling (CCL) module in reprogrammable logic. The main novelty of the design is the "full" support, i.e. without any simplifications, of a 4 pixels per clock (4 ppc) format and real-time processing of a 4K/UltraHD video stream (3840 × 2160 pixels) at 60 frames per second. To achieve this, a special labelling method was designed, together with a mechanism that stalls the input data stream in order to process pixel groups which require writing more than one merger into the equivalence table. The proposed module was verified in simulation and in hardware on the Xilinx Zynq UltraScale+ MPSoC chip on the ZCU104 evaluation board.
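The equivalence-table mechanism the abstract refers to can be illustrated with a minimal software sketch. The function names (`find_root`, `record_merger`) are illustrative, not taken from the paper's hardware design; the point is that a single group of pixels can require several table writes, which is the case the paper handles by stalling the stream.

```python
# Minimal software sketch of an equivalence table for stream-based CCL.
# Each entry maps a provisional label to an equivalent (smaller) one;
# a label that maps to itself is a root.

def find_root(table, label):
    """Follow the equivalence table until a label maps to itself."""
    while table[label] != label:
        label = table[label]
    return label

def record_merger(table, a, b):
    """Write one merger into the table: redirect the larger root
    to the smaller one, so both labels share a single root."""
    ra, rb = find_root(table, a), find_root(table, b)
    if ra != rb:
        table[max(ra, rb)] = min(ra, rb)

# A 4-pixel group may touch several previously labelled regions at once,
# requiring more than one merger write for that single group.
table = list(range(6))        # identity: every label starts as its own root
record_merger(table, 2, 4)    # first merger caused by the group
record_merger(table, 4, 5)    # second merger: needs another table write
print([find_root(table, l) for l in range(6)])  # → [0, 1, 2, 3, 2, 2]
```

In hardware, the equivalence table has a limited number of write ports per clock cycle, which is why a pixel group producing two or more mergers cannot always be absorbed at full stream rate.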


Electronics ◽  
2020 ◽  
Vol 9 (2) ◽  
pp. 292
Author(s):  
Stefania Perri ◽  
Fanny Spagnolo ◽  
Pasquale Corsonello

Connected component labeling is one of the most important processes for image analysis, image understanding, pattern recognition, and computer vision. It performs inherently sequential operations to scan a binary input image and to assign a unique label to all pixels of each object. This paper presents a novel hardware-oriented labeling approach able to process input pixels in parallel, thus speeding up the labeling task with respect to state-of-the-art competitors. For purposes of comparison with existing designs, several hardware implementations are characterized for different image sizes and realization platforms. The obtained results demonstrate frame rates and resource efficiency significantly higher than those of existing counterparts. The proposed hardware architecture is purposely designed to comply with the fourth generation of the Advanced eXtensible Interface (AXI4) protocol and to store intermediate and final outputs within an off-chip memory. Therefore, it can be directly integrated as a custom accelerator in virtually any modern heterogeneous embedded system-on-chip (SoC). As an example, when integrated within the Xilinx Zynq-7000 XC7Z020 SoC, the novel design processes more than 1.9 pixels per clock cycle, thus furnishing more than 30 2k × 2k labeled frames per second by using 3688 Look-Up Tables (LUTs), 1415 Flip Flops (FFs), and 10 kb of on-chip memory.
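For readers unfamiliar with the baseline that such hardware designs accelerate, the classical two-pass labeling scheme can be sketched in a few lines. This is a generic software reference (4-connectivity, union by smaller label), not the parallel architecture proposed in the paper.

```python
# Classical two-pass connected component labeling (4-connectivity).
# Pass 1 assigns provisional labels and records equivalences;
# pass 2 resolves every label to its root.

def two_pass_label(img):
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    parent = [0]                      # parent[l] is the equivalent of label l

    def find(l):
        while parent[l] != l:
            l = parent[l]
        return l

    next_label = 1
    for y in range(h):                # first pass
        for x in range(w):
            if not img[y][x]:
                continue
            up = labels[y - 1][x] if y else 0
            left = labels[y][x - 1] if x else 0
            if up and left:
                ru, rl = find(up), find(left)
                labels[y][x] = min(ru, rl)
                if ru != rl:          # record the equivalence
                    parent[max(ru, rl)] = min(ru, rl)
            elif up or left:
                labels[y][x] = up or left
            else:                     # new provisional label
                parent.append(next_label)
                labels[y][x] = next_label
                next_label += 1
    for y in range(h):                # second pass: resolve roots
        for x in range(w):
            if labels[y][x]:
                labels[y][x] = find(labels[y][x])
    return labels

img = [
    [1, 0, 1],
    [1, 0, 1],
    [1, 1, 1],
]
print(two_pass_label(img))  # → [[1, 0, 1], [1, 0, 1], [1, 1, 1]]
```

The U-shaped object shows why equivalences are needed: its two arms receive different provisional labels that only merge on the bottom row. Hardware designs like the one above restructure exactly this sequential dependency to process several pixels per clock.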


2011 ◽  
Vol 47 (24) ◽  
pp. 1309 ◽  
Author(s):  
P. Chen ◽  
H.L. Zhao ◽  
C. Tao ◽  
H.S. Sang

1997 ◽  
Vol 15 (3) ◽  
pp. 145-156 ◽  
Author(s):  
Petter Ranefall ◽  
Lars Egevad ◽  
Bo Nordin ◽  
Ewert Bengtsson

A new method for segmenting images of immunohistochemically stained cell nuclei is presented. The aim is to distinguish between cell nuclei with a positive staining reaction and other cell nuclei, and to make it possible to quantify the reaction. First, a new supervised algorithm for creating a pixel classifier is applied to an image that is typical for the sample. The training phase of the classifier is very user-friendly since only a few typical pixels for each class need to be selected. The classifier is robust in that it is non-parametric and has a built-in metric that adapts to the colour space. After the training the classifier can be applied to all images from the same staining session. Then, all pixels classified as belonging to nuclei of cells are grouped into individual nuclei through a watershed segmentation and connected component labelling algorithm. This algorithm also separates touching nuclei. Finally, the nuclei are classified according to their fraction of positive pixels.
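The supervised pixel classifier described above can be illustrated with a nearest-prototype rule over a few user-selected training pixels per class. This is a hedged sketch: the class names and RGB prototype values below are invented examples, and plain Euclidean distance stands in for the paper's colour-space-adaptive metric.

```python
# Sketch of a non-parametric pixel classifier trained from a handful of
# user-selected pixels per class. Euclidean distance in RGB is an
# assumption here; the paper's classifier adapts its metric to the
# colour space.

def classify_pixel(pixel, prototypes):
    """Return the class whose nearest training pixel is closest."""
    best_cls, best_d = None, float("inf")
    for cls, samples in prototypes.items():
        for s in samples:
            d = sum((p - q) ** 2 for p, q in zip(pixel, s))
            if d < best_d:
                best_cls, best_d = cls, d
    return best_cls

# Hypothetical training pixels (one per class, for brevity):
prototypes = {
    "positive":   [(150, 80, 60)],    # e.g. brown positively stained nucleus
    "negative":   [(90, 90, 160)],    # e.g. blue counterstained nucleus
    "background": [(230, 230, 230)],
}
print(classify_pixel((140, 85, 70), prototypes))  # → positive
```

Once trained, such a classifier is applied pixel by pixel to every image of the session, after which watershed segmentation and connected component labelling group the "nucleus" pixels into individual objects.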


Author(s):  
Marcin Kowalczyk ◽  
Piotr Ciarach ◽  
Dominika Przewlocka-Rus ◽  
Hubert Szolc ◽  
Tomasz Kryjak

In this paper, a hardware implementation in reconfigurable logic of a single-pass connected component labelling (CCL) and connected component analysis (CCA) module is presented. The main novelty of the design is the support of a video stream in 2 and 4 pixels per clock formats (2 and 4 ppc) and real-time processing of a 4K/UHD video stream (3840 × 2160 pixels) at 60 frames per second. We discuss several approaches to the issue and present the selected ones in detail. The proposed module was verified in an exemplary application, skin colour area segmentation, on the ZCU102 and ZCU104 evaluation boards equipped with Xilinx Zynq UltraScale+ MPSoC devices.
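The difference between CCL and single-pass CCA can be made concrete with a small sketch: instead of producing a relabelled image, a single raster scan accumulates per-component features (here area and bounding box) and merges them as soon as two provisional labels meet. This is a generic software illustration, not the paper's hardware design.

```python
# Sketch of single-pass connected component analysis (4-connectivity):
# one raster scan, per-component feature accumulators merged on the fly,
# no second relabelling pass and no stored label image.

def single_pass_cca(img):
    h, w = len(img), len(img[0])
    parent, stats = {}, {}   # stats[root] = [area, min_x, min_y, max_x, max_y]

    def find(l):
        while parent[l] != l:
            l = parent[l]
        return l

    prev = [0] * w           # labels of the previous row only
    next_label = 1
    for y in range(h):
        cur = [0] * w
        for x in range(w):
            if not img[y][x]:
                continue
            up, left = prev[x], (cur[x - 1] if x else 0)
            if up and left and find(up) != find(left):
                a, b = sorted((find(up), find(left)))
                parent[b] = a                       # merge the two components
                sa, sb = stats[a], stats.pop(b)     # fold b's features into a
                stats[a] = [sa[0] + sb[0],
                            min(sa[1], sb[1]), min(sa[2], sb[2]),
                            max(sa[3], sb[3]), max(sa[4], sb[4])]
                lab = a
            elif up or left:
                lab = find(up or left)
            else:                                   # start a new component
                lab = next_label
                parent[lab] = lab
                stats[lab] = [0, x, y, x, y]
                next_label += 1
            cur[x] = lab
            s = stats[find(lab)]                    # accumulate this pixel
            s[0] += 1
            s[1], s[2] = min(s[1], x), min(s[2], y)
            s[3], s[4] = max(s[3], x), max(s[4], y)
        prev = cur
    return {root: tuple(s) for root, s in stats.items()}

img = [
    [1, 0, 1],
    [1, 0, 1],
    [1, 1, 1],
]
print(single_pass_cca(img))  # → {1: (7, 0, 0, 2, 2)}
```

Only one previous row of labels is kept, which mirrors why single-pass CCA maps well onto streaming hardware: the full-resolution label image never needs to be stored.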


2020 ◽  
Vol 39 (6) ◽  
pp. 8463-8475
Author(s):  
Palanivel Srinivasan ◽  
Manivannan Doraipandian

Rare event detection can be performed using spatial-domain and frequency-domain procedures. The volume of surveillance camera footage grows exponentially over time, and monitoring all events manually is an inefficient and time-consuming process; an automated rare event detection mechanism is therefore required to make it manageable. In this work, a Context-Free Grammar (CFG) is developed for detecting rare events in a video stream, and an Artificial Neural Network (ANN) is used to train the CFG. A set of dedicated algorithms performs frame splitting, edge detection and background subtraction, and converts the processed data into the CFG. The developed CFG is converted into nodes and edges to form a graph, which is given to the input layer of the ANN to classify normal and rare event classes. The graph derived from the CFG using the input video stream is used to train the ANN. The performance of the developed Artificial Neural Network Based Context-Free Grammar Rare Event Detection (ACFG-RED) is then compared with other existing techniques using metrics such as accuracy, precision, sensitivity, recall, average processing time and average processing power. Better metric values were observed for the ANN-CFG model than for the other techniques. The developed model provides a better solution for detecting rare events in video streams.
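One step of the preprocessing pipeline mentioned above, background subtraction, can be sketched in a few lines. The threshold value and the grayscale frame representation are assumptions for illustration, not details taken from the paper.

```python
# Illustrative background subtraction by per-pixel frame differencing:
# a pixel is marked as foreground when it deviates from the background
# model by more than a fixed threshold (an assumed value here).

def background_subtract(frame, background, threshold=25):
    """Return a binary mask of pixels differing from the background."""
    return [[1 if abs(p - b) > threshold else 0
             for p, b in zip(frame_row, bg_row)]
            for frame_row, bg_row in zip(frame, background)]

background = [[10, 10, 10], [10, 10, 10]]   # static background estimate
frame      = [[10, 200, 10], [10, 210, 12]] # a bright object has appeared
print(background_subtract(frame, background))  # → [[0, 1, 0], [0, 1, 0]]
```

The resulting binary mask is the kind of intermediate product that the pipeline's later stages (edge detection, conversion into CFG symbols) would consume.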

