FPGA-Based Connected Component Algorithm for Vegetation Segmentation

In automatic tree recognition and tracking, the target image is captured by an RGB camera mounted on a UAV; in the processing step the captured image is thresholded to extract the information of interest, a technique that can be applied to objects of different shapes and sizes. In remote sensing of vegetation, the resulting binary image usually contains several connected regions or overlapping trees. The proposed system uses the shape characteristics of the targets in the image to automatically identify suspected overlapping trees. Connected component labeling makes it possible to distinguish, analyze, and detect individual features by assigning a unique label to all pixels that belong to the same entity or object. Each connected region must therefore be detected and evaluated separately; in this paper, an improved FPGA-specific rapid labeling algorithm is used to detect and extract each connected domain.
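As a point of reference, the thresholding-and-labeling pipeline described above can be sketched in a few lines of software (a minimal sketch; the greenness index, the threshold value, and scipy.ndimage.label are illustrative stand-ins for the paper's FPGA stages):

```python
import numpy as np
from scipy import ndimage

def segment_and_label(rgb, green_threshold=0.1):
    """Threshold a UAV RGB frame on a simple greenness index and label
    connected vegetation regions (placeholder for the FPGA pipeline)."""
    rgb = rgb.astype(np.float32)
    # Excess-green style index: large where vegetation dominates the pixel.
    greenness = (2 * rgb[..., 1] - rgb[..., 0] - rgb[..., 2]) / (rgb.sum(axis=-1) + 1e-6)
    binary = greenness > green_threshold
    # 8-connected labeling: every tree or overlapped cluster gets one label.
    labels, num_regions = ndimage.label(binary, structure=np.ones((3, 3)))
    return labels, num_regions
```

Each labeled region can then be measured (area, bounding box, elongation) to flag suspected overlapping trees, which the paper does in hardware.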

2019, Vol. 1, pp. 1-1
Author(s): Takashi Oguchi

Geomorphology is a scientific discipline dealing with the characteristics, origin, and evolution of landforms. It utilizes topographic data such as spot height information, contour lines on topographic maps, and DEMs (Digital Elevation Models). Topographic data were traditionally obtained by ground surveying, but the introduction of aerial photogrammetry in the early 20th century enabled more efficient data acquisition based on remote sensing. In recent years, active remote sensing methods, including airborne and terrestrial laser scanning and applications of satellite radar, have also been employed, and aerial photogrammetry has become easier and more popular thanks to drones and a new photogrammetric method, SfM (Structure from Motion). The resulting topographic data, especially raster DEMs, are combined with GIS (Geographic Information Systems) to obtain derivatives such as slope and aspect and to conduct efficient geomorphological mapping. The resulting maps can depict various topographic characteristics based on surface height and DEM derivatives, and applications of advanced algorithms and some heuristic reasoning permit semi-automated landform classification. This quantitative approach differs from traditional, more qualitative methods that produce landform classification maps using visual interpretation of analogue aerial photographs and topographic maps as well as field observations.

For scientific purposes, landforms need to be classified based not only on shape characteristics but also on formation processes and ages. Among these, DEMs only represent shape characteristics, and understanding formation processes and ages usually requires other data, such as properties of surficial deposits observed in the field. However, numerous geomorphological studies indicate relationships between the shapes and forming processes of landforms, and even the ages of landforms affect their shapes, for example through the wider distribution of dissected elements within older landforms. The recent introduction of artificial intelligence in geomorphology, including machine learning and deep learning, may allow us to better understand the relationships of shapes with processes and ages. Establishing such relationships, however, is still highly challenging, and at present most geomorphologists consider landform classification maps based on the traditional methods more usable than those from DEM-based methods. Nevertheless, researchers in some other fields, such as civil engineering, appreciate the DEM-based methods more because they can be applied without deep geomorphological knowledge. The methods should therefore be developed for interdisciplinary understanding. This paper reviews and discusses this complex situation of geomorphological mapping today in relation to the historical development of the methodology.
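As an aside, the slope and aspect derivatives mentioned above are straightforward to compute from a raster DEM; a minimal sketch with NumPy (the cell size is an assumption, and conversion to a compass aspect depends on the raster's orientation and the GIS convention used):

```python
import numpy as np

def slope_and_descent(dem, cell_size=30.0):
    """Slope (degrees) and direction of steepest descent from a raster DEM.
    cell_size is the grid spacing in the same units as the elevations."""
    # Finite-difference gradients: axis 0 = rows, axis 1 = columns.
    dz_drow, dz_dcol = np.gradient(dem.astype(np.float64), cell_size)
    slope_deg = np.degrees(np.arctan(np.hypot(dz_dcol, dz_drow)))
    # Steepest-descent direction in the array's row/column frame; mapping this
    # onto a compass aspect depends on how the raster is oriented.
    descent_deg = np.degrees(np.arctan2(-dz_drow, -dz_dcol)) % 360.0
    return slope_deg, descent_deg
```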


2020, Vol. 140, pp. 102751
Author(s): Eduardo Cuesta, Carmen Quintano, Alfonso Fernández-Manso

Author(s): M. Sumathi, T. Balaji

The main objective of this paper is to carry out a detailed analysis of the most popular Connected Component Labeling (CCL) algorithms for remote sensing image classification. These algorithms scan the image line by line, top to bottom, assigning a blob label to each current pixel that is connected to a blob. This paper presents two new strategies that can be used to greatly improve the speed of connected component labeling algorithms. To assign a label to a new object, most labeling algorithms use a scanning step that examines some of its neighbors. The first strategy exploits the dependencies among the neighbors to reduce the number of neighbors examined. The second strategy uses an array to store the equivalence information among the labels, replacing the pointer-based rooted trees used to store the same equivalence information. It reduces the memory required and also produces consecutive final labels. Connected component labeling assigns labels to pixels such that adjacent pixels belonging to the same feature receive the same label. The paper presents a modification of this algorithm that allows the resolution of merged labels, and experimental results demonstrate that the proposed method is much more efficient than conventional methods for various kinds of color images. The method improves the labeling algorithms and also benefits other applications in computer vision and pattern recognition.
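A rough illustration of the second strategy, storing label equivalences in a flat array instead of pointer-based trees (a sketch of the general union-find idea, not the authors' implementation):

```python
class LabelEquivalence:
    """Array-based union-find over provisional labels. A flat integer array
    replaces pointer-based trees, and a final flattening pass renumbers the
    surviving roots into consecutive labels."""

    def __init__(self):
        self.parent = [0]              # index 0 reserved for background

    def new_label(self):
        label = len(self.parent)
        self.parent.append(label)      # a new label is initially its own root
        return label

    def find(self, label):
        while self.parent[label] != label:
            label = self.parent[label]
        return label

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            # Keep the smaller label as the root so roots never increase.
            if ra < rb:
                self.parent[rb] = ra
            else:
                self.parent[ra] = rb

    def flatten(self):
        """Resolve all chains and renumber roots to consecutive final labels."""
        final = [0] * len(self.parent)
        next_final = 0
        for label in range(1, len(self.parent)):
            root = self.find(label)
            if root == label:
                next_final += 1
                final[label] = next_final
            else:
                final[label] = final[root]
        return final
```

A second pass over the image can then map each provisional label through the table returned by flatten() to obtain consecutive final labels.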


Author(s): Lifeng He, Yuyan Chao, Kenji Suzuki

This paper presents a run- and label-equivalence-based one-and-a-half-scan algorithm for labeling connected components in a binary image. The major differences between our algorithm and conventional label-equivalence-based algorithms are: (1) all conventional label-equivalence-based algorithms scan every pixel in the given image at least twice, whereas our algorithm scans background pixels once and object pixels twice; (2) all conventional label-equivalence-based algorithms assign a provisional label to each object pixel in the first scan and relabel the pixel in the later scan(s), whereas our algorithm assigns a provisional label to each run in the first scan and, after resolving label equivalences between runs, uses the recorded run data to assign each object pixel a final label directly. That is, in our algorithm, relabeling of object pixels is no longer necessary. Experimental results demonstrated that our algorithm is highly efficient on images with many long runs and/or a small number of object pixels. Moreover, our algorithm is directly applicable to run-length-encoded images, and contours of connected components can be obtained efficiently.
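A rough software analogue of run-based provisional labeling (not the authors' one-and-a-half-scan algorithm) records one run per maximal horizontal segment of object pixels and merges runs that touch runs in the previous row:

```python
import numpy as np

def label_runs(binary):
    """Assign provisional labels to horizontal runs of object pixels and merge
    labels of runs that are 8-connected to runs in the previous row.
    Returns (row, start, end, resolved_label) tuples, one per run."""
    parent = {}

    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[max(ra, rb)] = min(ra, rb)

    runs, prev_runs, next_label = [], [], 1
    for r, row in enumerate(np.asarray(binary, dtype=bool)):
        cur_runs, c = [], 0
        while c < len(row):
            if row[c]:
                start = c
                while c < len(row) and row[c]:
                    c += 1
                label = next_label
                parent[label] = label
                next_label += 1
                # 8-connectivity: the overlap window is widened by one pixel.
                for ps, pe, pl in prev_runs:
                    if ps <= c and pe >= start - 1:
                        union(label, pl)
                cur_runs.append((start, c - 1, label))
                runs.append((r, start, c - 1, label))
            else:
                c += 1
        prev_runs = cur_runs
    return [(r, s, e, find(l)) for r, s, e, l in runs]
```

Because labels are attached to runs rather than pixels, object pixels only need to be revisited once when final labels are written back, which is the saving the abstract describes.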


Author(s): Dongchen Li, Shengyong Xu, Yuezhi Zheng, Changgui Qi, Pengjiao Yao

Visual navigation is one of the fundamental techniques of an intelligent cotton-picking robot. The composition of a cotton field is complex, and occlusion and varying illumination make it hard to accurately identify furrows and thus extract the navigation line. In this paper, a new field navigation path extraction method based on horizontal spline segmentation is presented. Firstly, the color image in RGB color space is pre-processed by the Otsu threshold algorithm to segment the binary image of the furrow. The cotton field image components are divided into four categories: furrow (including land, wilted leaves, etc.), cotton fiber, other organs of the cotton plant, and the area outside the field or obstructions. Using the significant differences in saturation and value in the HSV model, the authors threshold in two steps: they first segment the cotton wool in the S channel and then segment the furrow in the V channel within the area outside the cotton wool. In addition, morphological processing is applied to filter out small noise areas. Secondly, horizontal splines are used to segment the binary image. The authors detect the connected domains in the horizontal splines and merge the isolated small areas caused by cotton wool or light spots into the nearby large connected domains so as to obtain the connected domain of the furrow. Thirdly, they take the center of the bottom of the image as the starting point and successively select candidate points from the midpoints of the connected domains, following the principle that the distance between adjacent navigation-line candidates should be small. Finally, the authors count the number of connected domains and evaluate the change in the parameters of the connected domain's boundary line to determine whether the robot has reached the edge of the field or encountered obstacles. If there is no anomaly, the navigation path is fitted to the navigation points using the least squares method. Experiments show that this method is accurate and effective, and that it is suitable for visual navigation in the complex environment of a cotton field in different phases.
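A highly simplified sketch of the two-step HSV thresholding and the least-squares line fit (using OpenCV and NumPy; the threshold values, kernel size, and row-midpoint candidate selection are illustrative placeholders, not the paper's calibrated procedure):

```python
import cv2
import numpy as np

def extract_navigation_line(bgr):
    """Two-step HSV thresholding followed by a least-squares fit x = a*y + b.
    All threshold values below are illustrative placeholders."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    s, v = hsv[..., 1], hsv[..., 2]

    # Step 1: cotton wool is bright but weakly saturated.
    wool = (s < 40) & (v > 180)
    # Step 2: the furrow is dark, searched only outside the wool area.
    furrow = (~wool) & (v < 90)

    # Morphological opening filters out small noise regions.
    kernel = np.ones((5, 5), np.uint8)
    furrow = cv2.morphologyEx(furrow.astype(np.uint8) * 255,
                              cv2.MORPH_OPEN, kernel) > 0

    # Candidate points: midpoint of furrow pixels in each image row,
    # starting from the bottom of the image.
    ys, xs = [], []
    for y in range(furrow.shape[0] - 1, -1, -1):
        cols = np.flatnonzero(furrow[y])
        if cols.size:
            ys.append(y)
            xs.append(cols.mean())
    if len(ys) < 2:
        return None                     # no furrow found in this frame

    a, b = np.polyfit(ys, xs, 1)        # navigation line: x = a*y + b
    return a, b
```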


2010, Vol. 21 (03), pp. 405-425
Author(s): Yasuaki Ito, Koji Nakano

Connected component labeling is a process that assigns unique labels to the connected components of a binary image. The main contribution of this paper is to present a low-latency hardware connected component labeling algorithm for k-concave binary images, designed and implemented on an FPGA. Pixels of a binary image are fed to the FPGA in raster order, and the resulting labels are output in the same order. The advantages of our labeling algorithm are its low latency and its small use of the FPGA's internal storage. We have implemented our hardware labeling algorithm on an Altera Stratix family FPGA and evaluated its performance. The implementation results show that, for a 10-concave binary image of 2048 × 2048 pixels, our connected component labeling algorithm runs in approximately 70 ms with a latency of approximately 750 µs.
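For intuition, a software analogue of raster-order streaming keeps only the previous row of labels plus an equivalence table in memory (a sketch of the general streaming idea with 4-connectivity; unlike the paper's design, which exploits k-concavity to output final labels, this version emits provisional labels):

```python
import numpy as np

def raster_stream_labels(binary):
    """Consume pixels left-to-right, top-to-bottom, keeping only the previous
    row of labels and an equivalence table. Emitted labels are provisional."""
    height, width = binary.shape
    prev_row = np.zeros(width, dtype=np.int32)
    parent = [0]                                  # equivalence table

    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x

    out = np.zeros((height, width), dtype=np.int32)
    for r in range(height):
        cur_row = np.zeros(width, dtype=np.int32)
        for c in range(width):
            if not binary[r, c]:
                continue
            left = cur_row[c - 1] if c > 0 else 0
            up = prev_row[c]
            if left == 0 and up == 0:
                parent.append(len(parent))        # fresh provisional label
                cur_row[c] = len(parent) - 1
            elif left and up:
                a, b = find(left), find(up)
                parent[max(a, b)] = min(a, b)     # record the equivalence
                cur_row[c] = min(a, b)
            else:
                cur_row[c] = left or up
            out[r, c] = cur_row[c]
        prev_row = cur_row
    return out, parent
```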


2018, Vol. 7 (2.31), pp. 29
Author(s): P. Sudharshan Duth, M. Mary Deepa

This research work introduces a method that uses color thresholds in MATLAB, based on the RGB color model, to recognize a user-specified color in a two-dimensional image. The color detection pipeline converts the 3-D RGB image into a grayscale image, subtracts the two images to obtain a 2-D black-and-white image, filters out noisy pixels with a median filter, labels the connected regions with connected component labeling, and uses the bounding box and its properties to compute a metric for every labeled region. In addition, the shade of each pixel is identified by examining its RGB value. The color detection algorithm is implemented using the MATLAB Image Processing Toolbox. The result of this implementation can be used in security applications such as spy robots, object tracking, color-based object isolation, and intrusion detection.
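The paper works in MATLAB; an approximate Python/SciPy equivalent of the described pipeline (here detecting red-dominated regions; the threshold and minimum area are assumptions):

```python
import numpy as np
from scipy import ndimage

def detect_color_regions(rgb, channel=0, diff_threshold=40, min_area=50):
    """Detect regions dominated by one RGB channel (default 0 = red) by
    subtracting the grayscale image from that channel, thresholding,
    median filtering, and labeling connected components.
    Returns the bounding-box slices of regions with at least min_area pixels."""
    rgb = rgb.astype(np.int16)
    gray = rgb.mean(axis=-1)
    diff = rgb[..., channel] - gray               # high where the channel dominates
    binary = diff > diff_threshold
    # Median filter removes salt-and-pepper noise.
    binary = ndimage.median_filter(binary.astype(np.uint8), size=5) > 0
    labels, num = ndimage.label(binary)
    boxes = []
    for i, sl in enumerate(ndimage.find_objects(labels), start=1):
        if sl is not None and np.count_nonzero(labels[sl] == i) >= min_area:
            boxes.append(sl)                      # (row_slice, col_slice)
    return boxes
```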


2021, Vol. 13 (3), pp. 357
Author(s): Chao Wang, Yan Zhang, Xiaohui Chen, Hao Jiang, Mithun Mukherjee, ...

Building detection from high-resolution remote sensing (HRRS) images plays a key role in urban planning and other fields. Compared with deep learning methods, methods based on morphological attribute profiles (MAPs) perform well in the absence of massive annotated samples. MAPs have been proven to have a strong ability to extract detailed characterizations of buildings across multiple attributes and scales, and a great deal of attention has been paid to this application. Nevertheless, the problems of rationally selecting attribute scales and of evidence conflicts between attributes must be overcome in order to establish reliable unsupervised detection models. To this end, this research proposes a joint optimization and fusion building detection method for MAPs. In the pre-processing step, the set of candidate building objects is extracted by image segmentation and a set of discriminant rules. Second, the differential profiles of MAPs are screened using a genetic algorithm with a proposed cross-probability adaptive selection strategy; on this basis, an unsupervised decision fusion framework is established by constructing a novel statistics-space building index (SSBI). Finally, the automated detection of buildings is realized. We show that the proposed method is significantly better than state-of-the-art methods on groups of HRRS images from different regions and different sensors, with an overall accuracy (OA) above 91.9%.
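As background for the differential profiles mentioned above, a simplified differential morphological profile can be computed as follows (plain grayscale openings stand in for the attribute filters the paper uses, and the structuring-element sizes are illustrative):

```python
import numpy as np
from scipy import ndimage

def differential_morphological_profile(gray, sizes=(3, 7, 11, 15)):
    """Simplified differential morphological profile: grayscale openings at
    increasing structuring-element sizes, differenced between consecutive
    scales. Bright, compact structures such as building roofs respond most
    strongly at the scale that matches their size."""
    gray = gray.astype(np.float32)
    openings = [gray]
    for s in sizes:
        openings.append(ndimage.grey_opening(gray, size=(s, s)))
    # One band per scale step: how much structure disappears at each scale.
    return np.stack([openings[i] - openings[i + 1] for i in range(len(sizes))],
                    axis=-1)
```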

