ground segmentation
Recently Published Documents


TOTAL DOCUMENTS: 156 (FIVE YEARS: 32)

H-INDEX: 19 (FIVE YEARS: 3)

2021
Author(s): Yuyu Liang
<p>Figure-ground segmentation is the process of separating regions of interest from unimportant backgrounds. It is essential to various applications in computer vision and image processing, e.g. object tracking and image editing, as these applications are only interested in certain regions of an image and use figure-ground segmentation as a pre-processing step. Traditional figure-ground segmentation methods often require a heavy human workload (e.g. ground-truth labelling) and/or rely heavily on human guidance (e.g. locating an initial model), and accordingly cannot easily adapt to diverse image domains.</p>
<p>Evolutionary computation (EC) is a family of algorithms for global optimisation that are inspired by biological evolution. As an EC technique, genetic programming (GP) can evolve algorithms automatically for complex problems without pre-defining solution models. Compared with other EC techniques, GP is more flexible, as it can utilise complex and variable-length representations (e.g. trees) of candidate solutions. It is hypothesised that this flexibility makes it possible for GP to evolve better solutions than those designed by experts. However, there have been limited attempts at applying GP to figure-ground segmentation.</p>
<p>In this thesis, GP is enabled to successfully address figure-ground segmentation through evolving well-performing segmentors and generating effective features. The objectives are to investigate various image features as inputs of GP, develop multi-objective approaches, develop feature selection/construction methods, and conduct further evaluations of the proposed GP methods. The following new methods have been developed.</p>
<p>Effective terminal sets of GP are investigated for figure-ground segmentation, covering three general types of image features, i.e. colour/brightness, texture and shape features. Results show that texture features are more effective than intensities and shape features, as they are discriminative for the different materials that foreground and background regions normally belong to (e.g. metal or wood).</p>
<p>Two new multi-objective GP methods are proposed to evolve figure-ground segmentors, aiming to produce solutions balanced between segmentation performance and solution complexity. Compared with a reference method that does not consider complexity and a parsimony-pressure-based method (a popular bloat control technique), the proposed methods can significantly reduce the solution size while achieving similar segmentation performance based on the Mann-Whitney U-test at the 5% significance level.</p>
<p>GP is introduced for the first time to conduct feature selection for figure-ground segmentation tasks, aiming to maximise the segmentation performance and minimise the number of selected features. The proposed methods produce feature subsets that lead to solutions achieving better segmentation performance with fewer features than those of two benchmark methods (i.e. sequential forward selection and sequential backward selection) and the original full feature set. This is attributed to GP's strong search ability and higher likelihood of finding the global optimum.</p>
<p>GP is also introduced for the first time to construct high-level features from primitive image features, aiming to improve segmentation performance, especially on complex images. By considering linear/non-linear interactions of the original features, the proposed methods construct fewer features that achieve better segmentation performance than the original full feature set.</p>
<p>This investigation has shown that GP is well suited to figure-ground image segmentation for the following reasons. Firstly, the proposed methods can evolve segmentors with useful class-characteristic patterns to segment various types of objects. Secondly, segmentors evolved from one type of foreground object can generalise well to similar objects. Thirdly, both the selected and the constructed features of the proposed GP methods are more effective than the original features, making them better inputs for subsequent tasks. Finally, compared with other segmentation techniques, the major strengths of GP are that it does not require pre-defined problem models and can easily adapt to diverse image domains without major parameter tuning or human intervention.</p>
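The balance the thesis describes between segmentation performance and solution size is the standard multi-objective trade-off. A minimal sketch of how candidate segmentors scored on two minimised objectives (segmentation error, tree size) reduce to a Pareto front; the candidates and scores are illustrative, not the thesis's actual data:

```python
# Each candidate segmentor is scored as (segmentation_error, tree_size);
# both objectives are minimised, and only non-dominated candidates survive.

def dominates(a, b):
    """a dominates b if it is no worse on every objective and better on one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Return the candidates not dominated by any other candidate."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o is not c)]

candidates = [(0.10, 45), (0.11, 12), (0.10, 30), (0.25, 5), (0.12, 40)]
print(pareto_front(candidates))  # → [(0.11, 12), (0.10, 30), (0.25, 5)]
```

A smaller tree with only slightly higher error, such as `(0.11, 12)`, stays on the front; an equally accurate but larger tree, such as `(0.10, 45)`, is dominated and dropped.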




Sensors, 2021, Vol. 21 (21), pp. 6996
Author(s): Boyu Kuang, Zeeshan A. Rana, Yifan Zhao

Sky and ground are two essential semantic components in computer vision, robotics, and remote sensing, and sky and ground segmentation has become increasingly popular. This research proposes a sky and ground segmentation framework for rover navigation vision by adopting weak supervision and transfer learning technologies. A new sky and ground segmentation neural network (network-in-U-shaped-network, NI-U-Net) and a conservative annotation method are proposed. The pre-training process achieves the best results on a popular open benchmark (the Skyfinder dataset) across seven metrics compared to the state-of-the-art: 99.232% accuracy, 99.211% precision, 99.221% recall, 99.104% dice score (F1), 0.0077 misclassification rate (MCR), 0.0427 root mean squared error (RMSE), and 98.223% intersection over union (IoU). The conservative annotation method achieves superior performance with limited manual intervention. NI-U-Net can operate at 40 frames per second (FPS), maintaining the real-time property. The proposed framework successfully fills the gap between laboratory results (with rich, ideal data) and practical application (in the wild), and can provide essential semantic information (sky and ground) for rover navigation vision.
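The seven reported metrics all derive from the confusion counts of a predicted binary sky/ground mask against its annotation. A minimal sketch for flat 0/1 mask lists (the function name and example masks are illustrative, not the paper's code):

```python
def segmentation_metrics(pred, truth):
    """Compute the seven metrics for a binary (0/1) segmentation mask."""
    tp = sum(p == 1 and t == 1 for p, t in zip(pred, truth))
    tn = sum(p == 0 and t == 0 for p, t in zip(pred, truth))
    fp = sum(p == 1 and t == 0 for p, t in zip(pred, truth))
    fn = sum(p == 0 and t == 1 for p, t in zip(pred, truth))
    n = len(truth)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return {
        "accuracy": (tp + tn) / n,
        "precision": precision,
        "recall": recall,
        "dice": 2 * precision * recall / (precision + recall),  # F1
        "mcr": (fp + fn) / n,               # misclassification rate
        "rmse": ((fp + fn) / n) ** 0.5,     # RMSE of a 0/1 mask = sqrt(MCR)
        "iou": tp / (tp + fp + fn),         # intersection over union
    }

m = segmentation_metrics([1, 1, 0, 0, 1], [1, 0, 0, 0, 1])
# accuracy 0.8, recall 1.0, dice 0.8, IoU 2/3
```

For a binary mask, RMSE reduces to the square root of the misclassification rate, which is why the paper can report both as separate but related numbers.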


2021, Vol. 6 (4), pp. 7285-7292
Author(s): Pengxin Chen, Wenzhong Shi, Sheng Bao, Muyang Wang, Wenzheng Fan, ...

2021, Vol. 13 (16), pp. 3239
Author(s): Zhihao Shen, Huawei Liang, Linglong Lin, Zhiling Wang, Weixin Huang, ...

LiDAR occupies a vital position in self-driving, as this advanced detection technology enables autonomous vehicles (AVs) to obtain rich environmental information. Ground segmentation of the LiDAR point cloud is a crucial procedure for ensuring AVs' driving safety. However, some current algorithms suffer from difficulties such as unavailability on complex terrains, excessive time and memory usage, and additional pre-training requirements. The Jump-Convolution-Process (JCP) is proposed to solve these issues. JCP converts the segmentation problem of the 3D point cloud into a smoothing problem on a 2D image and significantly improves the segmentation effect at little time cost. First, the point cloud, marked by an improved local feature extraction algorithm, is projected onto an RGB image. Then, each pixel value is initialized with its point's label and continuously updated by image convolution. Finally, a jump operation is introduced into the convolution process so that calculations are performed only on the low-confidence points filtered by a credibility propagation algorithm, reducing the time cost. Experiments on three datasets show that our approach has better segmentation accuracy and terrain adaptability than three existing methods. Meanwhile, the average time for the proposed method to process one scan from a 64-beam or 128-beam LiDAR is only 8.61 ms and 15.62 ms, respectively, fully meeting the AVs' requirement for real-time performance.
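The jump operation can be illustrated in simplified form: smoothing is applied only at low-confidence positions, while trusted labels are skipped over. This 1-D sketch is an assumption-laden illustration of the idea, not the paper's 2-D implementation; the threshold, names, and neighbourhood size are invented:

```python
def jump_smooth(labels, confidence, threshold=0.8):
    """Replace each low-confidence label with the majority of its neighbours,
    jumping over (leaving untouched) positions whose label is already trusted."""
    out = list(labels)
    for i, c in enumerate(confidence):
        if c >= threshold:
            continue  # jump: trusted labels are not recomputed
        window = labels[max(0, i - 1): i + 2]          # point and its neighbours
        out[i] = max(set(window), key=window.count)    # neighbourhood majority vote
    return out

labels = [1, 1, 0, 1, 1, 0, 0]
conf   = [0.9, 0.9, 0.3, 0.9, 0.9, 0.9, 0.9]
print(jump_smooth(labels, conf))  # → [1, 1, 1, 1, 1, 0, 0]
```

Only the single low-confidence label is revisited; the rest of the scan incurs no convolution cost, which is the source of the reported speed-up.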


Author(s): Yawei Zhao, Yanju Liu, Yang Yu, Jiawei Zhou

Aiming at the problems of poor segmentation effect, low efficiency, and poor robustness in the RANSAC ground segmentation algorithm, this paper proposes a ground segmentation algorithm based on Ray-RANSAC. The algorithm exploits the structural characteristics of three-dimensional lidar and uses ray segmentation to generate the original seed point set. The random sampling of the RANSAC algorithm is restricted to this seed point set, which reduces the probability of RANSAC extracting outliers and lowers the computational cost. RANSAC is then used to refine the ground model parameters so that the algorithm can adapt to undulating roads. The standard deviation of the point-to-plane distance is used as the distance threshold, and the allowable error range of the actual point cloud data is considered, effectively eliminating abnormal and erroneous points. The algorithm was tested on a simulation platform and a test vehicle. The experimental results show that the proposed lidar point cloud ground segmentation algorithm takes an average of 5.784 milliseconds per frame, offering high speed and good precision. It can adapt to uneven road surfaces and has high robustness.
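The seed-restricted sampling described above can be sketched as follows: RANSAC triples are drawn only from the ray-generated seed set, while candidate planes are scored against all points. All names, the fixed iteration count, and the distance threshold are illustrative, not taken from the paper:

```python
import random

def plane_from_points(p1, p2, p3):
    """Return (a, b, c, d) with ax + by + cz + d = 0 through three 3-D points."""
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    a = u[1] * v[2] - u[2] * v[1]          # normal = u x v
    b = u[2] * v[0] - u[0] * v[2]
    c = u[0] * v[1] - u[1] * v[0]
    d = -(a * p1[0] + b * p1[1] + c * p1[2])
    return a, b, c, d

def point_plane_distance(p, plane):
    a, b, c, d = plane
    norm = (a * a + b * b + c * c) ** 0.5
    return abs(a * p[0] + b * p[1] + c * p[2] + d) / norm

def ransac_ground(points, seeds, iters=50, threshold=0.2, seed=0):
    """Sample triples from `seeds` only; score each plane on all `points`."""
    rng = random.Random(seed)
    best_plane, best_count = None, -1
    for _ in range(iters):
        plane = plane_from_points(*rng.sample(seeds, 3))
        if all(abs(x) < 1e-12 for x in plane[:3]):
            continue  # degenerate (collinear) sample
        count = sum(point_plane_distance(p, plane) < threshold for p in points)
        if count > best_count:
            best_plane, best_count = plane, count
    return best_plane, best_count
```

With near-planar seed points at z ≈ 0 plus a few elevated obstacle points, the fitted plane collects only the ground points as inliers; restricting the triples to the seed set is what keeps obstacle points out of the model hypotheses.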


2021
Author(s): Mingce Guo, Lei Zhang, Xiao Liu, Zhenjun Du, Jilai Song, ...

2021
Author(s): Francisco J. Luongo, Lu Liu, Chun Lum Andy Ho, Janis K. Hesse, Joseph B. Wekselblatt, ...

The rodent visual system has attracted great interest in recent years due to its experimental tractability, but the fundamental mechanisms used by the mouse to represent the visual world remain unclear. In the primate, researchers have argued from both behavioral and neural evidence that a key step in visual representation is "figure-ground segmentation," the delineation of figures as distinct from backgrounds [1-4]. To determine if mice also show behavioral and neural signatures of figure-ground segmentation, we trained mice on a figure-ground segmentation task where figures were defined by gratings and naturalistic textures moving counterphase to the background. Unlike primates, mice were severely limited in their ability to segment figure from ground using the opponent motion cue, with segmentation behavior strongly dependent on the specific carrier pattern. Remarkably, when mice were forced to localize naturalistic patterns defined by opponent motion, they adopted a strategy of brute force memorization of texture patterns. In contrast, primates, including humans, macaques, and mouse lemurs, could readily segment figures independent of carrier pattern using the opponent motion cue. Consistent with mouse behavior, neural responses to the same stimuli recorded in mouse visual areas V1, RL, and LM also did not support texture-invariant segmentation of figures using opponent motion. Modeling revealed that the texture dependence of both the mouse's behavior and neural responses could be explained by a feedforward neural network lacking explicit segmentation capabilities. These findings reveal a fundamental limitation in the ability of mice to segment visual objects compared to primates.

