Segmentation Problem
Recently Published Documents

Total documents: 160 (five years: 44)
H-index: 17 (five years: 2)

Author(s): Volodymyr Sokol, Vitalii Krykun, Mariia Bilova, Ivan Perepelytsya, Volodymyr Pustovarov, ...

The demand for information systems that simplify and accelerate work has greatly increased in the context of the rapid informatization of society and all its branches. This provokes the emergence of more and more companies involved in the development of software products and information systems in general. In order to ensure the systematization, processing, and use of this knowledge, knowledge management systems are used. One of the main tasks of IT companies is the continuous training of personnel, which requires exporting content from the company's knowledge management system to the learning management system. The main goal of the research is to choose an algorithm for marking up the text of articles similar to those used in the knowledge management systems of IT companies. To achieve this goal, it is necessary to compare various topic segmentation methods on a dataset of computer science texts. Inspec is one such dataset, originally used for keyword extraction, and in this research it has been adapted to the structure of the datasets used for the topic segmentation problem. The TextTiling and TextSeg methods were compared using several well-known data science metrics and specific metrics that relate to the topic segmentation problem. A new generalized metric was also introduced to compare the results for the topic segmentation problem. All software implementations of the algorithms were written in the Python programming language and represent a set of interrelated functions. The results show the advantages of the TextSeg method over TextTiling when compared using classical data science metrics and special metrics developed for the topic segmentation task. From all the metrics, including the introduced one, it can be concluded that the TextSeg algorithm performs better than the TextTiling algorithm on the adapted Inspec test dataset.
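
As a rough illustration of this kind of comparison, the sketch below runs NLTK's TextTilingTokenizer on a sample text and scores two hypothetical boundary sequences with the Pk and WindowDiff segmentation metrics; the sample text, boundary strings, and window size are illustrative assumptions, not the adapted Inspec setup or the TextSeg implementation used in the article.

    import nltk
    from nltk.corpus import brown
    from nltk.tokenize import TextTilingTokenizer
    from nltk.metrics.segmentation import pk, windowdiff

    nltk.download("stopwords", quiet=True)   # TextTiling uses the stopword list
    nltk.download("brown", quiet=True)       # sample text to segment

    # TextTiling expects blank lines between paragraphs; the raw Brown corpus
    # has them, so a slice of it serves as a stand-in document here.
    text = brown.raw()[:10000]
    segments = TextTilingTokenizer().tokenize(text)
    print("TextTiling produced", len(segments), "segments")

    # Pk and WindowDiff compare boundary strings of equal length, one character
    # per unit (e.g. per sentence), with '1' marking a segment boundary.
    ref = "0100010001"                        # hypothetical gold segmentation
    hyp = "0100100001"                        # hypothetical system output
    print("Pk:", pk(ref, hyp, k=3), "WindowDiff:", windowdiff(ref, hyp, 3))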


2021, Vol 2021, pp. 1-14
Author(s): Wenyun Gao, Xiaoyun Li, Sheng Dai, Xinghui Yin, Stanley Ebhohimhen Abhadiomhen

The low-rank representation (LRR) method has recently gained enormous popularity due to its robust approach to solving the subspace segmentation problem, particularly for corrupted data. In this paper, the recursive sample scaling low-rank representation (RSS-LRR) method is proposed. The advantage of RSS-LRR over traditional LRR is that a cosine scaling factor is further introduced, which imposes a penalty on each sample to better minimize the influence of noise and outliers. Specifically, the cosine scaling factor is a similarity measure learned to extract each sample’s relationship with the low-rank representation’s principal components in the feature space. In other words, the smaller the angle between an individual data sample and the low-rank representation’s principal components, the more likely it is that the data sample is clean. Thus, the proposed method can effectively obtain a good low-rank representation influenced mainly by clean data. Several experiments are performed with varying levels of corruption on ORL, CMU PIE, COIL20, COIL100, and LFW in order to evaluate RSS-LRR’s effectiveness over state-of-the-art low-rank methods. The experimental results show that RSS-LRR consistently performs better than the compared methods in image clustering and classification tasks.
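
The exact RSS-LRR updates are not given above, but the general idea of weighting each sample by its angle to the principal components of the low-rank part can be sketched in a few lines of numpy; the function name, the shapes, and the choice of k below are illustrative assumptions rather than the authors' formulation.

    import numpy as np

    def cosine_scaling(X, Z, k=5):
        # X: (d, n) data matrix, Z: (n, n) current low-rank representation.
        # Take the top-k principal directions of the low-rank part X @ Z and
        # measure how well each sample aligns with them (cosine of the angle
        # between the sample and its projection); values near 1 suggest a
        # clean sample, values near 0 suggest noise or an outlier.
        U, _, _ = np.linalg.svd(X @ Z, full_matrices=False)
        Uk = U[:, :k]
        proj = Uk @ (Uk.T @ X)
        return np.linalg.norm(proj, axis=0) / (np.linalg.norm(X, axis=0) + 1e-12)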


2021, Vol 2021, pp. 1-17
Author(s): Ran Gao, Li-Zhen Guo

The segmentation of weak boundaries is still a difficult problem; it is especially sensitive to noise, which leads to segmentation failure. Building on previous works, a new convergent variational model is proposed by adding a boundary indicator function with the L2,1 norm, giving a novel strategy for images with weak boundaries. The existence of a minimizer for the model is established, and the alternating direction method of multipliers (ADMM) is used to solve it. The experiments show that the new method is robust when segmenting objects in a range of images with noise, low contrast, and direction.
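
ADMM solvers for L2,1-regularized terms typically rely on the column-wise shrinkage (proximal) step sketched below; this is the standard operator for the L2,1 norm, not the article's specific update scheme.

    import numpy as np

    def prox_l21(V, lam):
        # Proximal operator of lam * ||.||_{2,1}: shrink every column of V
        # toward zero by its Euclidean norm, zeroing columns whose norm is
        # below lam. This is the usual ADMM sub-step for an L2,1 term.
        norms = np.linalg.norm(V, axis=0)
        scale = np.maximum(0.0, 1.0 - lam / np.maximum(norms, 1e-12))
        return V * scale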


2021, Vol 19 (3), pp. 26-39
Author(s): D. E. Shabalina, K. S. Lanchukovskaya, T. V. Liakh, K. V. Chaika

The article is devoted to evaluating the applicability of existing semantic segmentation algorithms to the “Duckietown” simulator. The article explores classical semantic segmentation algorithms as well as ones based on neural networks. We also examined machine learning frameworks, taking into account all the limitations of the “Duckietown” simulator. Based on the research results, we selected neural network algorithms based on the U-Net, SegNet, DeepLab-v3, FC-DenseNet and PSPNet networks to solve the segmentation problem in the “Duckietown” project. U-Net and SegNet have been tested on the “Duckietown” simulator.
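
For readers unfamiliar with the encoder-decoder structure underlying these networks, the sketch below is a deliberately tiny U-Net-style model in PyTorch; the channel sizes, depth, and class count are arbitrary placeholders and far smaller than the networks evaluated for “Duckietown”.

    import torch
    import torch.nn as nn

    def conv_block(cin, cout):
        return nn.Sequential(
            nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True))

    class TinyUNet(nn.Module):
        # One down-step and one up-step with a skip connection: the minimal
        # shape of the U-Net family of encoder-decoder networks.
        def __init__(self, n_classes=5):
            super().__init__()
            self.enc1, self.enc2 = conv_block(3, 16), conv_block(16, 32)
            self.pool = nn.MaxPool2d(2)
            self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
            self.dec1 = conv_block(32, 16)
            self.head = nn.Conv2d(16, n_classes, 1)

        def forward(self, x):                    # x: (N, 3, H, W), H and W even
            e1 = self.enc1(x)
            e2 = self.enc2(self.pool(e1))
            d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
            return self.head(d1)                 # per-pixel class logits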


2021, Vol 2021, pp. 1-10
Author(s): Xiaogang Ren, Yue Wu, Zhiying Cao

Since the hippocampus is small, has low contrast, and is irregular in shape, a novel hippocampus segmentation method based on subspace patch-sparsity clustering in brain MRI is proposed to improve segmentation accuracy. The method requires that the representation coefficients in different subspaces be as sparse as possible, while the representation coefficients in the same subspace be as uniform as possible. By restraining the coefficient matrix with the patch-sparse constraint, the coefficient matrix acquires a patch-sparse structure, which is helpful for hippocampus segmentation. The experimental results show that the proposed method is effective on noisy brain MRI data and can deal well with the hippocampus segmentation problem.
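
The abstract does not spell out how segment labels are obtained from the learned coefficients, but subspace clustering pipelines usually finish with the spectral step sketched below; the function name and cluster count are illustrative assumptions, not the authors' exact procedure.

    import numpy as np
    from sklearn.cluster import SpectralClustering

    def segment_from_coefficients(Z, n_clusters=2):
        # Build a symmetric affinity from the (patch-)sparse coefficient
        # matrix Z and cluster it spectrally; samples lying in the same
        # subspace end up in the same segment.
        W = 0.5 * (np.abs(Z) + np.abs(Z).T)
        model = SpectralClustering(n_clusters=n_clusters, affinity="precomputed")
        return model.fit_predict(W)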


2021, Vol 11 (18), pp. 8340
Author(s): Christian Ayala, Carlos Aranda, Mikel Galar

Building footprints and road networks are important inputs for a wide range of services. For instance, building maps are useful for urban planning, whereas road maps are essential for disaster response services. Traditionally, building and road maps are manually generated by remote sensing experts or land surveying, occasionally assisted by semi-automatic tools. In the last decade, deep learning-based approaches have demonstrated their capability to extract these elements automatically and accurately from remote sensing imagery. The building footprint and road network detection problem can be considered a multi-class semantic segmentation task, that is, a single model performs a pixel-wise classification on multiple classes, optimizing the overall performance. However, depending on the spatial resolution of the imagery used, both classes may coexist within the same pixel, drastically reducing their separability. In this regard, binary decomposition techniques, which have been widely studied in the machine learning literature, have proved useful for addressing multi-class problems. Accordingly, the multi-class problem can be split into multiple binary semantic segmentation sub-problems, specializing a different model for each class. Nevertheless, in these cases, an aggregation step is required to obtain the final output labels. Additionally, other novel approaches, such as multi-task learning, may come in handy to further increase the performance of the binary semantic segmentation models. Since there is no certainty as to which strategy should be followed to accurately tackle a multi-class remote sensing semantic segmentation problem, this paper performs an in-depth study to shed light on the issue. For this purpose, open-access Sentinel-1 and Sentinel-2 imagery (at 10 m) is considered for extracting buildings and roads, making use of the well-known U-Net convolutional neural network. It is worth stressing that the building and road classes may coexist within the same pixel when working at such a low spatial resolution, which makes the problem particularly challenging. Accordingly, a robust experimental study is developed to assess the benefits of the decomposition strategies and their combination with a multi-task learning scheme. The obtained results demonstrate that decomposing the considered multi-class remote sensing semantic segmentation problem into multiple binary ones using a One-vs.-All binary decomposition technique leads to better results than the standard direct multi-class approach. Additionally, the benefits of using a multi-task learning scheme for pushing the performance of binary segmentation models are also shown.
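
As a concrete picture of the aggregation step that One-vs.-All decomposition requires, the sketch below merges the sigmoid outputs of two hypothetical binary models (buildings and roads) into a single label map; the threshold and label encoding are assumptions, not the aggregation rule used in the paper.

    import numpy as np

    def aggregate_ova(prob_building, prob_road, thr=0.5):
        # prob_*: (H, W) sigmoid scores of the two binary segmentation models.
        probs = np.stack([prob_building, prob_road])   # (2, H, W)
        labels = probs.argmax(axis=0) + 1              # 1 = building, 2 = road
        labels[probs.max(axis=0) < thr] = 0            # 0 = background
        return labels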


Author(s): Manu S

Road lane detection is an important component of Advanced Driver Assistance Systems (ADAS). In this paper, we propose a lane detection technology that uses a deep convolutional neural network to extract lane marking features. Many conventional approaches detect lanes using edge, color, intensity, and shape information. In addition, lane detection can be viewed as an image segmentation problem. However, most such methods are sensitive to weather conditions and noise; thus, many traditional lane detection systems fail when the external environment varies significantly.
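
For contrast with the CNN-based approach, the sketch below shows the kind of edge-based pipeline the conventional methods mentioned above rely on (Canny edges plus a probabilistic Hough transform in OpenCV); the thresholds and region of interest are illustrative guesses, not the paper's baseline.

    import cv2
    import numpy as np

    def classical_lane_candidates(bgr):
        # Edge-based lane candidates: grey, Canny edges, keep the lower half
        # of the frame, then fit line segments with the Hough transform.
        gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 50, 150)
        h, _ = edges.shape
        mask = np.zeros_like(edges)
        mask[h // 2:, :] = 255
        edges = cv2.bitwise_and(edges, mask)
        return cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                               minLineLength=40, maxLineGap=20)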


2021, Vol 12 (3), pp. 188-214
Author(s): Hamza Abdellahoum, Abdelmajid Boukra

The image segmentation problem is one of the most studied problems because it is useful in several areas. In this paper, the authors propose new algorithms to resolve two problems, namely cluster detection and center initialization. The authors opt to use statistical methods to automatically determine the number of clusters and fuzzy set theory to start the algorithm with a near-optimal configuration. They use the image histogram information to determine the number of clusters, and a cooperative approach involving three metaheuristics, the genetic algorithm (GA), the firefly algorithm (FA), and the biogeography-based optimization algorithm (BBO), to detect the cluster centers in the initialization step. The experimental study shows that, first, the proposed solution determines a near-optimal initial set of cluster centers, leading to good image segmentation compared to well-known methods; second, the number of clusters determined automatically by the proposed approach contributes to improving the image segmentation quality.
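
One simple way to read a cluster count off the image histogram, in the spirit of the statistical step described above, is to count its prominent peaks; the smoothing width and prominence below are illustrative parameters, not the authors' procedure.

    import numpy as np
    from scipy.signal import find_peaks

    def estimate_cluster_count(gray_img, smooth=5, prominence=0.01):
        # Smooth the normalized grey-level histogram and count its prominent
        # peaks; each peak is treated as one candidate cluster.
        hist, _ = np.histogram(gray_img, bins=256, range=(0, 256), density=True)
        hist = np.convolve(hist, np.ones(smooth) / smooth, mode="same")
        peaks, _ = find_peaks(hist, prominence=prominence)
        return max(len(peaks), 2)                 # keep at least two clusters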


Author(s): Y. A. Lumban-Gaol, Z. Chen, M. Smit, X. Li, M. A. Erbaşu, ...

Abstract. Point cloud data have rich semantic representations and can benefit various applications towards a digital twin. However, they are unordered and anisotropically distributed, thus being unsuitable for a typical Convolutional Neural Network (CNN) to handle. With the advance of deep learning, several neural networks claim to have solved the point cloud semantic segmentation problem. This paper evaluates three different neural networks for semantic segmentation of point clouds, namely PointNet++, PointCNN and DGCNN. A public indoor scene of the Amersfoort railway station is used as the study area. Unlike the typical indoor scenes, and even more so the ubiquitous outdoor scenes in currently available datasets, the station consists of objects such as entrance gates, ticket machines, couches, and garbage cans. For the experiment, we use subsets of the data, remove the noise, and evaluate the performance of the selected neural networks. The results indicate an overall accuracy of more than 90% for all the networks but vary in terms of mean class accuracy and mean Intersection over Union (IoU). The misclassification mainly occurs in the couch and garbage can classes. Several factors that may contribute to the errors are analyzed, such as the quality of the data and the proportion of the number of points per class. The adaptability of the networks is also heavily dependent on the training location: the overall characteristics of the train station make a network trained for one location less suitable for another.
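
The reported scores (overall accuracy, mean class accuracy, mean IoU) can all be computed from a single confusion matrix; the sketch below shows those formulas with numpy and is a generic implementation, not the evaluation code used in the study.

    import numpy as np

    def segmentation_scores(conf):
        # conf: (C, C) confusion matrix, rows = ground truth, cols = prediction.
        tp = np.diag(conf).astype(float)
        overall_acc = tp.sum() / conf.sum()
        class_acc = tp / np.maximum(conf.sum(axis=1), 1)    # per-class recall
        iou = tp / np.maximum(conf.sum(axis=1) + conf.sum(axis=0) - tp, 1)
        return overall_acc, class_acc.mean(), iou.mean()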


Author(s): Kumaran @ Kumar Jayaraman, Koganti Srilakshmi, Sasikala Jayaraman

This paper presents a modified flower pollination-based method for performing multilevel segmentation of medical images. Flower pollination-based optimization (FPO) models the pollination process of flowers. Bees play a major role in the pollination of flowers, and they memorize and recognize the best flowers, those producing large amounts of nectar. This memorizing ability of bees is adapted in the FPO to improve the exploration ability of the algorithm. In addition, the pollinators' mechanism of avoiding predators is included in the modified FPO (MFPO) to escape sub-optimal traps. The medical image segmentation problem is transformed into an optimization problem and solved using MFPO. The method searches for optimal thresholds in the problem space of the given medical image. Segmented images are presented to show the superior performance of the proposed method.
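
Multilevel thresholding methods of this kind evaluate candidate threshold sets with a scalar fitness; the sketch below uses the classic Otsu-style between-class variance as that fitness and applies the resulting thresholds. It is a generic stand-in for the objective an optimizer such as the modified FPO could maximize, not the authors' exact formulation.

    import numpy as np

    def apply_thresholds(gray, thresholds):
        # Map each pixel to a level index given a sorted list of thresholds.
        return np.digitize(gray, np.sort(np.asarray(thresholds)))

    def between_class_variance(gray, thresholds):
        # Otsu-style fitness: weighted squared distance of each level's mean
        # grey value from the global mean; larger is better.
        labels = apply_thresholds(gray, thresholds)
        mu = gray.mean()
        return sum(gray[labels == k].size / gray.size
                   * (gray[labels == k].mean() - mu) ** 2
                   for k in np.unique(labels))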

