Data segmentation algorithms: Univariate mean change and beyond

Author(s):  
Haeran Cho ◽  
Claudia Kirch
2021 ◽  
Vol 2068 (1) ◽  
pp. 012025
Author(s):  
Jian Zheng ◽  
Zhaoni Li ◽  
Jiang Li ◽  
Hongling Liu

Abstract It is difficult to detect anomalies in big data using traditional methods, because big data is massive and disordered. Common methods divide big data into several small samples and then analyse these divided small samples. However, this increases the complexity of the segmentation algorithms and makes it difficult to control the risk of data segmentation. To address this, we propose a neural network approach based on the Vapnik risk model. First, the sample data is randomly divided into small data blocks. Then, a neural network learns from these divided small data blocks. To reduce the risks arising during data segmentation, the Vapnik risk model is used to supervise the segmentation. Finally, the proposed method is verified on historical electricity price data from Mountain View, California. The results show that our method is effective.
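The workflow the abstract describes can be sketched roughly as follows: the data is randomly divided into small blocks, a small neural network is fitted per block, and a Vapnik-style (structural-risk) bound is monitored alongside the empirical error. This is a minimal sketch assuming scikit-learn; the lag features, block count, network size and capacity proxy are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Illustrative stand-in for a long univariate series (e.g. electricity prices).
prices = rng.normal(size=2000).cumsum()

# Simple lagged features: predict the next value from the previous four.
lags = 4
X = np.column_stack([prices[i:len(prices) - lags + i] for i in range(lags)])
y = prices[lags:]

# Randomly divide the data into small blocks.
n_blocks = 10
blocks = np.array_split(rng.permutation(len(X)), n_blocks)

def vapnik_bound(emp_risk, n, h, eta=0.05):
    """Classical VC-style upper bound: empirical risk plus a capacity term
    that grows with the capacity proxy h and shrinks with the block size n.
    Used here only as a heuristic risk monitor for the regression setting."""
    cap = (h * (np.log(2 * n / h) + 1) - np.log(eta / 4)) / n
    return emp_risk + np.sqrt(cap)

h_proxy = 50  # crude capacity proxy for the small network (an assumption)
for b, block in enumerate(blocks):
    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    net.fit(X[block], y[block])
    emp = np.mean((net.predict(X[block]) - y[block]) ** 2)
    print(f"block {b}: empirical risk {emp:.3f}, "
          f"risk bound {vapnik_bound(emp, len(block), h_proxy):.3f}")
```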


2020 ◽  
Vol 64 (4) ◽  
pp. 40412-1-40412-11
Author(s):  
Kexin Bai ◽  
Qiang Li ◽  
Ching-Hsin Wang

Abstract To address the issues of the relatively small size of brain tumor image datasets, severe class imbalance, and low precision in existing segmentation algorithms for brain tumor images, this study proposes a two-stage segmentation algorithm integrating convolutional neural networks (CNNs) and conventional methods. Four modalities of the original magnetic resonance images were first preprocessed separately. Next, preliminary segmentation was performed using an improved U-Net CNN containing deep monitoring, residual structures, dense connection structures, and dense skip connections. The authors adopted a multiclass Dice loss function to deal with class imbalance and successfully prevented overfitting using data augmentation. The preliminary segmentation results subsequently served as the a priori knowledge for a continuous maximum flow algorithm for fine segmentation of target edges. Experiments revealed that the mean Dice similarity coefficients of the proposed algorithm in whole tumor, tumor core, and enhancing tumor segmentation were 0.9072, 0.8578, and 0.7837, respectively. The proposed algorithm presents higher accuracy and better stability in comparison with some of the more advanced segmentation algorithms for brain tumor images.
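The multiclass Dice loss used to counter class imbalance has a standard soft form; the following is a minimal PyTorch sketch (the smoothing constant and the random tensors in the usage example are illustrative, not taken from the paper).

```python
import torch
import torch.nn.functional as F

def multiclass_dice_loss(logits, target, eps=1e-6):
    """Soft multiclass Dice loss.

    logits: (N, C, H, W) raw network outputs
    target: (N, H, W) integer class labels in [0, C)
    """
    num_classes = logits.shape[1]
    probs = F.softmax(logits, dim=1)
    # One-hot encode the target to (N, C, H, W).
    onehot = F.one_hot(target, num_classes).permute(0, 3, 1, 2).float()
    dims = (0, 2, 3)  # sum over batch and spatial dimensions, per class
    intersection = (probs * onehot).sum(dims)
    cardinality = probs.sum(dims) + onehot.sum(dims)
    dice_per_class = (2.0 * intersection + eps) / (cardinality + eps)
    return 1.0 - dice_per_class.mean()

# Usage sketch with four illustrative classes and random tensors.
logits = torch.randn(2, 4, 64, 64, requires_grad=True)
labels = torch.randint(0, 4, (2, 64, 64))
loss = multiclass_dice_loss(logits, labels)
loss.backward()
```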


2017 ◽  
Vol 2017 ◽  
pp. 1-13 ◽  
Author(s):  
Jimena Olveres ◽  
Erik Carbajal-Degante ◽  
Boris Escalante-Ramírez ◽  
Enrique Vallejo ◽  
Carla María García-Moreno

Segmentation tasks in medical imaging represent an exhaustive challenge for scientists, since the nature of image acquisition yields issues that hamper correct reconstruction and visualization. Depending on the specific image modality, we have to consider limitations such as the presence of noise, vanished edges, or high intensity differences, known in most cases as inhomogeneities. New segmentation algorithms are required to provide better performance. This paper presents a new unified approach to improve traditional segmentation methods such as Active Shape Models and the level-set-based Chan-Vese model. The approach combines local analysis with classic segmentation algorithms, incorporating local texture information given by the Hermite transform and Local Binary Patterns. The mixture of region-based methods and local descriptors highlights relevant regions by considering extra information that helps delimit structures. We performed segmentation experiments on 2D images, including the midbrain in Magnetic Resonance Imaging and the heart's left ventricle endocardium in Computed Tomography. Quantitative evaluation was obtained with the Dice coefficient and the Hausdorff distance. Results display a substantial advantage over the original methods when our characterization schemes are included. Given these promising results, we propose further validation on different organ structures.
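One way to combine a local texture descriptor with a region-based model, roughly in the spirit of this paper, is to blend a Local Binary Pattern map with the intensity image before running the Chan-Vese level-set segmentation. The sketch below uses scikit-image; the LBP parameters, blending weight and sample image are illustrative assumptions rather than the authors' configuration.

```python
import numpy as np
from skimage import data, img_as_float
from skimage.feature import local_binary_pattern
from skimage.segmentation import chan_vese

# Illustrative input: any 2-D grayscale image (here a sample image from skimage).
image = img_as_float(data.camera())

# Local texture map via uniform LBP (radius and number of points are assumptions).
radius, n_points = 2, 16
lbp = local_binary_pattern(image, n_points, radius, method="uniform")
lbp = lbp / lbp.max()  # normalise the texture map to [0, 1]

# Blend intensity and texture so the region-based model "sees" both cues.
alpha = 0.7  # weighting between intensity and texture (assumption)
combined = alpha * image + (1 - alpha) * lbp

# Region-based segmentation with the Chan-Vese level-set model.
mask = chan_vese(combined, mu=0.25)
print("foreground fraction:", mask.mean())
```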


Author(s):  
P. Salgado ◽  
T.-P. Azevedo Perdicoúlis

Medical image techniques are used to examine and determine the well-being of the foetus during pregnancy. Digital image processing (DIP) is essential to extract valuable information embedded in most biomedical signals. Afterwards, intelligent segmentation methods, based on classifier algorithms, must be applied to identify structures and relevant features from the previous data. The success of both is essential for helping doctors to identify adverse health conditions from the medical images. To obtain simple and reliable DIP methods for foetus images in real time, at different gestational ages, careful pre-processing needs to be applied to the images. From these, data features are extracted that serve as input to the segmentation algorithms presented in this work. Due to the high dimension of the problems in question, aggregation of the data is also desirable. The segmentation of the images is done by revisiting the K-nn algorithm, a conventional nonparametric classifier that, despite its simplicity, has been shown to achieve high classification performance in medical applications. In this work two versions of this algorithm are presented: (i) an enhancement of the standard version that aggregates the data a priori and (ii) an iterative version of the same method in which the training set (TS) is not static. The procedure is demonstrated in two experiments on images of different technologies: a magnetic resonance image and an ultrasound image, respectively. The results were assessed by comparison with the K-means clustering algorithm, a well-known and robust method for this type of task. Both versions showed results close to 100% agreement with those obtained by the validation method, although the iterative version displays much higher reliability in the classification.
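A minimal sketch of the two K-nn variants described above, assuming scikit-learn: (i) the training set is aggregated a priori (here with k-means centroids, one plausible aggregation choice) before standard K-nn classification, and (ii) an iterative variant in which confidently classified samples are fed back into the training set. The synthetic features, confidence threshold and iteration count are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)

# Illustrative per-pixel feature vectors for two tissue classes; in practice
# these would come from the pre-processed foetal images.
X_train = np.vstack([rng.normal(0, 1, (500, 3)), rng.normal(3, 1, (500, 3))])
y_train = np.repeat([0, 1], 500)
X_new = rng.normal(1.5, 1.5, (2000, 3))       # pixels to be segmented

# (i) A priori aggregation: replace each class by a handful of k-means centroids.
def aggregate(X, y, per_class=20):
    Xs, ys = [], []
    for c in np.unique(y):
        km = KMeans(n_clusters=per_class, n_init=10, random_state=0).fit(X[y == c])
        Xs.append(km.cluster_centers_)
        ys.append(np.full(per_class, c))
    return np.vstack(Xs), np.concatenate(ys)

Xa, ya = aggregate(X_train, y_train)
knn = KNeighborsClassifier(n_neighbors=3).fit(Xa, ya)

# (ii) Iterative variant: grow the training set with confidently labelled pixels.
remaining = np.ones(len(X_new), dtype=bool)
for _ in range(5):
    proba = knn.predict_proba(X_new[remaining])
    confident = proba.max(axis=1) > 0.9        # confidence threshold (assumption)
    idx = np.where(remaining)[0][confident]
    if idx.size == 0:
        break
    Xa = np.vstack([Xa, X_new[idx]])
    ya = np.concatenate([ya, proba[confident].argmax(axis=1)])
    remaining[idx] = False
    knn = KNeighborsClassifier(n_neighbors=3).fit(Xa, ya)

print("final segmentation labels:", np.bincount(knn.predict(X_new)))
```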


2021 ◽  
Vol 3 (5) ◽  
Author(s):  
João Gaspar Ramôa ◽  
Vasco Lopes ◽  
Luís A. Alexandre ◽  
S. Mogo

Abstract In this paper, we propose three methods for door state classification with the goal of improving robot navigation in indoor spaces. These methods were also developed to be usable in other areas and applications, since they are not limited to door detection as other related works are. Our methods work offline, on low-powered computers such as the Jetson Nano, in real time, with the ability to differentiate between open, closed and semi-open doors. We use the 3D object classification network PointNet; real-time semantic segmentation algorithms such as FastFCN, FC-HarDNet, SegNet and BiSeNet; the object detection algorithm DetectNet; and the 2D object classification networks AlexNet and GoogleNet. We built a 3D and RGB door dataset with images from several indoor environments using a Realsense D435 3D camera. This dataset is freely available online. All methods are analysed taking into account their accuracy and speed on a low-powered computer. We conclude that it is possible to have a door classification algorithm running in real time on a low-power device.
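The 2D classification branch of such a pipeline can be reproduced with an off-the-shelf backbone. The sketch below adapts torchvision's AlexNet to the three door states (open, semi-open, closed); the dataset folder layout, transforms and training hyper-parameters are illustrative assumptions, not the authors' configuration, and a recent torchvision (>= 0.13) is assumed for the weights API.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 3  # open, semi-open, closed

# Replace AlexNet's final layer so it predicts the three door states.
model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
model.classifier[6] = nn.Linear(4096, NUM_CLASSES)

# Standard ImageNet-style preprocessing (an assumption, not the authors' pipeline).
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: door_dataset/{open,semi_open,closed}/*.jpg
dataset = datasets.ImageFolder("door_dataset", transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=16, shuffle=True)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):                       # small epoch count, for illustration
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```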


2020 ◽  
Vol 7 (1) ◽  
Author(s):  
Kassim S. Mwitondi ◽  
Isaac Munyakazi ◽  
Barnabas N. Gatsheni

Abstract In light of recent technological advances in computing and the explosion of data, the complex interactions of the Sustainable Development Goals (SDGs) present both a challenge and an opportunity to researchers and decision makers across fields and sectors. The deep and wide socio-economic, cultural and technological variations across the globe call for a unified understanding of the SDG project. The complexity of SDG interactions and the dynamics of their indicators align naturally with technical and application specifics that require interdisciplinary solutions. We present a consilient approach to expounding the triggers of SDG indicators. Illustrated through data segmentation, it is designed to unify our understanding of the complex overlap of the SDGs by utilising data from different sources. The paper treats each SDG as a Big Data source node, with the potential to contribute towards a unified understanding of applications across the SDG spectrum. Data for five SDGs were extracted from the United Nations SDG indicators data repository and used to model spatio-temporal variations in search of robust and consilient scientific solutions. Based on a number of pre-determined assumptions on socio-economic and geo-political variations, the data are subjected to sequential analyses exploring distributional behaviour, component extraction and clustering. All three methods exhibit pronounced variations across samples, with initial distributional and data segmentation patterns isolating South Africa from the remaining five countries. Data randomness is dealt with via a specially developed algorithm for sampling, measuring and assessing, based on repeated samples of different sizes. Results exhibit consistent variations across samples, driven by socio-economic, cultural and geo-political variations, entailing a unified understanding across disciplines and sectors. The findings highlight novel paths towards attaining informative patterns for a unified understanding of the triggers of SDG indicators and open new paths to interdisciplinary research.
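The sequential pipeline outlined above (standardisation, component extraction, clustering, and repeated sampling at different sizes) can be sketched as follows, assuming scikit-learn; the synthetic indicator matrix, number of clusters and sample sizes are illustrative assumptions rather than the authors' algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

# Illustrative stand-in for a countries x SDG-indicators matrix
# (in the paper this comes from the UN SDG indicators repository).
n_countries, n_indicators = 120, 25
X = rng.normal(size=(n_countries, n_indicators))

X_std = StandardScaler().fit_transform(X)

# Component extraction: keep enough components to explain ~90% of the variance.
pca = PCA(n_components=0.9).fit(X_std)
scores = pca.transform(X_std)

# Repeated sampling at different sizes to assess the stability of the clustering.
for size in (40, 80, 120):
    inertias = []
    for _ in range(20):
        sample = rng.choice(n_countries, size=size, replace=False)
        km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(scores[sample])
        inertias.append(km.inertia_ / size)
    print(f"sample size {size}: mean within-cluster dispersion "
          f"{np.mean(inertias):.2f} +/- {np.std(inertias):.2f}")
```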

