data segmentation
Recently Published Documents


TOTAL DOCUMENTS

252
(FIVE YEARS 81)

H-INDEX

16
(FIVE YEARS 4)

Author(s):  
Andreas Leibetseder ◽  
Klaus Schoeffmann ◽  
Jörg Keckstein ◽  
Simon Keckstein

Abstract Endometriosis is a common gynecologic condition typically treated via laparoscopic surgery. Its visual versatility makes it hard to identify for non-specialized physicians and challenging to classify or localize via computer-aided analysis. In this work, we take a first step toward localized endometriosis recognition in laparoscopic gynecology videos using the region-based deep neural networks Faster R-CNN and Mask R-CNN. In particular, we use and further develop publicly available data for transfer learning of deep detection models according to distinctive visual lesion characteristics. Subsequently, we evaluate the performance impact of different data augmentation techniques, including selected geometrical and visual transformations, specular reflection removal, and region tracking across video frames. Finally, particular attention is given to creating a reasonable data segmentation into training, validation, and testing sets. Surprisingly, the best-performing result is achieved by randomly applying simple cropping combined with rotation, yielding a mean average segmentation precision of 32.4% at 50-95% intersection-over-union overlap (64.2% at 50% overlap).
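The crop-plus-rotation augmentation that the abstract reports as best-performing can be sketched as follows. This is a minimal illustration, not the authors' pipeline: it uses NumPy, restricts rotation to multiples of 90 degrees for simplicity, and the function name `augment` and the patch size are illustrative assumptions.

```python
import numpy as np

def augment(frame, crop_size, rng):
    """Randomly crop a square patch from a video frame and rotate it.

    A toy stand-in for the crop+rotation augmentation described in the
    abstract; a real pipeline would also support arbitrary-angle rotation
    and apply the same transform to the lesion masks.
    """
    h, w = frame.shape[:2]
    top = rng.integers(0, h - crop_size + 1)     # random crop origin
    left = rng.integers(0, w - crop_size + 1)
    patch = frame[top:top + crop_size, left:left + crop_size]
    k = int(rng.integers(0, 4))                  # 0, 90, 180, or 270 degrees
    return np.rot90(patch, k)

rng = np.random.default_rng(0)
frame = np.arange(64).reshape(8, 8)              # stand-in for an image frame
patch = augment(frame, 4, rng)
print(patch.shape)  # (4, 4)
```

Applied per training frame, this cheaply multiplies the effective dataset size, which matters for a small annotated medical-video corpus.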


2022 ◽  
Vol 22 (1) ◽  
Author(s):  
Xi Guan ◽  
Guang Yang ◽  
Jianming Ye ◽  
Weiji Yang ◽  
Xiaomei Xu ◽  
...  

Abstract Background Glioma is the most common malignant brain tumor, with a high morbidity rate and a mortality rate of more than three percent, which seriously endangers human health. The main method of imaging brain tumors in the clinic is MRI. Segmentation of brain tumor regions from multi-modal MRI scans is helpful for treatment inspection, post-diagnosis monitoring, and evaluation of patient outcomes. However, brain tumor segmentation in the clinic is still commonly performed manually, which is time-consuming and yields large performance differences between operators; a consistent and accurate automatic segmentation method is therefore urgently needed. With the continuous development of deep learning, researchers have designed many automatic segmentation algorithms; however, some problems remain: (1) most segmentation research stays on the 2D plane, which reduces the accuracy of 3D image feature extraction to a certain extent; (2) MRI images have gray-scale offset fields that make it difficult to delineate contours accurately. Methods To meet these challenges, we propose an automatic brain tumor MRI data segmentation framework called AGSE-VNet. In our study, a Squeeze-and-Excite (SE) module is added to each encoder and an Attention Guide Filter (AG) module to each decoder, using channel relationships to automatically enhance useful information in the channels while suppressing useless information, and using the attention mechanism to guide edge information and remove the influence of irrelevant information such as noise. Results We used the BraTS2020 challenge online verification tool to evaluate our approach. The Dice scores of the whole tumor, tumor core, and enhanced tumor are 0.68, 0.85, and 0.70, respectively.
Conclusion Although MRI images have different intensities, AGSE-VNet is not affected by the size of the tumor and can extract the features of the three regions more accurately; it achieves impressive results and makes an outstanding contribution to the clinical diagnosis and treatment of brain tumor patients.
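The Squeeze-and-Excite channel reweighting that the abstract attaches to each encoder can be sketched in a few lines. This is a generic SE block over a 3D (channels-first) volume written with plain NumPy, not the AGSE-VNet implementation; the weight shapes and reduction ratio of 2 are illustrative assumptions.

```python
import numpy as np

def squeeze_excite(x, w1, w2):
    """Channel-wise Squeeze-and-Excite reweighting for a (C, D, H, W) volume.

    squeeze: global average pool per channel; excite: two small fully
    connected layers ending in a sigmoid gate that rescales each channel.
    """
    c = x.shape[0]
    s = x.reshape(c, -1).mean(axis=1)            # squeeze: (C,) channel stats
    z = np.maximum(w1 @ s, 0.0)                  # excite: FC + ReLU
    gate = 1.0 / (1.0 + np.exp(-(w2 @ z)))       # FC + sigmoid, per-channel gate
    return x * gate.reshape(c, 1, 1, 1)          # reweight channels

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 2, 2, 2))                # toy 4-channel 3D feature map
w1 = rng.normal(size=(2, 4))                     # reduction ratio 2 (assumed)
w2 = rng.normal(size=(4, 2))
y = squeeze_excite(x, w1, w2)
print(y.shape)  # (4, 2, 2, 2)
```

Because the gate lies in (0, 1), each channel is attenuated rather than amplified, which is how the module "suppresses useless information" channel by channel.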


2021 ◽  
Vol 0 (0) ◽  
pp. 1-34
Author(s):  
Fang-Jun Zhu ◽  
Lu-Juan Zhou ◽  
Mi Zhou ◽  
Feng Pei

In the Chinese stock market, the unique special treatment (ST) warning mechanism can signal financial distress for listed companies. Existing studies have developed classification models to differentiate the two general listing states. However, such classification models cannot explain the internal changes within each listing state. Considering that the mechanism's requirement for the withdrawal of the ST label is relatively loose, we propose a new segmentation approach for Chinese listed companies, dividing them into negative companies and positive companies according to the number of times they have been labeled ST. Within a data-mining framework, we use financial indicators, non-financial indicators, and time series to build a financial distress prediction model that distinguishes the long-term development of different Chinese listed companies. Through data segmentation, we find that the negative samples severely interfere with the prediction performance on the total sample. In contrast, positive companies improve the prediction accuracy in all aspects, and their optimal feature set also differs from that of all companies. The main contribution of the paper is to analyze the internal impact of the deterioration of financial distress prediction over time and to construct an optimization model for positive companies.
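The proposed segmentation, partitioning companies by how many times they were labeled ST, reduces to a simple threshold split. The sketch below is illustrative only: the function name, the dictionary input, and the cutoff of one ST label are assumptions, as the abstract does not state the exact threshold.

```python
def split_by_st_count(companies, threshold=1):
    """Partition listed companies into 'positive' and 'negative' groups by
    the number of times each was labeled ST.

    The threshold of 1 is an illustrative assumption; the paper's actual
    cutoff may differ.
    """
    positive, negative = [], []
    for name, st_count in companies.items():
        (negative if st_count > threshold else positive).append(name)
    return positive, negative

# Toy example: ticker -> number of ST labels received
companies = {"A": 0, "B": 3, "C": 1, "D": 2}
pos, neg = split_by_st_count(companies)
print(sorted(pos), sorted(neg))  # ['A', 'C'] ['B', 'D']
```

Training separate predictors on each partition is what lets the study isolate how much the repeatedly-ST "negative" group degrades accuracy on the pooled sample.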


2021 ◽  
pp. 69-74
Author(s):  
Олександр Дмитрович Абрамов ◽  
Юлія Володимирівна С’єдіна ◽  
Андрій Юрійович Ніколаєв ◽  
Артем Андрійович Бондарєв

The article deals with the technology of estimating the frequency of harmonic components in the presence of additive normal interference for solving applied problems of spectral analysis. Objective: to develop a methodology for synthesizing algorithms that determine the frequency of a complex harmonic signal from discrete samples of the process, as observed when data segmentation is used; that is, to develop an optimal technology for estimating the frequency of the harmonic component of a process represented by a finite number of discrete samples, according to model representations and requirements that meet the current state of spectral analysis practice. The following results were obtained. The problem of estimating the harmonic frequency from segmented data in the presence of additive Gaussian interference in the observations is solved on the basis of the maximum likelihood method. The processing algorithm and the results of digital modeling of the synthesized estimation technology for a given number of discrete process samples are given. Both the practical performance of the technology and certain qualitative indicators of the estimates are analyzed. Conclusions. The scientific novelty of the obtained results is as follows: further development of a method for solving problems of estimating the frequency of a harmonic signal from a few sample values of the process under additive normal interference, and of methods for optimizing the structure of digital processing of observations under data segmentation. The synthesized technology uses one sample of observations to determine the estimates, which ensures efficient information processing with a simple software implementation. The use of segmentation in the digital processing of observations allows obtaining estimates whose quality corresponds to the indicators of maximum likelihood.
For an unambiguous assessment, the ambiguity of the estimate must be eliminated. Under these conditions, the technology with a given number of samples can significantly extend the range of signal-to-noise ratios at which unbiased estimates can be obtained.
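For a single sinusoid in additive white Gaussian noise, the maximum-likelihood frequency estimate coincides with the location of the periodogram peak, which can be found on a zero-padded FFT grid. The sketch below illustrates that standard result, not the article's specific segmented-data algorithm; the function name and the zero-padding factor are assumptions.

```python
import numpy as np

def ml_frequency(x, fs):
    """Maximum-likelihood frequency estimate of one sinusoid in white
    Gaussian noise: the frequency that maximizes the periodogram.
    """
    n = len(x)
    nfft = 8 * n                         # zero-pad for a finer frequency grid
    spec = np.abs(np.fft.rfft(x, nfft))  # periodogram magnitude
    k = np.argmax(spec)                  # index of the spectral peak
    return k * fs / nfft                 # bin index -> frequency in Hz

fs = 100.0
t = np.arange(200) / fs
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 12.5 * t) + 0.1 * rng.normal(size=t.size)
f_hat = ml_frequency(x, fs)
print(round(f_hat, 2))
```

The grid spacing `fs / nfft` bounds the estimate's resolution, which is one reason segmentation and the number of samples govern the attainable accuracy, as discussed in the abstract.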


2021 ◽  
Vol 2068 (1) ◽  
pp. 012025
Author(s):  
Jian Zheng ◽  
Zhaoni Li ◽  
Jiang Li ◽  
Hongling Liu

Abstract It is difficult to detect anomalies in big data using traditional methods because big data is massive and disordered. Common methods divide big data into several small samples and then analyze these divided small samples. However, this manner increases the complexity of segmentation algorithms; moreover, it is difficult to control the risk of data segmentation. To address this, we propose a neural network approach based on the Vapnik risk model. First, the sample data is randomly divided into small data blocks. Then, a neural network learns from these divided small sample data blocks. To reduce the risks in the process of data segmentation, the Vapnik risk model is used to supervise the segmentation. Finally, the proposed method is verified on historical electricity price data from Mountain View, California. The results show that our method is effective.
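The first step of the pipeline, randomly dividing the sample into small fixed-size blocks, can be sketched as follows. This minimal illustration covers only the splitting step; the neural network training and the Vapnik risk supervision are omitted, and the function name and block size are assumptions.

```python
import random

def random_blocks(data, block_size, seed=0):
    """Randomly shuffle the sample and divide it into fixed-size blocks,
    as in the first step of the described pipeline (a minimal sketch;
    risk supervision and network training are not shown).
    """
    rng = random.Random(seed)
    shuffled = data[:]                   # copy so the input is untouched
    rng.shuffle(shuffled)
    return [shuffled[i:i + block_size]
            for i in range(0, len(shuffled), block_size)]

blocks = random_blocks(list(range(10)), 3)
print(len(blocks))  # 4
```

Every original sample lands in exactly one block, so downstream per-block learners together still see the full dataset, which is what makes the segmentation risk controllable in principle.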

