Segmentation of infected region in CT images of COVID-19 patients based on QC-HC U-net

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Qin Zhang ◽  
Xiaoqiang Ren ◽  
Benzheng Wei

Abstract: Since the outbreak of COVID-19 in 2019, the rapid spread of the epidemic has brought huge challenges to medical institutions. If the pathological region in a COVID-19 CT image can be segmented automatically, it will help doctors quickly determine the extent of infection, thereby speeding up the diagnostic process. To segment the infected area automatically, we propose a new network structure named QC-HC U-Net. First, we combine residual connections and dense connections to form a new connection method and apply it to both the encoder and the decoder. Second, we add Hypercolumns in the decoder section. Compared with the benchmark 3D U-Net, the improved network extracts more features while effectively avoiding vanishing gradients. To address the shortage of training data, resampling and data augmentation are used to expand the datasets. We used 63 cases of MSD lung tumor data for training and testing, verifying continuously to ensure the training effect of the model, and then selected 20 cases of public COVID-19 data for training and testing. Experimental results showed that in the segmentation of COVID-19, the specificity and sensitivity were 85.3% and 83.6%, respectively, and in the segmentation of MSD lung tumors, the specificity and sensitivity were 81.45% and 80.93%, respectively, without any overfitting.
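
The paper does not include code; the following is a minimal PyTorch sketch, under the assumption that the residual-dense connection means concatenating intermediate feature maps (dense path) and adding the block input back (residual path), and that Hypercolumns are formed by upsampling and concatenating decoder feature maps. Layer sizes are illustrative only.

```python
# Minimal PyTorch sketch (not the authors' code): a 3D block mixing residual
# and dense connections, plus hypercolumn fusion of decoder features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResDenseBlock3D(nn.Module):
    """Two 3D convs; intermediate outputs are densely concatenated,
    then projected and added back to the input (residual path)."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv3d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv3d(2 * channels, channels, 3, padding=1)
        self.proj = nn.Conv3d(3 * channels, channels, 1)

    def forward(self, x):
        f1 = F.relu(self.conv1(x))
        f2 = F.relu(self.conv2(torch.cat([x, f1], dim=1)))   # dense connection
        fused = self.proj(torch.cat([x, f1, f2], dim=1))     # dense fusion
        return F.relu(fused + x)                             # residual connection

def hypercolumns(decoder_feats, out_size):
    """Upsample every decoder feature map to the output size and concatenate
    channel-wise, so the final prediction sees multi-scale features."""
    up = [F.interpolate(f, size=out_size, mode="trilinear", align_corners=False)
          for f in decoder_feats]
    return torch.cat(up, dim=1)

# Toy usage on a small volume.
if __name__ == "__main__":
    x = torch.randn(1, 8, 16, 32, 32)
    block = ResDenseBlock3D(8)
    y = block(x)
    hc = hypercolumns([y, F.avg_pool3d(y, 2)], out_size=(16, 32, 32))
    print(y.shape, hc.shape)
```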

2020 ◽  
Vol 2020 ◽  
pp. 1-9
Author(s):  
Zhou Tao ◽  
Huo Bingqiang ◽  
Lu Huiling ◽  
Yang Zaoli ◽  
Shi Hongbin

Nonnegative sparse representation has become a popular methodology in medical analysis and diagnosis in recent years. To address network degradation, high feature dimensionality, data redundancy, and other issues that arise when medical image parameters are trained with convolutional neural networks, this paper proposes a classifier for lung tumors in chest CT images based on DenseNet with nonnegative, sparse, and collaborative representation (DenseNet-NSCR): firstly, the parameters of a pretrained DenseNet model are initialized using transfer learning; secondly, the DenseNet is trained on CT images to extract feature vectors from the fully connected layer; thirdly, a nonnegative, sparse, and collaborative representation (NSCR) is used to represent the feature vectors and solve the coding coefficient matrix; fourthly, the residual similarity is used for classification. The experimental results show that DenseNet-NSCR classification is better than the other models, with high values on evaluation indexes such as specificity and sensitivity, and that the method has better robustness and generalization ability, as demonstrated by comparison experiments against the AlexNet, GoogleNet, and DenseNet-201 models.
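
As a rough illustration of the residual-similarity classification step, the sketch below solves a nonnegative, ridge-regularized coding problem with SciPy's nnls and classifies by class-wise reconstruction residual; the paper's exact NSCR solver and regularization parameters are not reproduced here.

```python
# Hedged NumPy/SciPy sketch of residual-similarity classification over
# nonnegative coding coefficients (an illustrative stand-in for NSCR).
import numpy as np
from scipy.optimize import nnls

def nscr_classify(y, dictionary, labels, lam=0.1):
    """y: (d,) query feature (e.g. a DenseNet fc-layer vector).
    dictionary: (d, n) training features stacked column-wise.
    labels: (n,) class label of each training column."""
    d, n = dictionary.shape
    # Augment the system so ||y - Dc||^2 + lam*||c||^2 is minimized with c >= 0.
    A = np.vstack([dictionary, np.sqrt(lam) * np.eye(n)])
    b = np.concatenate([y, np.zeros(n)])
    c, _ = nnls(A, b)                      # nonnegative coding coefficients
    residuals = {}
    for cls in np.unique(labels):
        mask = labels == cls
        # Reconstruct y using only this class's columns; smallest residual wins.
        residuals[cls] = np.linalg.norm(y - dictionary[:, mask] @ c[mask])
    return min(residuals, key=residuals.get)

# Toy usage with random features.
rng = np.random.default_rng(0)
D = rng.normal(size=(64, 40))
labels = np.repeat([0, 1], 20)
query = D[:, 3] + 0.05 * rng.normal(size=64)
print(nscr_classify(query, D, labels))
```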


2014 ◽  
Vol 644-650 ◽  
pp. 4233-4236
Author(s):  
Zhen You Zhang ◽  
Guo Huan Lou

A segmentation algorithm for CT images is discussed in this paper. A dynamic relative fuzzy region growing algorithm is used for CT. At the beginning of the segmentation, the confidence-interval region growing algorithm is applied. The overlapping parts in the initial segmentation result are then segmented again with the improved fuzzy connectedness algorithm to determine which region each overlapping part belongs to, yielding the final segmentation result. Since the algorithm combines the advantages of region growing, fuzzy connectedness, and region competition, the runtime of segmentation is greatly reduced and better experimental results are obtained.
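
A minimal NumPy sketch of the confidence-interval region growing stage is given below; the dynamic relative fuzzy-connectedness re-segmentation is not reproduced, and the interval width k and the number of re-estimation passes are illustrative assumptions.

```python
# Confidence-interval region growing on a 2D slice (illustrative sketch).
import numpy as np
from collections import deque

def confidence_region_grow(img, seed, k=2.5, iters=3):
    """Grow a region whose intensities stay within mean +/- k*std of the
    current region; the interval is re-estimated `iters` times."""
    mask = np.zeros(img.shape, dtype=bool)
    r0, c0 = seed
    mask[max(r0 - 1, 0):r0 + 2, max(c0 - 1, 0):c0 + 2] = True  # seed neighborhood
    for _ in range(iters):
        mean, std = img[mask].mean(), img[mask].std() + 1e-6
        lo, hi = mean - k * std, mean + k * std
        frontier = deque(map(tuple, np.argwhere(mask)))
        while frontier:
            r, c = frontier.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if (0 <= rr < img.shape[0] and 0 <= cc < img.shape[1]
                        and not mask[rr, cc] and lo <= img[rr, cc] <= hi):
                    mask[rr, cc] = True
                    frontier.append((rr, cc))
    return mask

# Toy usage: a bright square in a dark image.
img = np.zeros((64, 64))
img[20:40, 20:40] = 100 + np.random.randn(20, 20)
print(confidence_region_grow(img, seed=(30, 30)).sum())
```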


Mathematics ◽  
2020 ◽  
Vol 8 (1) ◽  
pp. 93 ◽  
Author(s):  
Zhenrong Deng ◽  
Rui Yang ◽  
Rushi Lan ◽  
Zhenbing Liu ◽  
Xiaonan Luo

Small-scale face detection is a very difficult problem. In order to achieve a higher detection accuracy, we propose a novel method for small-scale face detection, termed SE-IYOLOV3. In SE-IYOLOV3, we first improve YOLOV3: anchor boxes with a higher average intersection-over-union ratio are obtained by combining niche technology with the k-means algorithm. An upsampling scale is added to form a face network structure suitable for detecting dense small-scale faces, so the number of prediction boxes is five times that of the YOLOV3 network. To further improve detection performance, we adopt the SENet structure to enhance the global receptive field of the network. Experimental results on the WIDERFACE dataset show that the IYOLOV3 network with the embedded SENet structure can significantly improve the detection accuracy of dense small-scale faces.
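
The exact IYOLOV3 modifications are not reproduced here; the sketch below shows a standard squeeze-and-excitation (SE) block of the kind the paper embeds, written in PyTorch with an illustrative channel count and reduction ratio.

```python
# Minimal SE block: squeeze (global average pool), excitation (two FC layers),
# then channel-wise recalibration of the input feature map.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global spatial average
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)  # excitation weights
        return x * w                                            # channel recalibration

# Toy usage on a feature map.
feat = torch.randn(2, 64, 52, 52)
print(SEBlock(64)(feat).shape)   # torch.Size([2, 64, 52, 52])
```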


2020 ◽  
Vol 10 (5) ◽  
pp. 1729 ◽  
Author(s):  
Yuning Jiang ◽  
Jinhua Li

Objective: Super-resolution reconstruction is an increasingly important area in computer vision. To alleviate the problems that super-resolution reconstruction models based on generative adversarial networks are difficult to train and produce artifacts in the reconstructed results, we propose a novel and improved algorithm. Methods: This paper presented the TSRGAN (Super-Resolution Generative Adversarial Network Combining Texture Loss) model, which was also based on generative adversarial networks, and redefined the generator and discriminator networks. Firstly, regarding the network structure, residual dense blocks without excess batch normalization layers were used to form the generator network, and the Visual Geometry Group (VGG)19 network was adopted as the basic framework of the discriminator network. Secondly, in the loss function, a weighted sum of four losses, texture loss, perceptual loss, adversarial loss, and content loss, was used as the objective function of the generator. Texture loss was proposed to encourage local information matching; perceptual loss was enhanced by computing it on features before the activation layer; adversarial loss was optimized based on WGAN-GP (Wasserstein GAN with Gradient Penalty) theory; and content loss was used to ensure the accuracy of low-frequency information. During the optimization process, the target image information was reconstructed from both high- and low-frequency perspectives. Results: The experimental results showed that our method achieved an average Peak Signal-to-Noise Ratio of 27.99 dB and an average Structural Similarity Index of 0.778 on the reconstructed images without losing too much speed, which was superior to the comparison algorithms on objective evaluation indexes. Moreover, TSRGAN significantly improved subjective visual qualities such as brightness and texture details; it generated images with more realistic textures and more accurate brightness, which were more in line with human visual evaluation. Conclusions: Our improvements to the network structure reduce the model's computation and stabilize the training direction. In addition, the loss function we present for the generator provides stronger supervision for restoring realistic textures and achieving brightness consistency. Experimental results prove the effectiveness and superiority of the TSRGAN algorithm.
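
As a hedged illustration of how the four generator losses might be combined, the PyTorch sketch below weights content, perceptual (pre-activation features), texture (Gram matrix), and WGAN-style adversarial terms; the weights and the choice of feature extractor are assumptions, not the paper's values.

```python
# Illustrative combination of the four generator loss terms.
import torch
import torch.nn.functional as F

def gram(feat):
    """Gram matrix of a feature map, used for the texture loss."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def generator_loss(sr, hr, d_fake, feat_sr, feat_hr,
                   w_content=1.0, w_percep=1.0, w_texture=1.0, w_adv=5e-3):
    content = F.l1_loss(sr, hr)                          # low-frequency accuracy
    percep = F.l1_loss(feat_sr, feat_hr)                 # pre-activation VGG features
    texture = F.l1_loss(gram(feat_sr), gram(feat_hr))    # local texture matching
    adv = -d_fake.mean()                                 # WGAN-style adversarial term
    return w_content * content + w_percep * percep + w_texture * texture + w_adv * adv

# Toy usage with random tensors standing in for SR output, HR target,
# discriminator scores on the SR image, and VGG features of both.
sr, hr = torch.rand(2, 3, 96, 96), torch.rand(2, 3, 96, 96)
d_fake = torch.randn(2, 1)
feat_sr, feat_hr = torch.randn(2, 64, 24, 24), torch.randn(2, 64, 24, 24)
print(float(generator_loss(sr, hr, d_fake, feat_sr, feat_hr)))
```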


2014 ◽  
Vol 5 (1) ◽  
pp. 68-73 ◽  
Author(s):  
Changsheng Ma ◽  
Jianping Cao ◽  
Yong Yin ◽  
Jian Zhu

2013 ◽  
Vol 2013 ◽  
pp. 1-12 ◽  
Author(s):  
Huiyan Jiang ◽  
Baochun He ◽  
Zhiyuan Ma ◽  
Mao Zong ◽  
Xiangrong Zhou ◽  
...  

A novel method based on the Snakes model and the GrowCut algorithm is proposed to segment the liver region in abdominal CT images. First, building on the traditional GrowCut method, a pretreatment process using the K-means algorithm is conducted to reduce the running time. Then, the segmentation result of our improved GrowCut approach is used as the initial contour for a subsequent precise segmentation based on the Snakes model. Finally, several experiments are carried out to demonstrate the performance of our proposed approach, including comparisons with the traditional GrowCut algorithm. Experimental results show that the improved approach not only has better robustness and precision but is also more efficient than the traditional GrowCut method.
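
A minimal NumPy sketch of one GrowCut cellular-automaton update is shown below; the K-means pretreatment and the Snakes refinement are omitted, and the 4-neighborhood with wrap-around borders is a simplification for brevity.

```python
# One GrowCut update: each seeded cell attacks its neighbours with a strength
# weighted by intensity similarity; stronger attacks relabel weaker cells.
import numpy as np

def growcut_step(img, labels, strength, max_val=255.0):
    """labels: 0 = unlabeled, 1 = object seed, 2 = background seed.
    strength: attack strength in [0, 1], 1.0 at the seeds."""
    new_labels, new_strength = labels.copy(), strength.copy()
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nb_img = np.roll(img, (dr, dc), axis=(0, 1))
        nb_lab = np.roll(labels, (dr, dc), axis=(0, 1))
        nb_str = np.roll(strength, (dr, dc), axis=(0, 1))
        g = 1.0 - np.abs(img - nb_img) / max_val     # similarity in [0, 1]
        attack = g * nb_str
        win = attack > new_strength                  # stronger neighbour captures cell
        new_labels[win] = nb_lab[win]
        new_strength[win] = attack[win]
    return new_labels, new_strength

# Toy usage: one bright square, one object seed inside it, one background seed.
img = np.zeros((32, 32)); img[8:24, 8:24] = 200.0
labels = np.zeros_like(img, dtype=int); strength = np.zeros_like(img)
labels[16, 16], strength[16, 16] = 1, 1.0
labels[0, 0], strength[0, 0] = 2, 1.0
for _ in range(64):
    labels, strength = growcut_step(img, labels, strength)
print((labels == 1).sum())
```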


2013 ◽  
Vol 423-426 ◽  
pp. 2570-2575
Author(s):  
Guang Shuai Liu ◽  
Bai Lin Li

Effectively extracting contour dominant points is one of the key problems in industrial CT image processing, and a two-stage extraction method is put forward. The method consists of two steps: rough extraction and accurate extraction. Firstly, the discrete circular curvatures of the contour points are calculated. Secondly, in the rough extraction step, bad points and points uncorrelated with the contour features are removed. Finally, in the accurate extraction step, the contour dominant points are extracted by levels of detail. Experimental results show that the extracted dominant points can describe the contour shape while redundant data are removed, and that the proposed method is simple and efficient.
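
As an illustration of the curvature step, the NumPy sketch below estimates the discrete circular curvature of each contour point as the inverse radius of the circle through the point and two neighbours a few samples away along the contour; the neighbour spacing is an assumed parameter.

```python
# Discrete circular curvature of a closed contour (illustrative sketch).
import numpy as np

def discrete_curvature(contour, step=3):
    """contour: (n, 2) array of ordered points on a closed contour."""
    n = len(contour)
    prev = contour[(np.arange(n) - step) % n]
    nxt = contour[(np.arange(n) + step) % n]
    a = np.linalg.norm(prev - contour, axis=1)
    b = np.linalg.norm(nxt - contour, axis=1)
    c = np.linalg.norm(nxt - prev, axis=1)
    # Twice the signed triangle area via the cross product; curvature = 4*Area/(a*b*c).
    cross = ((contour - prev)[:, 0] * (nxt - prev)[:, 1]
             - (contour - prev)[:, 1] * (nxt - prev)[:, 0])
    return 2.0 * cross / (a * b * c + 1e-12)

# Toy usage: a circle of radius 10 should give curvature ~ 0.1 everywhere.
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle = np.stack([10 * np.cos(t), 10 * np.sin(t)], axis=1)
print(discrete_curvature(circle)[:3])
```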


2020 ◽  
Author(s):  
Dongshen Ji ◽  
Yanzhong Zhao ◽  
Zhujun Zhang ◽  
Qianchuan Zhao

In view of the large demand for COVID-19 (novel coronavirus pneumonia) image recognition samples and the unsatisfactory recognition accuracy, this paper proposes a COVID-19-positive image recognition method based on small-sample learning. First, the CT images are preprocessed and converted into the picture formats required for transfer learning. Secondly, small-sample image enhancement and expansion are performed on the converted pictures, such as shear transformation, random rotation, and translation. Then, multiple transfer-learning models are used to extract features, which are subsequently fused. Finally, the model is adjusted by fine-tuning and trained to obtain the experimental results. The experimental results show that our method achieves excellent recognition performance on COVID-19 images, even with only a small number of CT image samples.
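
A hedged PyTorch/torchvision sketch of the described pipeline is given below: affine augmentation (rotation, translation, shear) applied to PIL images during training, plus fusion of features from two ImageNet-pretrained backbones with a fine-tuned classifier head. The backbone choices (ResNet-18, DenseNet-121) and the two-class output are illustrative assumptions, not the paper's configuration.

```python
# Small-sample augmentation and multi-backbone feature fusion (illustrative sketch).
import torch
import torch.nn as nn
from torchvision import models, transforms

# Augmentation: random rotation, translation and shear on PIL images.
augment = transforms.Compose([
    transforms.RandomAffine(degrees=15, translate=(0.1, 0.1), shear=10),
    transforms.ToTensor(),
])

class FusedFeatureExtractor(nn.Module):
    """Extract features with two ImageNet-pretrained backbones and concatenate them."""
    def __init__(self, num_classes=2):
        super().__init__()
        res = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        dense = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)
        self.res = nn.Sequential(*list(res.children())[:-1])     # 512-d pooled features
        self.dense = nn.Sequential(dense.features,
                                   nn.ReLU(inplace=True),
                                   nn.AdaptiveAvgPool2d(1))      # 1024-d pooled features
        self.head = nn.Linear(512 + 1024, num_classes)           # fine-tuned classifier

    def forward(self, x):
        f1 = torch.flatten(self.res(x), 1)
        f2 = torch.flatten(self.dense(x), 1)
        return self.head(torch.cat([f1, f2], dim=1))             # feature fusion

# Toy forward pass.
model = FusedFeatureExtractor()
print(model(torch.randn(1, 3, 224, 224)).shape)    # torch.Size([1, 2])
```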


Author(s):  
Shuo Cheng ◽  
Guohui Zhou

Because a shallow neural network has a limited ability to represent complex functions with limited samples and computational units, its generalization ability is limited when it comes to complex classification problems. The essence of deep learning is to learn a nonlinear network structure, represent the input data with distributed representations, and demonstrate a powerful ability to learn deep features of the data from a small set of samples. In order to achieve accurate classification of expression images under normal conditions, this paper proposes an expression recognition model based on an improved Visual Geometry Group (VGG) deep convolutional neural network (CNN). Based on VGG-19, the model optimizes the network structure and network parameters. Because most expression databases lack sufficient data to train an entire network from scratch, this paper uses transfer learning techniques to overcome the shortage of image training samples. A shallow CNN, Alex-Net, and the improved VGG-19 deep CNN are trained and analyzed on the facial expression data of the Extended Cohn–Kanade expression database, and the experimental results are compared. The experimental results indicate that the improved VGG-19 network model can achieve 96% accuracy in facial expression recognition, which is clearly superior to the results of the other network models.
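
A minimal PyTorch sketch of VGG-19 transfer learning of the kind described is shown below; freezing all convolutional layers and using a 7-class output head are assumptions (typical for Extended Cohn–Kanade setups), not necessarily the paper's exact modifications.

```python
# Transfer learning on VGG-19: freeze the convolutional features,
# replace the classifier head, and fine-tune on expression labels.
import torch
import torch.nn as nn
from torchvision import models

vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT)   # ImageNet-pretrained weights
for p in vgg.features.parameters():
    p.requires_grad = False                                # freeze convolutional layers

vgg.classifier[6] = nn.Linear(4096, 7)                     # new head for 7 expressions

optimizer = torch.optim.SGD(
    filter(lambda p: p.requires_grad, vgg.parameters()), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch.
x, y = torch.randn(4, 3, 224, 224), torch.randint(0, 7, (4,))
loss = criterion(vgg(x), y)
loss.backward()
optimizer.step()
print(float(loss))
```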

