Detection and Labeling of Vertebrae Using Deep Learning

2020 ◽  
Vol 9 (1) ◽  
pp. 2788-2791

Inspection, classification, and localization of individual vertebrae in arbitrary CT images is difficult because vertebrae normally have a similar morphological appearance. Owing to anatomical variation and the arbitrary field of view of CT scans, the presence of anchor vertebrae cannot be assumed, and parametric methods for defining appearance and shape are unreliable. The authors propose a robust and effective method for recognizing and localizing vertebrae that automatically learns to use both short-range and long-range contextual information in a supervised manner. The method combines a fully convolutional neural network with an instance memory that preserves information about already-segmented vertebrae. This network analyzes image patches iteratively, using the instance memory to search for and segment the first not-yet-segmented vertebra. Every vertebra is treated in the same way, whether wholly or partly visible. This study uses a sample of 865 disc levels from 1115 patients.
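The iterate-and-remember loop described above can be sketched as follows. This is a toy, numpy-only illustration of the instance-memory idea; the real method uses a fully convolutional network, for which a trivial stand-in function is substituted here:

```python
import numpy as np

def segment_next_vertebra(patch, memory):
    # Stand-in for the paper's FCN: labels the topmost row of foreground
    # voxels not yet covered by the instance memory (toy logic only).
    remaining = patch & ~memory
    if not remaining.any():
        return None
    first_row = np.argwhere(remaining)[:, 0].min()
    mask = np.zeros_like(patch)
    mask[first_row] = remaining[first_row]
    return mask

def iterative_instance_segmentation(volume):
    """Iterate: segment one not-yet-segmented vertebra per pass,
    recording it in the instance memory, until none remain."""
    memory = np.zeros_like(volume, dtype=bool)
    instances = []
    while True:
        mask = segment_next_vertebra(volume.astype(bool), memory)
        if mask is None:
            break
        memory |= mask           # remember the segmented instance
        instances.append(mask)
    return instances

# Toy "volume": three vertebra-like rows of foreground voxels
vol = np.ones((3, 4), dtype=bool)
print(len(iterative_instance_segmentation(vol)))  # 3 instances
```

The instance memory is what lets a patch-based network distinguish the current target vertebra from its already-processed, near-identical neighbours.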

2021 ◽  
Vol 15 ◽  
Author(s):  
Majid Dherar Younus ◽  
Mohammad J M Zedan ◽  
Fahad Layth Malallah ◽  
Mustafa Ghanem Saeed

Background: Coronavirus disease (COVID-19) first appeared in Wuhan, China, as an acute respiratory syndrome and spread rapidly; it has been declared a pandemic by the WHO. There is therefore an urgent need for an accurate computer-aided method to assist clinicians in identifying COVID-19-infected patients from computed tomography (CT) images. The contribution of this paper is a pre-processing technique that increases the recognition rate compared to techniques existing in the literature. Methods: The proposed pre-processing technique, which consists of both contrast enhancement and an open-morphology filter, is highly effective in decreasing the diagnosis error rate. After pre-processing, CT images are fed to a 15-layer convolutional neural network (CNN) for the deep-learning training and testing operations. The dataset used in this research has been publicly published; its CT images were collected from hospitals in Sao Paulo, Brazil. It comprises 2482 CT scans: 1252 scans of SARS-CoV-2-infected patients and 1230 scans of patients not infected with SARS-CoV-2. Results: The proposed detection method achieves up to 97.8% accuracy, which outperforms the reported accuracy of 97.3% on the same dataset. Conclusion: The proposed methodology improves accuracy by 0.5 percentage points over the published dataset's method.
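A minimal sketch of the two-stage pre-processing (contrast enhancement followed by an open-morphology filter) might look as follows. The percentile-based contrast stretch and the 3x3 structuring element are assumptions; the paper's exact enhancement parameters may differ:

```python
import numpy as np
from scipy import ndimage

def preprocess_ct(image, low_pct=2, high_pct=98, opening_size=3):
    """Contrast-stretch to a percentile window, then apply a grayscale
    open-morphology filter to suppress small bright noise structures.
    (Sketch only: window percentiles and structuring-element size are
    illustrative assumptions, not the paper's exact settings.)"""
    lo, hi = np.percentile(image, [low_pct, high_pct])
    stretched = np.clip((image - lo) / max(hi - lo, 1e-8), 0.0, 1.0)
    opened = ndimage.grey_opening(stretched, size=(opening_size, opening_size))
    return opened

rng = np.random.default_rng(0)
ct = rng.normal(0.5, 0.1, size=(64, 64))   # synthetic CT slice
out = preprocess_ct(ct)
print(out.shape)  # (64, 64), values in [0, 1]
```

The opening step (erosion followed by dilation) removes speckle smaller than the structuring element while preserving larger anatomical structures, which is why it helps before CNN training.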


2021 ◽  
Vol 11 (10) ◽  
pp. 2618-2625
Author(s):  
R. T. Subhalakshmi ◽  
S. Appavu Alias Balamurugan ◽  
S. Sasikala

In recent times, the COVID-19 epidemic has grown at an extreme rate, while only an inadequate number of rapid testing kits is available. Consequently, it is essential to develop automated techniques for COVID-19 detection that recognize the presence of the disease from radiological images. The most common symptoms of COVID-19 are sore throat, fever, and dry cough; symptoms can progress to a severe form of pneumonia with serious complications. As medical imaging is not currently recommended in Canada for primary COVID-19 diagnosis, computer-aided diagnosis systems might aid in the early detection of COVID-19 abnormalities, help monitor disease progression, and potentially reduce mortality rates. In this approach, a deep learning-based design for feature extraction and classification is employed for automatic COVID-19 diagnosis from computed tomography (CT) images. The proposed model operates in three main stages: pre-processing, feature extraction, and classification. The design incorporates the fusion of deep features using GoogLeNet models. Finally, a multi-scale recurrent neural network (RNN) based classifier is applied to identify and classify the test CT images into distinct class labels. The experimental validation of the proposed model uses the open-source COVID-CT dataset, which comprises a total of 760 CT images. The experimental outcome demonstrated superior performance with maximum sensitivity, specificity, and accuracy.
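The fuse-then-classify pipeline can be illustrated with a deliberately small numpy sketch: stand-in feature vectors take the place of GoogLeNet activations, features from two branches are fused by concatenation (the fusion rule here is an assumption), and a minimal tanh recurrence stands in for the multi-scale RNN classifier:

```python
import numpy as np

def fuse_features(deep_feats):
    """Fuse per-branch deep features by concatenation (an assumed
    fusion rule; the paper fuses GoogLeNet-derived features)."""
    return np.concatenate(deep_feats, axis=-1)

def rnn_classify(seq, W_h, W_x, W_out):
    """Minimal recurrent classifier: run a tanh RNN over the fused
    feature sequence and classify from the final hidden state."""
    h = np.zeros(W_h.shape[0])
    for x in seq:
        h = np.tanh(W_h @ h + W_x @ x)
    logits = W_out @ h
    return int(np.argmax(logits))

rng = np.random.default_rng(1)
feats_a = rng.normal(size=(5, 8))   # stand-in features, branch A (5 scales)
feats_b = rng.normal(size=(5, 8))   # stand-in features, branch B
seq = fuse_features([feats_a, feats_b])      # (5, 16) fused sequence
W_h = rng.normal(size=(12, 12)) * 0.1
W_x = rng.normal(size=(12, 16)) * 0.1
W_out = rng.normal(size=(3, 12))             # 3 hypothetical class labels
label = rnn_classify(seq, W_h, W_x, W_out)
print(label in {0, 1, 2})  # True: one of the three class labels
```

In the actual model the weights would be learned end to end and an LSTM/GRU-style cell would replace the plain tanh recurrence; this sketch only fixes the data flow.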


2020 ◽  
Vol 21 (S6) ◽  
Author(s):  
Jianqiang Li ◽  
Guanghui Fu ◽  
Yueda Chen ◽  
Pengzhi Li ◽  
Bo Liu ◽  
...  

Abstract Background Screening of brain computerised tomography (CT) images is a primary method currently used for initial detection of patients with brain trauma or other conditions. In recent years, deep learning techniques have shown remarkable advantages in clinical practice, and researchers have attempted to use deep learning methods to detect brain diseases from CT images. Methods often used to detect diseases choose images with visible lesions from full-slice brain CT scans, which need to be labelled by doctors. This is an inaccurate approach, because doctors detect brain disease from a full sequence of CT slices, and one patient may have multiple concurrent conditions in practice. Such methods cannot take into account the dependencies between slices or the causal relationships among various brain diseases. Moreover, labelling images slice by slice costs considerable time and expense. Detecting multiple diseases from full-slice brain CT images is, therefore, an important research subject with practical implications. Results In this paper, we propose a model called the slice dependencies learning model (SDLM). It learns image features from a series of variable-length brain CT scans, together with the slice dependencies between different slices in a set of images, to predict abnormalities. The model requires only labels for the diseases reflected in the full-slice brain scan. We use the CQ500 dataset to evaluate our proposed model; it contains 1194 full sets of CT scans from a total of 491 subjects. Each subject's data contains scans with one to eight different slice thicknesses and various diseases, captured in 30 to 396 slices per set. The evaluation results show a precision of 67.57%, a recall of 61.04%, an F1 score of 0.6412, and an area under the receiver operating characteristic curve (AUC) of 0.8934.
Conclusion The proposed model is a new architecture that uses a full-slice brain CT scan for multi-label classification, unlike traditional methods that only classify brain images at the slice level. It has great potential for application to multi-label detection problems, especially with regard to brain CT images.
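The core idea, carrying information across a variable-length slice sequence and emitting one score per disease, can be sketched in a few lines. The per-slice CNN is replaced by a linear stand-in, and the recurrence and the four-label output head are illustrative assumptions:

```python
import numpy as np

def slice_features(slices, W):
    # Stand-in per-slice feature extractor (the paper uses a CNN).
    return slices @ W

def predict_multilabel(slices, W_feat, W_seq, W_out, threshold=0.5):
    """SDLM-style sketch: encode each slice, let a simple recurrence
    carry inter-slice dependencies along the scan, then emit one
    sigmoid score per disease (multi-label, not multi-class)."""
    feats = slice_features(slices, W_feat)
    h = np.zeros(W_seq.shape[0])
    for f in feats:                      # variable-length slice sequence
        h = np.tanh(W_seq @ h + f)
    scores = 1.0 / (1.0 + np.exp(-(W_out @ h)))
    return scores > threshold            # one boolean per disease label

rng = np.random.default_rng(2)
scan = rng.normal(size=(37, 64))         # 37 slices, 64 features each
W_feat = rng.normal(size=(64, 16)) * 0.1
W_seq = rng.normal(size=(16, 16)) * 0.1
W_out = rng.normal(size=(4, 16))         # 4 hypothetical disease labels
labels = predict_multilabel(scan, W_feat, W_seq, W_out)
print(labels.shape)  # (4,) — one decision per disease
```

Because the prediction is made from the whole sequence, only scan-level labels are needed for training, which is exactly what removes the per-slice annotation burden described above.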


Diagnostics ◽  
2019 ◽  
Vol 9 (4) ◽  
pp. 207 ◽  
Author(s):  
Dana Li ◽  
Bolette Mikela Vilmun ◽  
Jonathan Frederik Carlsen ◽  
Elisabeth Albrecht-Beste ◽  
Carsten Ammitzbøl Lauridsen ◽  
...  

The aim of this study was to systematically review the performance of deep learning technology in detecting and classifying pulmonary nodules on computed tomography (CT) scans that were not from the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) database. Furthermore, we explored the difference in performance when the deep learning technology was applied to test datasets different from the training datasets. Only peer-reviewed, original research articles utilizing deep learning technology were included in this study, and only results from testing on datasets other than the LIDC-IDRI were included. We searched a total of six databases: EMBASE, PubMed, Cochrane Library, the Institute of Electrical and Electronics Engineers, Inc. (IEEE), Scopus, and Web of Science. This resulted in 1782 studies after duplicates were removed, and a total of 26 studies were included in this systematic review. Three studies explored the performance of pulmonary nodule detection only, 16 studies explored the performance of pulmonary nodule classification only, and 7 studies reported both pulmonary nodule detection and classification. Three different deep learning architectures were mentioned amongst the included studies: convolutional neural network (CNN), massive training artificial neural network (MTANN), and deep stacked denoising autoencoder extreme learning machine (SDAE-ELM). The studies reached a classification accuracy of 68–99.6% and a detection accuracy of 80.6–94%. Performance of deep learning technology in studies using different test and training datasets was comparable to that in studies using the same type of test and training datasets. In conclusion, deep learning was able to achieve high levels of accuracy, sensitivity, and/or specificity in detecting and/or classifying nodules when applied to pulmonary CT scans not from the LIDC-IDRI database.


2020 ◽  
Vol 36 (Supplement_2) ◽  
pp. i709-i717
Author(s):  
Wenjing Xuan ◽  
Ning Liu ◽  
Neng Huang ◽  
Yaohang Li ◽  
Jianxin Wang

Abstract Motivation Determining the structures of proteins is a critical step in understanding their biological functions. The crystallography-based X-ray diffraction technique is the main method for experimental protein structure determination. However, the underlying crystallization process, which requires multiple time-consuming and costly experimental steps, has a high attrition rate. To overcome this issue, a series of in silico methods have been developed with the primary aim of selecting the protein sequences that are promising candidates for crystallization. However, the predictive performance of current methods is modest. Results We propose a deep learning model, called CLPred, which uses a bidirectional recurrent neural network with long short-term memory (BLSTM) to capture the long-range interaction patterns among k-mers of amino acids to predict protein crystallizability. Using sequence information only, CLPred outperforms the existing deep-learning predictors and a vast majority of sequence-based diffraction-quality crystal predictors on three independent test sets. The results highlight the effectiveness of BLSTM in capturing non-local, long-range inter-peptide interaction patterns to distinguish proteins that can yield diffraction-quality crystals from those that cannot. CLPred steadily improves over previous window-based neural networks and is able to predict crystallization propensity with high accuracy. CLPred can also be improved significantly by incorporating additional features from pre-extracted evolutionary, structural and physicochemical characteristics. The correctness of CLPred predictions is further validated by case studies of Sox transcription factor family member proteins and Zika virus non-structural proteins. Availability and implementation https://github.com/xuanwenjing/CLPred.
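The k-mer tokenization and the bidirectional pass over the sequence can be illustrated with a small numpy sketch. A shared tanh cell stands in for CLPred's LSTM, and the toy embeddings and dimensions are assumptions, not the published architecture:

```python
import numpy as np

def kmers(seq, k=3):
    """Split an amino-acid sequence into overlapping k-mers."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def bidirectional_encode(xs, W, U):
    """Minimal bidirectional recurrence (stand-in for a BLSTM): run the
    same tanh cell forward and backward over the k-mer embeddings and
    concatenate the final hidden states, so context from both ends of
    the sequence reaches the encoding."""
    def run(seq):
        h = np.zeros(W.shape[0])
        for x in seq:
            h = np.tanh(W @ h + U @ x)
        return h
    return np.concatenate([run(xs), run(xs[::-1])])

rng = np.random.default_rng(3)
seq = "MKTAYIAKQR"                           # toy amino-acid sequence
grams = kmers(seq)                           # ['MKT', 'KTA', ...]
embed = {g: rng.normal(size=8) for g in grams}   # toy k-mer embeddings
xs = [embed[g] for g in grams]
W = rng.normal(size=(6, 6)) * 0.1
U = rng.normal(size=(6, 8)) * 0.1
h = bidirectional_encode(xs, W, U)
print(len(grams), h.shape)  # 8 k-mers, (12,) encoding
```

A classifier head on this fixed-size encoding would then output the crystallization propensity; the bidirectional pass is what lets the encoding reflect non-local, long-range inter-peptide patterns.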


2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Lingling Han ◽  
Yue Chen ◽  
Weidong Cheng ◽  
He Bai ◽  
Jian Wang ◽  
...  

Objective. This study aimed to optimize the CT images of anal fistula patients using a convolutional neural network (CNN) algorithm to investigate anal function recovery. Methods. Fifty-seven patients with complex anal fistulas admitted to our hospital from January 2020 to February 2021 were selected as research subjects. CT images of 34 of them were processed using the deep learning neural network (the experimental group), and the remaining 23 unprocessed cases formed the control group. Whether CT images were processed depended on the patient’s own wish. The imaging results were compared with the findings observed during surgery. Results. In the experimental group, the images were clearer, with DSC = 0.89, precision = 0.98, and recall = 0.87, indicating good processing effects. The CT imaging results in the experimental group were more consistent with the intraoperative findings, and the difference was notable ( P < 0.05 ). Furthermore, the experimental group had lower RP (mmHg) and AMCP (mmHg) scores and a lower postoperative recurrence rate, with notable differences ( P < 0.05 ). Conclusion. CT images processed by deep learning are clearer, leading to higher accuracy of preoperative diagnosis, which supports their use in clinical practice.
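The three reported metrics (DSC, precision, recall) have standard definitions on binary masks, which can be computed directly:

```python
import numpy as np

def dsc(pred, truth):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def precision_recall(pred, truth):
    """Precision = TP/|pred|, recall = TP/|truth| on binary masks."""
    tp = np.logical_and(pred, truth).sum()
    prec = tp / max(pred.sum(), 1)
    rec = tp / max(truth.sum(), 1)
    return prec, rec

# Toy 2x3 masks: 2 overlapping pixels, 3 predicted, 3 true
pred = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
truth = np.array([[1, 1, 0], [0, 0, 1]], dtype=bool)
print(round(dsc(pred, truth), 3))    # 2*2/(3+3) = 0.667
p, r = precision_recall(pred, truth)
print(round(p, 3), round(r, 3))      # 0.667 0.667
```

These are the same formulas behind the study's reported DSC = 0.89, precision = 0.98, and recall = 0.87, evaluated against the intraoperative reference.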


Multiple medical images of different modalities are fused together to generate a new, more informative image, thereby reducing the treatment planning time of medical practitioners. In recent years, wavelets and deep learning methods have been widely used in various image processing applications. In this study, we present a convolutional neural network and wavelet based fusion of MR and CT images of the lumbar spine to generate a single image which comprises all the important features of both MR and CT images. Both CT and MR images are first decomposed into detail and approximation coefficients using wavelets. The corresponding detail and approximation coefficients are then fused using a convolutional neural network framework. The inverse wavelet transform is then used to generate the fused image. The experimental results indicate that the proposed approach achieves good performance compared to conventional methods.
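The decompose-fuse-reconstruct pipeline can be sketched with a one-level Haar transform. Here a conventional hand-crafted rule (average the approximation bands, keep the larger-magnitude detail coefficients) replaces the paper's learned CNN fusion, so this is a baseline of the kind the paper compares against, not the proposed method:

```python
import numpy as np

def haar_decompose(img):
    """One-level 2-D Haar transform: approximation + three detail bands."""
    p00, p01 = img[0::2, 0::2], img[0::2, 1::2]
    p10, p11 = img[1::2, 0::2], img[1::2, 1::2]
    a = (p00 + p01 + p10 + p11) / 4   # approximation
    h = (p00 - p01 + p10 - p11) / 4   # horizontal detail
    v = (p00 + p01 - p10 - p11) / 4   # vertical detail
    d = (p00 - p01 - p10 + p11) / 4   # diagonal detail
    return a, h, v, d

def haar_reconstruct(a, h, v, d):
    """Exact inverse of haar_decompose."""
    out = np.zeros((a.shape[0] * 2, a.shape[1] * 2))
    out[0::2, 0::2] = a + h + v + d
    out[0::2, 1::2] = a - h + v - d
    out[1::2, 0::2] = a + h - v - d
    out[1::2, 1::2] = a - h - v + d
    return out

def fuse(mr, ct):
    """Average approximation bands, pick larger-magnitude details
    (a common rule-based baseline; the paper learns this step)."""
    (am, hm, vm, dm) = haar_decompose(mr)
    (ac, hc, vc, dc) = haar_decompose(ct)
    pick = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)
    return haar_reconstruct((am + ac) / 2, pick(hm, hc),
                            pick(vm, vc), pick(dm, dc))

mr, ct = np.ones((4, 4)), np.zeros((4, 4))   # toy modality images
print(fuse(mr, ct).mean())  # 0.5 — averaged approximation dominates
```

In the proposed approach, the `pick`/average rules above would be replaced by a CNN that learns which coefficients to keep from each modality.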


2021 ◽  
Vol 11 ◽  
Author(s):  
Ge Ren ◽  
Sai-kit Lam ◽  
Jiang Zhang ◽  
Haonan Xiao ◽  
Andy Lai-yin Cheung ◽  
...  

Functional lung avoidance radiation therapy aims to minimize dose delivery to normal lung tissue while favoring dose deposition in defective lung tissue, based on regional function information. However, the clinical acquisition of pulmonary functional images is resource-demanding, inconvenient, and technically challenging. This study aims to investigate deep learning-based lung functional image synthesis from the CT domain. Forty-two pulmonary macro-aggregated albumin SPECT/CT perfusion scans were retrospectively collected from the hospital. A deep learning-based framework (including image preparation, image processing, and the proposed convolutional neural network) was adopted to extract features from 3D CT images and synthesize perfusion as estimations of regional lung function. Ablation experiments were performed to assess the effects of each framework component by removing each element of the framework and analyzing the testing performances. Major results showed that the removal of the CT contrast enhancement component in the image processing resulted in the largest drop in framework performance, compared to the optimal performance (~12%). In the CNN part, all three components (residual module, ROI attention, and skip attention) were approximately equally important to the framework performance; removing one of them resulted in a 3–5% decline in performance. The proposed CNN improved overall performance by ~4% and computational efficiency by ~350%, compared to the U-Net model. The deep convolutional neural network, in conjunction with image processing for feature enhancement, is capable of feature extraction from CT images for pulmonary perfusion synthesis. In the proposed framework, image processing, especially CT contrast enhancement, plays a crucial role in the perfusion synthesis.
This CTPM framework provides insights for relevant future research and enables other researchers to leverage it when developing optimized CNN models for functional lung avoidance radiation therapy.
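Since the ablation study singles out CT contrast enhancement as the most important processing step, it is worth seeing what such a step typically looks like. A common form is intensity windowing of Hounsfield units; the lung-window center/width values below are standard clinical settings assumed for illustration, not necessarily the paper's exact parameters:

```python
import numpy as np

def lung_window(hu, center=-600.0, width=1500.0):
    """CT contrast enhancement by intensity windowing: clip Hounsfield
    units to a lung window and rescale to [0, 1]. (Typical lung-window
    settings are assumed; the paper's enhancement step may differ.)"""
    lo, hi = center - width / 2, center + width / 2
    return (np.clip(hu, lo, hi) - lo) / (hi - lo)

# Air (-1350 clipped), mid-window (-600), soft tissue (150+) saturate
hu = np.array([-1350.0, -600.0, 150.0, 500.0])
print(lung_window(hu))  # 0.0, 0.5, 1.0, 1.0
```

Windowing concentrates the dynamic range on lung parenchyma, which plausibly explains why removing the enhancement step costs the framework ~12% performance.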

