COVID-CLNet: COVID-19 Detection with Compressive Deep Learning Approaches

The COVID-19 pandemic is one of the most serious global health threats. Increasing diagnostic capacity is key to slowing its spread. Therefore, to help radiologists and other medical professionals detect and identify COVID-19 cases in the shortest possible time, we propose a computer-aided detection (CADe) system that uses computed tomography (CT) scan images. The proposed boosted deep learning network (CLNet) implements deep learning (DL) networks as a complement to compressive learning (CL). We apply our inception feature extraction technique in the measurement domain, using CL to represent the data features in a new space of lower dimensionality before they reach the convolutional neural network. All original features contribute equally to the new space through a sensing matrix. Experiments performed with different compression methods show promising results for COVID-19 detection.
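The core idea of the compressive measurement step can be illustrated in a few lines: every original feature contributes to each compressed measurement via a random sensing matrix. This is a minimal sketch, not the paper's implementation; the dimensions and the Gaussian choice of matrix are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

n_features = 4096      # e.g. a flattened 64x64 CT patch (assumed size)
n_measurements = 256   # reduced dimensionality in the measurement domain

# Gaussian sensing matrix: each row is a random projection, so all
# original features contribute to every compressed measurement.
Phi = rng.normal(0.0, 1.0 / np.sqrt(n_measurements),
                 size=(n_measurements, n_features))

x = rng.normal(size=n_features)   # stand-in for one CT image patch
y = Phi @ x                       # compressed representation fed to the CNN

print(y.shape)  # (256,)
```

The classifier then operates on `y` instead of `x`, trading a small loss of information for a much smaller input space.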

Author(s):  
Yongfeng Gao ◽  
Jiaxing Tan ◽  
Zhengrong Liang ◽  
Lihong Li ◽  
Yumei Huo

Computer-aided detection (CADe) of pulmonary nodules plays an important role in assisting radiologists' diagnosis and alleviating the interpretation burden for lung cancer. Current CADe systems, which aim to simulate radiologists' examination procedure, are built upon computed tomography (CT) images with feature extraction for detection and diagnosis. The CT image a human perceives is reconstructed from the sinogram, the original raw data acquired from the CT scanner. In this work, unlike conventional image-based CADe systems, we propose a novel sinogram-based CADe system in which the full projection information is used to explore additional effective features of nodules in the sinogram domain. Facing the challenges of limited research on this concept and unknown effective features in the sinogram domain, we design a new CADe system that uses the self-learning power of the convolutional neural network to learn and extract effective features from sinograms. The proposed system was validated on 208 patient cases from the publicly available online Lung Image Database Consortium database, with each case having at least one juxtapleural nodule annotation. Experimental results showed that our proposed method achieved an area under the receiver operating characteristic curve (AUC) of 0.91 based on the sinogram alone, compared with 0.89 based on the CT image alone. Moreover, combining sinogram and CT image further improved the AUC to 0.92. This study indicates that pulmonary nodule detection in the sinogram domain is feasible with deep learning.
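To make the sinogram domain concrete: a parallel-beam sinogram stacks line-integral projections of the image taken at many angles. The toy implementation below uses nearest-neighbour rotation in pure NumPy; it is an illustrative approximation (real scanners and the Radon transform use proper interpolation and beam geometry), and the image size and angle set are assumptions.

```python
import numpy as np

def toy_sinogram(img, angles):
    """Parallel-beam projections by nearest-neighbour rotation (toy model):
    row i of the result holds the line integrals of img at angles[i]."""
    n = img.shape[0]
    c = (n - 1) / 2.0
    ys, xs = np.mgrid[0:n, 0:n]
    sino = np.empty((len(angles), n))
    for i, theta in enumerate(angles):
        t = np.deg2rad(theta)
        # rotate the sampling grid about the image centre
        xr = np.cos(t) * (xs - c) - np.sin(t) * (ys - c) + c
        yr = np.sin(t) * (xs - c) + np.cos(t) * (ys - c) + c
        xi = np.clip(np.rint(xr).astype(int), 0, n - 1)
        yi = np.clip(np.rint(yr).astype(int), 0, n - 1)
        rotated = img[yi, xi]
        sino[i] = rotated.sum(axis=0)   # line integrals along columns
    return sino

img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0                         # a square stand-in "nodule"
sino = toy_sinogram(img, angles=range(0, 180, 5))
print(sino.shape)  # (36, 64)
```

A sinogram-based CADe network would take `sino` (one row per projection angle) as its input instead of the reconstructed image.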


Sensors ◽  
2020 ◽  
Vol 20 (6) ◽  
pp. 1593 ◽  
Author(s):  
Yanlei Gu ◽  
Huiyang Zhang ◽  
Shunsuke Kamijo

Image-based human behavior and activity understanding has been a hot topic in computer vision and multimedia. As an important part of it, skeleton estimation, also called pose estimation, has attracted considerable interest. For pose estimation, most deep learning approaches focus mainly on the joint feature. However, the joint feature alone is not sufficient, especially when the image contains multiple people and a pose is occluded or not fully visible. This paper proposes a novel multi-task framework for multi-person pose estimation. The proposed framework is developed based on Mask Region-based Convolutional Neural Networks (R-CNN) and extended to integrate the joint feature, body boundary, body orientation and occlusion condition together. To further improve the performance of multi-person pose estimation, this paper proposes organizing the different information in serial multi-task models instead of the widely used parallel multi-task network. The proposed models are trained on the public Common Objects in Context (COCO) dataset, which is further augmented with ground truths for body orientation and mutual-occlusion masks. Experiments demonstrate the performance of the proposed method for multi-person pose estimation and body orientation estimation. The proposed method achieves a Percentage of Correct Keypoints (PCK) of 84.6% and a Correct Detection Rate (CDR) of 83.7%. Comparisons further illustrate that the proposed model reduces over-detection compared with other methods.
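The PCK metric reported above can be sketched simply: a predicted joint counts as correct if it lies within a distance threshold of the ground-truth joint. This is a minimal illustration; in practice the threshold is usually normalised by head or torso size, and the coordinates below are invented for the example.

```python
import numpy as np

def pck(pred, gt, threshold):
    """Percentage of Correct Keypoints: fraction of predicted joints
    that land within `threshold` (here, pixels) of the ground truth."""
    dists = np.linalg.norm(pred - gt, axis=-1)
    return float((dists <= threshold).mean())

# Four (x, y) joints: three predictions are close, one is far off.
gt   = np.array([[10.0, 10.0], [50.0, 40.0], [30.0, 80.0], [70.0, 20.0]])
pred = np.array([[11.0, 12.0], [49.0, 41.0], [45.0, 95.0], [70.0, 21.0]])
print(pck(pred, gt, threshold=5.0))  # 3 of 4 joints within 5 px -> 0.75
```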


2020 ◽  
Author(s):  
Ezenwoko Benson ◽  
Lukas Rier ◽  
Isawan Millican ◽  
Sue Pritchard ◽  
Carolyn Costigan ◽  
...  

Colonic volume content measurements can provide important information about digestive tract physiology. Development of automated analyses will accelerate the translation of these measurements into clinical practice. In this paper, we test the effect of data dimensionality on the success of deep learning approaches to segmenting colons from MRI data. Deep learning network models were developed that used 2D slices, complete 3D volumes, or 2.5D partial volumes. These represent variations in the trade-off between the size and complexity of a network and its training regime, and the limitation of only being able to use a small section of the data at a time: full 3D networks, for example, have more image context available for decision making but require more powerful hardware. For the datasets used here, 3D data outperformed 2.5D data, which in turn performed better than 2D data. The maximum Dice scores achieved by the networks were 0.898, 0.834 and 0.794, respectively. We also considered the effect of ablating varying amounts of data on the ability of the networks to label images correctly, achieving Dice scores of 0.829, 0.827 and 0.389 for 3D single-slice ablation, 3D multi-slice ablation and 2.5D middle-slice ablation. In addition, we examined another practical consideration of deep learning: how well a network performs on data from another acquisition device. Networks trained on images from a Philips Achieva MRI system yielded Dice scores of up to 0.77 in the 3D case when tested on images captured with a GE Medical Systems HDxt (both 1.5 Tesla) without any retraining. We also considered the effect of single versus multimodal MRI data, showing that the single-modality Dice score can be boosted from 0.825 to 0.898 by adding an extra modality.
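The Dice score used throughout these comparisons measures the overlap between a predicted segmentation mask and the ground truth: twice the intersection divided by the sum of the two mask sizes. A minimal sketch (the masks below are invented for the example):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

a = np.zeros((8, 8), dtype=int); a[2:6, 2:6] = 1   # 16-pixel mask
b = np.zeros((8, 8), dtype=int); b[3:7, 3:7] = 1   # 16 pixels, 9 overlapping
print(round(dice_score(a, b), 4))  # 2*9 / 32 = 0.5625
```

A score of 1.0 means perfect overlap; the `eps` term only guards against division by zero when both masks are empty.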


Author(s):  
A. Kala ◽  
S. Ganesh Vaidyanathan

Rainfall forecasting is a critical and challenging task because it depends on many climatic and weather parameters. Hence, robust and accurate rainfall forecasting models need to be created by applying machine learning and deep learning approaches. Several automatic weather-prediction systems have been created, but their performance depends on the weather pattern, season and location, which increases processing time. Therefore, in this work, an artificial algae long short-term memory (LSTM) deep learning network is introduced to forecast monthly rainfall. The Homogeneous Indian Monthly Rainfall Data Set (1871–2016) is used to collect the rainfall information. The gathered information is processed with an LSTM, which can handle time series data and model dependencies in the data effectively. The most challenging phase of the LSTM training process is finding optimal network parameters such as weights and biases. To obtain the optimal parameters, a metaheuristic bio-inspired algorithm called the Artificial Algae Algorithm (AAA) is used. The forecasted rainfall for the testing dataset is compared with existing models. The forecasted results show the superiority of our model over state-of-the-art models for forecasting Indian monsoon rainfall. The LSTM model combined with AAA accurately predicts the monsoon from June to September.
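The general shape of such a metaheuristic parameter search can be sketched as a population of candidate parameter vectors that are locally perturbed, keeping only improvements. This is a generic stand-in, not the Artificial Algae Algorithm itself (AAA adds mechanisms such as helical movement and energy-based selection), and the toy linear objective replaces the paper's LSTM training loss; all data here is synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(w, X, y):
    """Forecast error (MSE) of a toy linear predictor; the paper's
    objective would instead be the LSTM's training loss."""
    return float(np.mean((X @ w - y) ** 2))

# Synthetic stand-in data (the paper uses the Homogeneous Indian
# Monthly Rainfall Data Set, not reproduced here).
X = rng.normal(size=(50, 3))
true_w = np.array([0.5, -1.2, 2.0])
y = X @ true_w

pop = rng.normal(size=(20, 3))                # candidate parameter vectors
init_err = min(fitness(w, X, y) for w in pop)

for _ in range(200):
    trial = pop + rng.normal(scale=0.1, size=pop.shape)   # local move
    for i in range(len(pop)):
        if fitness(trial[i], X, y) < fitness(pop[i], X, y):
            pop[i] = trial[i]                 # keep only improvements

best_err = min(fitness(w, X, y) for w in pop)
print(best_err < init_err)
```

The appeal of this family of methods is that the objective need not be differentiable, which is why they pair naturally with searching over network hyperparameters or initial weights.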


2020 ◽  
Author(s):  
Ilya Belevich ◽  
Eija Jokitalo

Deep learning approaches are highly sought-after solutions for coping with large amounts of collected data and are expected to become an essential part of imaging workflows. However, in most cases, deep learning is still considered a complex task that only image analysis experts can master. DeepMIB addresses this problem and provides the community with a user-friendly, open-source tool to train convolutional neural networks and apply them to segment 2D and 3D light and electron microscopy datasets.


2019 ◽  
Vol 2019 (1) ◽  
pp. 360-368
Author(s):  
Mekides Assefa Abebe ◽  
Jon Yngve Hardeberg

Various whiteboard image degradations greatly reduce the legibility of pen-stroke content as well as the overall quality of the images. Consequently, researchers have addressed the problem through various image enhancement techniques. Most state-of-the-art approaches apply common image processing techniques such as background-foreground segmentation, text extraction, contrast and color enhancement, and white balancing. However, such conventional enhancement methods are incapable of recovering severely degraded pen-stroke content and produce artifacts in the presence of complex pen-stroke illustrations. To surmount these problems, the authors propose a deep learning based solution. They contribute a new whiteboard image dataset and adapt two deep convolutional neural network architectures for whiteboard image quality enhancement. Their evaluations of the trained models demonstrate superior performance over conventional methods.
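For contrast, one of the "conventional" enhancement steps the abstract refers to can be sketched as naive background normalisation: estimate the bright board background with a coarse per-block maximum, then divide it out so pen strokes stand out against uniform white. The function name, block size, and toy image below are assumptions for illustration, not the authors' method.

```python
import numpy as np

def clean_whiteboard(img, block=8):
    """Naive background normalisation for a greyscale whiteboard image in
    [0, 1]: approximate the illumination with a coarse per-block maximum,
    then divide it out so the background maps to white."""
    h, w = img.shape
    bg = np.ones_like(img)
    for i in range(0, h, block):
        for j in range(0, w, block):
            bg[i:i+block, j:j+block] = img[i:i+block, j:j+block].max()
    return np.clip(img / np.maximum(bg, 1e-6), 0.0, 1.0)

board = np.full((16, 16), 0.7)     # greyish, dimly lit board
board[4:6, 2:14] = 0.1             # a dark pen stroke
out = clean_whiteboard(board)
print(out.max() == 1.0)            # background normalised to white
```

This kind of per-block heuristic is exactly what fails on severely degraded strokes, which motivates the learned approach in the abstract.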


2019 ◽  
Author(s):  
Qian Wu ◽  
Weiling Zhao ◽  
Xiaobo Yang ◽  
Hua Tan ◽  
Lei You ◽  
...  

2020 ◽  
Author(s):  
Priyanka Meel ◽  
Farhin Bano ◽  
Dr. Dinesh K. Vishwakarma
