A Critical Analysis of Biomedical Image Classification on Deep Learning

Author(s):  
Rose Lu ◽  
Dawei Pan

In computer-aided diagnosis, deep convolutional neural networks have become a crucial method for medical image classification. Conventional methods rely primarily on shape, colour, or texture feature descriptors and their combinations; most of these are problem-specific and have been shown to capture only complementary aspects of the image data, so the resulting frameworks cannot represent high-level problem entities and generalize poorly. Emerging Deep Learning (DL) techniques have made it possible to build end-to-end models that learn the final detection framework directly from raw clinical image data. DL methods, on the other hand, suffer from high computational requirements and modelling costs owing to the high resolution of clinical images and the small size of the available datasets. To mitigate these concerns, we provide a DL technique and paradigm that blends high-level features generated by a deep network with classical handcrafted features. Constructing the suggested model involves the following stages. First, we train a DL model in a supervised manner as a coding system; as a consequence, it can convert the raw pixels of medical images into feature vectors that potentially reflect high-level concepts for image categorization. Second, using background information about the image data, we derive a collection of conventional handcrafted features. Last, to combine the feature groups produced during the first and second stages, we develop an appropriate fusion method based on deep neural networks. Reference medical imaging datasets are used to assess the suggested method. We obtain overall classification accuracies of 90.1 percent and 90.2 percent, which are higher than those of existing effective approaches.
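To make the fusion step concrete, here is a minimal sketch in PyTorch of concatenating deep features with handcrafted features before a small classification head. The toy CNN, the layer sizes and the FusionClassifier name are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): fusing deep CNN features with
# handcrafted features through a small fully connected fusion network.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, deep_dim=512, handcrafted_dim=32, num_classes=2):
        super().__init__()
        # Deep feature extractor stands in for the supervised "coding system".
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, deep_dim), nn.ReLU(),
        )
        # Fusion head concatenates both feature groups before classification.
        self.head = nn.Sequential(
            nn.Linear(deep_dim + handcrafted_dim, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, image, handcrafted):
        deep = self.cnn(image)
        fused = torch.cat([deep, handcrafted], dim=1)
        return self.head(fused)

model = FusionClassifier()
logits = model(torch.randn(4, 1, 64, 64), torch.randn(4, 32))
print(logits.shape)  # torch.Size([4, 2])
```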

Author(s):  
Hao Zheng ◽  
Lin Yang ◽  
Jianxu Chen ◽  
Jun Han ◽  
Yizhe Zhang ◽  
...  

Deep learning has been applied successfully to many biomedical image segmentation tasks. However, due to the diversity and complexity of biomedical image data, manual annotation for training common deep learning models is very time-consuming and labor-intensive, especially because normally only biomedical experts can annotate image data well. Human experts are often involved in a long and iterative annotation process, as in active-learning-style annotation schemes. In this paper, we propose representative annotation (RA), a new deep learning framework for reducing annotation effort in biomedical image segmentation. RA uses unsupervised networks for feature extraction and selects representative image patches for annotation in the latent space of the learned feature descriptors, which implicitly characterizes the underlying data while minimizing redundancy. A fully convolutional network (FCN) is then trained on the annotated selected image patches for image segmentation. Our RA scheme offers three compelling advantages: (1) it leverages the ability of deep neural networks to learn better representations of image data; (2) it performs one-shot selection for manual annotation and frees annotators from the iterative process of common active-learning-based annotation schemes; (3) it can be deployed to 3D images with simple extensions. We evaluate our RA approach on three datasets (two 2D and one 3D) and show that our framework yields segmentation results competitive with state-of-the-art methods.
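The one-shot selection step can be illustrated with a simple clustering criterion in latent space. The sketch below uses k-means centroids as a stand-in for RA's representativeness measure; the function name and the annotation-budget parameter are assumptions for illustration only.

```python
# Illustrative sketch only: one-shot selection of representative patches by
# clustering unsupervised feature descriptors (k-means centroids stand in
# for RA's representativeness criterion in latent space).
import numpy as np
from sklearn.cluster import KMeans

def select_representative_patches(features, annotation_budget):
    """features: (n_patches, latent_dim) array from an unsupervised encoder."""
    kmeans = KMeans(n_clusters=annotation_budget, n_init=10, random_state=0)
    kmeans.fit(features)
    # Pick the patch closest to each cluster centre as its representative.
    selected = []
    for centre in kmeans.cluster_centers_:
        selected.append(int(np.argmin(np.linalg.norm(features - centre, axis=1))))
    return sorted(set(selected))

features = np.random.rand(1000, 64)   # placeholder latent descriptors
print(select_representative_patches(features, annotation_budget=20))
```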


2021 ◽  
Vol 2 (1) ◽  
Author(s):  
Tuan Anh Tran ◽  
Tien Dung Cao ◽  
Vu-Khanh Tran ◽  

Biomedical image processing, such as human organ segmentation and disease analysis, is a modern field in medical development and patient treatment. Beyond the many kinds of image formats, the diversity and complexity of biomedical data remain a major issue for researchers in their applications. Deep learning offers successful and effective solutions to this problem. U-Net and LSTM are two general approaches applicable to most kinds of medical image data. While U-Net helps a machine learn from each image together with its labelled information, an LSTM helps to remember states across many image slices over time. U-Net gives us the segmentation of tumors and other abnormalities in biomedical images, and the LSTM then supports an effective diagnosis of the patient's disease. In this paper, we present scenarios of using U-Nets and LSTMs to segment and analyze many kinds of human organ images, with results on brain, retinal, skin, lung and breast segmentation.
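For orientation, a minimal U-Net-style encoder-decoder for 2D slices might look like the sketch below; the depth, channel counts and the TinyUNet name are assumptions, not the networks used in the paper.

```python
# Minimal sketch (assumed architecture): a tiny U-Net-style encoder-decoder
# for 2D biomedical slice segmentation with a single skip connection.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, num_classes=2):
        super().__init__()
        self.enc1 = conv_block(in_ch, 16)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)        # 16 skip channels + 16 upsampled
        self.out = nn.Conv2d(16, num_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)                     # high-resolution features
        e2 = self.enc2(self.pool(e1))         # downsampled features
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))  # skip connection
        return self.out(d1)

masks = TinyUNet()(torch.randn(2, 1, 128, 128))
print(masks.shape)  # torch.Size([2, 2, 128, 128])
```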


2020 ◽  
Author(s):  
Dominik Waibel ◽  
Sayedali Shetab Boushehri ◽  
Carsten Marr

Abstract
Motivation: Deep learning contributes to uncovering and understanding molecular and cellular processes with highly performant image computing algorithms. Convolutional neural networks have become the state-of-the-art tool for providing accurate, consistent and fast data processing. However, published algorithms mostly solve only one specific problem, and they often require expert skills and a considerable computer science and machine learning background to apply.
Results: We have thus developed a deep learning pipeline called InstantDL for four common image processing tasks: semantic segmentation, instance segmentation, pixel-wise regression and classification. InstantDL enables experts and non-experts to apply state-of-the-art deep learning algorithms to biomedical image data with minimal effort. To make the pipeline robust, we have automated and standardized the workflows and extensively tested them in different scenarios. Moreover, the pipeline allows users to assess the uncertainty of predictions. We have benchmarked InstantDL on seven publicly available datasets, achieving competitive performance without any parameter tuning. For customization of the pipeline to specific tasks, all code is easily accessible.
Availability and Implementation: InstantDL is available under the terms of the MIT licence. It can be found on GitHub: https://github.com/marrlab/
Contact: [email protected]


2021 ◽  
Author(s):  
Daniel Padilla ◽  
Hatem A. Rashwan ◽  
Domènec Savi Puig

Deep learning (DL) networks have proven to be crucial in commercial solutions to computer vision challenges, owing to their ability to extract high-level abstractions from image data and their capacity to be easily adapted to many applications. As a result, DL methodologies have become a de facto standard for computer vision problems, yielding many new kinds of research, approaches and applications. Recently, the commercial sector has also been driving the use of embedded systems able to execute DL models, which has caused an important change in the DL landscape and in embedded systems themselves. Consequently, in this paper, we attempt to survey the state of the art of embedded systems, such as GPUs, FPGAs and mobile SoCs, that are able to run DL techniques, in order to update stakeholders on the new systems available on the market. In addition, we aim at helping them determine which of these systems can be beneficial and suitable for their applications in terms of upgradeability, price, deployment and performance.
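As a small, hedged illustration of preparing a DL model for such embedded targets (not something drawn from the surveyed paper itself), a trained PyTorch model is often exported to an interchange format such as ONNX before deployment:

```python
# Hedged illustration: exporting a PyTorch model to ONNX, a common first step
# before deployment on embedded targets such as mobile SoCs, FPGAs or edge GPUs.
import torch
import torchvision

model = torchvision.models.mobilenet_v2(weights=None).eval()  # load trained weights in practice
dummy_input = torch.randn(1, 3, 224, 224)                     # expected input shape
torch.onnx.export(model, dummy_input, "mobilenet_v2.onnx",
                  input_names=["image"], output_names=["logits"])
```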


2021 ◽  
Vol 12 ◽  
Author(s):  
Yiding Wang ◽  
Yuxin Qin ◽  
Jiali Cui

Counting the number of wheat ears in images taken under natural light is an important way to estimate crop yield and is therefore of great significance to modern intelligent agriculture. However, the distribution of wheat ears is dense, so occlusion and overlap appear in almost every wheat image. Traditional image processing methods struggle with occlusion because they lack high-level semantic features, while existing deep learning based counting methods do not handle occlusion effectively either. This article proposes an improved EfficientDet-D0 object detection model for wheat ear counting, with a focus on occlusion. First, transfer learning is employed in the pre-training of the model's backbone network to extract high-level semantic features of wheat ears. Second, an image augmentation method, Random-Cutout, is proposed, in which rectangles are selected and erased according to the number and size of the wheat ears in the images to simulate occlusion in real wheat images. Finally, a convolutional block attention module (CBAM) is added to the EfficientDet-D0 model after the backbone, which makes the model refine its features, pay more attention to the wheat ears and suppress useless background information. Extensive experiments feeding these features to the detection layer show that the counting accuracy of the improved EfficientDet-D0 model reaches 94%, about 2% higher than the original model, with a false detection rate of 5.8%, the lowest among the compared methods.
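The Random-Cutout idea can be sketched as erasing rectangles whose sizes follow the annotated ear boxes. The code below is a rough approximation under assumed image and box formats, not the authors' implementation.

```python
# Rough sketch (assumed formats): a Random-Cutout-style augmentation that
# erases rectangles sized from annotated wheat-ear boxes to simulate occlusion.
import numpy as np

def random_cutout(image, boxes, num_cut=3, rng=None):
    """image: HxWxC uint8 array; boxes: list of (x1, y1, x2, y2) ear boxes."""
    rng = rng or np.random.default_rng()
    out = image.copy()
    h, w = image.shape[:2]
    for _ in range(num_cut):
        x1, y1, x2, y2 = boxes[rng.integers(len(boxes))]
        bw, bh = int(x2 - x1), int(y2 - y1)        # cut size follows ear size
        cx, cy = rng.integers(w - bw), rng.integers(h - bh)
        out[cy:cy + bh, cx:cx + bw] = 0            # erased (occluded) region
    return out

img = np.full((256, 256, 3), 128, dtype=np.uint8)
aug = random_cutout(img, boxes=[(10, 10, 40, 50), (100, 120, 140, 160)])
print(aug.shape)
```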


Author(s):  
Abdul Khader Jilani Saudagar

Image processing is widely used in the domain of biomedical engineering, especially for the compression of clinical images. Clinical diagnosis is highly important and involves handling patients' data accurately and carefully, particularly when treating patients remotely. Many researchers have proposed methods for compressing medical images using Artificial Intelligence techniques. Developing efficient automated systems for the compression of medical images in telemedicine is the focal point of this paper. Three major approaches are proposed here for medical image compression: image compression using neural networks; fuzzy logic and neuro-fuzzy logic to preserve a richer spectral representation and maintain fine edge information; and relational coding of inter-band coefficients to achieve high compression ratios. The developed image coding model is evaluated over various quality factors. From the simulation results it is observed that the proposed image coding system achieves efficient compression performance compared with existing block coding and JPEG coding approaches, even in resource-constrained environments.
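As a hedged illustration of the neural-network compression stage only (the fuzzy and relational coding stages are omitted), a plain convolutional autoencoder could look like the sketch below; the architecture and dimensions are assumptions, not the paper's model.

```python
# Minimal sketch (assumption): a convolutional autoencoder standing in for a
# neural-network image compression stage; the bottleneck holds the compact code.
import torch
import torch.nn as nn

class CompressionAutoencoder(nn.Module):
    def __init__(self, bottleneck=8):
        super().__init__()
        self.encoder = nn.Sequential(            # image -> compact code
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, bottleneck, 3, stride=2, padding=1),
        )
        self.decoder = nn.Sequential(            # code -> reconstruction
            nn.ConvTranspose2d(bottleneck, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

recon = CompressionAutoencoder()(torch.rand(1, 1, 64, 64))
print(recon.shape)  # torch.Size([1, 1, 64, 64])
```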


2021 ◽  
Author(s):  
Kedir Ali Muhaba ◽  
Kokeb Dese ◽  
Tadele Mola Aga ◽  
Feleke Tilahun Zewdu ◽  
Gizeaddis Lamesgin Simegn

Abstract
Background: Skin diseases are the fourth most common cause of human illness and result in an enormous non-fatal burden on daily life activities. They are caused by chemical, physical and biological factors. Visual assessment in combination with clinical information is the common diagnostic procedure for these diseases. However, such procedures are manual, time consuming, and require experience and excellent visual perception.
Methods: In this study, an automated system is proposed for the diagnosis of five common skin diseases using data from clinical images and patient information, based on a pretrained MobileNet-v2 deep learning model. Clinical images were acquired using different smartphone cameras, and patient information was collected during patient registration. Different data preprocessing and augmentation techniques were applied to boost the performance of the model prior to training.
Results: A multiclass classification accuracy of 97.5%, sensitivity of 97.7% and precision of 97.7% were achieved using the proposed technique for the five common skin diseases. The results demonstrate that the developed system provides excellent diagnostic performance for the five skin diseases.
Conclusion: The system has been designed as a smartphone application and has the potential to be used as a decision support system in low-resource settings where both expert dermatologists and resources are limited.
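A minimal sketch of the image branch, assuming a standard torchvision MobileNetV2 fine-tuned for five classes (the patient-information branch and the authors' exact preprocessing are omitted):

```python
# Hedged sketch (not the authors' exact pipeline): fine-tuning a pretrained
# MobileNetV2 for five skin-disease classes using torchvision.
import torch
import torch.nn as nn
import torchvision

model = torchvision.models.mobilenet_v2(weights="IMAGENET1K_V1")
model.classifier[1] = nn.Linear(model.last_channel, 5)   # 5 disease classes

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on dummy data.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 5, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```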


2021 ◽  
Vol 2021 ◽  
pp. 1-16
Author(s):  
Mohammad Shorfuzzaman ◽  
Mehedi Masud ◽  
Hesham Alhumyani ◽  
Divya Anand ◽  
Aman Singh

The world is experiencing an unprecedented crisis due to the coronavirus disease (COVID-19) outbreak, which has affected nearly 216 countries and territories across the globe. Since the outbreak of the pandemic, there has been growing interest in computational model-based diagnostic technologies to support the screening and diagnosis of COVID-19 cases using medical imaging such as chest X-ray (CXR) scans. Initial studies found that patients infected with COVID-19 show abnormalities in their CXR images that correspond to specific radiological patterns. Still, detecting these patterns is challenging and time-consuming even for skilled radiologists. In this study, we propose a novel convolutional neural network (CNN) based deep learning fusion framework that uses transfer learning, in which parameters (weights) from different models are combined into a single model to extract features from images, which are then fed to a custom classifier for prediction. We use gradient-weighted class activation mapping to visualize the infected areas of CXR images. Furthermore, we provide feature representations through visualization to gain a deeper understanding of the class separability of the studied models with respect to COVID-19 detection. Cross-validation studies are used to assess the performance of the proposed models on open-access datasets containing CXR images of healthy subjects as well as COVID-19 and other pneumonia cases. Evaluation results show that the best performing fusion model attains a classification accuracy of 95.49% with high sensitivity and specificity.
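One way to picture such a fusion framework is to concatenate pooled features from two CNN backbones before a custom classifier head. The sketch below is an assumed design for illustration, not the published model; in practice the backbones would load pretrained ImageNet weights for transfer learning.

```python
# Illustrative sketch (assumed design, not the published model): fusing pooled
# features from two CNN backbones before a custom classification head.
import torch
import torch.nn as nn
import torchvision

class FusionCXRClassifier(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        resnet = torchvision.models.resnet18(weights=None)       # load ImageNet weights in practice
        self.backbone_a = nn.Sequential(*list(resnet.children())[:-1])    # -> 512-d
        densenet = torchvision.models.densenet121(weights=None)  # load ImageNet weights in practice
        self.backbone_b = nn.Sequential(densenet.features, nn.ReLU(),
                                        nn.AdaptiveAvgPool2d(1))          # -> 1024-d
        self.classifier = nn.Sequential(
            nn.Linear(512 + 1024, 256), nn.ReLU(),
            nn.Linear(256, num_classes),          # e.g. healthy / COVID-19 / other pneumonia
        )

    def forward(self, x):
        a = torch.flatten(self.backbone_a(x), 1)
        b = torch.flatten(self.backbone_b(x), 1)
        return self.classifier(torch.cat([a, b], dim=1))

logits = FusionCXRClassifier()(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 3])
```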


2019 ◽  
Vol 2019 (1) ◽  
pp. 360-368
Author(s):  
Mekides Assefa Abebe ◽  
Jon Yngve Hardeberg

Different whiteboard image degradations greatly reduce the legibility of pen-stroke content as well as the overall quality of the images. Consequently, various researchers have addressed the problem through different image enhancement techniques. Most of the state-of-the-art approaches apply common image processing techniques such as background-foreground segmentation, text extraction, contrast and color enhancement, and white balancing. However, such conventional enhancement methods are incapable of recovering severely degraded pen-stroke content and produce artifacts in the presence of complex pen-stroke illustrations. To surmount these problems, the authors propose a deep learning based solution. They contribute a new whiteboard image dataset and adopt two deep convolutional neural network architectures for whiteboard image quality enhancement. Their evaluations of the trained models demonstrate superior performance over the conventional methods.
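As a hedged illustration of a CNN-based enhancement model (not the authors' architectures), a small residual network that predicts a correction added to the degraded image is a common design:

```python
# Minimal sketch (assumed architecture): a residual CNN that predicts a
# correction added to the degraded whiteboard image.
import torch
import torch.nn as nn

class ResidualEnhancer(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, degraded):
        # The network learns only the residual; the input is passed through.
        return torch.clamp(degraded + self.body(degraded), 0.0, 1.0)

enhanced = ResidualEnhancer()(torch.rand(1, 3, 128, 128))
print(enhanced.shape)  # torch.Size([1, 3, 128, 128])
```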

