Biomedical Image Segmentation and Analysis in Deep Learning

2021
Vol 2 (1)
Author(s):
Tuan Anh Tran
Tien Dung Cao
Vu-Khanh Tran

Biomedical image processing, such as human organ segmentation and disease analysis, is a modern field supporting medical progress and patient treatment. Beyond the many kinds of image formats, the diversity and complexity of biomedical data remain a major challenge for researchers in their applications. Deep learning provides successful and effective solutions to this problem. U-Net and LSTM are two general approaches that cover most kinds of medical image data: U-Net teaches a machine to learn from each image together with its labelled information, while an LSTM remembers states across many image slices over time. U-Net provides the segmentation of tumors and other abnormalities in biomedical images, and the LSTM then supports an effective diagnosis of the patient's disease. In this paper, we show several scenarios of using U-Nets and LSTMs to segment and analyze many kinds of human organ images, with results for brain, retinal, skin, lung and breast segmentation.
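A minimal sketch of the kind of U-Net-plus-LSTM pairing described above, assuming a Keras/TensorFlow setting: a small U-Net segments each slice, and a ConvLSTM2D layer carries state across a stack of slices. The network depths, image sizes and slice count are illustrative assumptions, not the paper's configuration.

import tensorflow as tf
from tensorflow.keras import layers, Model

def small_unet(h=128, w=128, ch=1):
    """A two-level U-Net mapping one slice to a per-pixel tumor probability."""
    inp = layers.Input((h, w, ch))
    c1 = layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
    p1 = layers.MaxPooling2D()(c1)
    c2 = layers.Conv2D(32, 3, padding="same", activation="relu")(p1)
    u1 = layers.UpSampling2D()(c2)
    m1 = layers.Concatenate()([u1, c1])          # skip connection
    c3 = layers.Conv2D(16, 3, padding="same", activation="relu")(m1)
    out = layers.Conv2D(1, 1, activation="sigmoid")(c3)
    return Model(inp, out, name="small_unet")

def unet_lstm(slices=8, h=128, w=128, ch=1):
    """Apply the U-Net to every slice, then let a ConvLSTM carry state across slices."""
    seq_in = layers.Input((slices, h, w, ch))
    per_slice = layers.TimeDistributed(small_unet(h, w, ch))(seq_in)
    temporal = layers.ConvLSTM2D(8, 3, padding="same", return_sequences=True)(per_slice)
    seq_out = layers.Conv3D(1, 1, activation="sigmoid")(temporal)
    return Model(seq_in, seq_out, name="unet_convlstm")

model = unet_lstm()
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()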

Author(s):  
Hao Zheng
Lin Yang
Jianxu Chen
Jun Han
Yizhe Zhang
...  

Deep learning has been applied successfully to many biomedical image segmentation tasks. However, due to the diversity and complexity of biomedical image data, manual annotation for training common deep learning models is very time-consuming and labor-intensive, especially because normally only biomedical experts can annotate image data well. Human experts are often involved in a long and iterative process of annotation, as in active-learning-type annotation schemes. In this paper, we propose representative annotation (RA), a new deep learning framework for reducing annotation effort in biomedical image segmentation. RA uses unsupervised networks for feature extraction and selects representative image patches for annotation in the latent space of learned feature descriptors, which implicitly characterizes the underlying data while minimizing redundancy. A fully convolutional network (FCN) is then trained on the annotated selected image patches for image segmentation. Our RA scheme offers three compelling advantages: (1) it leverages the ability of deep neural networks to learn better representations of image data; (2) it performs one-shot selection for manual annotation and frees annotators from the iterative process of common active-learning-based annotation schemes; (3) it can be deployed to 3D images with simple extensions. We evaluate our RA approach on three datasets (two 2D and one 3D) and show that our framework yields competitive segmentation results compared with state-of-the-art methods.
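As a simplified, hedged stand-in for the selection step the abstract describes (not the authors' RA networks): embed unlabeled patches with an already trained unsupervised encoder, cluster the embeddings, and send the patch nearest to each cluster centre for one-shot manual annotation. The embedding dimension, patch count and annotation budget below are assumptions.

import numpy as np
from sklearn.cluster import KMeans

def select_representative(embeddings: np.ndarray, budget: int, seed: int = 0):
    """Return indices of `budget` patches whose embeddings best cover the latent space."""
    km = KMeans(n_clusters=budget, n_init=10, random_state=seed).fit(embeddings)
    chosen = []
    for c in km.cluster_centers_:
        # index of the real patch nearest to this cluster centre
        chosen.append(int(np.argmin(np.linalg.norm(embeddings - c, axis=1))))
    return sorted(set(chosen))

# toy usage: 1000 patches embedded into a 64-d latent space, annotate only 20 of them
rng = np.random.default_rng(0)
latent = rng.normal(size=(1000, 64)).astype(np.float32)
to_annotate = select_representative(latent, budget=20)
print(len(to_annotate), "patches selected for one-shot annotation")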


2014
Author(s):  
Axel Newe

The Portable Document Format (PDF) allows for embedding three-dimensional (3D) models and is therefore particularly suitable for exchanging and presenting such data, especially in scholarly articles. The generation of the necessary model data, however, is still challenging, especially for inexperienced users. This has prevented wider adoption of 3D PDF in scientific communication. This article introduces a new module for the biomedical image processing framework MeVisLab. It enables even novice users to generate the model data files without requiring programming skills and without the need for intensive training, by simply using it as a conversion tool. Advanced users can benefit from the full capability of MeVisLab to generate and export the model data as part of an overall processing chain. Although MeVisLab is primarily designed for handling biomedical image data, the new module is not restricted to this domain and can be used in all scientific disciplines.


Author(s):  
Mousomi Roy

Computer-aided biomedical data and image analysis has become indispensable in today's world. Computer-aided diagnostic systems are relied upon heavily to detect and diagnose diseases accurately and within a stipulated amount of time. Big data analysis strategies involve several advanced methods for processing big data, such as biomedical images, efficiently and quickly. In this work, biomedical image analysis techniques are studied from the perspective of big data analytics. Big data and machine-learning-based biomedical image analysis helps achieve highly accurate results while meeting time constraints. It is also helpful in telemedicine and remote diagnostics, where the physical distance between the patient and the domain experts is no longer a barrier. This work can also inform future developments in this domain and help improve present techniques for biomedical data analysis.


2014
Vol 69 (2)
Author(s):
Mohammed Sabbih Hamoud Al-Tamimi
Ghazali Sulong

Developing an efficient algorithm for automated Magnetic Resonance Imaging (MRI) segmentation that characterizes tumor abnormalities in an accurate and reproducible manner is in ever-growing demand. This paper presents an overview of recent developments and challenges of the energy-minimizing active contour segmentation model, called the snake, for MRI. This model has been used successfully for contour detection in object recognition, computer vision and graphics, as well as in biomedical image processing including X-ray, MRI and ultrasound images. Snakes are deformable, well-defined curves in the image domain that can move under the influence of internal forces from the curve itself and external forces derived from the image data. We provide a critical appraisal of the current status of semi-automated and automated methods for the segmentation of MR images, together with the important issues and terminology. The advantages and disadvantages of various segmentation methods, with their salient features and relevance, are also discussed.
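For reference, the energy functional minimized by the classical snake model (written here in the standard Kass-Witkin-Terzopoulos notation rather than quoted from the surveyed paper) is

E_{\text{snake}} = \int_{0}^{1} \left[ \frac{1}{2}\left( \alpha\,\lvert \mathbf{v}'(s)\rvert^{2} + \beta\,\lvert \mathbf{v}''(s)\rvert^{2} \right) + E_{\text{image}}\bigl(\mathbf{v}(s)\bigr) \right] ds,
\qquad E_{\text{image}}\bigl(\mathbf{v}(s)\bigr) = -\,\bigl\lvert \nabla I\bigl(\mathbf{v}(s)\bigr)\bigr\rvert^{2},

where \mathbf{v}(s) = (x(s), y(s)) parameterizes the contour, \alpha and \beta weight the internal elasticity and rigidity terms, and the external (image) term pulls the curve toward strong intensity gradients such as tumor boundaries.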


2020
Author(s):
Dominik Waibel
Sayedali Shetab Boushehri
Carsten Marr

Motivation: Deep learning contributes to uncovering and understanding molecular and cellular processes with highly performant image computing algorithms. Convolutional neural networks have become the state-of-the-art tool to provide accurate, consistent and fast data processing. However, published algorithms mostly solve only one specific problem and they often require expert skills and a considerable computer science and machine learning background for application.
Results: We have thus developed a deep learning pipeline called InstantDL for four common image processing tasks: semantic segmentation, instance segmentation, pixel-wise regression and classification. InstantDL enables experts and non-experts to apply state-of-the-art deep learning algorithms to biomedical image data with minimal effort. To make the pipeline robust, we have automated and standardized workflows and extensively tested it in different scenarios. Moreover, it allows assessing the uncertainty of predictions. We have benchmarked InstantDL on seven publicly available datasets, achieving competitive performance without any parameter tuning. For customization of the pipeline to specific tasks, all code is easily accessible.
Availability and Implementation: InstantDL is available under the terms of the MIT licence. It can be found on GitHub: https://github.com/marrlab/[email protected]
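InstantDL's own interface is not reproduced here; as a generic, hedged illustration of the kind of prediction-uncertainty assessment the abstract mentions, the sketch below uses Monte Carlo dropout, i.e. keeping dropout active at inference time and measuring the spread over repeated forward passes. The toy model and image size are assumptions.

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# tiny stand-in segmentation model with a dropout layer
model = tf.keras.Sequential([
    layers.Input((64, 64, 1)),
    layers.Conv2D(16, 3, padding="same", activation="relu"),
    layers.Dropout(0.5),
    layers.Conv2D(1, 1, activation="sigmoid"),
])

def mc_dropout_predict(model, x, passes=20):
    """Mean prediction and per-pixel standard deviation over stochastic forward passes."""
    preds = np.stack([model(x, training=True).numpy() for _ in range(passes)])
    return preds.mean(axis=0), preds.std(axis=0)

x = np.random.rand(1, 64, 64, 1).astype("float32")   # stand-in for a biomedical image
mean_mask, uncertainty = mc_dropout_predict(model, x)
print("max per-pixel uncertainty:", float(uncertainty.max()))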


Author(s):  
Rose Lu
Dawei Pan

In computer-aided diagnostic technologies, deep convolutional neural networks for image classification are a crucial method. Conventional methods rely primarily on shape, colour, or other handcrafted feature descriptors and their combinations; most of these are problem-specific and have been shown to be only complementary to the image data, resulting in frameworks that cannot represent high-level problem entities and that generalize poorly. Emerging Deep Learning (DL) techniques make it possible to build an end-to-end model that can learn the final detection framework from the raw clinical image dataset. DL methods, on the other hand, suffer from high computing requirements and costs in analytical modelling and deployment, owing to the high resolution of clinical images and the small sizes of the available datasets. To effectively mitigate these concerns, we provide a DL technique and paradigm that blends high-level features generated by a deep network with some classical features. Constructing the suggested model involves the following stages. Firstly, we train a DL model in a supervised manner as a coding system; as a consequence, it can convert raw pixels of medical images into feature representations that reflect high-level concepts for image categorization. Secondly, using background information from the image data, we derive a collection of conventional features. Lastly, to combine the feature groups produced in the first and second phases, we develop an appropriate fusion method based on deep neural networks. Reference medical imaging datasets are used to assess the suggested method. We obtain overall classification accuracies of 90.1 percent and 90.2 percent, which are higher than those of existing effective approaches.
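A minimal sketch of the fusion idea described above, with illustrative assumptions (the backbone choice, handcrafted descriptor, image size and class count are placeholders, not the paper's): deep features from a CNN backbone are concatenated with simple handcrafted features and fed to a small fusion classifier.

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

# 1) deep features from a CNN backbone (weights=None keeps the sketch offline;
#    in practice one would use weights="imagenet" for pretrained features)
backbone = tf.keras.applications.MobileNetV2(include_top=False, pooling="avg",
                                              weights=None, input_shape=(128, 128, 3))
backbone.trainable = False

def handcrafted_features(images: np.ndarray) -> np.ndarray:
    """Toy 'classical' descriptors: a coarse intensity histogram per image."""
    return np.stack([np.histogram(img, bins=16, range=(0.0, 1.0), density=True)[0]
                     for img in images]).astype("float32")

# 2) fusion classifier over the concatenated feature groups
deep_in = layers.Input((backbone.output_shape[-1],))
hand_in = layers.Input((16,))
x = layers.Concatenate()([deep_in, hand_in])
x = layers.Dense(64, activation="relu")(x)
out = layers.Dense(2, activation="softmax")(x)   # e.g. two diagnostic classes
fusion = Model([deep_in, hand_in], out)
fusion.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# toy usage with random stand-in data
imgs = np.random.rand(8, 128, 128, 3).astype("float32")
labels = np.random.randint(0, 2, size=8)
deep_feats = backbone.predict(imgs, verbose=0)
fusion.fit([deep_feats, handcrafted_features(imgs)], labels, epochs=1, verbose=0)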


2020
Vol 10 (4)
pp. 224
Author(s):  
Amin Zadeh Shirazi
Eric Fornaciari
Mark D. McDonnell
Mahdi Yaghoobi
Yesenia Cevallos
...  

In recent years, improved deep learning techniques have been applied to biomedical image processing for the classification and segmentation of different tumors based on magnetic resonance imaging (MRI) and histopathological imaging (H&E) clinical information. Deep Convolutional Neural Network (DCNN) architectures include tens to hundreds of processing layers that can extract multiple levels of features from image-based data, which would otherwise be very difficult and time-consuming for experts to recognize and extract for the classification of tumors into different tumor types, as well as for the segmentation of tumor images. This article summarizes the latest studies of deep learning techniques applied to three different kinds of brain cancer medical images (histology, magnetic resonance, and computed tomography) and highlights current challenges for the broader applicability of DCNNs in personalized brain cancer care, focusing on two main applications of DCNNs: classification and segmentation of brain cancer tumor images.


Author(s):  
Rashmi Kumari
Shashank Pushkar

Image analysis is producing huge breakthroughs in every field of science and technology. An image is simply a collection of pixels and light intensities. Images are captured in two ways: (1) using infrared sensors and (2) using radiography. Normal images are captured using infrared sensors, while radiography uses various forms of electromagnetic radiation, such as X-rays and gamma rays, to capture the image. The study of neuroimaging is one of the challenging research topics in the field of biomedical image processing. Motivated by this, the goal of this work is to analyze 3D images to detect Alzheimer's disease and compare the statistical results on whole-brain image data with standard doctors' findings. The authors also provide a very short implementation of brain slicing and feature extraction using FreeSurfer and an OpenNeuro dataset.
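A hedged sketch, not the authors' implementation, of the basic 3D brain-volume handling described above: load a NIfTI volume (for example a T1-weighted scan downloaded from OpenNeuro), take evenly spaced axial slices, and compute simple whole-brain statistics. The file name is a placeholder.

import nibabel as nib
import numpy as np

def brain_slices_and_stats(nifti_path: str, n_slices: int = 5):
    vol = nib.load(nifti_path).get_fdata()          # 3D array, e.g. (X, Y, Z)
    zs = np.linspace(0, vol.shape[2] - 1, n_slices).astype(int)
    slices = [vol[:, :, z] for z in zs]             # evenly spaced axial slices
    stats = {
        "mean_intensity": float(vol.mean()),
        "nonzero_voxels": int((vol > 0).sum()),     # crude proxy for brain volume
    }
    return slices, stats

# usage (placeholder path to a downloaded scan):
# slices, stats = brain_slices_and_stats("sub-01_T1w.nii.gz")
# print(stats)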

