Biomedical Image Processing
Recently Published Documents

TOTAL DOCUMENTS: 123 (FIVE YEARS: 29)
H-INDEX: 9 (FIVE YEARS: 2)

2022, Vol 2022, pp. 1-18
Author(s): Muhammad Arif, F. Ajesh, Shermin Shamsudheen, Oana Geman, Diana Izdrui, ...

Radiology is a broad subject that requires considerable knowledge of medical science to identify tumors accurately. A tumor detection program therefore helps compensate for the shortage of qualified radiologists. Using magnetic resonance imaging (MRI), biomedical image processing makes it easier to detect and locate brain tumors. In this study, a segmentation and detection method for brain tumors was developed using MRI sequence images as input to identify the tumor area. This task is difficult because tumor tissue varies widely across patients and, in most cases, closely resembles normal tissue. The main goal is to classify a brain as containing a tumor or being healthy. The proposed system is based on the Berkeley wavelet transformation (BWT) and a deep learning classifier to improve performance and simplify the process of medical image segmentation. Significant features are extracted from each segmented tissue using the gray-level co-occurrence matrix (GLCM) method, followed by feature optimization with a genetic algorithm. The final result of the implemented approach was assessed in terms of accuracy, sensitivity, specificity, Dice coefficient, Jaccard coefficient, spatial overlap, AVME, and FoM.
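
The pipeline above combines texture features with a learned classifier. As a rough illustration of the GLCM step only, the following Python sketch extracts a small texture-feature vector from a grayscale patch using scikit-image; the Berkeley wavelet segmentation, the genetic-algorithm feature selection, and the deep learning classifier from the paper are not reproduced, and the patch here is a random stand-in.

import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(patch, distances=(1,), angles=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Return a small GLCM texture-feature vector for a uint8 grayscale patch."""
    glcm = graycomatrix(patch, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation", "dissimilarity")
    # Average each property over the chosen offsets and angles.
    return np.array([graycoprops(glcm, p).mean() for p in props])

# Random 8-bit patch standing in for a segmented tumor region.
patch = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
print(glcm_features(patch))

Feature vectors of this kind would then be pruned by the genetic algorithm and passed to the classifier.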


Author(s): An-Wen Deng, Chih-Ying Gwo

3D Zernike moments, based on 3D Zernike polynomials, have been successfully applied to voxelized 3D shape retrieval and have attracted increasing attention in biomedical image processing. As the order of the 3D Zernike moments increases, both computational efficiency and numerical accuracy decrease. To address this, a more efficient and stable method for computing high-order 3D Zernike moments is proposed in this study. The proposed recursive formula for the 3D Zernike radial polynomials is combined with the recursive calculation of spherical harmonics to develop a voxel-based algorithm for computing 3D Zernike moments. The algorithm was applied to a 3D model of Michelangelo's David with a size of 150×150×150 voxels. Compared to the method without additional acceleration, the proposed method exploits a group action of an order-sixteen orthogonal group and avoids unnecessary iterations, yielding a speed-up factor of 56.783±3.999 for Zernike moment orders between 10 and 450. The proposed method also produced an accurate reconstructed shape, with an error rate of 0.00 (normalized mean square error of 4.17×10^-3) when the reconstruction used all moments up to order 450.
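
For reference, a brute-force voxel-based accumulation of a single 3D Zernike moment is sketched below in Python, assuming the standard factorization of the Zernike basis into a radial polynomial (written here via Jacobi polynomials, up to a normalization constant) and a spherical harmonic. The recursive formulas and the order-sixteen group action that give the paper its speed-up are not reproduced.

import numpy as np
from scipy.special import sph_harm, eval_jacobi

def zernike_moment(volume, n, l, m):
    """Unnormalized 3D Zernike moment of a cubic voxel grid mapped into the unit ball."""
    N = volume.shape[0]
    axis = (np.arange(N) + 0.5) / N * 2.0 - 1.0              # voxel centers in [-1, 1]
    x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")
    r = np.sqrt(x**2 + y**2 + z**2)
    inside = r <= 1.0                                         # restrict support to the unit ball
    theta = np.arctan2(y, x)                                  # azimuthal angle
    phi = np.arccos(np.clip(z / np.where(r == 0, 1.0, r), -1.0, 1.0))  # polar angle
    k = (n - l) // 2                                          # valid when n - l is even and non-negative
    radial = r**l * eval_jacobi(k, 0.0, l + 0.5, 2.0 * r**2 - 1.0)
    Y = sph_harm(m, l, theta, phi)                            # spherical harmonic Y_l^m
    return np.sum(volume[inside] * radial[inside] * np.conj(Y[inside]))

vol = np.random.rand(64, 64, 64)                              # random stand-in for a voxelized shape
print(zernike_moment(vol, n=4, l=2, m=1))

Iterating this naive sum over every (n, l, m) up to order 450 is exactly the cost that a recursive, symmetry-aware scheme is designed to avoid.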


2021
Author(s): Radhika Malhotra, Jasleen Saini, Barjinder Singh Saini, Savita Gupta

In the past decade, there has been a remarkable evolution of convolutional neural networks (CNNs) for biomedical image processing. These improvements have been incorporated into basic deep learning-based models for computer-aided detection and prognosis of various ailments. However, implementing these CNN-based networks with supervised learning depends heavily on large amounts of data, which are needed to tackle overfitting, a major concern in supervised techniques. Overfitting refers to the phenomenon in which a network learns patterns specific to the input, so that it fits the training data well but generalizes poorly to unseen data. The limited accessibility of large quantities of data constrains research in the medical domain. This paper focuses on the utility of data augmentation (DA) techniques, a well-recognized solution to the problem of limited data. The experiments were performed on the Brain Tumor Segmentation (BraTS) dataset, which is available online. The results show that different DA approaches improved the accuracy of segmenting brain tumor boundaries with a CNN-based model.
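
As a minimal sketch of the kind of on-the-fly augmentation evaluated here, the Python snippet below applies random flips, 90-degree in-plane rotations, and intensity jitter to a 3D volume and its segmentation mask; the exact augmentation set, parameters, and CNN used in the BraTS experiments may differ.

import numpy as np

def augment(volume, mask, rng=np.random.default_rng()):
    """Random flips, in-plane rotations, and intensity jitter for a volume/mask pair."""
    for axis in range(3):                          # random mirroring along each axis
        if rng.random() < 0.5:
            volume = np.flip(volume, axis=axis)
            mask = np.flip(mask, axis=axis)
    k = int(rng.integers(0, 4))                    # random multiple of 90 degrees
    volume = np.rot90(volume, k, axes=(0, 1))
    mask = np.rot90(mask, k, axes=(0, 1))
    volume = volume * rng.uniform(0.9, 1.1) + rng.uniform(-0.05, 0.05)  # intensity jitter (image only)
    return np.ascontiguousarray(volume), np.ascontiguousarray(mask)

vol = np.random.rand(128, 128, 64).astype(np.float32)   # stand-in for one MRI modality
seg = (np.random.rand(128, 128, 64) > 0.95).astype(np.uint8)
aug_vol, aug_seg = augment(vol, seg)

Applying such transforms during training effectively enlarges the dataset without collecting new scans.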


2021, Vol 12 (2), pp. 93-110
Author(s): Garv Modwel, Anu Mehra, Nitin Rakesh, K. K. Mishra

In computer vision, the human vision system is mimicked through videos and images. Just as humans can process their memories, videos and images can be processed and interpreted with the help of computer vision technology. Computer vision has promising applications in a broad range of fields, including the automobile industry, biomedicine, and space research. The case study in this manuscript highlights the innovation and future possibilities that could start a new era in the biomedical image-processing sector. A pre-surgical investigation can be performed with the help of the proposed technology, enabling doctors to analyze situations with deeper insight. There are different types of biomedical imaging, such as magnetic resonance imaging (MRI), computerized tomography (CT) scans, and x-ray imaging; the proposed research focuses on x-ray imaging. An eyeball check is always error-prone for a human when it comes to fine detail, and the same applies to doctors, who consequently need supporting equipment and related technologies. The methodology proposed in this manuscript analyzes details that may be missed by an expert doctor. The input to the algorithm is an x-ray image, and the output of the process is a label on the corresponding objects in the test image. The tool used in the process also mimics the neuron system of the human brain: the proposed method uses a convolutional neural network to decide the labels of the objects as it interprets the image. After some pre-processing of the x-ray images, the neural network receives the input and achieves efficient performance. The result analysis shows considerable performance in terms of a confusion factor expressed as a percentage. The manuscript concludes by outlining future possibilities for further research.
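
A minimal PyTorch sketch of a CNN classifier of the kind described, operating on pre-processed grayscale x-ray images, is given below; the layer sizes, input resolution, and number of classes are illustrative assumptions rather than the architecture from the manuscript.

import torch
import torch.nn as nn

class XRayCNN(nn.Module):
    """Small convolutional classifier for single-channel x-ray images."""
    def __init__(self, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):                          # x: (batch, 1, H, W) pre-processed x-ray
        return self.classifier(self.features(x).flatten(1))

model = XRayCNN()
logits = model(torch.randn(2, 1, 224, 224))        # two dummy grayscale images
print(logits.shape)                                # torch.Size([2, 4])

The predicted class per image corresponds to the label placed on the detected object, and a confusion matrix over a test set would yield percentage-based confusion figures of the kind reported.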


Sensors, 2021, Vol 21 (12), pp. 4126
Author(s): Cătălin Daniel Căleanu, Cristina Laura Sîrbu, Georgiana Simion

Computer vision, biomedical image processing, and deep learning are related fields with a tremendous impact on the interpretation of medical images today. Among biomedical image sensing modalities, ultrasound (US) is one of the most widely used in practice, since it is noninvasive, accessible, and cheap. Its main drawback, compared to other imaging modalities like computed tomography (CT) or magnetic resonance imaging (MRI), is its increased dependence on the human operator. One important step toward reducing this dependence is the implementation of a computer-aided diagnosis (CAD) system for US imaging. The aim of the paper is to examine the application of contrast-enhanced ultrasound imaging (CEUS) to the problem of automated focal liver lesion (FLL) diagnosis using deep neural networks (DNNs). Custom DNN designs are compared with state-of-the-art architectures, either pre-trained or trained from scratch. Our work improves on and broadens previous work in the field in several aspects, e.g., a novel leave-one-patient-out evaluation procedure, which further enabled us to formulate a hard-voting classification scheme. We show the effectiveness of our models, reporting 88% accuracy over a larger number of liver lesion types: hepatocellular carcinomas (HCC), hypervascular metastases (HYPERM), hypovascular metastases (HYPOM), hemangiomas (HEM), and focal nodular hyperplasia (FNH).
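
A minimal sketch of a leave-one-patient-out evaluation with hard voting over per-image predictions is shown below using scikit-learn; the CEUS feature extraction and the DNN classifiers from the paper are replaced by a random stand-in dataset and a simple classifier, so only the evaluation scheme itself is illustrated.

import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.linear_model import LogisticRegression

X = np.random.rand(200, 32)                        # stand-in per-image feature vectors
y = np.random.randint(0, 5, size=200)              # 5 lesion classes (e.g., HCC, HYPERM, ...)
patients = np.random.randint(0, 20, size=200)      # patient id for each image

correct = 0
logo = LeaveOneGroupOut()
for train_idx, test_idx in logo.split(X, y, groups=patients):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    votes = clf.predict(X[test_idx])               # one prediction per image of the held-out patient
    patient_label = np.bincount(votes).argmax()    # hard vote across that patient's images
    true_label = np.bincount(y[test_idx]).argmax()
    correct += int(patient_label == true_label)
print("patient-level accuracy:", correct / logo.get_n_splits(groups=patients))

Grouping the split by patient keeps all images of one patient out of training, which is what makes the resulting accuracy a patient-level estimate.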


2021, Vol 2 (1)
Author(s): Tuan Anh Tran, Tien Dung Cao, Vu-Khanh Tran

Biomedical image processing, such as human organ segmentation and disease analysis, is a modern field in medical development and patient treatment. Beyond the many kinds of image formats, the diversity and complexity of biomedical data remain a major issue for researchers in their applications. To deal with this problem, deep learning offers successful and effective solutions. Unet and LSTM are two general approaches that cover most cases of medical image data. While Unet helps a machine learn from each image together with its labeled information, LSTM helps remember states across many image slices over time. Unet provides segmentation of tumors and other abnormalities in biomedical images, and the LSTM then supports an effective diagnosis of a patient's disease. In this paper, we show several scenarios of using Unet and LSTM to segment and analyze many kinds of human organ images, with segmentation results for the brain, retina, skin, lung, and breast.
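
As a minimal sketch of the segmentation half of such a pipeline, the PyTorch snippet below defines a tiny Unet-style encoder-decoder for 2D slices; the full Unet variants used in the paper and the LSTM that aggregates information across slices are not reproduced here.

import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU())

class TinyUnet(nn.Module):
    """One-level Unet-style encoder-decoder with a single skip connection."""
    def __init__(self, in_ch=1, out_ch=1):
        super().__init__()
        self.enc1, self.enc2 = conv_block(in_ch, 16), conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)             # 16 skip channels + 16 upsampled channels
        self.head = nn.Conv2d(16, out_ch, 1)

    def forward(self, x):
        e1 = self.enc1(x)                          # skip connection source
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return torch.sigmoid(self.head(d1))        # per-pixel foreground probability

net = TinyUnet()
mask = net(torch.randn(1, 1, 128, 128))            # one dummy slice
print(mask.shape)                                  # torch.Size([1, 1, 128, 128])

Per-slice masks like these are what a recurrent model such as an LSTM could then consume across a sequence of slices for patient-level analysis.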

