LEVEL SETS AND COMPUTATIONAL INTELLIGENCE ALGORITHMS TO MEDICAL IMAGE ANALYSIS IN E-MEDICUS SYSTEM

Author(s):  
Tomasz Rymarczyk

In this work, methods were implemented to analyze and segment medical images using topological and statistical algorithms together with artificial intelligence techniques. The solution shows the architecture of the system for collecting and analyzing data. An algorithm based on the level set method (LSM) was developed for piecewise constant image segmentation. Such algorithms are needed to identify an arbitrary number of phases in the segmentation problem. Image segmentation refers to the process of partitioning a digital image into multiple regions and is typically used to locate objects and boundaries in images. An algorithm for analyzing medical images with a multilayer perceptron (MLP) neural network is also presented.
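As a rough illustration of the piecewise constant level set idea (a minimal sketch, not the E-Medicus implementation), the zero level of a function phi can be evolved toward the two region means in the spirit of the Chan-Vese model; the initialisation, step size, and Gaussian smoothing used here are assumptions:

```python
# Minimal piecewise-constant level set iteration: the contour is the zero level
# of phi, and each step moves phi toward the region means c1/c2.
import numpy as np
from scipy.ndimage import gaussian_filter

def piecewise_constant_level_set(image, n_iter=200, dt=0.5, smooth_sigma=1.0):
    """Evolve a level set so that its zero contour separates two intensity phases."""
    image = image.astype(float)
    # Initialise phi as a checkerboard-like signed function (a common choice).
    x, y = np.meshgrid(np.arange(image.shape[1]), np.arange(image.shape[0]))
    phi = np.sin(np.pi * x / 10.0) * np.sin(np.pi * y / 10.0)

    for _ in range(n_iter):
        inside, outside = phi > 0, phi <= 0
        c1 = image[inside].mean() if inside.any() else 0.0    # mean of phase 1
        c2 = image[outside].mean() if outside.any() else 0.0  # mean of phase 2
        # Data force: pixels closer to c1 push phi up, pixels closer to c2 push it down.
        force = -(image - c1) ** 2 + (image - c2) ** 2
        phi = phi + dt * force / (np.abs(force).max() + 1e-12)
        # Cheap regularisation standing in for the curvature term.
        phi = gaussian_filter(phi, smooth_sigma)

    return phi > 0  # binary segmentation mask
```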

2014 ◽  
Vol 513-517 ◽  
pp. 3750-3756 ◽  
Author(s):  
Yuan Zheng Ma ◽  
Jia Xin Chen

Traditional methods for medical image segmentation struggle to meet accuracy requirements, and when image edges are blurred they produce incomplete segmentations. To solve this problem, we propose a medical image segmentation method based on the Chan-Vese model and mathematical morphology. The method integrates the Chan-Vese model, mathematical morphology, and a composite multiphase level set segmentation algorithm. First, an iterative erosion operation extracts the outline of the medical image; then the image is segmented by the Chan-Vese model based on composite multiphase level sets; finally, the image is iteratively dilated using morphological dilation to restore it. Experimental results and analysis show that this method improves multi-region segmentation accuracy and solves the problem of incomplete segmentation.
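A hedged sketch of the erode / Chan-Vese / dilate pipeline described above, built from scikit-image primitives; the two-phase chan_vese call stands in for the paper's composite multiphase level set formulation, and the structuring element size and the thresholding of the initial mask are assumptions:

```python
# Erode -> Chan-Vese level sets -> dilate, as a rough stand-in for the pipeline above.
import numpy as np
from skimage import img_as_float
from skimage.morphology import binary_erosion, binary_dilation, disk
from skimage.segmentation import chan_vese

def morphological_chan_vese_pipeline(gray_image, n_morph_iter=3):
    image = img_as_float(gray_image)

    # 1) Iterative erosion of a coarse foreground mask to suppress blurred edges.
    mask = image > image.mean()
    for _ in range(n_morph_iter):
        mask = binary_erosion(mask, disk(2))

    # 2) Chan-Vese level set segmentation on the eroded region of interest.
    roi = image * mask
    segmentation = chan_vese(roi, mu=0.25, lambda1=1.0, lambda2=1.0,
                             tol=1e-3, dt=0.5)

    # 3) Iterative dilation to restore the structures shrunk by the erosion step.
    for _ in range(n_morph_iter):
        segmentation = binary_dilation(segmentation, disk(2))
    return segmentation
```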


Author(s):  
Tomasz Rymarczyk

In this work, methods were implemented to analyze and segment medical images using different kinds of algorithms. The solution shows the architecture of the system for collecting and analyzing data. An algorithm based on the level set method was developed for piecewise constant image segmentation. Such algorithms are needed to identify an arbitrary number of phases in the segmentation problem. With modern algorithms, a quicker diagnosis can be obtained and regions of interest in medical images can be marked automatically.
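As one possible illustration of automatically marking regions of interest from a segmentation result (my own sketch, not code from the paper), connected components of a binary mask can be labelled and reported as bounding boxes; the minimum-area filter is an assumption:

```python
# Turn a binary segmentation mask into a list of region-of-interest bounding boxes.
from skimage.measure import label, regionprops

def mark_regions_of_interest(mask, min_area=50):
    """Return bounding boxes (min_row, min_col, max_row, max_col) of segmented regions."""
    labeled = label(mask)                  # connected-component labelling
    boxes = []
    for region in regionprops(labeled):
        if region.area >= min_area:        # ignore tiny, likely spurious regions
            boxes.append(region.bbox)
    return boxes
```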


2020 ◽  
Vol 64 (2) ◽  
pp. 20508-1-20508-12 ◽  
Author(s):  
Getao Du ◽  
Xu Cao ◽  
Jimin Liang ◽  
Xueli Chen ◽  
Yonghua Zhan

Medical image analysis is performed by analyzing images obtained from medical imaging systems in order to solve clinical problems. The purpose is to extract effective information and improve the level of clinical diagnosis. In recent years, automatic segmentation based on deep learning (DL) methods has been widely used, where a neural network automatically learns image features, in sharp contrast with traditional hand-crafted approaches. U-net is one of the most important semantic segmentation frameworks based on the convolutional neural network (CNN). It is widely used in the medical image analysis domain for lesion segmentation, anatomical segmentation, and classification. The advantage of this network framework is that it can not only accurately segment the desired feature targets and effectively process and objectively evaluate medical images, but also help to improve the accuracy of diagnosis from medical images. Therefore, this article presents a literature review of medical image segmentation based on U-net, focusing on the successful segmentation of different lesion regions across six medical imaging systems. Along with the latest advances in DL, this article introduces methods that combine the original U-net architecture with other deep learning techniques, as well as methods for improving the U-net network itself.
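To make the U-net architecture discussed in the review concrete, the sketch below shows a deliberately small PyTorch encoder-decoder with skip connections; the channel counts, depth, and the TinyUNet name are illustrative assumptions, and the reviewed papers use deeper variants:

```python
# A compact U-Net-style encoder-decoder with skip connections.
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.enc1 = double_conv(in_ch, 32)
        self.enc2 = double_conv(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = double_conv(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = double_conv(128, 64)          # 64 (skip) + 64 (upsampled)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = double_conv(64, 32)           # 32 (skip) + 32 (upsampled)
        self.head = nn.Conv2d(32, n_classes, 1)   # per-pixel class scores

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)
```

The skip connections (the torch.cat calls) are what let the decoder recover the fine spatial detail lost by pooling, which is the property the reviewed segmentation work relies on.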


Diagnostics ◽  
2021 ◽  
Vol 11 (8) ◽  
pp. 1384
Author(s):  
Yin Dai ◽  
Yifan Gao ◽  
Fayu Liu

Over the past decade, convolutional neural networks (CNN) have shown very competitive performance in medical image analysis tasks, such as disease classification, tumor segmentation, and lesion detection. CNNs have great advantages in extracting local features of images; however, due to the locality of the convolution operation, they cannot handle long-range relationships well. Recently, transformers have been applied to computer vision and have achieved remarkable success on large-scale datasets. Compared with natural images, multi-modal medical images have explicit and important long-range dependencies, and effective multi-modal fusion strategies can greatly improve the performance of deep models. This prompts us to study transformer-based structures and apply them to multi-modal medical images. Existing transformer-based network architectures require large-scale datasets to achieve good performance, but medical imaging datasets are relatively small, which makes it difficult to apply pure transformers to medical image analysis. Therefore, we propose TransMed for multi-modal medical image classification. TransMed combines the advantages of CNNs and transformers to efficiently extract low-level features of images and establish long-range dependencies between modalities. We evaluated our model on two datasets, parotid gland tumor classification and knee injury classification. Combining our contributions, we achieve improvements of 10.1% and 1.9% in average accuracy, respectively, outperforming other state-of-the-art CNN-based models. The results of the proposed method are promising and have tremendous potential to be applied to a large number of medical image analysis tasks. To the best of our knowledge, this is the first work to apply transformers to multi-modal medical image classification.
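The sketch below is a schematic PyTorch rendering of the general CNN-plus-transformer idea described above, not TransMed's exact design: a small CNN turns each modality into patch tokens, a transformer encoder fuses the tokens across modalities, and a classification token produces the prediction. The layer sizes, the fusion scheme, and the HybridClassifier name are assumptions:

```python
# CNN feature extractor per modality -> shared transformer encoder -> class head.
import torch
import torch.nn as nn

class HybridClassifier(nn.Module):
    def __init__(self, n_classes=2, dim=256):
        super().__init__()
        # Small CNN that turns each modality into a grid of low-level feature vectors.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, dim, 3, stride=2, padding=1), nn.ReLU(inplace=True))
        encoder_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8,
                                                   batch_first=True)
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=4)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.head = nn.Linear(dim, n_classes)

    def forward(self, modalities):
        # modalities: list of tensors, each (B, 1, H, W), one per imaging modality.
        tokens = []
        for x in modalities:
            feat = self.cnn(x)                              # (B, dim, H/4, W/4)
            tokens.append(feat.flatten(2).transpose(1, 2))  # (B, N, dim) patch tokens
        tokens = torch.cat(tokens, dim=1)     # long-range fusion across modalities
        cls = self.cls_token.expand(tokens.size(0), -1, -1)
        out = self.transformer(torch.cat([cls, tokens], dim=1))
        return self.head(out[:, 0])           # classify from the [CLS] token
```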


2018 ◽  
Vol 7 (3.33) ◽  
pp. 115 ◽  
Author(s):  
Myung Jae Lim ◽  
Da Eun Kim ◽  
Dong Kun Chung ◽  
Hoon Lim ◽  
Young Man Kwon

Breast cancer is a highly prevalent disease that has killed many people all over the world, yet it can often be fully cured when detected early. To enable early detection of breast cancer, it is very important to classify accurately whether a sample is cancerous or not. Recently, deep learning approaches applied to medical images, such as histopathological images of breast cancer, have shown higher accuracy and efficiency than conventional methods. In this paper, breast cancer histopathological images that are difficult to distinguish visually were analyzed, and a convolutional neural network (CNN), the deep learning architecture specialized for images, was used to perform a comparative analysis of whether a sample is cancerous or not. Among CNN architectures, VGG16 and InceptionV3 were used, and transfer learning was employed to apply these models effectively. The data used in this paper is the BreakHis breast cancer histopathology image dataset, which labels images as benign or malignant. In the 2-class classification task, InceptionV3 achieved 98% accuracy. It is expected that this deep learning approach will support the development of disease diagnosis from medical images.
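A hedged sketch of the transfer-learning setup described above, using a pretrained VGG16 backbone from torchvision with its final layer replaced by a 2-class (benign/malignant) head; freezing the backbone and the other choices here are illustrative, not those reported in the paper:

```python
# Transfer learning: reuse ImageNet features, retrain only the classification head.
import torch.nn as nn
from torchvision import models

def build_vgg16_transfer_model(n_classes=2, freeze_backbone=True):
    model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
    if freeze_backbone:
        for p in model.features.parameters():
            p.requires_grad = False          # keep pretrained convolutional features fixed
    # Replace the final fully connected layer with a 2-class head.
    model.classifier[6] = nn.Linear(model.classifier[6].in_features, n_classes)
    return model
```

The same pattern applies to InceptionV3 by swapping the backbone and its final layer.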


2019 ◽  
Vol 2019 ◽  
pp. 1-10 ◽  
Author(s):  
Lin Teng ◽  
Hang Li ◽  
Shahid Karim

Medical image segmentation is one of the hot issues in image processing, and precise segmentation of medical images is a vital prerequisite for follow-up treatment. At present, however, low gray-level contrast and blurred tissue boundaries are common in medical images, and segmentation accuracy cannot be improved effectively. Moreover, deep learning methods need large numbers of training samples, which makes training time-consuming. Therefore, we propose a novel model for medical image segmentation based on a deep multiscale convolutional neural network (CNN) in this article. First, we extract the region of interest from the raw medical images. Then, data augmentation is applied to obtain more training data. Our proposed method contains three components: an encoder, a U-net, and a decoder. The encoder is mainly responsible for feature extraction from 2D image slices. The U-net cascades the features of each encoder block with those obtained by deconvolution in the decoder at different scales. The decoder is mainly responsible for upsampling the feature maps produced after feature extraction in each group. Simulation results show that the new method boosts segmentation accuracy and is more robust than other segmentation methods.
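An illustrative data-augmentation pipeline of the kind mentioned above, applied to extracted 2D region-of-interest slices; the specific transforms and their parameters are assumptions, since the abstract does not list them:

```python
# Random flips, small rotations, and crops to enlarge a small training set.
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomVerticalFlip(p=0.5),
    transforms.RandomRotation(degrees=10),            # small rotations of the slice
    transforms.RandomResizedCrop(256, scale=(0.9, 1.0)),
    transforms.ToTensor(),
])
# Applied to each extracted region-of-interest slice before training,
# e.g. augmented = augment(pil_slice).
```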


Symmetry ◽  
2020 ◽  
Vol 12 (8) ◽  
pp. 1230
Author(s):  
Xiaofei Qin ◽  
Chengzi Wu ◽  
Hang Chang ◽  
Hao Lu ◽  
Xuedian Zhang

Medical image segmentation is a fundamental task in medical image analysis. A dynamic receptive field is very helpful for accurate medical image segmentation and deserves further study and use. In this paper, we propose Match Feature U-Net, a novel, symmetric encoder–decoder architecture with a dynamic receptive field for medical image segmentation. We modify the Selective Kernel convolution (a module proposed in Selective Kernel Networks) by inserting a newly proposed Match operation, which makes similar features in different convolution branches occupy corresponding positions, and then replace U-Net's convolutions with the redesigned Selective Kernel convolution. The network is a combination of U-Net and the improved Selective Kernel convolution. It inherits the simple structure and low parameter complexity of U-Net and enhances the efficiency of the dynamic receptive field in Selective Kernel convolution, making it an ideal model for medical image segmentation tasks, which often have small training sets and large variations in target size. Compared with state-of-the-art segmentation methods, the number of parameters in Match Feature U-Net (2.65 M) is 34% of U-Net (7.76 M), 29% of UNet++ (9.04 M), and 9.1% of CE-Net (29 M). We evaluated the proposed architecture on four medical image segmentation tasks: nuclei segmentation in microscopy images, breast cancer cell segmentation, gland segmentation in colon histology images, and disc/cup segmentation. Our experimental results show that Match Feature U-Net achieves an average Mean Intersection over Union (MIoU) gain of 1.8, 1.45, and 2.82 points over U-Net, UNet++, and CE-Net, respectively.
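To make the dynamic receptive field idea concrete, the sketch below implements a simplified Selective Kernel convolution block in PyTorch, following the generic SK design (two branches with different receptive fields fused by channel-wise soft attention); the paper's Match operation is omitted here, and the branch and reduction settings are assumptions:

```python
# Simplified Selective Kernel convolution: attention chooses between receptive fields.
import torch
import torch.nn as nn

class SimpleSKConv(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        # Two branches with different effective receptive fields.
        self.branch3 = nn.Conv2d(channels, channels, 3, padding=1)
        self.branch5 = nn.Conv2d(channels, channels, 3, padding=2, dilation=2)
        hidden = max(channels // reduction, 8)
        self.squeeze = nn.Sequential(nn.Linear(channels, hidden), nn.ReLU(inplace=True))
        self.attn3 = nn.Linear(hidden, channels)
        self.attn5 = nn.Linear(hidden, channels)

    def forward(self, x):
        u3, u5 = self.branch3(x), self.branch5(x)
        s = (u3 + u5).mean(dim=(2, 3))                 # global average pooling
        z = self.squeeze(s)                            # compact channel descriptor
        a = torch.stack([self.attn3(z), self.attn5(z)], dim=1)  # (B, 2, C)
        a = torch.softmax(a, dim=1)                    # soft attention over branches
        a3 = a[:, 0].unsqueeze(-1).unsqueeze(-1)       # (B, C, 1, 1)
        a5 = a[:, 1].unsqueeze(-1).unsqueeze(-1)
        return a3 * u3 + a5 * u5                       # dynamically weighted fusion
```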


2006 ◽  
Vol 326-328 ◽  
pp. 875-878
Author(s):  
Jae Bum An ◽  
Li Li Xin

In this paper we present an analysis of medical images based on robot kinematics. One of the most important problems in robot-assisted surgery is the registration of surgical tools and anatomical targets in medical images. The fundamental problems of contemporary frame-based image registration are that registration fails when image data are incomplete and that the registration algorithm depends on the shape, assembly, and number of fiducials. To solve the registration problem in situations where a cylindrical end-effector of a surgical robot operates inside the patient's body, we developed a numerical method that applies robot kinematics to cross-sectional medical images. Our method includes a 6-D registration algorithm and a cylindrical frame with four helical fiducials and one straight-line fiducial. The numerical algorithm requires only a single cross-sectional image, is robust to noise and missing data, and is algorithmically invariant to the actual shape, number, and assembly of the fiducials. The algorithm and frame are introduced in this paper, and simulation results are presented to demonstrate adequate accuracy and robustness to noise.
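As a generic building block for the 6-D (rigid) registration step, the sketch below estimates rotation and translation from corresponding 3-D fiducial points with the classic SVD (Kabsch) method; it illustrates rigid point-based registration in general, not the authors' single-slice helix-fiducial algorithm:

```python
# Estimate the rigid transform (R, t) aligning paired 3-D fiducial points.
import numpy as np

def rigid_transform_from_points(P, Q):
    """Find R, t such that R @ P[i] + t ~= Q[i] for paired 3-D points P, Q (N x 3)."""
    Pc, Qc = P.mean(axis=0), Q.mean(axis=0)
    H = (P - Pc).T @ (Q - Qc)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = Qc - R @ Pc
    return R, t
```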


2013 ◽  
Vol 760-762 ◽  
pp. 1552-1555 ◽  
Author(s):  
Jing Jing Wang ◽  
Xiao Wei Song ◽  
Mei Fang

Image segmentation is extensively used in medical image processing and has been applied in different fields of medicine to help doctors make correct judgments and assess a patient's condition. However, no threshold-based segmentation technique can be applied to all medical images, so segmentation remains a challenging problem. In this paper, the edges of tissues and organs are identified to recognize their contours, and a number of seed points are then selected within the contour region to locate the cancerous area by region growing. Finally, the results demonstrate that this method can, in most cases, locate the cancerous area accurately.
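A minimal seeded region-growing sketch illustrating the general technique; selecting seeds on the detected contour follows the paper's idea, but the 4-neighbour connectivity and fixed intensity tolerance used here are assumptions:

```python
# Grow a region from seed pixels by absorbing similar neighbouring pixels.
import numpy as np
from collections import deque

def region_grow(image, seeds, tolerance=10.0):
    """Grow a region from seed pixels, adding 4-neighbours within `tolerance` of the seed mean."""
    image = image.astype(float)
    mask = np.zeros(image.shape, dtype=bool)
    mean_val = np.mean([image[r, c] for r, c in seeds])
    queue = deque(seeds)
    for r, c in seeds:
        mask[r, c] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < image.shape[0] and 0 <= nc < image.shape[1]
                    and not mask[nr, nc]
                    and abs(image[nr, nc] - mean_val) <= tolerance):
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask
```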


2019 ◽  
Vol 9 (2) ◽  
pp. 251-260
Author(s):  
Fakhre Alam ◽  
Sami UR Rahman ◽  
Nasser Tairan ◽  
Habib Shah ◽  
Mohammed Saeed Abohashrh ◽  
...  

Accurate and efficient image registration based on interested common sub-regions is still a challenging task in medical image analysis. This paper presents an automatic feature-based approach for the rigid and deformable registration of medical images using interested common sub-regions. In the proposed approach, interested common sub-regions in two images (a target image and a source image) are automatically detected and locally registered. The final global registration is then performed using the transformation parameters obtained from the local registration. Registration based on interested common sub-regions is often required in image-guided surgery (IGS) and other medical procedures because it considers only the desired objects in the medical images instead of the whole image content. The proposed registration approach is compared with two state-of-the-art methods on MR images of the human brain. In experiments on rigid and deformable registration, our approach outperforms them in terms of both accuracy and time efficiency in monomodal brain image registration. In addition, the proposed approach also shows potential for multimodal images of different human organs.
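A simplified sketch of the local-to-global idea described above: estimate a transform on a common sub-region and then apply it to the whole image. For brevity it estimates only a translation with scikit-image's phase correlation (the paper handles full rigid and deformable transforms), and the sub-region bounds in the usage example are hypothetical:

```python
# Register on a common sub-region, then apply the estimated transform globally.
from skimage.registration import phase_cross_correlation
from scipy.ndimage import shift as nd_shift

def register_via_subregion(target, source, sub_rows, sub_cols):
    """Estimate a shift on the common sub-region and apply it globally to `source`."""
    target_roi = target[sub_rows, sub_cols]       # interested common sub-region in target
    source_roi = source[sub_rows, sub_cols]       # corresponding sub-region in source
    offset, _, _ = phase_cross_correlation(target_roi, source_roi)
    return nd_shift(source, offset)               # apply the local transform globally

# Example usage with hypothetical sub-region bounds:
# registered = register_via_subregion(target_img, source_img,
#                                     slice(40, 120), slice(60, 140))
```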

