Deep Semi-Supervised Algorithm for Learning Cluster-Oriented Representations of Medical Images Using Partially Observable DICOM Tags and Images

Diagnostics ◽  
2021 ◽  
Vol 11 (10) ◽  
pp. 1920
Author(s):  
Teo Manojlović ◽  
Ivan Štajduhar

The task of automatically extracting large homogeneous datasets of medical images based on detailed criteria and/or semantic similarity can be challenging because the acquisition and storage of medical images in clinical practice is not fully standardised and can be prone to errors, which are often made unintentionally by medical professionals during manual input. In this paper, we propose an algorithm for learning cluster-oriented representations of medical images by fusing images with partially observable DICOM tags. Pairwise relations are modelled by thresholding the Gower distance measure, which is calculated using eight DICOM tags. We trained the models using 30,000 images and tested them using a disjoint test set of 8,000 images, gathered retrospectively from the PACS repository of the Clinical Hospital Centre Rijeka in 2017. We compare our method against standard and deep unsupervised clustering algorithms, as well as popular semi-supervised algorithms combined with the most commonly used feature descriptors. Our model achieves an NMI score of 0.584 with respect to the anatomic region and an NMI score of 0.793 with respect to the modality. The results suggest that DICOM data can be used to generate pairwise constraints that help improve medical image clustering, even when only a small number of constraints is used.
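
As a rough illustration of the constraint-generation step described above, the sketch below computes a Gower distance over a handful of mixed-type DICOM tags and thresholds it into must-link/cannot-link pairs. The tag names, the handling of missing values, and the 0.3 threshold are assumptions for illustration only; the paper uses eight tags and its own threshold, which the abstract does not specify.

```python
import numpy as np
import pandas as pd

# Illustrative DICOM tags; the actual eight tags used in the paper are not
# listed in the abstract, so these names and values are assumptions.
tags = pd.DataFrame({
    "Modality":         ["CT", "CT", "MR", "CR"],              # categorical
    "BodyPartExamined": ["CHEST", "CHEST", "HEAD", None],      # categorical, partially observed
    "SliceThickness":   [5.0, 2.5, np.nan, np.nan],            # numeric, partially observed
})

def gower_distance(df: pd.DataFrame) -> np.ndarray:
    """Pairwise Gower distance over mixed-type columns, ignoring missing values."""
    n = len(df)
    dist_sum = np.zeros((n, n))
    weight_sum = np.zeros((n, n))
    for col in df.columns:
        x = df[col]
        valid = x.notna().to_numpy()
        pair_valid = np.outer(valid, valid)
        if pd.api.types.is_numeric_dtype(x):
            vals = x.to_numpy(dtype=float)
            rng = np.nanmax(vals) - np.nanmin(vals)
            d = np.abs(vals[:, None] - vals[None, :]) / rng if rng > 0 else np.zeros((n, n))
            d = np.nan_to_num(d)
        else:
            vals = x.to_numpy()
            d = (vals[:, None] != vals[None, :]).astype(float)
        dist_sum += np.where(pair_valid, d, 0.0)
        weight_sum += pair_valid
    return dist_sum / np.maximum(weight_sum, 1)

D = gower_distance(tags)

# Threshold the distance matrix into must-link / cannot-link constraints
# (the 0.3 threshold is purely illustrative).
THRESHOLD = 0.3
must_link = np.argwhere((D <= THRESHOLD) & ~np.eye(len(tags), dtype=bool))
cannot_link = np.argwhere(D > THRESHOLD)
print(must_link, cannot_link, sep="\n")
```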

2016 ◽  
Vol 22 (2) ◽  
pp. 44-56 ◽  
Author(s):  
Jan-Vidar Ølberg ◽  
Morten Goodwin

Abstract Teeth are some of the most resilient tissues of the human body. Because of their placement, teeth often yield intact indicators even when other evidence, such as fingerprints and DNA, is missing. Forensic dental identification is currently mostly manual work, which is time- and resource-intensive. Systems for automated human identification from dental X-ray images have the potential to greatly reduce the effort spent on dental identification, but they require high stability and accuracy so that the results can be trusted. This paper proposes a new system for automated dental X-ray identification. The scheme extracts tooth and dental-work contours from the X-ray images and uses the Hausdorff distance measure for ranking persons. This combination of state-of-the-art approaches with a novel lowest-cost-path method for separating a dental X-ray image into individual teeth achieves results comparable to or better than those available in the literature. The proposed scheme is fully functional and is used to accurately identify people within a real dental database. The system perfectly separates 88.7% of the teeth in the test set. Further, in the verification process, the system ranks the correct person first in 86% of the cases, and among the top five in an astonishing 94% of the cases. The approach has compelling potential to significantly reduce the time spent on dental identification.
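
The ranking step can be illustrated with the symmetric Hausdorff distance between contour point sets, e.g. via SciPy. This is only a minimal sketch: the actual system extracts tooth and dental-work contours with a lowest-cost-path separation and a full matching pipeline, none of which is reproduced here; the contours and database below are toy stand-ins.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(contour_a: np.ndarray, contour_b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between two (N, 2) contour point sets."""
    return max(directed_hausdorff(contour_a, contour_b)[0],
               directed_hausdorff(contour_b, contour_a)[0])

# Toy contours standing in for tooth/dental-work outlines extracted from X-rays.
query_tooth = np.array([[0, 0], [1, 0], [1, 2], [0, 2]], dtype=float)
database = {
    "person_A": np.array([[0, 0], [1, 0], [1, 2.1], [0, 2.1]], dtype=float),
    "person_B": np.array([[0, 0], [3, 0], [3, 1], [0, 1]], dtype=float),
}

# Rank candidate persons by increasing contour distance.
ranking = sorted(database, key=lambda pid: hausdorff(query_tooth, database[pid]))
print(ranking)  # person_A is the closer match here
```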


Author(s):  
Parul Agarwal ◽  
Shikha Mehta

Subspace clustering approaches cluster high-dimensional data in different subspaces, i.e., they group the data using different relevant subsets of dimensions. This technique has become very effective because distance measures become ineffective in high-dimensional spaces. This chapter presents a novel evolutionary bottom-up subspace clustering approach, SUBSPACE_DE, which is scalable to high-dimensional data. SUBSPACE_DE uses a self-adaptive DBSCAN algorithm to cluster the data instances of each attribute and of the maximal subspaces. The self-adaptive DBSCAN algorithm receives its parameters from a differential evolution algorithm. The proposed SUBSPACE_DE algorithm is tested on 14 datasets, both real and synthetic, and compared with 11 existing subspace clustering algorithms. Evaluation metrics such as F1_Measure and accuracy are used. In a success-rate-ratio ranking, the proposed algorithm performs considerably better in both accuracy and F1_Measure. SUBSPACE_DE also shows potential scalability on high-dimensional datasets.
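
The coupling of differential evolution with a parameter-adaptive DBSCAN can be sketched as follows, with DE searching over (eps, min_samples) against a clustering-quality objective. The silhouette objective, the parameter bounds, and the single full-space run are assumptions chosen for illustration; the actual SUBSPACE_DE operates per attribute and over maximal subspaces.

```python
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=300, centers=4, random_state=0)

def objective(params: np.ndarray) -> float:
    """Negative silhouette of the DBSCAN labelling induced by (eps, min_samples)."""
    eps, min_samples = params[0], int(round(params[1]))
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X)
    # Penalise degenerate labellings (everything noise or a single cluster).
    if len(set(labels)) < 2:
        return 1.0
    return -silhouette_score(X, labels)

# Keep the evolutionary search short for this sketch.
result = differential_evolution(objective, bounds=[(0.1, 3.0), (2, 20)], maxiter=20, seed=0)
eps_opt, min_samples_opt = result.x[0], int(round(result.x[1]))
print(eps_opt, min_samples_opt)
```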


Author(s):  
Mohammad Saleh Nambakhsh ◽  
M. Shiva

Exchange of databases between hospitals needs efficient and reliable transmission and storage techniques to cut down the cost of health care. This exchange involves a large amount of vital patient information, such as biosignals and medical images. Interleaving one form of data, such as a 1-D signal, within digital images can combine the advantages of data security with efficient memory utilization (Norris, Englehart & Lovely, 2001), but nothing prevents the user from manipulating or copying the decrypted data for illegal uses. Embedding vital information of patients inside their scan images will help physicians make a better diagnosis of a disease. In order to address these issues, watermarking algorithms have been proposed as a way to complement encryption processes and provide tools to track the retransmission and manipulation of multimedia contents (Barni, Podilchuk, Bartolini & Delp, 2001; Vallabha, 2003). A watermarking system is based on an imperceptible insertion of a watermark (a signal) in an image. This technique is adapted here for interleaving graphical ECG signals within medical images to reduce storage and transmission overheads as well as to support computer-aided diagnostic systems. In this chapter, we present a new wavelet-based watermarking method combined with the EZW coder. The principle is to replace significant wavelet coefficients of ECG signals by the corresponding significant wavelet coefficients belonging to the host image, which is much bigger in size than the mark signal. This chapter presents a brief introduction to watermarking and the EZW coder that acts as a platform for our watermarking algorithm.
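
A heavily simplified wavelet-domain embedding in the spirit of the described method is sketched below using PyWavelets: the ECG's largest-magnitude wavelet coefficients are written into the smallest-magnitude detail coefficients of the host image. The magnitude ordering is only a stand-in for the EZW significance ordering, and the scaling factor, wavelet, and decomposition level are illustrative assumptions rather than the chapter's algorithm.

```python
import numpy as np
import pywt

# Toy host image and ECG signal; real data would come from DICOM / signal files.
image = np.random.rand(256, 256)
ecg = np.sin(np.linspace(0, 20 * np.pi, 1024))

# Wavelet decomposition of both cover (image) and mark (ECG).
img_coeffs = pywt.wavedec2(image, "db4", level=3)
ecg_coeffs = np.concatenate(pywt.wavedec(ecg, "db4", level=3))

# Keep only the ECG's most significant coefficients as the mark.
k = 256
mark = ecg_coeffs[np.argsort(np.abs(ecg_coeffs))[-k:]]

# Overwrite the k smallest-magnitude coefficients of the finest detail band
# with the mark (a crude stand-in for the EZW significance ordering).
cH, cV, cD = img_coeffs[-1]
flat = cD.flatten()
slots = np.argsort(np.abs(flat))[:k]
flat[slots] = mark * 0.01          # scale to keep the embedding imperceptible
img_coeffs[-1] = (cH, cV, flat.reshape(cD.shape))

watermarked = pywt.waverec2(img_coeffs, "db4")[:256, :256]
print(np.max(np.abs(watermarked - image)))  # distortion introduced by embedding
```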


Sensors ◽  
2020 ◽  
Vol 20 (10) ◽  
pp. 2962 ◽  
Author(s):  
Santiago González Izard ◽  
Ramiro Sánchez Torres ◽  
Óscar Alonso Plaza ◽  
Juan Antonio Juanes Méndez ◽  
Francisco José García-Peñalvo

The visualization of medical images with advanced techniques, such as augmented reality and virtual reality, represents a breakthrough for medical professionals. In contrast to more traditional visualization tools lacking 3D capabilities, these systems use all three available dimensions. To visualize medical images in 3D, the anatomical areas of interest must be segmented. Currently, manual segmentation (the most commonly used technique) and semi-automatic approaches can be time-consuming because a doctor is required, making per-case segmentation unfeasible. Using new technologies, such as computer vision and artificial intelligence for segmentation algorithms, and augmented and virtual reality for visualization, we designed a complete platform to solve this problem and allow medical professionals to work more frequently with anatomical 3D models obtained from medical imaging. As a result, the Nextmed project, through its different software applications, permits the importation of Digital Imaging and Communications in Medicine (DICOM) images on a secure cloud platform and the automatic segmentation of certain anatomical structures with new algorithms that improve upon current research results. A 3D mesh of the segmented structure is then automatically generated that can be printed in 3D or visualized using both augmented and virtual reality with the designed software systems. The Nextmed project is unique, as it covers the whole process from uploading DICOM images to automatic segmentation, 3D reconstruction, 3D visualization, and manipulation using augmented and virtual reality. There is much research on applying augmented and virtual reality to 3D medical image visualization; however, these are not automated platforms. Although other anatomical structures can be studied, we focused on one case: a lung study. Analyzing the application of the platform to more than 1000 DICOM images and studying the results with medical specialists, we concluded that the installation of this system in hospitals would provide a considerable improvement as a tool for medical image visualization.
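
The core DICOM-to-mesh pipeline that such a platform automates can be sketched in a few lines. The synthetic volume, the crude Hounsfield-unit threshold standing in for the project's trained segmentation models, and the marching-cubes mesh extraction below are illustrative assumptions, not the Nextmed implementation.

```python
import numpy as np
from skimage import measure

# In practice the volume would be assembled from a DICOM series, e.g. with
# pydicom.dcmread over the sorted slices; here a synthetic CT-like volume with
# an air-filled sphere stands in for a chest scan (values are illustrative HU).
z, y, x = np.mgrid[-32:32, -32:32, -32:32]
volume = np.full((64, 64, 64), 40.0)                  # soft tissue
volume[x**2 + y**2 + z**2 < 20**2] = -800.0           # air-filled "lung" region

# Crude Hounsfield-unit threshold as a stand-in for a trained segmentation model.
lung_mask = (volume > -950) & (volume < -300)

# Extract a triangle mesh of the segmented structure for AR/VR viewing or 3D printing.
verts, faces, normals, values = measure.marching_cubes(lung_mask.astype(float), level=0.5)
print(verts.shape, faces.shape)
```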


2018 ◽  
Vol 27 (2) ◽  
pp. 163-182 ◽  
Author(s):  
Ilanthenral Kandasamy

Abstract Neutrosophy (neutrosophic logic) is used to represent uncertain, indeterminate, and inconsistent information available in the real world. This article proposes a method to provide more sensitivity and precision to indeterminacy, by classifying the indeterminate concept/value into two based on membership: one as indeterminacy leaning towards truth membership and the other as indeterminacy leaning towards false membership. This paper introduces a modified form of a neutrosophic set, called the Double-Valued Neutrosophic Set (DVNS), which has these two distinct indeterminate values. Its related properties and axioms are defined and illustrated in this paper. Clustering plays an important role in several fields of research, such as data mining, pattern recognition, and machine learning. DVNS is better equipped to deal with indeterminate and inconsistent information, and with more accuracy, than the Single-Valued Neutrosophic Set; fuzzy sets and intuitionistic fuzzy sets are incapable of handling such information at all. A generalised distance measure between DVNSs and the related distance matrix are defined, based on which a clustering algorithm is constructed. This article proposes a Double-Valued Neutrosophic Minimum Spanning Tree (DVN-MST) clustering algorithm to cluster data represented by double-valued neutrosophic information. Illustrative examples are given to demonstrate the applications and effectiveness of this clustering algorithm. A comparative study of the DVN-MST clustering algorithm with other clustering algorithms, such as the Single-Valued Neutrosophic Minimum Spanning Tree, Intuitionistic Fuzzy Minimum Spanning Tree, and Fuzzy Minimum Spanning Tree, is carried out.
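
A minimal sketch of MST-based clustering from a DVNS distance matrix is given below. The normalised Hamming-style distance over the four memberships (T, I_T, I_F, F) is an assumed form, not necessarily the generalised distance defined in the article, and cutting a single longest MST edge is just one way to obtain clusters.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

# Each object is a DVNS singleton (T, I_T, I_F, F); the values are illustrative.
objects = np.array([
    [0.9, 0.1, 0.1, 0.1],
    [0.8, 0.2, 0.1, 0.2],
    [0.2, 0.1, 0.3, 0.9],
    [0.1, 0.2, 0.2, 0.8],
])

def dvns_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Normalised Hamming-style distance over the four memberships (assumed form)."""
    return float(np.sum(np.abs(a - b)) / 4.0)

n = len(objects)
D = np.array([[dvns_distance(objects[i], objects[j]) for j in range(n)] for i in range(n)])

# Build the minimum spanning tree over the distance matrix and cut its longest
# edge to split the objects into two clusters (the number of cuts is a user choice).
mst = minimum_spanning_tree(D).toarray()
longest = np.unravel_index(np.argmax(mst), mst.shape)
mst[longest] = 0.0
_, labels = connected_components(mst, directed=False)
print(labels)   # e.g. [0 0 1 1]
```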


2013 ◽  
Vol 444-445 ◽  
pp. 676-680
Author(s):  
Li Guo ◽  
Guo Feng Liu ◽  
Yu E Bao

In multiple-attribute clustering with uncertain interval numbers, most distances between interval-valued vectors consider only the differences between interval endpoints, ignoring a lot of information. To solve this problem, based on the differences between corresponding points within each interval, this paper gives a distance formula between interval-valued vectors and extends the FCM clustering algorithm to interval-valued multiple-attribute information. An example demonstrates the validity and rationality of the algorithm.
Keywords: interval-valued vector; FCM clustering algorithm; distance measure; fuzzy partition
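
One common way to realise a distance that accounts for corresponding points inside the intervals, rather than endpoints alone, is to integrate the squared difference between linearly parameterised points of each interval pair; the closed form used below is an assumption and may differ from the formula given in the paper.

```python
import numpy as np

def interval_vector_distance(A: np.ndarray, B: np.ndarray) -> float:
    """
    Distance between two interval-valued vectors of shape (n_attributes, 2),
    where each row is [lower, upper].  Each component integrates the squared
    difference between corresponding points a(t) and b(t), t in [0, 1]
    (one common formulation; the paper's exact formula may differ):
        d_k^2 = c^2 + c*w + w^2 / 3,  c = a_l - b_l,  w = (a_u - a_l) - (b_u - b_l)
    """
    c = A[:, 0] - B[:, 0]                           # lower-endpoint differences
    w = (A[:, 1] - A[:, 0]) - (B[:, 1] - B[:, 0])   # width differences
    per_attr = c**2 + c * w + w**2 / 3.0
    return float(np.sqrt(np.sum(per_attr)))

# Two objects described by three interval-valued attributes.
A = np.array([[1.0, 2.0], [0.0, 0.5], [3.0, 4.0]])
B = np.array([[1.5, 2.5], [0.0, 1.0], [2.0, 4.0]])
print(interval_vector_distance(A, B))
```

Replacing the squared Euclidean distance in the FCM membership and prototype updates with such an interval distance is the usual way to extend FCM to interval-valued data.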


2018 ◽  
Author(s):  
Ralf Loritz ◽  
Hoshin Gupta ◽  
Conrad Jackisch ◽  
Martijn Westhoff ◽  
Axel Kleidon ◽  
...  

Abstract. The increasing diversity and resolution of spatially distributed data on terrestrial systems greatly enhances the potential of hydrological modeling. Optimal and parsimonious use of these data sources implies, however, that we better understand (a) which system characteristics exert primary controls on hydrological dynamics and (b) to what level of detail those characteristics need to be represented in a model. In this study we develop and test an approach to explore these questions that draws upon information-theoretic and thermodynamic reasoning, using spatially distributed topographic information as a straightforward example. Specifically, we subdivide a meso-scale catchment into 105 hillslopes and represent each by a two-dimensional numerical hillslope model. These hillslope models differ exclusively with respect to topography-related parameters derived from a digital elevation model; the remaining setup and meteorological forcing are identical for each. We analyze the degree of similarity of simulated discharge and storage among the hillslopes as a function of time by examining the Shannon information entropy. We furthermore derive a compressed catchment model by clustering the hillslope models into functional groups of similar runoff generation, using normalized mutual information as a distance measure. Our results reveal that, within our given model environment, only a portion of the topographic information stored within a digital elevation model is relevant for the simulation of distributed runoff and storage dynamics. This manifests through a possible compression of the model ensemble from the entire set of 105 hillslopes to only 6 hillslopes, each representing a different functional group, which leads to no substantial loss in model performance. Importantly, we find that the concept of hydrological similarity is not necessarily time-invariant. On the contrary, the Shannon entropy as a measure of diversity in the simulation ensemble shows a distinct annual pattern, with periods of highly redundant simulations, reflecting coherent and organized dynamics, and periods where hillslopes operate in distinctly different ways. We conclude that the proposed approach provides a powerful framework for understanding and diagnosing how and when process organization and functional similarity of hydrological systems emerge in time. Our approach is not restricted to the model, the model targets, or the data source selected in this study. Overall, we propose that the concepts of hydrological systems acting similarly (and thus giving rise to redundancy) or displaying unique functionality (and thus being irreplaceable) are not mutually exclusive. They are in fact complementary, and systems operate by gradually changing to different levels of organization in time.
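
The grouping step based on normalized mutual information can be illustrated independently of the hillslope models: given simulated discharge series, 1 − NMI between value-binned series serves as a distance, which is then clustered hierarchically into functional groups. The toy series, the binning, and the choice of two groups below are assumptions for illustration, not the catchment setup of the study.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform
from sklearn.metrics import normalized_mutual_info_score

rng = np.random.default_rng(0)
# Toy discharge time series for 8 hillslopes (stand-ins for the 105 model outputs).
t = np.linspace(0, 4 * np.pi, 500)
series = np.stack([np.sin(t + 0.1 * i) + 0.05 * rng.standard_normal(t.size) for i in range(4)] +
                  [np.cos(2 * t) + 0.05 * rng.standard_normal(t.size) for _ in range(4)])

# Discretise each series so mutual information can be estimated from value bins.
binned = np.stack([np.digitize(s, np.histogram_bin_edges(s, bins=20)) for s in series])

n = len(binned)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        # 1 - NMI as a distance: identical dynamics -> 0, unrelated dynamics -> 1.
        D[i, j] = D[j, i] = 1.0 - normalized_mutual_info_score(binned[i], binned[j])

# Group hillslopes into functional groups by hierarchical clustering of the NMI distances.
labels = fcluster(linkage(squareform(D), method="average"), t=2, criterion="maxclust")
print(labels)   # e.g. [1 1 1 1 2 2 2 2]
```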


2020 ◽  
Vol 5 (5) ◽  

Anatomical neck triangles are imaginary to some extent, yet their significance to many surgical specialties is invaluable. Among the basic medical science subjects, anatomy is the most prone to being forgotten, and no other subject has accumulated as many mnemonics. Junior-year students of medical schools need to memorize anatomy with little or no knowledge of its clinical applications, which can be quite cumbersome compared with those already in surgical residency training, where anatomical knowledge is applied daily. Surgeons who specialize or work exclusively in a selected anatomic region become experts and well known in their field and in that particular operation, largely because they subconsciously become oriented to that region's anatomy. However, those who work on various anatomical areas frequently need to refresh their anatomical knowledge. Mnemonics are therefore helpful for medical professionals at various levels. The neck represents a relatively limited transition zone or passage of various tissue structures, besides great vessels and nerves, between the head, chest, and upper extremities, very much like a three-way connector. Without the concept of neck triangles, it would be very difficult to discuss or communicate about neck-related procedures. The idea of likening the neck triangles to a bird-like creature has long been considered and used by the author. Here we describe and share this imaginary mnemonic to help with recalling and drawing those triangles. An analogy of a flying bat is used.


2018 ◽  
Vol 8 (1) ◽  
pp. 154-172 ◽  
Author(s):  
O. Dorgham ◽  
Banan Al-Rahamneh ◽  
Ammar Almomani ◽  
Moh'd Al-Hadidi ◽  
Khalaf F. Khatatneh

Medical image information can be exchanged remotely through cloud-based medical imaging services. Digital Imaging and Communications in Medicine (DICOM) is considered the most commonly used medical image format among hospitals. The objective of this article is to enhance the secure transfer and storage of medical images on the cloud by using hybrid encryption algorithms, i.e., combinations of symmetric and asymmetric encryption algorithms that make the encryption process faster and more secure. To this end, three different algorithms are chosen to build the framework. These algorithms are simple and suitable for hardware or software implementation because they require little memory and little computational power yet provide high security. Security is further increased by using a digital signature technique. The results of the analyses showed that, for a DICOM file of size 12.5 MB, the complete process required 2.957 minutes: 1.898 minutes for encryption and 1.059 minutes for decryption.
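
The abstract does not name the three algorithms, so the sketch below illustrates the general hybrid pattern with assumed choices: AES-GCM for the bulky DICOM payload, RSA-OAEP to wrap the AES key, and an RSA-PSS signature for integrity and authenticity (using the Python cryptography library).

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# The concrete algorithms below are assumptions chosen to illustrate the hybrid pattern.
dicom_bytes = b"\x00" * 1024            # stand-in for a DICOM file's raw bytes

sender_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
receiver_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# 1. Symmetric encryption of the bulky image data (fast).
aes_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(aes_key).encrypt(nonce, dicom_bytes, None)

# 2. Asymmetric encryption of the small AES key (secure key exchange).
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None)
wrapped_key = receiver_key.public_key().encrypt(aes_key, oaep)

# 3. Digital signature over the ciphertext for integrity and authenticity.
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)
signature = sender_key.sign(ciphertext, pss, hashes.SHA256())

# Receiver side: verify the signature, unwrap the key, decrypt the payload.
sender_key.public_key().verify(signature, ciphertext, pss, hashes.SHA256())
recovered = AESGCM(receiver_key.decrypt(wrapped_key, oaep)).decrypt(nonce, ciphertext, None)
assert recovered == dicom_bytes
```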

