Advances in Medical Technologies and Clinical Practice - Biomedical Diagnostics and Clinical Technologies
Latest Publications

Total documents: 11 (five years: 0)
H-index: 2 (five years: 0)
Published by: IGI Global
ISBN: 9781605662800, 9781605662817

Author(s):  
Thomas V. Kilindris ◽  
Kiki Theodorou

Patient anatomy, biochemical response, as well as functional evaluation at the organ level, are key fields that produce a significant amount of multimodal information during medical diagnosis. Visualization, processing, and storage of the acquired data sets are essential tasks in everyday medical practice. In order to perform complex processing that involves or relies on image data, a robust and versatile data structure was used as an extension of the Visualization Toolkit (VTK). The proposed structure serves as a universal registration container for acquired information and post-processed data. The structure is a dynamic multidimensional data holder able to host several modalities and/or metadata, such as fused image sets and extracted features (volumes, surfaces, edges), providing a universal coordinate system used for calculations and geometric processes. A case study of a Treatment Planning System (TPS) for stereotactic radiotherapy (RT) based on the proposed structure is discussed as an efficient medical application.
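
As a rough illustration of such a registration container (not the authors' actual VTK extension; class and method names below are hypothetical), the following Python sketch wraps several vtkImageData volumes and derived metadata in one shared coordinate frame:

```python
# Minimal sketch of a multimodal registration container on top of VTK.
# Illustrative only; this is not the chapter's actual data structure.
import vtk

class MultiModalContainer:
    """Holds co-registered volumes and derived metadata in one
    universal (world) coordinate system."""

    def __init__(self, origin=(0.0, 0.0, 0.0), spacing=(1.0, 1.0, 1.0)):
        self.origin = origin      # universal coordinate system origin
        self.spacing = spacing    # shared voxel spacing (mm)
        self.volumes = {}         # modality name -> vtkImageData
        self.metadata = {}        # e.g. extracted surfaces, edges, fused sets

    def add_modality(self, name, dimensions):
        """Register a new image volume (e.g. 'CT', 'MR') in the
        shared coordinate frame."""
        image = vtk.vtkImageData()
        image.SetDimensions(*dimensions)
        image.SetOrigin(*self.origin)
        image.SetSpacing(*self.spacing)
        image.AllocateScalars(vtk.VTK_SHORT, 1)
        self.volumes[name] = image
        return image

    def add_metadata(self, name, data_object):
        """Attach derived data (e.g. a vtkPolyData surface)."""
        self.metadata[name] = data_object

# Usage: two modalities share the same world coordinates, so geometric
# processing (e.g. dose calculation in a TPS) can address both through
# a single frame of reference.
container = MultiModalContainer(spacing=(0.9, 0.9, 3.0))
container.add_modality("CT", (512, 512, 100))
container.add_modality("MR", (256, 256, 100))
```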



Author(s):  
Xiu Ying Wang ◽  
Dagan Feng

The rapid advances and innovation in medical imaging techniques offer significant improvements in healthcare services, as well as new challenges in medical knowledge discovery from multiple imaging modalities and in their management. In this chapter, biomedical image registration and fusion is introduced as an effective mechanism to assist medical knowledge discovery by integrating and simultaneously representing relevant information from diverse imaging resources. The chapter covers fundamental knowledge and major methodologies of biomedical image registration, as well as major applications of image registration in biomedicine. Further, discussions of research perspectives are presented to inspire novel registration ideas for general clinical practice, to improve the quality and efficiency of healthcare.
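
To make one of the fundamental methodologies concrete, the sketch below performs intensity-based rigid registration with mutual information using SimpleITK, a common open-source toolkit (not necessarily the one used in the chapter); the file names and parameter values are placeholders:

```python
# Minimal intensity-based rigid registration sketch using SimpleITK.
# Illustrative only; file names and parameter values are placeholders.
import SimpleITK as sitk

fixed = sitk.ReadImage("fixed_ct.nii", sitk.sitkFloat32)
moving = sitk.ReadImage("moving_mr.nii", sitk.sitkFloat32)

registration = sitk.ImageRegistrationMethod()
# Mutual information handles multi-modal intensities (e.g. CT vs. MR).
registration.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
registration.SetOptimizerAsRegularStepGradientDescent(
    learningRate=1.0, minStep=1e-4, numberOfIterations=200)
registration.SetInterpolator(sitk.sitkLinear)
registration.SetInitialTransform(
    sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY))

transform = registration.Execute(fixed, moving)

# Resample the moving image into the fixed image's space (the fusion step).
fused = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
sitk.WriteImage(fused, "moving_registered.nii")
```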



Author(s):  
Ana Leiria ◽  
M. M. M. Moura

A broad view of the analysis of Doppler embolic signals is presented, uniting physics, engineering and computing, and clinical aspects. The overview of the field discusses the physiological significance of emboli and Doppler ultrasound, with particular attention given to transcranial Doppler; an outline of high-performance computing is presented, disambiguating the terminology and concepts used thereafter. The presentation of the major diagnostic approaches to Doppler embolic signals focuses on the most significant methods and techniques used to detect and classify embolic events, including their clinical relevance. Coverage of estimators such as time-frequency, time-scale, and displacement-frequency is included. The discussion of current approaches targets areas with an identified need for improvement. A brief historical perspective on high-performance computing of Doppler blood-flow signals, and particularly Doppler embolic signals, is accompanied by the reasoning behind the technological trends and approaches. The final remarks summarize the contribution and, as future trends, point to where new developments might be expected.
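
To give a concrete flavour of the time-frequency estimator family, the sketch below computes a spectrogram of a synthetic Doppler-like signal with SciPy; an embolic event would appear as a short high-intensity transient. The signal and the naive energy detector are purely illustrative, not the chapter's methods:

```python
# Time-frequency (spectrogram) sketch for a Doppler-like signal.
# The signal is synthetic; an embolic event shows up as a short,
# high-intensity transient against the blood-flow background.
import numpy as np
from scipy import signal

fs = 8000                      # sampling rate (Hz), audio-range TCD
t = np.arange(0, 2.0, 1 / fs)

# Background "flow" component plus a brief high-amplitude transient
# (a crude stand-in for an embolic event at t = 1.0 s).
flow = np.sin(2 * np.pi * 600 * t) * (1 + 0.3 * np.sin(2 * np.pi * 1.2 * t))
embolus = 5.0 * np.sin(2 * np.pi * 900 * t) * np.exp(-((t - 1.0) ** 2) / 1e-4)
x = flow + embolus + 0.1 * np.random.randn(t.size)

# Short-time Fourier transform: the window length (nperseg) sets the
# trade-off between time resolution and frequency resolution.
f, tt, Sxx = signal.spectrogram(x, fs=fs, nperseg=256, noverlap=192)

# A naive detector: flag time bins whose total energy exceeds a
# threshold relative to the median background energy.
energy = Sxx.sum(axis=0)
events = tt[energy > 10 * np.median(energy)]
print("candidate embolic events near t =", events)
```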



Author(s):  
Pedro Tomás ◽  
Aleksandar Ilic ◽  
Leonel Sousa

When analyzing the neuronal code, neuroscientists usually perform extracellular recordings of neuronal responses (spikes). Since the microelectrodes used to perform these recordings are much larger than the cells, responses from multiple neurons are recorded by each microelectrode. The obtained response must therefore be classified and evaluated, in order to identify how many neurons were recorded and to assess which neuron generated each spike. A platform for the mass classification of neuronal responses is proposed in this chapter, employing data parallelism to speed up the classification of neuronal responses. The platform is built in a modular way, supporting multiple web interfaces, different back-end environments for parallel computing, and different algorithms for spike classification. Experimental results on the proposed platform show that even for an unbalanced data set of neuronal responses the execution time was reduced by about 45%. For balanced data sets, the platform may achieve a reduction in execution time equal to the inverse of the number of back-end computational elements.
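
A minimal sketch of the data-parallel idea (not the chapter's platform) is shown below: spike waveforms are split into chunks that are reduced and clustered by independent workers; helper names and parameters are hypothetical:

```python
# Data-parallel spike classification sketch (not the chapter's platform).
# Spike waveforms are split into chunks and clustered in parallel;
# cluster labels indicate which putative neuron fired each spike.
import numpy as np
from multiprocessing import Pool
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def classify_chunk(args):
    """Cluster one chunk of spike waveforms into putative neurons."""
    waveforms, n_neurons = args
    features = PCA(n_components=3).fit_transform(waveforms)
    return KMeans(n_clusters=n_neurons, n_init=10).fit_predict(features)

if __name__ == "__main__":
    # Fake recording: 8000 spikes, 32 samples per waveform.
    rng = np.random.default_rng(0)
    spikes = rng.normal(size=(8000, 32))

    # Split the data set across workers (data parallelism).
    n_workers = 4
    chunks = [(chunk, 3) for chunk in np.array_split(spikes, n_workers)]
    with Pool(n_workers) as pool:
        labels = np.concatenate(pool.map(classify_chunk, chunks))
    # NB: per-chunk cluster labels are not aligned across chunks; a real
    # platform would still need to match clusters between chunks.
    print(labels.shape)  # one putative-neuron label per spike
```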



Author(s):  
Constantino Carlos Reyes-Aldasoro ◽  
Abhir Bhalerao

In recent years, the development of new and powerful image acquisition techniques has led to a shift from purely qualitative observation of biomedical images towards a more quantitative examination of the data, which, linked with statistical analysis and mathematical modeling, has provided more interesting and solid results than purely visual monitoring of an experiment. The resolution of imaging equipment has increased considerably, and in many cases the data provided is not just a simple image but a three-dimensional volume. Texture provides interesting information that can characterize anatomical regions or cell populations whose intensities may not be different enough to discriminate between them. This chapter presents a tutorial on volumetric texture analysis. The chapter begins with different definitions of texture, together with a literature review focused on the medical and biological applications of texture. A review of texture extraction techniques follows, with special emphasis on the analysis of volumetric data and examples to visualize the techniques. The chapter ends with a review of the advantages and disadvantages of all techniques, together with some important considerations regarding the classification of the measurement space.
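
As a small taste of such techniques, the sketch below computes grey-level co-occurrence (GLCM) texture features on a volume. Note the hedge in the comments: the 2D measure is applied slice by slice as a simple stand-in for a fully volumetric extension of the kind the tutorial discusses:

```python
# Slice-wise co-occurrence texture features for a volume.
# Illustrative only: a truly volumetric GLCM would also count voxel
# pairs across slices; here 2D GLCMs are averaged over axial slices.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def volume_texture(volume, levels=32):
    """Return mean contrast and homogeneity over all axial slices."""
    # Quantize intensities to a small number of grey levels.
    q = np.digitize(volume, np.linspace(volume.min(), volume.max(), levels))
    q = np.clip(q - 1, 0, levels - 1).astype(np.uint8)
    contrast, homogeneity = [], []
    for z in range(q.shape[0]):
        glcm = graycomatrix(q[z], distances=[1],
                            angles=[0, np.pi / 2],
                            levels=levels, symmetric=True, normed=True)
        contrast.append(graycoprops(glcm, "contrast").mean())
        homogeneity.append(graycoprops(glcm, "homogeneity").mean())
    return np.mean(contrast), np.mean(homogeneity)

# Two synthetic volumes with equal mean intensity but different texture:
rng = np.random.default_rng(1)
smooth = rng.normal(100, 2, size=(20, 64, 64))
noisy = rng.normal(100, 20, size=(20, 64, 64))
print(volume_texture(smooth))  # low contrast, high homogeneity
print(volume_texture(noisy))   # high contrast, low homogeneity
```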



Author(s):  
Olivier Bockenbach ◽  
Michael Knaup ◽  
Sven Steckmann ◽  
Marc Kachelrieß

Commonly used in medical imaging for diagnostic purposes, in luggage scanning, and in industrial non-destructive testing applications, Computed Tomography (CT) is an imaging technique that provides cross sections of an object from measurements taken at different angular positions around the object. CT image reconstruction, referred to simply as Image Reconstruction (IR), is known to be a very compute-intensive problem. In its simplest form, the computational load is O(M × N³), where M represents the number of measurements taken around the object and N is the dimension of the object. Furthermore, research institutes report that the increase in processing power required by CT is consistently above Moore's law. At the same time, the changing workflow in hospitals requires obtaining CT images faster, with better quality, from a lower dose; in some cases, real time is needed. High-Performance Image Reconstruction (HPIR) has to be used to meet the performance requirements of modern CT reconstruction algorithms in hospitals. Traditionally, this problem was solved by designing dedicated hardware. Nowadays, the evolution of technology makes it possible to use Commercial Off-The-Shelf (COTS) components. Typical HPIR platforms can be built around multicore processors such as the Cell Broadband Engine (CBE), General-Purpose Graphics Processing Units (GPGPU), or Field-Programmable Gate Arrays (FPGA). These platforms exhibit different levels of the parallelism required to implement CT reconstruction algorithms. They also differ in the way the computation can be carried out, potentially requiring drastic changes in the way an algorithm is implemented. Furthermore, because of their COTS nature, it is not always easy to take best advantage of a given platform, and compromises have to be made. Finally, a fully fledged reconstruction platform also includes the data acquisition interface as well as the visualization of the reconstructed slices. These parts are the areas of excellence of FPGAs and GPGPUs. However, more often than not, the processing power available in those units exceeds the requirements of a given pipeline, and the remaining real estate and processing power can be used for the core of the reconstruction pipeline. Indeed, several design options can be considered for a given algorithm, each with yet another set of compromises.
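
To give a feel for where the O(M × N³) load comes from, the toy sketch below implements a naive pixel-driven backprojection in 2D, which already costs O(M × N²); volumetric CT adds another factor of N. This is an unfiltered, purely illustrative loop, not a production reconstruction:

```python
# Naive pixel-driven backprojection (2D toy, unfiltered).
# Cost is O(M * N^2): each of M projections touches all N^2 pixels;
# volumetric CT adds another factor of N, hence O(M * N^3).
import numpy as np

def backproject(sinogram, angles, n):
    """sinogram: (M, n_detectors); angles in radians; n: image size."""
    image = np.zeros((n, n))
    center = n / 2.0
    ys, xs = np.mgrid[0:n, 0:n]
    xs, ys = xs - center, ys - center
    n_det = sinogram.shape[1]
    for proj, theta in zip(sinogram, angles):
        # Detector coordinate of each pixel for this view.
        s = xs * np.cos(theta) + ys * np.sin(theta) + n_det / 2.0
        idx = np.clip(np.round(s).astype(int), 0, n_det - 1)
        image += proj[idx]        # accumulation is the hot spot
    return image * np.pi / len(angles)

# Tiny example: M = 180 views of a 128 x 128 image.
n, m = 128, 180
angles = np.linspace(0, np.pi, m, endpoint=False)
sinogram = np.ones((m, n))        # flat dummy projections
recon = backproject(sinogram, angles, n)
print(recon.shape)
```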



Author(s):  
T. Heida ◽  
R. Moroney ◽  
E. Marani

Deep Brain Stimulation (DBS) is effective in the Parkinsonian state, yet it seems to produce rather non-selective stimulation over an unknown volume of tissue. Despite a huge amount of anatomical and physiological data regarding the structure of the basal ganglia (BG) and their connections, the computational processes performed by the basal ganglia in health and disease still remain unclear. The hypothesized roles of the BG are discussed in this chapter, as well as the changes observed under pathophysiological conditions. Several hypotheses exist to explain the mechanism by which DBS provides its beneficial effects. Computational models of the BG span a range of structural levels, from low-level membrane conductance-based models of single neurons to high-level system models of the complete BG circuit. A selection of models is presented in this chapter. The chapter aims at explaining how models of neurons and connected brain nuclei contribute to the understanding of DBS.
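
At the low end of the modelling spectrum described here, a neuron reduces to a membrane equation. The sketch below simulates a leaky integrate-and-fire neuron entrained by a 130 Hz DBS-like pulse train; it is a toy stand-in for the conductance-based models the chapter covers, with illustrative parameters (the pulse width in particular is exaggerated relative to clinical values):

```python
# Leaky integrate-and-fire neuron driven by a DBS-like pulse train.
# A toy stand-in for conductance-based single-neuron models; all
# parameter values are illustrative only.
import numpy as np

dt = 0.1e-3            # time step (s)
T = 0.5                # total simulated time (s)
tau_m = 10e-3          # membrane time constant (s)
R_m = 10e6             # membrane resistance (ohm)
v_rest, v_thresh, v_reset = -70e-3, -54e-3, -80e-3

# 130 Hz stimulation (a typical clinical DBS frequency) as current
# pulses; 1 ms width is exaggerated vs. clinical ~60-90 us.
t = np.arange(0, T, dt)
stim_period = 1.0 / 130.0
I = np.where((t % stim_period) < 1e-3, 3e-8, 0.0)   # amps

v = np.full(t.shape, v_rest)
spikes = []
for i in range(1, t.size):
    # dV/dt = (-(V - V_rest) + R_m * I) / tau_m  (Euler integration)
    dv = (-(v[i-1] - v_rest) + R_m * I[i-1]) / tau_m
    v[i] = v[i-1] + dv * dt
    if v[i] >= v_thresh:
        spikes.append(t[i])
        v[i] = v_reset

print(f"{len(spikes)} spikes in {T} s "
      f"(~{len(spikes)/T:.0f} Hz firing, entrained to the stimulus)")
```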



Author(s):  
Przemyslaw Lenkiewicz ◽  
Manuela Pereira ◽  
Mário M. Freire ◽  
José Fernandes

This chapter contains a survey of the most popular techniques for medical image segmentation that have been gaining the attention of researchers and medical practitioners from the early 1980s to the present. The methods are presented in chronological order, along with their most important features, examples of the results they can deliver, and examples of application. They are grouped into three generations, each representing a significant evolution relative to the previous one in terms of the novelty of the algorithms and the obtainable results. The survey helps to understand the main ideas behind the respective segmentation methods and how they were limited by the available technology. In the following part of the chapter, several promising recent methods are evaluated and compared on the basis of a selection of important features. Together with the survey in the first section, this shows which directions researchers are currently taking and which of them have the potential to be successful.
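
As a pocket illustration of a first-generation technique of the kind such surveys open with (not any specific method from this chapter), the sketch below segments a synthetic image by Otsu's automatic threshold plus morphological clean-up; its simplicity and its lack of any spatial model show both the appeal and the limits of early methods:

```python
# First-generation segmentation sketch: Otsu thresholding plus
# morphological clean-up. Fast and simple, but purely intensity
# based -- the limitation later generations address.
import numpy as np
from skimage import filters, morphology, measure

rng = np.random.default_rng(2)
# Synthetic "image": a bright blob on a noisy background.
image = rng.normal(0.2, 0.05, size=(128, 128))
yy, xx = np.mgrid[0:128, 0:128]
image[(yy - 64) ** 2 + (xx - 64) ** 2 < 30 ** 2] += 0.5

threshold = filters.threshold_otsu(image)   # automatic global threshold
mask = image > threshold
mask = morphology.remove_small_objects(mask, min_size=64)
mask = morphology.binary_closing(mask, morphology.disk(3))

labels = measure.label(mask)
print("regions found:", labels.max())
```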



Author(s):  
Filipe Soares ◽  
Mário M. Freire ◽  
Manuela Pereira ◽  
Filipe Janela ◽  
João Seabra

The improvement of Computer-Aided Detection (CAD) systems has reached the point where they offer extremely valuable information to the clinician for the detection and classification of abnormalities at the earliest possible stage. This chapter covers the rapidly growing development of self-similarity models that can be applied to problems of fundamental significance, such as breast cancer detection through digital mammography. The main premise of this work is that human tissue is characterized by a high degree of self-similarity, and that this property has been found in medical images of breasts through a qualitative appreciation of their self-similar nature, by analyzing their fluctuations at different resolutions. There is no need for image pattern comparison in order to recognize the presence of cancer features: one just has to compare the self-similarity factor of the detected features, which can serve as a new attribute for classification. In this chapter, the most widely used methods for self-similarity analysis and image segmentation are presented and explained. The self-similarity measure can be an excellent aid in evaluating cancer features, giving an indication that supports the radiologist's diagnosis.
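
One common way to quantify such a self-similarity factor is the box-counting fractal dimension; the sketch below estimates it for a binary image region. This is a generic estimator offered for intuition, not necessarily the chapter's exact measure:

```python
# Box-counting estimate of fractal dimension for a binary image:
# a generic way to quantify self-similarity (not necessarily the
# chapter's exact measure).
import numpy as np

def box_count(mask, size):
    """Count size x size boxes containing at least one foreground pixel."""
    n = mask.shape[0] // size
    trimmed = mask[:n * size, :n * size]
    blocks = trimmed.reshape(n, size, n, size).any(axis=(1, 3))
    return blocks.sum()

def fractal_dimension(mask):
    sizes = [2, 4, 8, 16, 32]
    counts = [box_count(mask, s) for s in sizes]
    # Slope of log(count) vs. log(1/size) estimates the dimension.
    coeffs = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return coeffs[0]

# Sanity check on a filled square: dimension should be close to 2.
mask = np.zeros((256, 256), dtype=bool)
mask[64:192, 64:192] = True
print(f"estimated dimension: {fractal_dimension(mask):.2f}")
```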



Author(s):  
Frédéric Payan ◽  
Marc Antonini

The modelling of three-dimensional (3D) objects with triangular meshes is of major interest for medical imaging. Indeed, the visualization and handling of 3D representations of biological objects (such as organs) are very helpful for clinical diagnosis, telemedicine applications, and clinical research in general. Today, the increasing resolution of imaging equipment leads to densely sampled triangular meshes, and the resulting data are consequently huge. In this chapter, we present one specific lossy compression algorithm for such meshes that could be used in medical imaging. In line with several state-of-the-art techniques, this scheme is based on wavelet filtering and an original bit allocation process that optimizes the quantization of the data. This allocation process is the core of the algorithm, because it always gives the users the optimal trade-off between the quality of the compressed mesh and the compression ratio, whatever the user-given bitrate. At the end of the chapter, experimental results are discussed and compared with other approaches.
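
The flavour of wavelet coding with bit allocation can be shown on a 1-D signal: decompose, then choose a quantization step per subband. The allocation rule below is a crude energy-based heuristic standing in for the authors' optimal model-based allocation, which solves a rate-distortion problem:

```python
# Wavelet coding with a crude per-subband bit allocation (1-D toy).
# The chapter's algorithm optimizes the allocation for meshes; this
# heuristic merely gives finer quantization to higher-energy subbands.
import numpy as np
import pywt

rng = np.random.default_rng(3)
signal = np.cumsum(rng.normal(size=1024))     # smooth-ish test signal

# Analysis: multi-level wavelet decomposition.
coeffs = pywt.wavedec(signal, "db4", level=4)

# Heuristic allocation: smaller quantization step (finer precision)
# for subbands with higher RMS energy. A real allocator would solve
# a rate-distortion problem under a user-given bitrate.
quantized = []
for band in coeffs:
    rms = np.sqrt(np.mean(band ** 2)) + 1e-12
    step = max(0.01, 0.5 / rms)
    quantized.append(np.round(band / step) * step)   # uniform quantizer

# Synthesis: reconstruct from the quantized coefficients.
reconstructed = pywt.waverec(quantized, "db4")[: signal.size]
err = np.sqrt(np.mean((signal - reconstructed) ** 2))
print(f"RMS reconstruction error: {err:.4f}")
```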


