MRI Image Reconstruction using Compressive Sensing

2019
Vol 8 (2)
pp. 5256-5260

Hospital imaging departments generate a large number of diagnostic images, including MRIs, for medical and legal reasons. The result is a huge volume of image data that must be stored for long periods. The primary challenge for picture archiving and communication systems (PACS) is to store this image data and to display and reconstruct images for recall at various sites. Image compression and reconstruction are necessary to cope with these tasks. Significant efforts have recently been made towards applying compressive sensing techniques to data acquisition in the MRI process. The primary aim of the theory of Compressive Sensing (CS) in signal processing is to reduce the quantity of data that must be acquired to successfully reconstruct a signal. Acquiring fewer coefficients reduces acquisition time, i.e. the time spent in the MRI apparatus. This paper aims at using optimization algorithms to design an MR scanner integrated with CS, which reduces MRI scan time. Images of satisfactory quality can be obtained from a small set of acquired samples. Various compressive-sensing-based optimization algorithms for reconstructing MRI images are assessed, and a relative comparison is made to guide further research.
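As a concrete illustration of the reconstruction problem described above, the following is a minimal sketch of CS-based MRI recovery via iterative soft thresholding (ISTA). It is not the paper's algorithm; the random undersampling mask, the DCT sparsifying basis, and all names (e.g. `ista_mri`) are illustrative assumptions.

```python
# A minimal sketch of compressive-sensing MRI reconstruction via ISTA.
# Assumptions (not from the paper): k-space is undersampled with a random
# mask, the image is approximately sparse in an orthonormal DCT basis,
# and noise is negligible.
import numpy as np
from scipy.fft import dctn, idctn

def ista_mri(y, mask, lam=0.02, n_iter=100):
    """Reconstruct an image from undersampled k-space samples y."""
    x = np.zeros(mask.shape)
    for _ in range(n_iter):
        # Gradient step on the data-fidelity term ||M F x - y||^2.
        residual = mask * np.fft.fft2(x, norm="ortho") - y
        x = x - np.real(np.fft.ifft2(mask * residual, norm="ortho"))
        # Soft-threshold the DCT coefficients to enforce sparsity.
        c = dctn(x, norm="ortho")
        c = np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)
        x = idctn(c, norm="ortho")
    return x

# Toy usage: sample 30% of the k-space of a synthetic image.
rng = np.random.default_rng(0)
img = np.zeros((64, 64)); img[20:40, 25:45] = 1.0
mask = rng.random(img.shape) < 0.3
y = mask * np.fft.fft2(img, norm="ortho")
recon = ista_mri(y, mask)
print("relative error:", np.linalg.norm(recon - img) / np.linalg.norm(img))
```

With the orthonormal FFT, the measurement operator has unit spectral norm, so a fixed step size of 1 is safe for ISTA; practical CS-MRI solvers replace the DCT with wavelets or total variation and use accelerated schemes such as FISTA.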

2011
pp. 232-246
Author(s):  
Rudy J. Lapeer ◽  
Polydoros Chios ◽  
Alf D. Linney

The introduction of computerized systems in medicine started more than a decade ago. The first applications were mainly focused on archiving and the general database management of patient records, with the aim of building fully-integrated Hospital Information Systems (HIS) and fast transfer of data and images between HIS (e.g. PACS, Picture Archiving and Communication Systems). In parallel with this more general development, specialized computer systems were built to process and enhance image data from Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) scanners. The use of enhanced CT and MRI images led to the birth of Image Guided Surgery (IGS). Other terminology for similar concepts has since been used, e.g. Computer-Assisted Surgery (CAS), Computer Integrated Surgery and Therapy (CIST) (Lavallée et al., 1997) and Computer-Assisted Medical Interventions (CAMI). In this chapter, we shall look mainly at Computer-Assisted Surgery (CAS) systems and related systems aimed at the training of surgeons and the simulation and planning of surgical interventions. The emphasis will be on the Human-Computer Interaction (HCI) aspect rather than the technological issues of such systems. The latter will be briefly discussed in the next section to familiarize the reader with the terminology, the history and the current state of the art in CASPIT.


2020
Vol 6 (2)
pp. 115-132
Author(s):  
Kathrin Friedrich

Abstract Since the 1990s, Western clinical radiology has been confronted with a fundamental media-induced change: the so-called analogue-digital migration. Film-based diagnostics and archiving of radiological images have been transformed into digital interfaces and infrastructures. Networked software applications, namely picture archiving and communication systems (PACS), provide a new basis for processing and displaying image data. The design and implementation of PACS and their (user) interfaces prompted, among other things, the search for data standards for digital diagnostics. The data format DICOM (Digital Imaging and Communications in Medicine) was developed to provide the technological basis for encoding image data. Simultaneously, DICOM determines how patients' bodies are rendered machine-readable and how radiologists are able to gain software-based insights. A main function of DICOM metadata is encoding and continuously actualising patient identification for technological and human actors. A misidentification of image data and a specific patient could lead to fatal errors in the further treatment process. Accordingly, metadata meander between being invisible to the human user and being essential, and hence necessarily visible, information for diagnostics. Shifting between normativity and fluidity, DICOM metadata enable new practices of radiological diagnostics, which literally bear vital consequences for patients and, on another level, for the profession of radiology. The paper analyses the inherent politics and tensions of metadata from a media-theoretical point of view, using the case of the DICOM standard. Based on subject-specific discourses, data models and an in-depth examination of exemplary DICOM metadata, it shows how (meta)data politics redefine diagnostic infrastructures and routines and come to bear on epistemic and aesthetic practices at the turn of the analogue-digital migration.
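To make the role of patient-identification metadata concrete, here is a minimal sketch of reading the standard DICOM identification elements with the pydicom library; the file path is hypothetical, and the tags shown are the standard ones defined by DICOM, not examples from the paper.

```python
# A minimal sketch of how DICOM metadata carries patient identification,
# using the pydicom library. The file path is a hypothetical placeholder.
from pydicom import dcmread

ds = dcmread("study/slice_001.dcm")  # hypothetical path

# Identification travels in standard DICOM data elements, addressed
# either by keyword or by (group, element) tag.
print(ds.PatientName)            # tag (0010,0010)
print(ds.PatientID)              # tag (0010,0020)
print(ds[0x0008, 0x0018].value)  # SOPInstanceUID, unique per image
```

These elements are exactly the kind of metadata the paper describes as normally invisible to the viewer yet vital for matching images to patients across PACS infrastructures.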


Author(s):  
R.D. Leapman ◽  
S.B. Andrews

Elemental mapping of biological specimens by electron energy loss spectroscopy (EELS) can be carried out both in the scanning transmission electron microscope (STEM) and in the energy-filtering transmission electron microscope (EFTEM). Choosing between these two approaches is complicated by the variety of specimens that are encountered (e.g., cells or macromolecules; cryosections, plastic sections or thin films) and by the range of elemental concentrations that occur (from a few percent down to a few parts per million). Our aim here is to consider the strengths of each technique for determining elemental distributions in these different types of specimen.

On one hand, it is desirable to collect a parallel EELS spectrum at each point in the specimen using the 'spectrum-imaging' technique in the STEM. This minimizes the electron dose and retains as much quantitative information as possible about the inelastic scattering processes in the specimen. On the other hand, collection times in the STEM are often limited by the detector read-out and by the available probe current. For example, a 256 x 256 pixel image in the STEM takes at least 30 minutes to acquire with a read-out time of 25 ms per pixel. The EFTEM is able to collect parallel image data using slow-scan CCD array detectors from as many as 1024 x 1024 pixels with integration times of a few seconds. Furthermore, the EFTEM has an available beam current in the µA range compared with just a few nA in the STEM. Indeed, for some applications this can result in a factor of ~100 shorter acquisition time for the EFTEM relative to the STEM. However, the EFTEM provides much less spectral information, so the technique of choice ultimately depends on the requirements for processing the spectrum at each pixel (viz., isolated edges vs. overlapping edges, uniform thickness vs. non-uniform thickness, molar vs. millimolar concentrations).
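A quick back-of-the-envelope check of the acquisition-time figures quoted above; the 25 ms read-out and 256 x 256 raster are from the abstract, while the EFTEM integration time of "a few seconds" is taken as 3 s purely for illustration.

```python
# Sanity-check of the acquisition times quoted in the abstract.
stem_pixels = 256 * 256
readout_s = 0.025  # per-pixel detector read-out (25 ms)
stem_minutes = stem_pixels * readout_s / 60
print(f"STEM spectrum image: {stem_minutes:.0f} min")  # ~27 min; ~30 min with overheads

eftem_integration_s = 3.0  # assumed value for "a few seconds"
print(f"EFTEM (1024 x 1024 CCD frame): {eftem_integration_s:.0f} s")
```

The raw per-pixel read-out alone accounts for roughly 27 minutes, consistent with the abstract's "at least 30 minutes" once beam and stage overheads are included, and it makes plain where the ~100x EFTEM speed advantage comes from.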


2021
Vol 35 (1)
pp. 85-91
Author(s):  
Naga Raju Hari Manikyam ◽  
Munisamy Shyamala Devi

In the contemporary era, technological innovations like cloud computing and the Internet of Things (IoT) pave the way for diversified applications producing multimedia content. Large volumes of image data are produced, especially in medical and other domains. Cloud infrastructure is widely used to reap benefits such as scalability and availability. However, the security and privacy of imagery are in jeopardy when it is outsourced directly to the cloud. Many compression and encryption techniques have emerged to improve performance and security. Nevertheless, with the future emergence of quantum computing, there is a need for more secure means involving multiple transformations of the data. Compressive sensing (CS) is used in existing methods to improve security. However, most schemes suffer from an inability to perform compression and encryption simultaneously, besides ending up with a large key size. In this paper, we propose a framework known as the Cloud Image Security Framework (CISF) for securing outsourced images. The framework has an underlying algorithm known as the Hybrid Image Security Algorithm (HISA). It is based on compressive sensing, simultaneous sensing and encryption, and random pixel exchange, ensuring multiple transformations of the input image. The empirical study revealed that the CISF is more effective and secure, with acceptable compression performance, compared with state-of-the-art methods.
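To illustrate the general idea of simultaneous sensing and encryption with random pixel exchange, here is a minimal sketch in the spirit of the abstract, not the authors' HISA: a secret key seeds both a random pixel permutation and a Gaussian measurement matrix, so the CS measurement step doubles as encryption. All names and parameters are illustrative assumptions.

```python
# A minimal sketch of simultaneous compressive sensing and encryption.
# One secret key seeds both the "random pixel exchange" (a keyed
# permutation) and the sensing matrix, so compression and encryption
# happen in a single linear measurement. Illustrative only.
import numpy as np

def sense_encrypt(img, key, ratio=0.5):
    rng = np.random.default_rng(key)      # secret key seeds all randomness
    x = img.reshape(-1).astype(float)
    perm = rng.permutation(x.size)        # keyed random pixel exchange
    x = x[perm]
    m = int(ratio * x.size)               # number of CS measurements
    phi = rng.standard_normal((m, x.size)) / np.sqrt(m)
    return phi @ x                        # measurement doubles as ciphertext

def regenerate_key_material(key, n, ratio=0.5):
    """Receiver rebuilds perm and phi from the shared key."""
    rng = np.random.default_rng(key)
    perm = rng.permutation(n)
    m = int(ratio * n)
    phi = rng.standard_normal((m, n)) / np.sqrt(m)
    return perm, phi

img = np.arange(64, dtype=float).reshape(8, 8)
ciphertext = sense_encrypt(img, key=42)
perm, phi = regenerate_key_material(42, img.size)
# Decryption would run a sparse recovery solver (e.g. basis pursuit)
# with phi, then invert the permutation; omitted here for brevity.
```

The appeal of this construction is that the key material (a seed) is tiny while the effective transformation is large, addressing the large-key-size problem the abstract attributes to earlier schemes.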

