Reliability of contour-based volume calculation for radiosurgery

2012 ◽  
Vol 117 (Special_Suppl) ◽  
pp. 203-210 ◽  
Author(s):  
Lijun Ma ◽  
Arjun Sahgal ◽  
Ke Nie ◽  
Andrew Hwang ◽  
Aliaksandr Karotki ◽  
...  

Object. Determining accurate target volume is critical for both prescribing and evaluating stereotactic radiosurgery (SRS) treatments. The aim of this study was to determine the reliability of contour-based volume calculations made by current major SRS platforms.

Methods. Spheres ranging in diameter from 6.4 to 38.2 mm were scanned and then delineated on imaging studies. Contour data sets were subsequently exported to 6 SRS treatment-planning platforms for volume calculations and comparisons. This procedure was repeated for the case of a patient with 12 metastatic lesions distributed throughout the brain. Both the phantom and patient data sets were exported to a stand-alone workstation for an independent volume-calculation analysis using a series of 10 algorithms, including approaches such as slice stacking, surface meshing, and point-cloud filling.

Results. Volumes rendered from the contour data exhibited large variations across the SRS platforms investigated, for both the phantom (−3.6% to 22%) and the patient case (1.0%–10.2%). The majority of the clinical SRS systems and algorithms overestimated the volumes of the spheres compared with their known physical volumes. An independent algorithm analysis found a similar trend in variability; large variations were typically associated with small objects whose volumes were < 0.4 cm³ and with objects located near the end-slice of the scan limits.

Conclusions. Significant variations in volume calculation were observed across the SRS systems investigated. This observation highlights the need for strict quality assurance and benchmarking efforts when commissioning SRS systems for clinical use and, moreover, when conducting multi-institutional cross-platform SRS clinical studies.
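
Slice stacking, the first of the independent algorithms named above, admits a compact illustration. Below is a minimal sketch, not any vendor's implementation: each axial contour is treated as a planar polygon whose area is given by the shoelace formula, and the volume is the sum of areas times the slice spacing. Variable names are illustrative.

```python
import numpy as np

def polygon_area(points: np.ndarray) -> float:
    """Shoelace formula for a closed 2D polygon given as an (N, 2) array."""
    x, y = points[:, 0], points[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

def stacked_volume(contours: list[np.ndarray], slice_thickness: float) -> float:
    """Approximate volume as the sum of contour areas times slice spacing.

    End slices are the weak point: a sphere's cap is poorly represented by a
    single polygon, which is consistent with the large errors the study
    reports for small objects and for objects near the scan limits.
    """
    return sum(polygon_area(c) for c in contours) * slice_thickness
```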

2003 ◽  
Vol 42 (05) ◽  
pp. 215-219
Author(s):  
G. Platsch ◽  
A. Schwarz ◽  
K. Schmiedehausen ◽  
B. Tomandl ◽  
W. Huk ◽  
...  

Summary. Aim: Although the fusion of images from different modalities may improve diagnostic accuracy, it is rarely used in clinical routine work due to logistic problems. We therefore evaluated the performance of, and time needed for, fusing MRI and SPECT images using semiautomated dedicated software.

Patients, material and methods: In 32 patients, regional cerebral blood flow was measured using 99mTc ethyl cysteinate dimer (ECD) and the three-headed SPECT camera MultiSPECT 3. MRI scans of the brain were performed using either a 0.2 T Open or a 1.5 T Sonata scanner. Twelve of the MRI data sets were acquired with a 3D T1-weighted MPRAGE sequence, 20 with a 2D acquisition technique and various echo sequences. Image fusion was performed on a Syngo workstation by an experienced user of the software, using an entropy-minimizing algorithm, and the fusion results were classified. We measured the time needed for the automated fusion procedure and, where necessary, the additional time for manual realignment after an automated but insufficient fusion.

Results: The mean time for the automated fusion procedure was 123 s; it was significantly shorter for the 2D than for the 3D MRI data sets. For four of the 2D data sets and two of the 3D data sets an optimal fit was reached with the automated approach; the remaining 26 data sets required manual correction. The sum of the time required for automated fusion and manual correction averaged 320 s (range 50-886 s).

Conclusion: The fusion of 3D MRI data sets took significantly longer than that of the 2D MRI data. The automated fusion tool delivered an optimal fit in 20% of cases; in 80%, manual correction was necessary. Nevertheless, each of the 32 SPECT data sets could be merged with the corresponding MRI data in less than 15 min, which seems acceptable for clinical routine use.
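
The Syngo tool is described only as entropy-minimizing; a closely related and widely used criterion is maximization of mutual information, which differs from joint-entropy minimization only by the marginal entropy terms. The sketch below shows a rigid MRI–SPECT registration of that kind using SimpleITK; the file names, optimizer settings, and histogram bin count are assumptions, not the Syngo configuration.

```python
import SimpleITK as sitk

# Hypothetical file names; any 3D MRI/SPECT volumes readable by SimpleITK work.
fixed = sitk.ReadImage("mri_t1.nii.gz", sitk.sitkFloat32)      # anatomical reference
moving = sitk.ReadImage("spect_ecd.nii.gz", sitk.sitkFloat32)  # volume to be aligned

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsRegularStepGradientDescent(
    learningRate=1.0, minStep=1e-4, numberOfIterations=200)
reg.SetInterpolator(sitk.sitkLinear)

# Rigid (6 degrees of freedom) initialization at the geometric centres,
# mirroring the automated step; manual realignment would adjust this result.
initial = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)
reg.SetInitialTransform(initial, inPlace=False)

transform = reg.Execute(fixed, moving)
fused = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
```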


2017 ◽  
pp. 8-17
Author(s):  
A. A. Ermakova ◽  
O. Yu. Borodin ◽  
M. Yu. Sannikov ◽  
S. D. Koval ◽  
V. Yu. Usov

Purpose: To investigate the diagnostic potential of contrast-enhanced magnetic resonance imaging with magnetization transfer in the diagnosis of focal metastatic lesions in the brain.

Materials and methods: Contrast-enhanced brain MRI studies of 16 patients (mean age 49 ± 18.5 years) referred with focal brain lesions were analysed. All MRI studies were carried out on a Toshiba Titan Octave with a magnetic field of 1.5 T. The contrast agent Magnevist was administered at a dose of 0.2 ml/kg. After contrast administration, two T1-weighted studies were performed: T1-SE without magnetization transfer (TR = 540 ms, TE = 12 ms, DFOV = 24 cm, MX = 320 × 224) and T1-SE-MTC with magnetization transfer (ΔF = −210 Hz, FA(MTC) = 600°, TR = 700 ms, TE = 10 ms, DFOV = 23.9 cm, MX = 320 × 224). For each detected metastatic lesion, a contrast-to-brain ratio (CBR) was calculated. Comparative analysis of CBR values was carried out using the non-parametric Wilcoxon test at a significance level of p < 0.05. ROC analysis was used to evaluate the sensitivity and specificity of the two techniques (T1-SE and T1-SE-MTC) in the detection of metastatic foci. The sample was divided into three groups: foci ≤ 5 mm in size, foci of 6-10 mm, and foci > 10 mm.

Results: The Wilcoxon test showed that CBR values on T1-weighted images with magnetization transfer were significantly higher (p < 0.001) than on T1-weighted images without magnetization transfer. According to the ROC analysis, sensitivity in detecting brain metastases (n = 90) was 91.7% for T1-SE-MTC and 81.6% for T1-SE; specificity was 100% and 97.6%, respectively. The accuracy of T1-SE-MTC was 10% higher than that of the technique without magnetization transfer. Significant differences (p < 0.01) were found between the sizes of the foci detected on post-contrast T1-weighted images with and without magnetization transfer, in particular for foci ≤ 5 mm in size.

Conclusions: 1. Comparative analysis of CBR showed a significant (p < 0.001) increase in contrast between metastatic lesions and white matter on T1-SE-MTC compared with T1-SE. 2. The sensitivity, specificity and accuracy of the magnetization transfer program (T1-SE-MTC) in detecting foci of metastatic brain lesions are significantly higher (p < 0.01) than those of T1-SE. 3. The T1-SE-MTC program detects more foci than T1-SE, in particular foci ≤ 5 mm in size (96% vs 86%, p < 0.05).
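
The paper does not spell out its CBR expression; a common definition is (lesion signal − background brain signal) / background brain signal. The sketch below computes per-lesion CBR values under that assumption and compares the two sequences with the same non-parametric Wilcoxon test; the signal intensities are placeholders.

```python
import numpy as np
from scipy.stats import wilcoxon

def cbr(lesion_si: np.ndarray, brain_si: np.ndarray) -> np.ndarray:
    """Contrast-to-brain ratio per lesion (assumed definition, see above)."""
    return (lesion_si - brain_si) / brain_si

# Placeholder mean signal intensities for the same five lesions on each sequence.
cbr_t1_se = cbr(np.array([310., 280., 350., 265., 295.]),
                np.array([200., 195., 210., 190., 205.]))
cbr_t1_mtc = cbr(np.array([390., 360., 430., 340., 375.]),
                 np.array([180., 175., 190., 170., 185.]))

# Paired non-parametric comparison at a significance level of p < 0.05,
# as in the study.
stat, p = wilcoxon(cbr_t1_mtc, cbr_t1_se)
print(f"Wilcoxon statistic = {stat:.1f}, p = {p:.3f}")
```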


Biophysica ◽  
2021 ◽  
Vol 1 (1) ◽  
pp. 38-47
Author(s):  
Arturo Tozzi ◽  
James F. Peters ◽  
Norbert Jausovec ◽  
Arjuna P. H. Don ◽  
Sheela Ramanna ◽  
...  

The nervous activity of the brain takes place in higher-dimensional functional spaces. It has been proposed that the brain might be equipped with phase spaces characterized by four spatial dimensions plus time, instead of the classical three plus time. This suggests that global visualization methods for exploiting four-dimensional maps of three-dimensional experimental data sets might be used in neuroscience. We asked whether it is feasible to describe the four-dimensional trajectories (plus time) of two-dimensional (plus time) electroencephalographic (EEG) traces. We used quaternion orthographic projections to map EEG signal patches treated with Fourier analysis onto the surface of four-dimensional hyperspheres. Once the proper quaternion maps are achieved, we show that this multi-dimensional procedure brings undoubted benefits. Treating EEG traces with Fourier analysis allows the scale-free activity of the brain to be investigated in terms of trajectories on hyperspheres and quaternionic networks. Repetitive spatial and temporal patterns undetectable in three dimensions (plus time) are easily brought to light in four dimensions (plus time). Further, a quaternionic approach makes it feasible to identify spatially far-apart and temporally distant periodic trajectories with the same features, such as the same oscillatory frequency or amplitude. This leads to an incisive operational assessment of global or broken symmetries, domains of attraction inside three-dimensional projections, and matching descriptions between the apparently random paths hidden in the very structure of nervous fractal signals.
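
The quaternion orthographic projection itself is beyond a short example, but the basic move of placing Fourier-treated EEG data on a hypersphere can be illustrated. In the sketch below, which is only loosely inspired by the paper's procedure and is not the authors' method, each EEG window is reduced to four spectral band amplitudes, and the resulting 4-vector is normalized onto the unit 3-sphere S³, i.e. a unit quaternion; the band edges and window length are assumptions.

```python
import numpy as np

def eeg_to_s3_trajectory(signal: np.ndarray, fs: float, win: int = 256) -> np.ndarray:
    """Map successive EEG windows to unit quaternions (points on S^3)."""
    bands = [(1, 4), (4, 8), (8, 13), (13, 30)]   # delta, theta, alpha, beta (assumed)
    points = []
    for start in range(0, len(signal) - win + 1, win):
        seg = signal[start:start + win]
        spec = np.abs(np.fft.rfft(seg))
        freqs = np.fft.rfftfreq(win, d=1.0 / fs)
        # One amplitude per band gives a 4-vector for this window.
        v = np.array([spec[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands])
        points.append(v / (np.linalg.norm(v) + 1e-12))  # project onto the unit 3-sphere
    return np.array(points)                             # shape (n_windows, 4)
```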


Electronics ◽  
2022 ◽  
Vol 11 (2) ◽  
pp. 263
Author(s):  
Munan Yuan ◽  
Xiru Li ◽  
Longle Cheng ◽  
Xiaofeng Li ◽  
Haibo Tan

Alignment is a critical aspect of point cloud data (PCD) processing, and in this paper we propose a coarse-to-fine registration method based on bipartite graph matching. After data pre-processing, the registration process proceeds as follows. Firstly, a top-tail (TT) strategy is designed to normalize the two given PCD sets and estimate their scale factor, and it can be combined flexibly with the coarse alignment process. Secondly, we use the 3D scale-invariant feature transform (3D SIFT) to extract feature points and fast point feature histograms (FPFH) to describe them. Thirdly, we construct a similarity weight matrix of the source and target point data sets with a bipartite graph structure; a similarity weight threshold is then used to reject erroneous bipartite-graph point-pair matches, which determines the correspondences between the two data sets and completes the coarse alignment. Finally, we introduce the trimmed iterative closest point (TrICP) algorithm to perform fine registration. A series of extensive experiments validates that, compared with other ICP-based algorithms and several representative coarse-to-fine alignment methods, the registration accuracy and efficiency of our method are more stable and robust in various scenes, and the method is especially applicable when scale factors are present.
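
For readers who want a concrete starting point, the sketch below reproduces the coarse-to-fine pattern with off-the-shelf Open3D components: FPFH features with RANSAC matching for the coarse stage and ICP for refinement. It does not reproduce the paper's 3D SIFT keypoints, bipartite-graph matching, TT scale normalization, or TrICP; the file names and voxel size are assumptions.

```python
import open3d as o3d

def fpfh(pcd, voxel):
    """FPFH descriptor for every point, as used for the coarse stage."""
    return o3d.pipelines.registration.compute_fpfh_feature(
        pcd, o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=100))

voxel = 0.05  # assumed downsampling resolution, in the clouds' units
source = o3d.io.read_point_cloud("source.pcd").voxel_down_sample(voxel)
target = o3d.io.read_point_cloud("target.pcd").voxel_down_sample(voxel)
for pcd in (source, target):
    pcd.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel, max_nn=30))

# Coarse stage: RANSAC over FPFH correspondences (standing in for the paper's
# 3D SIFT keypoints and bipartite-graph matching).
coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
    source, target, fpfh(source, voxel), fpfh(target, voxel), True, 1.5 * voxel,
    o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3,
    [o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(1.5 * voxel)],
    o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))

# Fine stage: plain ICP seeded with the coarse transform (the paper instead
# uses trimmed ICP, which discards the worst-matching point pairs).
fine = o3d.pipelines.registration.registration_icp(
    source, target, voxel, coarse.transformation,
    o3d.pipelines.registration.TransformationEstimationPointToPoint())
print(fine.transformation)
```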


Author(s):  
A. W. Lyda ◽  
X. Zhang ◽  
C. L. Glennie ◽  
K. Hudnut ◽  
B. A. Brooks

Remote sensing via LiDAR (Light Detection And Ranging) has proven extremely useful in both Earth science and hazard-related studies. Surveys taken before and after an earthquake, for example, can provide decimeter-level, 3D near-field estimates of land deformation that offer better spatial coverage of the near-field rupture zone than other geodetic methods (e.g., InSAR, GNSS, or alignment arrays). In this study, we compare and contrast estimates of deformation obtained from different pre- and post-event airborne laser scanning (ALS) data sets of the 2014 South Napa earthquake using two change detection algorithms, Iterative Closest Point (ICP) and Particle Image Velocimetry (PIV). The ICP algorithm is a closest-point-based registration algorithm that can iteratively acquire three-dimensional deformations from airborne LiDAR data sets. By employing a newly proposed partition scheme, the "moving window," to handle the large spatial scale of the point cloud over the earthquake rupture area, the ICP process applies a rigid registration of the data sets within each overlapped window to enhance the change detection of the local, spatially varying surface deformation near the fault. The other algorithm, PIV, is a well-established, two-dimensional image co-registration and correlation technique developed in fluid mechanics research and later applied to geotechnical studies. Adapted here for an earthquake with little vertical movement, the 3D point cloud is interpolated into a 2D DTM image, and horizontal deformation is determined by assessing the cross-correlation of interrogation areas within the images to find the most likely deformation between two areas. Both the PIV process and the ICP algorithm further benefit from a novel use of urban geodetic markers presented here. Analogous to the persistent scatterer technique employed with differential radar observations, this new LiDAR application exploits a classified point cloud data set to assist the change detection algorithms. Ground deformation results and statistics from these techniques are presented and discussed, with supplementary analyses of the differences between the techniques and of the effects of temporal spacing between LiDAR data sets. Results show that both change detection methods provide consistent near-field deformation comparable to field-observed offsets. The deformation can vary in quality, but estimated standard deviations are always below thirty-one centimeters. This variation in quality differentiates the methods and shows that factors such as geodetic markers and temporal spacing play major roles in the outcomes of ALS change detection surveys.
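
The PIV step lends itself to a compact illustration. Assuming the two point clouds have already been gridded into co-registered DTM rasters, the sketch below estimates the horizontal shift of one interrogation window as the offset of the cross-correlation peak; window extraction, sub-pixel peak fitting, and the raster resolution are omitted or assumed.

```python
import numpy as np
from scipy.signal import fftconvolve

def window_shift(pre: np.ndarray, post: np.ndarray) -> tuple[float, float]:
    """Estimate the (dy, dx) shift in pixels between two same-sized DTM windows."""
    a = pre - pre.mean()
    b = post - post.mean()
    # Cross-correlation computed as convolution with the flipped template;
    # the peak location relative to the window centre is the displacement.
    corr = fftconvolve(b, a[::-1, ::-1], mode="same")
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    center = np.array(corr.shape) // 2
    return float(peak[0] - center[0]), float(peak[1] - center[1])
```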


2018 ◽  
Author(s):  
D.H. Baker ◽  
G. Vilidaite ◽  
E. McClarnon ◽  
E. Valkova ◽  
A. Bruno ◽  
...  

Abstract: The brain combines sounds from the two ears, but what algorithm is used to achieve this summation of signals? Here we combine psychophysical amplitude modulation discrimination and steady-state electroencephalography (EEG) data to investigate the architecture of binaural combination for amplitude-modulated tones. Discrimination thresholds followed a 'dipper'-shaped function of pedestal modulation depth and were consistently lower for binaural than for monaural presentation of modulated tones. The EEG responses were greater for binaural than for monaural presentation of modulated tones, and when a masker was presented to one ear, it produced only weak suppression of the response to a signal presented to the other ear. Both data sets were well fit by a computational model originally derived for visual signal combination, but with suppression between the two channels (ears) being much weaker than in binocular vision. We suggest that the distinct ecological constraints on vision and hearing can explain this difference, if it is assumed that the brain avoids over-representing sensory signals originating from a single object. These findings position our understanding of binaural summation in a broader context of work on sensory signal combination in the brain, and delineate the similarities and differences between vision and hearing.
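
The model class referred to, derived from work on binocular contrast combination, can be sketched generically: each channel is divisively normalized by a gain pool containing a weighted copy of the other channel, and the outputs are summed. In the sketch below, the exponents and the interaural weight w are illustrative assumptions rather than the paper's fitted values; the qualitative point is that a small w yields strong binaural summation, whereas w near 1, as in binocular vision, yields strong cross-channel suppression.

```python
def binaural_response(left: float, right: float,
                      m: float = 2.0, s: float = 1.0, w: float = 0.2) -> float:
    """Sum of two divisively normalized channels (arbitrary units)."""
    gain_l = left ** m / (s + left + w * right)   # left ear, suppressed by right
    gain_r = right ** m / (s + right + w * left)  # right ear, suppressed by left
    return gain_l + gain_r

# Binaural presentation yields a larger response than monaural, consistent
# with the EEG data; raising w toward 1 shrinks that advantage.
print(binaural_response(10, 10), binaural_response(10, 0))
```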


Author(s):  
O. Majgaonkar ◽  
K. Panchal ◽  
D. Laefer ◽  
M. Stanley ◽  
Y. Zaki

Abstract. Classifying objects within aerial Light Detection and Ranging (LiDAR) data is an essential task to which machine learning (ML) is increasingly applied. ML has been shown to be more effective on LiDAR than on imagery for classification, but most efforts have focused on imagery because of the challenges presented by LiDAR data. LiDAR datasets are of higher dimensionality, discontinuous, heterogeneous, spatially incomplete, and often scarce. As such, there has been little examination of the fundamental properties of the training data required for acceptable performance of classification models tailored for LiDAR data. The quantity of training data is one such crucial property, because training on different sizes of data provides insight into a model's performance with differing data sets. This paper assesses the impact of training data size on the accuracy of PointNet, a widely used ML approach for point cloud classification. Models trained on subsets of ModelNet ranging from 40 to 9,843 objects were validated on a test set of 400 objects. Accuracy improved logarithmically, decelerating from 45 objects onwards and slowing markedly at a training size of 2,000 objects, corresponding to 20,000,000 points. This work contributes to the theoretical foundation for the development of LiDAR-focused models by establishing a learning curve, suggesting the minimum quantity of manually labelled data necessary for satisfactory classification performance, and providing a path for further analysis of the effects of modifying training data characteristics.
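
The learning-curve analysis can be made concrete with a few lines of NumPy: accuracies from models trained on nested subsets are fitted with acc ≈ a + b·log(n), whose decelerating slope b/n captures the reported plateau. The accuracy values below are placeholders, not the paper's measurements.

```python
import numpy as np

train_sizes = np.array([40, 160, 640, 2000, 9843])
accuracies = np.array([0.52, 0.68, 0.79, 0.86, 0.88])  # placeholder values

# Least-squares fit of acc = a + b * log(n); polyfit returns slope first.
b, a = np.polyfit(np.log(train_sizes), accuracies, 1)
print(f"acc ≈ {a:.3f} + {b:.3f}·log(n)")

# The derivative b/n makes the deceleration explicit: each added object buys
# less accuracy as n grows, matching the plateau reported near 2,000 objects.
```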


Author(s):  
Arnaud Palha ◽  
Arnadi Murtiyoso ◽  
Jean-Christophe Michelin ◽  
Emmanuel Alby ◽  
Pierre Grussenmeyer

2012 ◽  
Vol 11 ◽  
pp. 7-13
Author(s):  
Dilli Raj Bhandari

The automatic extraction of objects from airborne laser scanner data and aerial images has been a topic of research for decades. Airborne laser scanner data are a very efficient source for the detection of buildings. Half of the world's population lives in urban or suburban areas, so detailed, accurate and up-to-date building information is of great importance to every resident, government agency, and private company. The main objective of this paper is to extract features for the detection of buildings using airborne laser scanner data and aerial images. To achieve this objective, a method integrating both LiDAR and aerial images has been explored, so that the advantages of both data sets are utilized to derive buildings with high accuracy. Airborne laser scanner data contain accurate, high-resolution elevation information, which is a very important feature for detecting elevated objects such as buildings, while the aerial image provides spectral information, an appropriate feature for separating buildings from trees. Planar region-growing segmentation of the LiDAR point cloud was performed, and a normalized digital surface model (nDSM) was obtained by subtracting the DTM from the DSM. Integration of the nDSM, the aerial images and the segmented polygon features from the LiDAR point cloud was then carried out, and the optimal features for building detection were extracted from the integration result. The mean height of the nDSM, the normalized difference vegetation index (NDVI) and the standard deviation of the nDSM proved to be the effective features. An accuracy assessment of the classification results obtained using the calculated attributes was performed and yielded an accuracy of almost 92%, indicating that the features extracted by integrating the two data sets were, to a large extent, effective for the automatic detection of buildings.
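
The two LiDAR-plus-image features the paragraph singles out are simple to compute once the rasters are co-registered. The sketch below derives the nDSM and NDVI and combines them into a crude building mask; the height and NDVI thresholds are rule-of-thumb assumptions, not values from the paper, and the per-segment mean and standard deviation of the nDSM would be computed over the segmented polygons.

```python
import numpy as np

def ndsm(dsm: np.ndarray, dtm: np.ndarray) -> np.ndarray:
    """Height above ground: tall objects (buildings, trees) stand out."""
    return dsm - dtm

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Vegetation index: separates trees (high NDVI) from buildings (low)."""
    return (nir - red) / (nir + red + 1e-9)

def building_mask(dsm, dtm, nir, red,
                  min_height: float = 2.5, max_ndvi: float = 0.3) -> np.ndarray:
    """Elevated and non-vegetated pixels (threshold values are assumptions)."""
    return (ndsm(dsm, dtm) > min_height) & (ndvi(nir, red) < max_ndvi)
```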

