Quality Assurance of Image Registration Using Combinatorial Rigid Registration Optimization (CORRO)

2021 · Vol 5 (3) · pp. 01-09
Author(s): Afua A. Yorke, Gary C. McDonald, David Solis, Thomas Guerrero

Purpose: Expert-selected landmark points on clinical image pairs provide a basis for rigid registration validation. Using combinatorial rigid registration optimization (CORRO), we provide a statistically characterized reference data set for image registration of the pelvis by estimating the optimal registration. Materials and Methods: Landmarks were identified for each CT/CBCT image pair in 58 cases. From the landmark pairs, combination subsets of k landmark pairs were generated without repetition, forming k-sets for k = 4, 8, and 12. A rigid registration between the image pairs was computed for each k-combination set (2,000-8,000,000 combinations). The mean and standard deviation of these registrations were used as the final registration for each image pair. Joint entropy was used to validate the output results. Results: An average of 154 landmark pairs (range: 91-212) were selected for each CT/CBCT image pair. The mean standard deviation of the registration output decreased as the k-size increased for all cases. In general, the joint entropy evaluated was lower than the results from commercially available software. Across all 58 cases, 58.3% of the k = 4, 15% of the k = 8, and 18.3% of the k = 12 registrations were better using CORRO, compared with 8.3% for a commercial registration software. For one case the minimum joint entropy was determined and found to lie at the estimated registration mean, in agreement with the CORRO algorithm. Conclusion: The results demonstrate that CORRO works even in the extreme case of the pelvic anatomy, where the CBCT suffers from reduced quality due to increased noise levels. The estimated optimal registration using CORRO was better than that of commercially available software for all k-sets tested. Additionally, the k-set of 4 gave the best overall outcomes compared with k = 8 and 12, which is anticipated because the k = 8 and 12 sets are more likely to contain combinations that degrade the accuracy of the registration.
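The abstract does not give implementation details, so the following is only a minimal Python sketch of the CORRO-style idea: solve a least-squares rigid registration for many k-subsets of the landmark pairs and summarize the resulting transforms by their mean and standard deviation. The function names, the Kabsch solver, and the subsampling cap are assumptions for illustration, not the authors' implementation (here only the translation component is summarized, since averaging rotations is less straightforward).

```python
import itertools
import numpy as np

def kabsch_rigid(P, Q):
    """Least-squares rigid transform (rotation R, translation t) mapping
    points P onto Q (both k x 3 arrays) via the Kabsch algorithm."""
    Pc, Qc = P.mean(axis=0), Q.mean(axis=0)
    H = (P - Pc).T @ (Q - Qc)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = Qc - R @ Pc
    return R, t

def corro_like_estimate(ct_pts, cbct_pts, k=4, n_max=10000, seed=0):
    """Sketch of a CORRO-style estimate: register many k-subsets of the
    landmark pairs and summarize the resulting translations by their
    mean and standard deviation."""
    rng = np.random.default_rng(seed)
    combos = list(itertools.combinations(range(len(ct_pts)), k))
    if len(combos) > n_max:  # subsample when the combination count explodes
        combos = [combos[i] for i in rng.choice(len(combos), n_max, replace=False)]
    translations = []
    for c in combos:
        _, t = kabsch_rigid(cbct_pts[list(c)], ct_pts[list(c)])
        translations.append(t)
    translations = np.asarray(translations)
    return translations.mean(axis=0), translations.std(axis=0)
```

Capping the number of sampled combinations keeps the computation tractable when the full count approaches the millions quoted above.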

2021
Author(s): Gerald Eichstädt, John Rogers, Glenn Orton, Candice Hansen

We derive Jupiter's zonal vorticity profile from JunoCam images, with Juno's polar orbit allowing the observation of latitudes that are difficult to observe from Earth or from equatorial flybys. We identify cyclonic local vorticity maxima near 77.9°, 65.6°, 59.3°, 50.9°, 42.4°, and 34.3°S planetocentric, at a resolution of ~1°, based on analyzing selected JunoCam image pairs taken during the 16 Juno perijove flybys 15-30. We identify zonal anticyclonic local vorticity maxima near 80.7°, 73.8°, 62.1°, 56.4°, 46.9°, 38.0°, and 30.7°S. These results agree with the known zonal wind profile below 64°S and reveal novel structure further south, including a prominent cyclonic band centered near 66°S. The anticyclonic vorticity maximum near 73.8°S represents a broad, skewed, fluctuating anticyclonic band between ~69.0° and ~76.5°S, and is hence poorly defined; this band may even split temporarily into two or three bands. The cyclonic vorticity maximum near 77.9°S appears to be fairly stable during these flybys, probably representing irregular cyclonic structures in the region. The area between ~82° and 90°S is relatively small and close to the terminator, resulting in poor statistics, but generally shows a strongly cyclonic mean vorticity, representing the well-known circumpolar cyclone cluster.

The latitude range between ~30°S and ~85°S was particularly well observed, allowing observation periods lasting several hours. For each perijove considered we selected a pair of images separated by about 30-60 minutes. We derived high-pass-filtered and contrast-normalized south polar equidistant azimuthal maps of Jupiter's cloud tops. These were used to derive maps of local rotation at a resolution of ~1° latitude by stereo-corresponding Monte-Carlo-distributed, Gauss-weighted round tiles for each image pair considered. Only the rotation portion of the stereo correspondence between tiles was used to sample the vorticity maps. For each image pair, we rendered ~40 vorticity maps with different Monte-Carlo runs. The standard deviation of the resulting statistics provided a criterion to define a valid area of the mean vorticity map. Averaging vorticities along circles centered on the south pole returned a zonal vorticity profile for each of the perijoves considered. Averaging the resulting zonal vorticity profiles provided the basis for a discussion of the mean profile.

JunoCam also images the northern hemisphere, at higher resolution but with coverage restricted to a briefer time span and smaller area due to the nature of Juno's elliptical orbit, which will restrict our ability to obtain zonal vorticity profiles there.
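As a concrete illustration of the final averaging step only, the Python sketch below turns a south-polar vorticity map into a zonal profile by averaging over rings of constant planetocentric latitude. The input arrays (vorticity map, validity mask, per-pixel latitude) and the 1° bin width are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def zonal_vorticity_profile(vorticity, valid, lat_of_pixel, bin_width=1.0):
    """Average a 2-D south-polar vorticity map over rings of constant
    planetocentric latitude (degrees), using only pixels flagged valid."""
    lat_bins = np.arange(-90.0, -29.0, bin_width)
    profile = np.full(len(lat_bins) - 1, np.nan)
    for i, (lo, hi) in enumerate(zip(lat_bins[:-1], lat_bins[1:])):
        sel = valid & (lat_of_pixel >= lo) & (lat_of_pixel < hi)
        if sel.any():
            profile[i] = vorticity[sel].mean()
    centers = 0.5 * (lat_bins[:-1] + lat_bins[1:])
    return centers, profile
```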


2015 · Vol 8 (4) · pp. 1799-1818
Author(s): R. A. Scheepmaker, C. Frankenberg, N. M. Deutscher, M. Schneider, S. Barthlott, ...

Abstract. Measurements of the atmospheric HDO/H2O ratio help us to better understand the hydrological cycle and improve models to correctly simulate tropospheric humidity and therefore climate change. We present an updated version of the column-averaged HDO/H2O ratio data set from the SCanning Imaging Absorption spectroMeter for Atmospheric CHartographY (SCIAMACHY). The data set is extended with 2 additional years, now covering 2003–2007, and is validated against co-located ground-based total column δD measurements from Fourier transform spectrometers (FTS) of the Total Carbon Column Observing Network (TCCON) and the Network for the Detection of Atmospheric Composition Change (NDACC, produced within the framework of the MUSICA project). Even though the time overlap among the available data is not yet ideal, we determined a mean negative bias in SCIAMACHY δD of −35 ± 30‰ compared to TCCON and −69 ± 15‰ compared to MUSICA (the uncertainty indicating the station-to-station standard deviation). The bias shows a latitudinal dependency, being largest (∼ −60 to −80‰) at the highest latitudes and smallest (∼ −20 to −30‰) at the lowest latitudes. We have tested the impact of an offset correction to the SCIAMACHY HDO and H2O columns. This correction leads to a humidity- and latitude-dependent shift in δD and an improvement of the bias by 27‰, although it does not lead to an improved correlation with the FTS measurements nor to a strong reduction of the latitudinal dependency of the bias. The correction might be an improvement for dry, high-altitude areas, such as the Tibetan Plateau and the Andes region. For these areas, however, validation is currently impossible due to a lack of ground stations. The mean standard deviation of single-sounding SCIAMACHY–FTS differences is ∼ 115‰, which is reduced by a factor ∼ 2 when we consider monthly means. When we relax the strict matching of individual measurements and focus on the mean seasonalities using all available FTS data, we find that the correlation coefficients between SCIAMACHY and the FTS networks improve from 0.2 to 0.7–0.8. Certain ground stations show a clear asymmetry in δD during the transition from the dry to the wet season and back, which is also detected by SCIAMACHY. This asymmetry points to a transition in the source region temperature or location of the water vapour and shows the added information that HDO/H2O measurements provide when used in combination with variations in humidity.
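For readers unfamiliar with the δD notation, the short Python sketch below shows how a column HDO/H2O ratio is expressed in per mil relative to VSMOW and how a simple station bias (satellite minus FTS) could be computed. The VSMOW reference value and the function names are illustrative assumptions, not the paper's processing code.

```python
import numpy as np

R_VSMOW_HDO = 3.1152e-4  # approx. HDO/H2O of VSMOW (2 x the D/H ratio 1.5576e-4)

def delta_d_permil(hdo_column, h2o_column):
    """delta-D in per mil from HDO and H2O total columns."""
    return (hdo_column / h2o_column / R_VSMOW_HDO - 1.0) * 1000.0

def station_bias(sat_dd, fts_dd):
    """Mean satellite-minus-FTS delta-D difference and its scatter for
    one station, given co-located values (hypothetical arrays)."""
    diff = np.asarray(sat_dd) - np.asarray(fts_dd)
    return diff.mean(), diff.std()
```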


2021 · pp. 57-89
Author(s): Charles Auerbach

In this chapter readers will learn about methodological issues to consider in analyzing the success of an intervention and how to conduct visual analysis. The chapter begins with a discussion of descriptive statistics that can aid the visual analysis of findings by summarizing patterns of data across phases. An example data set is used to illustrate the use of specific graphs, including box plots, standard deviation band graphs, and line charts showing the mean, median, and trimmed mean, any of which can be used to compare two phases. SSD for R provides three standard methods for computing effect size, which are discussed in detail. Additionally, four methods of evaluating effect size using non-overlap methods are examined. The use of the goal line is discussed. The chapter concludes with a discussion of autocorrelation in the intervention phase and how to deal with this issue.
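The chapter itself works in SSD for R; as a language-neutral illustration of one of the graphs it discusses, the Python/matplotlib sketch below draws a standard-deviation band graph in which the baseline mean and ±1 SD lines are extended across a baseline (A) and intervention (B) phase. The data and function name are made up for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt

def sd_band_plot(baseline, intervention):
    """Plot both phases with the baseline mean and +/- 1 SD bands so the
    intervention points can be judged against baseline variability."""
    y = np.concatenate([baseline, intervention])
    x = np.arange(1, len(y) + 1)
    m, s = np.mean(baseline), np.std(baseline, ddof=1)
    plt.plot(x, y, marker="o")
    plt.axvline(len(baseline) + 0.5, linestyle="--", label="phase change")
    plt.axhline(m, color="gray", label="baseline mean")
    plt.axhline(m + s, color="gray", linestyle=":", label="baseline ± 1 SD")
    plt.axhline(m - s, color="gray", linestyle=":")
    plt.xlabel("Observation")
    plt.ylabel("Score")
    plt.legend()
    plt.show()

# Example: a baseline (A) phase followed by an intervention (B) phase
sd_band_plot(baseline=[7, 8, 6, 9, 8, 7], intervention=[5, 4, 5, 3, 4, 3])
```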


Author(s): Mark J. DeBonis

One classic example of a binary classifier is one that employs the mean and standard deviation of the data set as the mechanism for classification. Indeed, principal component analysis has played a major role in this effort. In this paper, we propose that one should also include skew in order to make this method of classification a little more precise. What is needed is a simple probability distribution function that can be easily fit to a data set and used to create a classifier with improved error rates that is comparable to other classifiers.
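As a sketch of the general idea (not the paper's specific distribution or fitting procedure), the Python snippet below fits a skew-normal density, which captures location, spread, and skew, to each class of a one-dimensional feature and classifies new points by the larger likelihood.

```python
import numpy as np
from scipy.stats import skewnorm

class SkewNormalClassifier:
    """Two-class classifier for a 1-D feature: fit a skew-normal pdf to
    each class and assign new points to the class with the higher
    likelihood (illustrative sketch, assuming roughly balanced classes)."""

    def fit(self, x, y):
        # skewnorm.fit returns (shape a, loc, scale) for each class
        self.params_ = {c: skewnorm.fit(x[y == c]) for c in np.unique(y)}
        return self

    def predict(self, x):
        classes = list(self.params_)
        scores = np.vstack([skewnorm.pdf(x, *self.params_[c]) for c in classes])
        return np.array(classes)[np.argmax(scores, axis=0)]
```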


2006 · Vol 6 (3) · pp. 831-846
Author(s): X. Calbet, P. Schlüssel

Abstract. The Empirical Orthogonal Function (EOF) retrieval technique consists of calculating the eigenvectors of the spectra and then performing a linear regression between these and the atmospheric states; this first step is known as training. At a later stage, known as performing the retrievals, atmospheric profiles are derived from measured atmospheric radiances. When EOF retrievals are trained with a statistically different data set from the one used for the retrievals, two basic problems arise: significant biases appear in the retrievals, and differences between the covariances of the training data set and the measured data set degrade them. The retrieved profiles show a bias with respect to the real profiles that comes from the combined effect of the mean difference between the training and the real spectra projected into the atmospheric state space, and the mean difference between the training and the atmospheric profiles. The standard deviations of the difference between the retrieved profiles and the real ones show different behavior depending on whether the covariance of the training spectra is larger than, equal to, or smaller than the covariance of the measured spectra with which the retrievals are performed. The procedure to correct for these effects is shown both analytically and with a measured example. It consists of first calculating the average and standard deviation of the difference between the real observed spectra and the spectra calculated from the real atmospheric state with the radiative transfer model used to create the training spectra. In a later step, the measured spectra must be bias-corrected with this average before performing the retrievals, and the linear regression of the training must be performed with noise added to the spectra, corresponding to the aforementioned calculated standard deviation. This procedure is optimal in the sense that to further improve the retrievals one must resort to using a different training data set or a different algorithm.
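A compact numerical sketch of the training/retrieval scheme and the two corrections described above is given below (NumPy, with illustrative names and an arbitrary number of retained EOFs); it is not the authors' code.

```python
import numpy as np

def train_eof_retrieval(spectra, states, n_eof=20, noise_sd=None, rng=None):
    """Train an EOF regression: project (optionally noise-augmented)
    training spectra onto their leading eigenvectors and regress the
    atmospheric states on the resulting scores."""
    rng = np.random.default_rng() if rng is None else rng
    if noise_sd is not None:
        # add per-channel noise matched to the observed-minus-calculated scatter
        spectra = spectra + rng.normal(0.0, noise_sd, spectra.shape)
    mean_spec = spectra.mean(axis=0)
    anomalies = spectra - mean_spec
    _, _, vt = np.linalg.svd(anomalies, full_matrices=False)
    eofs = vt[:n_eof]                       # leading eigenvectors of the spectra
    scores = anomalies @ eofs.T
    coeffs, *_ = np.linalg.lstsq(scores, states - states.mean(axis=0), rcond=None)
    return mean_spec, eofs, coeffs, states.mean(axis=0)

def retrieve(measured, model, bias=0.0):
    """Apply the trained regression; `bias` is the mean observed-minus-
    calculated spectrum used to correct the measured spectra beforehand."""
    mean_spec, eofs, coeffs, mean_state = model
    scores = ((measured - bias) - mean_spec) @ eofs.T
    return mean_state + scores @ coeffs
```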


2011 · Vol 197-198 · pp. 1626-1630
Author(s): Chih Chung Ni

Three sets of fatigue crack growth data tested under different constant-amplitude loads for CT specimens made of 2024-T351 aluminum alloy are presented, and the analysis in this study emphasizes the correlation between the statistics of these scattered fatigue data and the applied loads. Investigating the scatter of the initiation cycle and specimen life, it was found that both the mean and standard deviation of the initiation cycle, as well as the mean and standard deviation of the specimen life, decrease as the applied stress amplitude increases. Moreover, a negative linear correlation was found between the median values of the initiation cycle and the applied stress amplitudes on a linear scale, and between the median values of specimen life and the applied stress amplitudes on a logarithmic scale, where the initiation cycle and specimen life are best described by normal distributions for all three data sets. Finally, the mean of the intercepts and the mean of the exponents of the Paris-Erdogan law for each data set were studied; it was found that the mean intercept decreases greatly as the applied stress amplitude increases, while the mean exponent decreases slightly.
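For reference, the Paris-Erdogan law relates the crack growth rate to the stress-intensity range, da/dN = C (ΔK)^m, so the intercept C and exponent m discussed above can be recovered from test data by a straight-line fit in log-log space, as in this short sketch (illustrative, not the study's analysis code).

```python
import numpy as np

def fit_paris_erdogan(delta_k, da_dn):
    """Fit da/dN = C * (dK)^m by linear regression in log-log space;
    returns the intercept C and exponent m."""
    m, log_c = np.polyfit(np.log10(delta_k), np.log10(da_dn), 1)
    return 10.0 ** log_c, m
```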


Author(s): Eddy Alecia Love Lavalais, Tayler Jackson, Purity Kagure, Myra Michelle DeBose, Annette McClinton

Background: Identifying nurse burnout is significant, as burnout directly impacts work ethic, patient satisfaction, safety, and best practice. Nurses are especially susceptible to fatigue and burnout because they work in highly stressful environments and care for people in their most vulnerable state. It is imperative to pinpoint and alleviate potential factors that can lead to nurse burnout. Research Hypothesis: Educating nurses to recognize factors influencing nurse burnout, and offering effective interventions to combat stress, will lead to better coping and adaptation skills and hence a lower level of nurse fatigue and burnout. Helping nurses become cognizant of the symptoms of stress and burnout leads to the development of positive adaptive mechanisms, whereas nurses without this recognition tend to develop maladaptive psychological coping. Research Methodology: This quality improvement project gathered data on factors influencing burnout via the Maslach Burnout Inventory (MBI), the most commonly used instrument for measuring burnout, which captures three subscales: emotional exhaustion (EE), depersonalization (DP), and personal accomplishment (PA). Results: The MBI survey was administered via SurveyMonkey to a sample of 31 employed graduate nursing students. The gathered data (n = 31), summarized with descriptive statistics and standard deviations, represented the extent of deviation for the nursing population as a whole. The study found a low standard deviation (SD) of 0.3 for emotional exhaustion, indicating that the data points lie close to the mean (expected value) of the emotional exhaustion data set. The depersonalization data were more widely spread, yet still yielded a low SD of 0.42 from the mean. Lastly, higher scores on the MBI suggest increased levels of personal accomplishment; thus, the data set revealed lower levels of depersonalization relative to the sample size. Moreover, the Pearson correlation coefficient (Pearson r) identified a positive correlation between the independent variable of stress levels and factors influencing nurse burnout when combined with teaching ways to combat stress in the workplace; ninety-eight percent (98%) of participants reported this teaching to be effective. Significance: This study maintains that limited emotional exhaustion, a strong sense of identity, and the achievement of personal accomplishments minimize burnout.


2014 · Vol 26 (05) · pp. 1450051
Author(s): Shuo Dong, Yuan Liu, Lixin Cai, Mei Bai, Hanmin Yan

Surgical treatment has proved to be an effective way to control seizures in some kinds of intractable epilepsy. The electrocorticogram (ECoG) recorded from subdural electrodes has become an important technique for defining epileptogenic zones before surgery in clinical practice. The exact location of the subdural electrodes has to be determined to establish the connection between electrodes and epileptogenic zones. Artifacts caused by the electrodes can severely degrade the quality of CT imaging and, consequently, image registration. In this paper, we discuss the performance of the mean squares and Mattes mutual information metrics in multimodal image registration for subdural electrode localization. Since the skull can be regarded as a rigid body, rigid registration is sufficient for this purpose. The vital parameter of the rigid registration is the rotation; the translation result depends on the rotation result. Both metrics performed well in determining the rotation center. Rotation angles of different image pairs from the same volume pair fluctuated considerably. Based on the image acquisition process, we assume that images within the same volume pair should share the same transformation parameters for registration. Results show that, despite this fluctuation, the mean rotation angles of images within one data set are close to the manual results, which are considered the actual registration result.
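The paper does not include code; the SimpleITK-based sketch below shows one way a rigid (Euler 3D) registration comparing the two metrics discussed above might be set up. The optimizer choice, parameter values, and initializer are illustrative assumptions, and the input images are assumed to be floating-point SimpleITK volumes.

```python
import SimpleITK as sitk

def rigid_register(fixed, moving, metric="mattes"):
    """Rigid CT/CT or CT/MR registration using either a mean squares or a
    Mattes mutual information metric (parameter values are illustrative)."""
    reg = sitk.ImageRegistrationMethod()
    if metric == "mattes":
        reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    else:
        reg.SetMetricAsMeanSquares()
    reg.SetOptimizerAsRegularStepGradientDescent(
        learningRate=1.0, minStep=1e-4, numberOfIterations=200)
    reg.SetInterpolator(sitk.sitkLinear)
    initial = sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)
    reg.SetInitialTransform(initial, inPlace=False)
    return reg.Execute(fixed, moving)   # optimized rigid transform
```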


1996 · Vol 89 (8) · pp. 688-692
Author(s): Charles Vonder Embse, Arne Engebretsen

Summary statistics used to describe a data set are some of the most commonly taught statistical concepts in the secondary curriculum. Mean, median, mode, range, and standard deviation are topics that can be found in nearly every program. Technology empowers us to access these concepts and to easily create visual displays that interpret and describe the data in ways that enhance students' understanding. Many graphing calculators allow students to display nonparametric statistical information using a box-and-whiskers plot or a modified box plot showing a visual representation of the median, the upper and lower quartiles, and the range of the data. But how can students visually display the mean of the data or show what it means to be within one standard deviation of the mean? One way to create this type of visual display is with a bar graph and constant functions. Unfortunately, graphing calculators, and some computer programs, display only histograms and not bar graphs. The tips in this issue focus on using graphing calculators to draw bar graphs that can help students visualize and interpret the mean and standard deviation of a data set.
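The same classroom display can be reproduced in any plotting environment; this small Python/matplotlib sketch (with made-up data) draws a bar graph of the data together with horizontal lines, the analog of the constant functions, at the mean and at one standard deviation above and below it.

```python
import numpy as np
import matplotlib.pyplot as plt

# Made-up data set for illustration
data = np.array([12, 15, 9, 14, 18, 11, 16, 13, 10, 17])
mean, sd = data.mean(), data.std(ddof=1)

plt.bar(np.arange(1, len(data) + 1), data)
plt.axhline(mean, color="black", label="mean")
plt.axhline(mean + sd, color="gray", linestyle="--", label="mean ± 1 SD")
plt.axhline(mean - sd, color="gray", linestyle="--")
plt.xlabel("Data value number")
plt.ylabel("Value")
plt.legend()
plt.show()
```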


Author(s): Joseph G. Eisenhauer

A major challenge confronting meta-analysts seeking to synthesize existing empirical research on a given topic is the frequent failure of primary studies to fully report their sample statistics.  Because such research cannot be included in a meta-analysis unless the unreported statistics can somehow be recovered, a number of methods have been devised to estimate the sample mean and standard deviation from other quantities.  This note compares several recently proposed sets of estimators that rely on extrema and/or quartiles to estimate unreported statistics for any given sample.  The simplest method relies on an underlying model of normality, while the more complex methods are explicitly designed to accommodate non-normality.  Our empirical comparison uses a previously developed data set containing 58 samples, ranging in size from 48 to 2,528 observations, from a standard depression screening instrument, the nine-item Patient Health Questionnaire (PHQ-9).  When only the median and extrema are known, we find that the estimation method based on normality yields the most accurate estimates of both the mean and standard deviation, despite the existence of asymmetry throughout the data set; and when other information is given, the normality-based estimators have accuracy comparable to that of the other estimators reviewed here.  Additionally, if the sample size is unknown, the method based on normality is the only feasible approach.  The simplicity of the normality-based approach provides an added convenience for practitioners. 
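As a concrete example of the simplest scenario, the Python sketch below implements a widely used normality-based estimator (in the spirit of Wan et al., 2014) for the case where only the minimum, median, maximum, and sample size are reported; it is an illustration, not necessarily the exact estimator evaluated in this note.

```python
from scipy.stats import norm

def estimate_mean_sd_from_median_extrema(minimum, median, maximum, n):
    """Normality-based recovery of a sample mean and SD from the
    reported median, minimum, maximum, and sample size n."""
    mean = (minimum + 2.0 * median + maximum) / 4.0
    sd = (maximum - minimum) / (2.0 * norm.ppf((n - 0.375) / (n + 0.25)))
    return mean, sd

# Example: a hypothetical sample of n = 100 with min 2, median 9, max 24
print(estimate_mean_sd_from_median_extrema(2, 9, 24, 100))
```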

