Effects of Phase Aberration and Phase Aberration Correction on the Minimum Variance Beamformer

2017 · Vol 40 (1) · pp. 15-34
Author(s): Gustavo Chau, Jeremy Dahl, Roberto Lavarello

The minimum variance (MV) beamformer has the potential to enhance the resolution and contrast of ultrasound images but is sensitive to steering vector errors. Robust MV beamformers have been proposed but have mainly been evaluated in the presence of gross sound speed mismatches, and the impact of phase aberration correction (PAC) methods in mitigating the effects of phase aberration on MV beamformed images has not been explored. In this study, an analysis of the effects of aberration on conventional MV and eigenspace MV (ESMV) beamformers is carried out. In addition, the impact of three PAC algorithms on the performance of MV beamforming is analyzed. The different beamformers were tested on simulated data and on experimental data corrupted with electronic and tissue-based aberration. It is shown that all gains in performance of the MV beamformer with respect to delay-and-sum (DAS) are lost at high aberration strengths. For instance, with an electronic aberration of 60 ns, the lateral resolution of DAS degrades by 17% while that of MV degrades by 73% with respect to the unaberrated images. Moreover, although ESMV shows robustness at low aberration levels, its degradation at higher aberration strengths is approximately the same as that of regular MV. It is also shown that basic PAC methods improve the aberrated MV beamformer. For example, in the case of electronic aberration, multi-lag PAC reduces the degradation in lateral resolution from 73% to 28% and the contrast loss from 85% to 25%. These enhancements allow the combination of MV and PAC to outperform both DAS with PAC and ESMV under moderate and strong aberration. We conclude that the effect of aberration on the MV beamformer is stronger than previously reported in the literature and that PAC is needed to improve its clinical potential.
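For readers unfamiliar with why MV beamforming is sensitive to steering vector errors, the following is a minimal NumPy sketch of the standard MV (Capon) weight computation, w = R^-1 a / (a^H R^-1 a), with diagonal loading for numerical robustness. This is a generic textbook formulation under assumed inputs, not the authors' implementation; details such as subarray averaging and the eigenspace projection used by ESMV are omitted.

```python
import numpy as np

def mv_weights(R, a, diag_load=1e-2):
    """Minimum variance (Capon) weights: w = R^-1 a / (a^H R^-1 a).

    R: (M, M) sample covariance of delayed channel data for one pixel.
    a: (M,) steering vector (all ones after receive focusing delays).
    diag_load: loading factor scaled by average channel power,
        a common regularization (an assumption of this sketch).
    """
    M = R.shape[0]
    Rl = R + diag_load * np.trace(R).real / M * np.eye(M)
    Ri_a = np.linalg.solve(Rl, a)          # R^-1 a without explicit inverse
    return Ri_a / (a.conj() @ Ri_a)

# Toy example: one imaging point, M = 8 channels, 16 snapshots
rng = np.random.default_rng(0)
M = 8
x = rng.standard_normal((M, 16)) + 1j * rng.standard_normal((M, 16))
R = x @ x.conj().T / 16                    # sample covariance estimate
a = np.ones(M, dtype=complex)              # steering vector after delays
w = mv_weights(R, a)
pixel = w.conj() @ x[:, 0]                 # beamformed output, one snapshot
```

Aberration perturbs the true steering vector away from the all-ones vector assumed above, so the adaptive weights partially null the signal of interest; this is the mechanism behind the degradation the abstract quantifies.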

2020
Author(s): Ayan Chatterjee, Ram Bajpai, Pankaj Khatiwada

BACKGROUND Lifestyle diseases are the primary cause of death worldwide. The gradual growth of negative behavior in humans due to physical inactivity, unhealthy habits, and improper nutrition expedites lifestyle diseases. In this study, we develop a mathematical model to analyze the impact of regular physical activity, healthy habits, and a proper diet on weight change, targeting obesity as a case study. We then design an algorithm to verify the proposed mathematical model with simulated data from artificial participants. OBJECTIVE This study intends to analyze the effect of healthy behavior (physical activity, healthy habits, and a proper dietary pattern) on weight change with a proposed mathematical model and to verify it with an algorithm in which personalized habits change dynamically according to defined rules. METHODS We developed a weight-change mathematical model as a function of activity, habit, and nutrition based on the first law of thermodynamics, basal metabolic rate (BMR), total daily energy expenditure (TDEE), and body mass index (BMI) to establish a relationship between health behavior and weight change. We then verified the model with simulated data. RESULTS The proposed mathematical model showed a strong relationship between health behavior and weight change. We verified the model with the proposed algorithm using simulated data subject to the necessary constraints. Calculating BMR and TDEE with the Harris-Benedict equation increased the model's accuracy under the defined settings. CONCLUSIONS This study helped us understand, in numeric terms, the impact of healthy behavior on obesity and overweight, and the importance of adopting a healthy lifestyle and abstaining from negative behavior change.
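To make the energy-balance core of such a model concrete, here is a minimal Python sketch that combines the revised (Roza-Shizgal) Harris-Benedict BMR coefficients with a TDEE activity factor and the widely used ~7700 kcal/kg weight-change conversion. The conversion constant, the activity factors, and the daily update rule are assumptions of this sketch; the paper's full model, including habit dynamics and BMI, is not reproduced here.

```python
def bmr_harris_benedict(weight_kg, height_cm, age_yr, sex):
    """Harris-Benedict BMR (kcal/day), revised Roza-Shizgal coefficients."""
    if sex == "male":
        return 88.362 + 13.397 * weight_kg + 4.799 * height_cm - 5.677 * age_yr
    return 447.593 + 9.247 * weight_kg + 3.098 * height_cm - 4.330 * age_yr

def simulate_weight(weight_kg, height_cm, age_yr, sex,
                    intake_kcal, activity_factor, days,
                    kcal_per_kg=7700.0):
    """Daily energy-balance simulation: weight change is driven by the gap
    between intake and TDEE (BMR x activity factor). kcal_per_kg is the
    common ~7700 kcal/kg conversion, an assumption of this sketch."""
    for _ in range(days):
        tdee = bmr_harris_benedict(weight_kg, height_cm, age_yr, sex) * activity_factor
        weight_kg += (intake_kcal - tdee) / kcal_per_kg
    return weight_kg

# Example: sedentary (factor 1.2) vs. moderately active (factor 1.55)
w_sed = simulate_weight(90, 175, 40, "male", 2400, 1.2, days=90)
w_act = simulate_weight(90, 175, 40, "male", 2400, 1.55, days=90)
print(f"after 90 days: sedentary {w_sed:.1f} kg, active {w_act:.1f} kg")
```

Because BMR is recomputed from the updated weight each day, the simulated deficit shrinks as weight falls, which is the kind of feedback a static calorie-arithmetic estimate misses.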


Author(s): Grant Duwe

As the use of risk assessments for correctional populations has grown, so has concern that these instruments exacerbate existing racial and ethnic disparities. While much of the attention arising from this concern has focused on how algorithms are designed, relatively little consideration has been given to how risk assessments are used. To this end, the present study tests whether applying the risk principle would help preserve predictive accuracy while mitigating disparities. Using a sample of 9,529 inmates released from Minnesota prisons who had been assessed multiple times during their confinement on a fully automated risk assessment, this study relies on both actual and simulated data to examine the impact of program assignment decisions on changes in risk level from intake to release. The findings showed that while the risk principle was applied in practice to some extent, greater adherence to it would increase reductions in risk levels and minimize the disparities observed at intake. The simulated data further revealed that the most favorable outcomes would be achieved not only by applying the risk principle but also by expanding program capacity for higher-risk inmates in order to adequately reduce their risk.


2021 · Vol 4 (1) · pp. 251524592095492
Author(s): Marco Del Giudice, Steven W. Gangestad

Decisions made by researchers while analyzing data (e.g., how to measure variables, how to handle outliers) are sometimes arbitrary, without an objective justification for choosing one alternative over another. Multiverse-style methods (e.g., specification curve, vibration of effects) estimate an effect across an entire set of possible specifications to expose the impact of hidden degrees of freedom and/or obtain robust, less biased estimates of the effect of interest. However, if specifications are not truly arbitrary, multiverse-style analyses can produce misleading results, potentially hiding meaningful effects within a mass of poorly justified alternatives. So far, a key question has received scant attention: How does one decide whether alternatives are arbitrary? We offer a framework and conceptual tools for doing so. We discuss three kinds of a priori nonequivalence among alternatives—measurement nonequivalence, effect nonequivalence, and power/precision nonequivalence. The criteria we review lead to three decision scenarios: Type E decisions (principled equivalence), Type N decisions (principled nonequivalence), and Type U decisions (uncertainty). In uncertain scenarios, multiverse-style analysis should be conducted in a deliberately exploratory fashion. The framework is discussed with reference to published examples and illustrated with the help of a simulated data set. Our framework will help researchers reap the benefits of multiverse-style methods while avoiding their pitfalls.
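To illustrate the multiverse logic the abstract describes, here is a minimal Python sketch of a specification-curve-style analysis on simulated data: the same effect is estimated under every combination of two analytic choices. The variable names, the two specification dimensions, and the injected outliers are illustrative assumptions, not the authors' example.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.standard_normal(n)
y = 0.3 * x + rng.standard_normal(n)          # true slope = 0.3
y[rng.choice(n, 5, replace=False)] += 8.0     # inject a few outliers

# Two "arbitrary-looking" analytic choices: outlier handling x predictor scaling
def drop_outliers(y, x, z=3.0):
    keep = np.abs((y - y.mean()) / y.std()) < z
    return y[keep], x[keep]

outlier_rules = {"keep all":   lambda y, x: (y, x),
                 "drop |z|>3": drop_outliers}
transforms = {"raw x":          lambda x: x,
              "standardized x": lambda x: (x - x.mean()) / x.std()}

effects = []
for (o_name, o_fn), (t_name, t_fn) in itertools.product(
        outlier_rules.items(), transforms.items()):
    yy, xx = o_fn(y, x)
    slope = np.polyfit(t_fn(xx), yy, 1)[0]    # OLS slope for this specification
    effects.append((o_name, t_name, slope))

for o_name, t_name, slope in sorted(effects, key=lambda s: s[2]):
    print(f"{o_name:>10} | {t_name:>14} | slope = {slope:+.3f}")
```

Note that the standardized-x specifications rescale the slope relative to the raw-x ones, a small instance of what the authors call effect nonequivalence: pooling or averaging across these specifications would mix estimates that are not on the same scale, which is exactly the kind of nonarbitrary difference their framework asks researchers to identify before running a multiverse analysis.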

