Evaluation of an Act

Author(s):  
Paul Weirich

The expected-utility principle asserts that an act’s utility equals its expected utility, that is, a probability-weighted average of the utilities of the act’s possible outcomes. The mean-risk principle asserts that an act’s utility equals the sum of (1) the act’s expected utility ignoring the act’s risk and (2) the intrinsic utility of the act’s risk. The justification of both principles uses the independence of evaluations of risks and prospects, taking them in isolation. The scope of intrinsic evaluations of risks and prospects makes the evaluations independent, and their independence grounds the additivity of evaluations of an act’s risks and prospects, as well as the additivity of an evaluation of the act’s risk (in the sense of its exposure to chance) and an evaluation of the act ignoring its risk.
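As a sketch, the two principles can be written in symbols as follows, where P(o | a) and U(o) are the probability and utility of a possible outcome o under act a, EU_{-r}(a) is the act’s expected utility ignoring its risk, and IU(r_a) is the intrinsic utility of the act’s risk; the notation is ours, not the author’s:

```latex
% Expected-utility principle: an act's utility is the probability-weighted
% average of the utilities of its possible outcomes.
U(a) = \mathrm{EU}(a) = \sum_{o} P(o \mid a)\, U(o)

% Mean-risk principle: the act's utility is the sum of its expected utility
% ignoring risk and the intrinsic utility of its risk.
U(a) = \mathrm{EU}_{-r}(a) + \mathrm{IU}(r_a)
```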

2020 ◽  
pp. 248-250
Author(s):  
Paul Weirich

Recognizing that an act’s risk is a consequence of the act yields a version of expected-utility maximization that does not need adjustments for risk in addition to the probabilities and utilities of possible outcomes. This treatment of an act’s risk justifies the expected-utility principle, and the mean-risk principle, for evaluation of an act. Rational attitudes to risks explain the rationality of acting in accord with the principles. They ground the separability relations that support the principles. The expected-utility principle justifies a substantive, and not just a representational, version of the decision principle of expected-utility maximization. Consequently, the principle governs a single choice and not just sets of choices. It demands more than consistency of the choices in a set. It demands that each choice follow the agent’s preferences, and these preferences explain the rationality of a choice that complies with the principle.


2021 ◽  
Vol 7 (3) ◽  
pp. 46
Author(s):  
Jiajun Zhang ◽  
Georgina Cosma ◽  
Jason Watkins

Demand for wind power has grown, and this has increased wind turbine blade (WTB) inspections and defect repairs. This paper empirically investigates the performance of state-of-the-art deep learning algorithms, namely YOLOv3, YOLOv4, and Mask R-CNN, for detecting and classifying defects by type. The paper proposes new performance evaluation measures suitable for defect detection tasks, namely Prediction Box Accuracy, Recognition Rate, and False Label Rate. Experiments were carried out using a dataset, provided by the industrial partner, that contains images from WTB inspections. Three variations of the dataset were constructed using different image augmentation settings. The results revealed that, on average across all proposed evaluation measures, Mask R-CNN outperformed all other algorithms when transformation-based augmentations (i.e., rotation and flipping) were applied. In particular, when using the best dataset, the mean Weighted Average (mWA) values (where mWA is the average of the proposed measures) achieved were: Mask R-CNN: 86.74%, YOLOv3: 70.08%, and YOLOv4: 78.28%. The paper also proposes a new defect detection pipeline, called Image Enhanced Mask R-CNN (IE Mask R-CNN), that includes the best combination of image enhancement and augmentation techniques for pre-processing the dataset, and a Mask R-CNN model tuned for the task of WTB defect detection and classification.
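The abstract does not give formal definitions of the proposed measures, so the sketch below uses plausible stand-ins: Prediction Box Accuracy as the share of predicted boxes that overlap a ground-truth defect at a chosen IoU threshold, Recognition Rate as the share of ground-truth defects that are matched by a detection, False Label Rate as the share of matched detections carrying the wrong class, and mWA as an equally weighted average. All of these choices, including the IoU threshold, are assumptions for illustration only.

```python
# Illustrative sketch only: the paper's exact definitions of Prediction Box
# Accuracy (PBA), Recognition Rate (RR), and False Label Rate (FLR) are not
# given in the abstract; the formulas below are plausible stand-ins.

def evaluate(detections, ground_truths, iou_threshold=0.5):
    """detections / ground_truths: lists of dicts with 'box' (x1, y1, x2, y2) and 'label'."""
    def iou(a, b):
        # Intersection-over-union of two axis-aligned boxes.
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        union = area(a) + area(b) - inter
        return inter / union if union else 0.0

    matched, correct_label = 0, 0
    for det in detections:
        best = max(ground_truths, key=lambda gt: iou(det["box"], gt["box"]), default=None)
        if best and iou(det["box"], best["box"]) >= iou_threshold:
            matched += 1
            if det["label"] == best["label"]:
                correct_label += 1

    pba = 100 * matched / len(detections) if detections else 0.0         # boxes placed well
    rr = 100 * matched / len(ground_truths) if ground_truths else 0.0    # defects found
    flr = 100 * (matched - correct_label) / matched if matched else 0.0  # wrong class labels
    mwa = (pba + rr + (100 - flr)) / 3                                   # assumed equal weighting
    return {"PBA": pba, "RR": rr, "FLR": flr, "mWA": mwa}
```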


Author(s):  

The article considers the main physical and geographical factors affecting runoff and the spring flood of rivers in the Arpa River basin, and analyzes the regularities of their space-time distribution. The authors have obtained correlation relationships between the flood runoff layer, the mean maximum runoff module, and the weighted average height of the catchment area of the Arpa River, as well as between the mean annual maximum runoff module for the flood period and the catchment areas of the rivers. These dependencies can be used for preliminary estimates of the spring flood runoff of unexplored rivers in the territory under consideration. A close correlation between the values of the annual runoff and the runoff of the spring flood at the Arpa River – Dzhermuk section has also been revealed; it can be used for forecasting the annual flow.
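A minimal sketch of how such an empirical dependence might be fitted and then applied to an unexplored river is given below; the arrays are illustrative placeholders, not the authors’ data, and the linear form is an assumption.

```python
import numpy as np

# Placeholder observations for gauged rivers in the basin (replace with the
# measured values): weighted average catchment height (m) and spring flood
# runoff layer (mm). These numbers are illustrative only, not the study's data.
mean_height = np.array([1800.0, 2100.0, 2400.0, 2700.0])
flood_runoff_layer = np.array([120.0, 160.0, 210.0, 260.0])

# Fit a simple linear dependence runoff = a * height + b, the kind of
# correlation relationship the study reports.
a, b = np.polyfit(mean_height, flood_runoff_layer, deg=1)

# Preliminary estimate of the spring flood runoff layer for an unexplored
# river whose weighted average catchment height is known.
unexplored_height = 2250.0
estimated_runoff = a * unexplored_height + b
print(f"Estimated spring flood runoff layer: {estimated_runoff:.1f} mm")
```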


2013 ◽  
Vol 15 (1) ◽  
pp. 115 ◽  
Author(s):  
A. HATTOUR ◽  
W. KOCHED

The present study analyses the size and weight-frequency composition of Atlantic bluefin tuna (Thunnus thynnus thynnus) fattened in Tunisian farms over the period 2005-2010 and compares these morphometric parameters with those of wild bluefin tuna landed in 2001 at the port of Sfax (Tunisia). A total of 6,757 wild and fattened bluefin tuna were measured as straight-line fork length and 49,962 were weighed. The average value of K for wild BFT was 1.59, while for fattened BFT after 5-6 months it was 2.43, 2.32, 2.15, 1.61, 1.79 and 1.90 from 2005 to 2010, respectively. The length frequency of fattened bluefin clearly showed a substantial increase in the proportion of juveniles, which rose from 21.4% in 2005 to 31.3% in 2009. For the weight distribution, 73.3% of the fish caught in 2001 were below the annual mean (75.7 kg), while 71 to 72% of fattened fish were under the annual mean weight; the year 2009 was exceptional, because only 57% of fattened fish were under the mean weight. This demonstrates that the fish caught are becoming increasingly small. Mean weights for the fattening period (77 to 124 kg) are clearly higher than that of the wild fish (75.7 kg). This study showed an increase in the number of specimens below the size of first sexual maturity, which will not have the chance to spawn.
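The abstract does not state how the condition factor K is computed; assuming the standard Fulton’s condition factor commonly used in tuna fattening studies, a minimal sketch is:

```python
def fulton_condition_factor(weight_g, fork_length_cm):
    """Fulton's condition factor K = 100 * W / L^3, with weight in grams and
    straight-line fork length in centimetres (assumed formula; the abstract
    does not state which condition index was used)."""
    return 100.0 * weight_g / fork_length_cm ** 3

# Illustrative call: a 75.7 kg bluefin with a hypothetical 168 cm fork length.
print(round(fulton_condition_factor(75_700, 168), 2))
```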


2020 ◽  
pp. 196-220
Author(s):  
Paul Weirich

Governments regulate risks on behalf of the people they serve. Given that regulatory agencies aim for regulatory measures that the public would endorse if rational and informed, the mean-risk method of evaluating acts provides valuable guidance. It offers a way of constructing for a citizen informed probability and utility assignments for a regulation’s possible outcomes, and using these assignments to obtain for the citizen an informed utility assignment for the regulation. The theory of cooperative games combines the utility assignments of multiple agents to support a collective act, and under simplifying assumptions, supports an act that maximizes collective utility, defined as a sum of the act’s utilities for the agents, in the tradition of utilitarianism. This approach to regulation accommodates acts targeting information-sensitive, evidential risks as well as acts targeting physical risks. Verification of a reduction in an evidential risk can meet the standards of objectivity that the law adopts.
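Under the simplifying assumptions mentioned, collective utility can be sketched as follows, where U_i(a) is agent i’s informed utility assignment for regulatory act a and A is the set of feasible regulatory acts; the notation is ours, not the chapter’s:

```latex
% Collective utility as the sum of the act's utilities for the n agents,
% and selection of a regulation that maximizes it (utilitarian form).
CU(a) = \sum_{i=1}^{n} U_i(a),
\qquad
a^{*} = \arg\max_{a \in A} \, CU(a)
```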


2020 ◽  
Vol 13 (7) ◽  
pp. 155
Author(s):  
Zhenlong Jiang ◽  
Ran Ji ◽  
Kuo-Chu Chang

We propose a portfolio rebalance framework that integrates machine learning models into the mean-risk portfolios in multi-period settings with risk-aversion adjustment. In each period, the risk-aversion coefficient is adjusted automatically according to market trend movements predicted by machine learning models. We employ Gini’s Mean Difference (GMD) to specify the risk of a portfolio and use a set of technical indicators generated from a market index (e.g., S&P 500 index) to feed the machine learning models to predict market movements. Using a rolling-horizon approach, we conduct a series of computational tests with real financial data to evaluate the performance of the machine learning integrated portfolio rebalance framework. The empirical results show that the XGBoost model provides the best prediction of market movement, while the proposed portfolio rebalance strategy generates portfolios with superior out-of-sample performances in terms of average returns, time-series cumulative returns, and annualized returns compared to the benchmarks.
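A minimal sketch of Gini’s Mean Difference for a sample of returns, together with a mean-risk objective penalized by a period-specific risk-aversion coefficient, is given below; the function names are ours, and how the coefficient is set from the predicted market trend is left outside the sketch.

```python
import numpy as np

def gini_mean_difference(returns):
    """Gini's Mean Difference: the average absolute difference between all
    distinct pairs of return observations (one common sample definition)."""
    r = np.asarray(returns, dtype=float)
    n = r.size
    # Sum of |r_i - r_j| over all ordered pairs (diagonal terms are zero),
    # divided by the number of distinct ordered pairs n * (n - 1).
    diffs = np.abs(r[:, None] - r[None, :])
    return diffs.sum() / (n * (n - 1))

def mean_risk_objective(portfolio_returns, risk_aversion):
    """Mean-risk score: expected return penalized by GMD. In the paper the
    risk-aversion coefficient is adjusted each period from the machine
    learning model's market-trend prediction; here it is passed in directly."""
    return np.mean(portfolio_returns) - risk_aversion * gini_mean_difference(portfolio_returns)
```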


1989 ◽  
Vol 71 (5) ◽  
pp. 673-680 ◽  
Author(s):  
Claudia S. Robertson ◽  
Raj K. Narayan ◽  
Charles F. Contant ◽  
Robert G. Grossman ◽  
Ziya L. Gokaslan ◽  
...  

✓ Intracranial compliance, as estimated from a computerized frequency analysis of the intracranial pressure (ICP) waveform, was continuously monitored during the acute postinjury phase in 55 head-injured patients. In previous studies, the high-frequency centroid (HFC), which was defined as the power-weighted average frequency within the 4- to 15-Hz band of the ICP power density spectrum, was found to inversely correlate with the pressure-volume index (PVI). An HFC of 6.5 to 7.0 Hz was normal, while an increase in the HFC to 9.0 Hz coincided with a reduction in the PVI to 13 ml and indicated exhaustion of intracranial volume-buffering capacity. The mean HFC for individual patients in the present study ranged from 6.8 to 9.0 Hz, and the length of time that the HFC was greater than 9.0 Hz ranged from 0 to 104.8 hours. The mortality rate increased concomitantly with the mean HFC, from 7% when the mean HFC was less than 7.5 Hz to 46% when the mean HFC was 8.5 Hz or greater. The length of time that the HFC was 9.0 Hz or greater was also associated with an increased mortality rate, which ranged from 16% if the HFC was never above 9.0 Hz to 60% if the HFC was 9.0 Hz or greater for more than 12 hours. In 12 patients who developed uncontrollable intracranial hypertension or clinical signs of tentorial herniation during the monitoring period, 75% were observed to have had an increase in the HFC to 9.0 Hz or more 1 to 36 hours prior to the clinical decompensation. The more rapid the increase in the HFC, the more likely the deterioration was to be caused by an intracranial hematoma. Continuous monitoring of intracranial compliance by computerized analysis of the ICP waveform may provide an earlier warning of neurological decompensation than ICP per se and, unlike PVI, does not require volumetric manipulation of intracranial volume.
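Following the definition used in the study, a minimal sketch of computing the high-frequency centroid from a digitized ICP waveform is shown below; the sampling rate and the Welch spectral-estimation settings are assumptions, not values from the paper.

```python
import numpy as np
from scipy.signal import welch

def high_frequency_centroid(icp_waveform, fs=100.0):
    """Power-weighted average frequency within the 4- to 15-Hz band of the
    ICP power density spectrum (the HFC as defined in the study). The
    sampling rate and spectral-estimation settings here are illustrative."""
    freqs, psd = welch(icp_waveform, fs=fs, nperseg=int(4 * fs))
    band = (freqs >= 4.0) & (freqs <= 15.0)
    return np.sum(freqs[band] * psd[band]) / np.sum(psd[band])
```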


2017 ◽  
Vol 2017 ◽  
pp. 1-9
Author(s):  
Xiao-Lei Wang ◽  
Da-Gang Lu

The mean seismic probability risk model has been widely used in seismic design and safety evaluation of critical infrastructures. In this paper, an analysis of the confidence levels and a derivation of the error equations of the mean seismic probability risk model are conducted. It is found that the confidence levels and error values of the mean seismic probability risk model change from site to site, and that for most sites the confidence levels are low and the error values are large. Meanwhile, the confidence levels of the ASCE/SEI 43-05 design parameters are analyzed, and the error equation of the achieved performance probabilities based on ASCE/SEI 43-05 is also obtained. The confidence levels of design results obtained using the ASCE/SEI 43-05 criteria are not high (less than 95%); a high confidence level of uniform risk cannot be achieved using the ASCE/SEI 43-05 criteria, and for some sites the error values between the risk model with a target confidence level and the mean risk model using the ASCE/SEI 43-05 criteria are large. It is suggested that a seismic risk model with high confidence levels, rather than the mean seismic probability risk model, should be used in the future.
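For context, the mean seismic probability risk model analyzed here is conventionally written as the convolution of the mean fragility curve with the mean seismic hazard curve; the paper’s confidence-level and error equations are not reproduced in the abstract, so only this standard form is sketched (our notation):

```latex
% Mean annual probability of failure: the mean conditional probability of
% failure P_f(a) integrated against the slope of the mean hazard curve H(a),
% where a is the ground-motion level.
\bar{P}_F = \int_{0}^{\infty} P_f(a) \left| \frac{d\bar{H}(a)}{da} \right| da
```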


1990 ◽  
Vol 6 (1) ◽  
pp. 173-182 ◽  
Author(s):  
Chang-Chuan Chan ◽  
Yukio Yanagisawa ◽  
John D. Spengler

Indoor and outdoor nitrogen dioxide (NO2) concentrations in 23 homes from two areas of Taiwan, the city of Taipei and a rural village in central Taiwan, were measured concurrently from December 1987 to January 1988. NO2 measurements were carried out by Palmes tubes for one week and by filter badges for two days. In Taipei, the mean NO2 concentrations outdoors, in the kitchens, in the living rooms, and in the bedrooms were 40.1 ppb, 34.4 ppb, 32.1 ppb, and 29.7 ppb for one week, and 25.7 ppb, 25.6 ppb, 22.6 ppb, and 20.5 ppb for two days. In the village of central Taiwan, the corresponding concentrations were 23.5 ppb, 24.5 ppb, 20.4 ppb, and 17.5 ppb for one week, and 20.3 ppb, 24.7 ppb, 18.8 ppb, and 15.4 ppb for two days. The NO2 concentrations in all microenvironments in Taipei were significantly higher than those in the village of central Taiwan. In Taipei, the outdoor NO2 concentrations were significantly higher than the indoor concentrations. In the village of central Taiwan, the NO2 measurements in the kitchens were higher than all other measurements indoors and outdoors. In Taipei, houses that used natural gas as cooking fuel had slightly higher indoor NO2 concentrations than houses that used LPG. Cement houses had slightly higher indoor NO2 concentrations than brick houses. The mean of housewives' exposures was 30.8 ppb in Taipei and 19.9 ppb in the village of central Taiwan. The explanatory power for housewives' exposure to NO2 was 72% for the time-weighted-average model and 70% for the simple linear regression model.
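A minimal sketch of the time-weighted-average exposure model referred to in the last sentence is given below; the microenvironment concentrations are the one-week Taipei values from the abstract, while the hours spent in each microenvironment are placeholders to be replaced with time-activity data.

```python
def time_weighted_average_exposure(concentrations_ppb, hours):
    """Time-weighted-average NO2 exposure: sum of (concentration x time spent
    in each microenvironment) divided by the total time."""
    total_time = sum(hours.values())
    return sum(concentrations_ppb[k] * hours[k] for k in hours) / total_time

# One-week Taipei microenvironment concentrations (ppb) from the abstract;
# the hours per day are illustrative placeholders, not study data.
taipei_ppb = {"outdoor": 40.1, "kitchen": 34.4, "living room": 32.1, "bedroom": 29.7}
hours_per_day = {"outdoor": 3, "kitchen": 3, "living room": 8, "bedroom": 10}
print(round(time_weighted_average_exposure(taipei_ppb, hours_per_day), 1))
```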

