equal weighting
Recently Published Documents


TOTAL DOCUMENTS: 56 (FIVE YEARS: 16)
H-INDEX: 12 (FIVE YEARS: 1)

Author(s): Ning Zhang, Steven M. Quiring, Trent W. Ford

Abstract: Soil moisture can be obtained from in-situ measurements, satellite observations, and model simulations. This study evaluates the importance of in-situ observations in soil moisture blending and compares different weighting and sampling methods for combining model, satellite, and in-situ soil moisture data to generate an accurate and spatially continuous soil moisture product at 4-km resolution. Four datasets are used: the Antecedent Precipitation Index (API); KAPI, which incorporates in-situ soil moisture observations into the API using regression kriging; SMOS L3 soil moisture; and model-simulated soil moisture from the Noah model as part of the North American Land Data Assimilation System (NLDAS). Triple collocation, least-squares weighting, and equal weighting are used to generate blended soil moisture products. An enumerated weighting scheme is designed to investigate the impact of different weighting schemes. The sensitivity of the blended soil moisture products to sampling schemes, station density, and data formats (absolute values, anomalies, and percentiles) is also investigated. The results reveal that KAPI outperforms API, indicating that incorporating in-situ soil moisture improves the accuracy of the blended soil moisture products. There are no statistically significant (p > 0.05) differences between blended soil moisture using triple collocation and equal weighting, and both methods provide sub-optimal weighting. Optimal weighting is achieved by assigning larger weights to KAPI and smaller weights to SMOS. Using multiple sources of soil moisture helps reduce uncertainty and improve accuracy, especially when the sampling density is low or the sampling stations are less representative. These results are consistent regardless of how soil moisture is represented (absolute values, anomalies, or percentiles).
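To make the blending step concrete, here is a minimal sketch of covariance-based triple collocation and of merging the parent products with either inverse-error-variance (least-squares) or equal weights. It assumes three collocated, independent, already-rescaled series; the function names are ours, and the paper's kriging and anomaly handling are omitted.

```python
import numpy as np

def triple_collocation_errors(x, y, z):
    """Estimate the error variance of each of three collocated,
    independent soil-moisture series from their covariance matrix."""
    C = np.cov(np.vstack([x, y, z]))
    ex = C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2]
    ey = C[1, 1] - C[0, 1] * C[1, 2] / C[0, 2]
    ez = C[2, 2] - C[0, 2] * C[1, 2] / C[0, 1]
    return np.array([ex, ey, ez])

def blend(series, err_var=None):
    """Merge parent products: inverse-error-variance (least-squares)
    weights if err_var is given, equal weighting otherwise."""
    series = np.asarray(series)
    if err_var is None:
        w = np.full(len(series), 1.0 / len(series))  # equal weighting
    else:
        w = (1.0 / err_var) / np.sum(1.0 / err_var)  # least-squares weighting
    return w @ series
```

With `err_var=None` the call reduces to the equal-weighting baseline that the paper finds statistically indistinguishable from triple collocation.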


2021, pp. 53-70
Author(s): Guido Abate, Tommaso Bonafini, Pierpaolo Ferrari

Following the criticism surrounding capitalization weighting, both the academic and practitioner communities have developed alternative approaches to portfolio construction. We analyze one of these approaches, fundamentals-based weighting, which sets the weights of portfolio constituents on the basis of their market multiples and accounting ratios. Our analysis covers four fundamentals-weighted (FW) portfolios based on four different weighting variants, the capitalization-weighted (CW) portfolio, and the equally weighted (EW) portfolio, from January 2004 to December 2020 and in two subperiods (2004–2011 and 2011–2020). We find that in the first subperiod the EW portfolio shows the highest risk-adjusted performance, followed by the FW portfolios. In contrast, in the second subperiod and in the period as a whole, the CW portfolio outperforms the others in terms of risk-adjusted performance. Overall, we conclude that neither the FW portfolios nor the EW portfolio exhibits superior results compared with the classic CW portfolio: FW and EW techniques provided superior risk-adjusted performance only during a period of exceptional financial turmoil. Under normal conditions, they cannot be recommended as a rational investment strategy. JEL classification numbers: G11, G14. Keywords: Fundamental weighting, Capitalization weighting, Equal weighting, Value investing, Indexed investing.
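For concreteness, here is a minimal sketch of the three weighting rules under comparison; the choice of fundamental measure (book value, earnings, etc.) and any capping or smoothing are assumptions on our part, not the authors' exact variants.

```python
import numpy as np

def cap_weights(market_caps):
    """Capitalization weighting (CW): weights proportional to market cap."""
    c = np.asarray(market_caps, dtype=float)
    return c / c.sum()

def equal_weights(n):
    """Equal weighting (EW): 1/N for every constituent."""
    return np.full(n, 1.0 / n)

def fundamental_weights(fundamentals):
    """Fundamentals weighting (FW): weights proportional to an
    accounting measure (e.g., book value) instead of price."""
    f = np.asarray(fundamentals, dtype=float)
    return f / f.sum()

# e.g., three stocks:
print(cap_weights([600, 300, 100]))         # [0.6 0.3 0.1]
print(equal_weights(3))                     # [0.333 0.333 0.333]
print(fundamental_weights([200, 250, 50]))  # [0.4 0.5 0.1]
```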


2021, Vol 42 (Supplement_1), pp. S193-S193
Author(s): Samantha Huang, Justin Dang, Clifford C Sheckter, Haig A Yenikomshian, Justin Gillenwater

Abstract
Introduction: Current methods of burn evaluation and treatment are subjective and dependent on surgeon experience, with high rates of inter-rater variability leading to inaccurate diagnoses and treatment. Machine learning (ML) and automated methods are being used to develop more objective and accurate methods for burn diagnosis and triage. Defined as a subfield of artificial intelligence that applies algorithms capable of knowledge acquisition, machine learning draws patterns from data, which it can then apply to clinically relevant tasks. This technology has the potential to improve burn management by quantifying diagnoses, improving diagnostic accuracy, and increasing access to burn care. The aim of this systematic review is to summarize the literature regarding machine learning and automated methods for burn wound evaluation and treatment.
Methods: A systematic review of articles available on PubMed and MEDLINE (OVID) was performed. Keywords used in the search process included burns, machine learning, deep learning, burn classification technology, and mobile applications. Reviews, case reports, and opinion papers were excluded. Data were extracted on study design, study objectives, study models, devices used to capture data, machine learning or automated software used, expertise level and number of evaluators, and ML accuracy of burn wound evaluation.
Results: The search identified 592 unique titles. After screening, 35 relevant articles were identified for systematic review. Nine studies used machine learning and automated software to estimate percent total body surface area (%TBSA) burned, 4 calculated fluid requirements, 18 estimated burn depth, 5 estimated need for surgery, 6 predicted mortality, and 2 evaluated scarring in burn patients. Devices used to estimate %TBSA burned showed accuracy comparable to or better than traditional methods. Burn depth estimation sensitivities had unweighted means >81%, which increased to >83% with equal weighting applied. Mortality prediction sensitivity had an unweighted mean of 96.75%, which increased to 99.35% with equal weighting.
Conclusions: Machine learning and automated technology are promising tools that provide objective and accurate measures for evaluating burn wounds. Existing methods address the key steps in burn care management; however, the data reported on their robustness remain preliminary. Further resources should be dedicated to leveraging this technology to improve outcomes in burn care.
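The Results contrast unweighted and equally weighted pooled sensitivities. As a rough illustration of what we take these two conventions to be (a case-pooled, sample-size-weighted mean versus a per-study equally weighted mean), the sketch below pools hypothetical per-study sensitivities both ways; all numbers are invented.

```python
import numpy as np

# Hypothetical per-study burn-depth sensitivities and sample sizes.
sens = np.array([0.85, 0.92, 0.78, 0.88])
n = np.array([120, 45, 300, 60])

pooled = np.average(sens, weights=n)  # case-pooled: large studies dominate
equal = sens.mean()                   # equally weighted: each study counts once

print(f"case-pooled={pooled:.3f}, equally weighted={equal:.3f}")
# case-pooled=0.819, equally weighted=0.858
```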


2021
Author(s): Luana Lavagnoli Moreira, Mariana Madruga de Brito, Masato Kobiyama

Abstract. This paper provides a state-of-the-art account of flood vulnerability indices, highlighting worldwide trends and future research directions. A total of 95 peer-reviewed articles published between 2002 and 2019 were systematically analyzed. An exponential rise in research effort is demonstrated, with 80 % of the articles published since 2015. The majority of these studies (62.1 %) focused on the neighborhood scale, followed by the city scale (14.7 %). Min-max normalization (30.5 %), equal weighting (24.2 %), and linear aggregation (80.0 %) were the most common methods. With regard to the indicators used, the focus was on socio-economic aspects (e.g. population density, illiteracy rate, gender), whilst components associated with citizens' coping and adaptive capacity were only marginally covered. Gaps in current research include a lack of sensitivity and uncertainty analyses (present in only 9.5 % and 3.2 % of papers, respectively); inadequate or nonexistent validation of the results (present in 13.7 % of the studies); a lack of transparency regarding the rationale for weighting and indicator selection; and the use of static approaches that disregard temporal dynamics. We discuss the challenges these findings pose for the assessment of flood vulnerability and provide a research agenda for addressing these gaps.
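The three most common methodological choices the review identifies (min-max normalization, equal weighting, linear aggregation) compose a standard index pipeline; here is a minimal sketch with invented indicator data.

```python
import numpy as np

def min_max(x):
    """Min-max normalization: rescale an indicator to [0, 1]."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

def vulnerability_index(indicators):
    """Composite index: min-max normalize each indicator, then
    aggregate linearly with equal weights."""
    norm = np.column_stack([min_max(v) for v in indicators])
    w = np.full(norm.shape[1], 1.0 / norm.shape[1])  # equal weighting
    return norm @ w                                  # linear aggregation

# Three illustrative indicators over five neighborhoods:
density    = [120, 450, 80, 300, 220]
illiteracy = [0.05, 0.12, 0.02, 0.09, 0.20]
elderly    = [0.10, 0.25, 0.08, 0.15, 0.30]
print(vulnerability_index([density, illiteracy, elderly]))
```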


2021, Vol 3 (1), pp. 35-36
Author(s): Eric Holloway

Without domain knowledge, an algorithm given an extremely long sequence of 1s would be unsure whether the sequence is completely random. When asked to predict the next digit, the algorithm can only give an equal weighting to 0 and 1.
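A toy illustration of the point: with no inductive assumption the predictor is uniform, while adding even a weak assumption (here Laplace's rule of succession, our choice of example, not the author's) breaks the symmetry.

```python
def predict_next(seq, model=None):
    """With no domain knowledge (no model), the only defensible
    forecast for the next binary digit weights 0 and 1 equally."""
    if model is None:
        return {0: 0.5, 1: 0.5}  # equal weighting of both outcomes
    return model(seq)

# An inductive assumption (Laplace's rule of succession) breaks the tie:
laplace = lambda s: {1: (s.count(1) + 1) / (len(s) + 2),
                     0: (s.count(0) + 1) / (len(s) + 2)}

print(predict_next([1] * 1000))           # {0: 0.5, 1: 0.5}
print(predict_next([1] * 1000, laplace))  # ~{1: 0.999, 0: 0.001}
```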


Author(s): Ray Hilborn

Abstract: How do we assess the performance of national and international fisheries management organizations? Many organizations produce measures of the extent of overfishing, typically classifying individual stocks as overfished if they are below some biomass threshold. Most agencies then report overall status (i.e. the percentage of stocks overfished, fully exploited, etc.) by giving equal weight to all stocks, regardless of stock size or potential yield. We review the range of indices used to assess overfishing levels and apply them to data from US fisheries to show how they depict very different performance of fisheries. Given that overfishing is a concept embedded in the maximization of long-term harvest, we evaluate how well these indices reflect the extent to which fisheries have maximized sustainable yield. Indices weighted by the potential yield of the stock reflect the regional performance of fisheries much better but are still limited by the arbitrary use of a threshold abundance. For the United States, weighting by maximum sustainable yield or value suggests that the losses from overfishing are smaller than equal-weighting methods imply and that underfishing is much more common than overfishing.
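A minimal sketch of the aggregation choice at issue: the same three hypothetical stocks look mostly overfished under equal weighting but barely overfished when weighted by potential yield (all numbers invented).

```python
import numpy as np

def overfished_index(biomass, threshold, msy, weights="equal"):
    """Share of stocks classified as overfished (biomass below threshold),
    aggregated with equal weights or weighted by potential yield (MSY)."""
    overfished = (np.asarray(biomass) < np.asarray(threshold)).astype(float)
    if weights == "equal":
        w = np.full(len(overfished), 1.0 / len(overfished))
    else:  # weight each stock by its maximum sustainable yield
        msy = np.asarray(msy, dtype=float)
        w = msy / msy.sum()
    return float(overfished @ w)

# One large, healthy stock and two small, depleted ones:
print(overfished_index([0.9, 0.2, 0.3], [0.5] * 3, [1000, 5, 5]))         # 0.667
print(overfished_index([0.9, 0.2, 0.3], [0.5] * 3, [1000, 5, 5], "msy"))  # 0.010
```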


2020
Author(s): Sylvia Niehuis

We introduce an instrument assessing the unique niche of romantic partners feeling that they share the same memories and feelings about their relationship: the Shared Reality regarding one's Relationship (SRR) scale. Existing instruments are less specific, assessing partners' sense of agreement not only about their relationship but also about topics outside of it (e.g., whether they share an opinion about a particular movie). We collected cross-sectional data on 656 romantically partnered individuals (from dating to marriage), testing alpha (equal item weighting) and composite (non-equal item weighting) reliability, as well as convergent, concurrent, criterion-related, and discriminant validity (two types). We also conducted a test-retest reliability study with a roughly two- to three-week interval between assessments (N = 58 individuals). The SRR scale exhibited satisfactory psychometric properties in these areas.
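For reference, the two reliability coefficients differ precisely in their item weighting; a sketch using the standard textbook formulas (not the authors' code) follows.

```python
import numpy as np

def cronbach_alpha(items):
    """Alpha reliability: assumes equal item weighting.
    `items` is an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def composite_reliability(loadings, error_vars):
    """Composite reliability: items effectively weighted by their
    factor loadings rather than equally."""
    lam = np.asarray(loadings, dtype=float).sum()
    theta = np.asarray(error_vars, dtype=float).sum()
    return lam ** 2 / (lam ** 2 + theta)
```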


2020
Author(s): Marcel Binz, Samuel J. Gershman, Eric Schulz, Dominik Endres

Numerous researchers have put forward heuristics as models of human decision making. However, where such heuristics come from remains a topic of ongoing debate. In this work we propose a novel computational model that advances our understanding of heuristic decision making by explaining how different heuristics are discovered and how they are selected. This model, called bounded meta-learned inference, is based on the idea that people make environment-specific inferences about which strategies to use while being efficient in how they use computational resources. We show that our approach discovers two previously suggested types of heuristics, one-reason decision making and equal weighting, in specific environments. Furthermore, the model provides clear and precise predictions about when each heuristic should be applied: knowing the correct ranking of attributes leads to one-reason decision making, knowing only the directions of the attributes leads to equal weighting, and knowing neither leads to strategies that use weighted combinations of multiple attributes. This allows us to gain new insights into the mixed results of prior empirical work on heuristic decision making. In three empirical paired-comparison studies with continuous features, we verify the predictions of our theory and show that it captures several characteristics of human decision making not explained by alternative theories.
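A toy version of the two discovered heuristics, written to match the paper's predictions (knowing the attribute ranking enables one-reason decision making; knowing only attribute directions enables equal weighting, i.e. tallying); the implementations are our simplifications, not the authors' model.

```python
import numpy as np

def one_reason(a, b, validity_order):
    """One-reason decision making (take-the-best): decide on the
    single highest-ranked attribute that discriminates."""
    for i in validity_order:  # attributes ranked by validity
        if a[i] != b[i]:
            return "a" if a[i] > b[i] else "b"
    return "guess"

def equal_weighting(a, b, directions):
    """Equal weighting (tallying): sum all attributes, each coded in
    its known direction, with identical weights."""
    score_a, score_b = np.dot(directions, a), np.dot(directions, b)
    return "a" if score_a > score_b else "b" if score_b > score_a else "guess"

a, b = np.array([1, 0, 1]), np.array([0, 1, 1])
print(one_reason(a, b, validity_order=[0, 1, 2]))      # 'a'
print(equal_weighting(a, b, directions=[1, 1, 1]))     # 'guess'
```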


2020, Vol 12 (6), pp. 2170
Author(s): Joshua Sohn, Pierre Bisquert, Patrice Buche, Abdelraouf Hecham, Pradip P. Kalbar, ...

Despite advances in the data, models, and methods underpinning environmental life cycle assessment (LCA), it remains challenging for practitioners to communicate and interpret results effectively. These shortcomings can bias decisions and hinder public acceptance of planning supported by LCA. This paper introduces a method for interpreting LCA results, Argumentation Corrected Context Weighting-LCA (ArgCW-LCA), to overcome these barriers. ArgCW-LCA incorporates stakeholder preferences, corrects unjustified disagreements, and allows for the inclusion of non-environmental impacts (e.g., economic, social) through a novel weighting scheme and the application of multi-criteria decision analysis, providing transparent and context-relevant decision support. We illustrate the utility of the method through two case studies: a hypothetical decision regarding energy production and a real-world decision regarding polyphenol extraction technologies. In each case, we surveyed a relevant stakeholder group on their environmental views and fed their responses into the model to provide decision support relevant to their perspective. We found marked differences between results using ArgCW-LCA and results from a conventional analysis using an equal-weighting scheme, as well as differentiation between stakeholder preference groups, indicating the importance of applying the perspective of the particular stakeholder group. For instance, the ranking of alternatives reversed when an equal weighting of all environmental and economic dimensions was replaced by ArgCW-LCA. ArgCW-LCA provides opportunities for both public- and private-sector incorporation of LCA, such as in developing enlightened stakeholder value measures, by enabling the LCA practitioner to provide public and private actors with interpreted LCA results that incorporate informed stakeholder perspectives. Furthermore, the method encourages stakeholder multiplicity through participatory design and policymaking that can enhance public backing of actions that make society more sustainable.
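ArgCW-LCA's argumentation machinery is not reproduced here, but the kind of rank reversal reported above is easy to see in a bare weighted-sum MCDA sketch (scores and weights invented).

```python
import numpy as np

def weighted_sum(scores, weights):
    """Weighted-sum MCDA: `scores` is (alternatives x criteria),
    normalized so that higher is better."""
    w = np.asarray(weights, dtype=float)
    return np.asarray(scores) @ (w / w.sum())

# Two alternatives scored on [environmental, economic] criteria:
scores = [[0.90, 0.25],   # A: greener but costlier
          [0.40, 0.80]]   # B: cheaper, higher impact

print(weighted_sum(scores, [0.5, 0.5]))  # [0.575 0.600] -> B ranks first
print(weighted_sum(scores, [0.8, 0.2]))  # [0.770 0.480] -> A ranks first
```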

