multiple experts
Recently Published Documents


TOTAL DOCUMENTS: 220 (last five years: 66)

H-INDEX: 22 (last five years: 2)

2021 ◽  
Author(s):  
Alexandre Triay Bagur ◽  
Paul Aljabar ◽  
Gerard R Ridgway ◽  
Michael Brady ◽  
Daniel Bulte

Pancreatic disease can be spatially inhomogeneous. For this reason, quantitative imaging studies of the pancreas have often targeted the three main anatomical pancreatic parts (head, body, and tail), traditionally using a balanced region of interest (ROI) strategy. Existing automated analysis methods have implemented whole-organ segmentation, which provides an overall quantification but fails to address spatial heterogeneity in disease. A method is presented to automatically refine a whole-organ segmentation of the pancreas into head, body, and tail subregions for abdominal magnetic resonance imaging (MRI). The subsegmentation method is based on diffeomorphic registration to a group-average template image on which the parts are manually annotated. For a new whole-pancreas segmentation, the aligned template's part labels are automatically propagated to the segmentation of interest. The method is validated retrospectively on the UK Biobank imaging substudy (scanned using a 2-point Dixon protocol at 1.5 tesla), using a nominally healthy cohort of 100 subjects for template creation and 50 independent subjects for validation. Pancreas head, body, and tail were annotated by multiple experts on the validation cohort, which served as the benchmark for the automated method's performance. Good intra-rater (Dice overlap mean, Head: 0.982, Body: 0.940, Tail: 0.961, N=30) and inter-rater (Dice overlap mean, Head: 0.968, Body: 0.905, Tail: 0.943, N=150) agreement was observed. No significant difference (Wilcoxon rank-sum test, DSC, Head: p=0.4358, Body: p=0.0992, Tail: p=0.1080) was observed between the manual annotations and the automated method's predictions. Results on regional pancreatic fat assessment are also presented, obtained by intersecting the 3-D parts segmentation with a single 2-D multi-echo gradient-echo slice from the same scanning session that was used to compute MRI proton density fat fraction (MRI-PDFF). Initial application of the method to a type 2 diabetes cohort showed its utility for assessing pancreatic disease heterogeneity.
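A minimal sketch of the label-propagation step described above: template part labels are resampled into a new subject's space using a precomputed registration transform, and agreement is scored with Dice overlap. The use of SimpleITK, the file names, and the label coding (1=head, 2=body, 3=tail) are assumptions for illustration; the paper's actual registration pipeline is not reproduced.

```python
# Hedged sketch: propagate template head/body/tail labels to a new
# whole-pancreas mask and score agreement with the Dice coefficient.
# Assumes a precomputed (e.g. diffeomorphic) template-to-subject transform.
import numpy as np
import SimpleITK as sitk

def propagate_part_labels(template_labels_path, transform_path, target_mask_path):
    """Resample the template's part labels into the target subject's space."""
    labels = sitk.ReadImage(template_labels_path)    # 1=head, 2=body, 3=tail (assumed coding)
    target = sitk.ReadImage(target_mask_path)        # whole-pancreas mask of the new subject
    tfm = sitk.ReadTransform(transform_path)         # template -> subject transform
    warped = sitk.Resample(labels, target, tfm,
                           sitk.sitkNearestNeighbor, 0, labels.GetPixelID())
    # Keep propagated labels only inside the subject's whole-organ segmentation.
    return sitk.GetArrayFromImage(warped) * (sitk.GetArrayFromImage(target) > 0)

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```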


2021 ◽  
Author(s):  
Toly Chen ◽  
Yu Cheng Wang

Abstract: To enhance the effectiveness of projecting the cycle time range of a job in a factory, a hybrid big data analytics and Industry 4.0 (BD-I4) approach is proposed in this study. As a joint application of big data analytics and Industry 4.0, the BD-I4 approach is distinct from existing methods in this field. In the BD-I4 approach, each expert first constructs a fuzzy deep neural network (FDNN) to project the cycle time range of a job, an application of big data analytics (i.e., deep learning). Subsequently, fuzzy weighted intersection (FWI) is applied to aggregate the cycle time ranges projected by the experts while accounting for their unequal authority levels, an application of Industry 4.0 (i.e., artificial intelligence). After applying the BD-I4 approach to a real case, the experimental results showed that the proposed methodology improved projection precision by up to 72%. This result implies that, rather than relying on a single expert, seeking collaboration among multiple experts may be more effective and efficient.
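A minimal sketch of the aggregation idea: each expert's projected cycle time range is modelled as a triangular fuzzy number, and memberships are combined with a weighted geometric mean as a stand-in intersection-type operator in which the weights reflect unequal authority levels. The exact FWI operator and the FDNN outputs used in the paper are not reproduced; the numbers and weights below are illustrative only.

```python
# Hedged sketch: weighted intersection-style aggregation of experts'
# fuzzy cycle-time ranges (stand-in operator, not the paper's exact FWI).
import numpy as np

def tri_membership(x, lo, mode, hi):
    """Piecewise-linear triangular membership function."""
    return np.interp(x, [lo, mode, hi], [0.0, 1.0, 0.0])

def weighted_intersection(ranges, weights, grid):
    """Combine experts' memberships with a weighted geometric mean;
    weights encode authority levels and are normalized to sum to 1."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    mus = np.array([tri_membership(grid, *r) for r in ranges])   # (experts, grid)
    return np.prod(np.clip(mus, 1e-12, None) ** w[:, None], axis=0)

# Example: three experts project a job's cycle time (hours); expert 1 has more authority.
grid = np.linspace(30, 60, 301)
agg = weighted_intersection([(38, 44, 50), (36, 42, 52), (40, 45, 55)],
                            weights=[0.5, 0.3, 0.2], grid=grid)
lo, hi = grid[agg > 0.5][[0, -1]]     # 0.5-cut as the aggregated cycle time range
print(lo, hi)
```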


2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Fatin Amirah Ahmad Shukri ◽  
Zaidi Isa

The Mamdani fuzzy inference system has been widely used for potential risk modelling and management. Decision-making input is usually provided by multiple experts in the field, and conflicting information from different experts remains an open issue that has attracted further research. The variety of risk factors in a project makes it difficult for decision makers to reach reliable decisions on the whole project, since the assessment involves ambiguity, vagueness, and fuzziness. Introducing a fuzzy inference system into the evaluation of construction risk makes the reasoning process explainable and hence helps overcome such problems. Risk factors under project management risk were identified through literature sources and expert opinion. It is found that the likelihood and severity of risk are naturally interlinked with concepts from fuzzy theory. The triangular membership function was selected for the model's input and output linguistic variables. The methodology employs a fuzzy aggregation system in which an appropriate control action can be determined from acquired expert judgment. A total of 23 rules with the logical OR operator, truncation implication, and the Mean of Maxima (MoM) defuzzification method were used to create an effective fuzzy model for decision-making. The framework determines the relationship between input and output parameters through if-then rules or mathematical functions using an effective fuzzy arithmetic operator. The study addresses the principal issues of multi-expert opinions based on a Mamdani-type decision system, with an illustrative example taken from a medium-sized project in Malaysia's construction industry. By comparing with other experimental results, we verify the rationality and reliability of the proposed method.
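A minimal sketch of one Mamdani inference step in the spirit described above: triangular/shoulder membership functions, OR (max) rule antecedents, truncation (min) implication, max aggregation, and Mean-of-Maxima defuzzification. The study's actual 23-rule base, universes, and linguistic terms are not reproduced; the three rules and 0-10 scales below are illustrative.

```python
# Hedged sketch of a two-input (likelihood, severity) Mamdani risk model.
import numpy as np

def tri(x, a, b, c):
    return np.interp(x, [a, b, c], [0.0, 1.0, 0.0])

def trap(x, a, b, c, d):
    # Shoulder terms extend slightly past the universe so endpoints keep membership 1.
    return np.interp(x, [a, b, c, d], [0.0, 1.0, 1.0, 0.0])

risk = np.linspace(0, 10, 1001)                       # output universe (illustrative)
out_sets = {"low": trap(risk, -1, 0, 2, 5),
            "medium": tri(risk, 2, 5, 8),
            "high": trap(risk, 5, 8, 10, 11)}

def infer(likelihood, severity):
    L = {"low": trap(likelihood, -1, 0, 2, 5), "high": trap(likelihood, 5, 8, 10, 11)}
    S = {"low": trap(severity, -1, 0, 2, 5), "high": trap(severity, 5, 8, 10, 11)}
    rules = [                                         # antecedents combined with OR = max
        (max(L["low"], S["low"]), "low"),
        (max(L["high"], S["low"]), "medium"),
        (max(L["high"], S["high"]), "high"),
    ]
    agg = np.zeros_like(risk)
    for strength, term in rules:
        agg = np.maximum(agg, np.minimum(strength, out_sets[term]))  # truncation + max aggregation
    return risk[agg == agg.max()].mean()              # Mean of Maxima defuzzification

print(infer(likelihood=7.0, severity=8.0))
```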


2021 ◽  
Author(s):  
Thomas W. Keelin ◽  
Ronald A. Howard

Users of probability distributions frequently need to convert data (empirical, simulated, or elicited) into a continuous probability distribution and to update that distribution when new data become available. Often, it is unclear which traditional probability distribution(s) to use, fitting to data is laborious and unsatisfactory, little insight emerges, and updating with Bayes' rule is impractical. Here we offer an alternative: a family of continuous probability distributions, fitting methods, and tools that provide sufficient shape and boundedness flexibility to closely match virtually any probability distribution and most data sets; involve a single set of simple closed-form equations; stimulate potentially valuable insights when applied to empirical data; are simply fit to data with ordinary least squares; are easy to combine (as when weighting the opinions of multiple experts); and, under certain conditions, are easily updated in closed form according to Bayes' rule when new data become available. The Bayesian updating method is presented in a readily understandable way, illustrated by a fisherman updating his catch probabilities when changing the river on which he fishes. While metalog applications have been shown to improve decision-making, the methods and results herein are broadly applicable to virtually any use of continuous probability in any field of human endeavor. Diverse data sets may be explored and modeled in these new ways with freely available spreadsheets and tools.
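A minimal sketch of the ordinary-least-squares fitting step: a 4-term unbounded metalog is fit to sorted data evaluated at midpoint plotting positions, using the standard metalog basis functions (constant, logit, and their interactions with y - 0.5) from Keelin (2016). The term count, probability grid, and simulated data are illustrative choices, not the paper's examples.

```python
# Hedged sketch: OLS fit of a 4-term metalog quantile function to data.
import numpy as np

def metalog_basis(y, k=4):
    """First k metalog basis functions evaluated at cumulative probabilities y in (0, 1)."""
    logit = np.log(y / (1.0 - y))
    cols = [np.ones_like(y), logit, (y - 0.5) * logit, (y - 0.5)]
    return np.column_stack(cols[:k])

def fit_metalog(x, k=4):
    """Least-squares metalog coefficients for sorted data at empirical probabilities."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    y = (np.arange(1, n + 1) - 0.5) / n          # midpoint plotting positions
    a, *_ = np.linalg.lstsq(metalog_basis(y, k), x, rcond=None)
    return a

def metalog_quantile(a, y):
    """Quantile function M(y) = basis(y) @ a."""
    return metalog_basis(np.asarray(y, dtype=float), k=len(a)) @ a

# Example: fit to simulated data and read off the median and a 90% interval.
data = np.random.default_rng(0).lognormal(mean=0.0, sigma=0.5, size=200)
a = fit_metalog(data)
print(metalog_quantile(a, [0.05, 0.5, 0.95]))
```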


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Mehdi Keshavarz-Ghorabaee

Abstract: Distribution is a strategic function of logistics in different companies. Establishing distribution centers (DCs) in appropriate locations helps companies reach long-term goals and maintain better relations with their customers. Assessment of possible locations for opening new DCs can be considered a Multi-Criteria Decision-Making (MCDM) problem. In this study, a decision-making approach is proposed to assess DC locations. The proposed approach is based on Stepwise Weight Assessment Ratio Analysis II (SWARA II), the Method based on the Removal Effects of Criteria (MEREC), Weighted Aggregated Sum Product Assessment (WASPAS), simulation, and the assignment model. The assessment process is performed using subjective and objective criteria weights determined from multiple experts' judgments. The decision matrix, subjective weights, and objective weights are modeled with the triangular probability distribution to assess the possible alternatives. Then, using simulation and the assignment model, the final aggregated results are determined. A case of DC location assessment is addressed to show the applicability of the proposed approach. A comparative analysis is also made to verify the results. The analyses of this study show that the proposed approach is efficient in dealing with the assessment of DC locations, and the final results are congruent with those of existing MCDM methods.
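A minimal sketch of the simulation step: decision-matrix entries and criterion weights are drawn from triangular distributions (low, mode, high triples summarizing the experts' judgments), and a WASPAS score is computed per run. The SWARA II / MEREC weighting and the final assignment model are omitted, and the three candidate locations, two criteria, and triangular parameters below are made up for illustration.

```python
# Hedged sketch: Monte Carlo WASPAS scoring with triangular-distribution inputs.
import numpy as np

rng = np.random.default_rng(1)

def waspas(X, w, lam=0.5, benefit=None):
    """WASPAS score: lam * weighted sum + (1 - lam) * weighted product of the
    linearly normalized decision matrix X (alternatives x criteria)."""
    benefit = np.ones(X.shape[1], dtype=bool) if benefit is None else benefit
    R = np.where(benefit, X / X.max(axis=0), X.min(axis=0) / X)   # linear normalization
    wsm = (R * w).sum(axis=1)                                     # weighted sum model
    wpm = np.prod(R ** w, axis=1)                                 # weighted product model
    return lam * wsm + (1 - lam) * wpm

# Illustrative triangular parameters: 3 candidate DC locations, 2 benefit criteria.
lo = np.array([[6, 5], [7, 4], [5, 6]], dtype=float)
mo = np.array([[7, 6], [8, 5], [6, 7]], dtype=float)
hi = np.array([[8, 8], [9, 7], [8, 9]], dtype=float)
w_lo, w_mo, w_hi = np.array([0.4, 0.4]), np.array([0.6, 0.4]), np.array([0.7, 0.6])

scores = []
for _ in range(1000):
    X = rng.triangular(lo, mo, hi)          # sampled decision matrix
    w = rng.triangular(w_lo, w_mo, w_hi)
    w = w / w.sum()                         # sampled, normalized criteria weights
    scores.append(waspas(X, w))
print(np.mean(scores, axis=0))              # average WASPAS score per candidate location
```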


PLoS ONE ◽  
2021 ◽  
Vol 16 (9) ◽  
pp. e0256919
Author(s):  
A. M. Hanea ◽  
D. P. Wilkinson ◽  
M. McBride ◽  
A. Lyon ◽  
D. van Ravenzwaaij ◽  
...  

Structured protocols offer a transparent and systematic way to elicit and combine/aggregate probabilistic predictions from multiple experts. These judgements can be aggregated behaviourally or mathematically to derive a final group prediction. Mathematical rules (e.g., weighted linear combinations of judgements) provide an objective approach to aggregation. The quality of this aggregation can be defined in terms of accuracy, calibration, and informativeness. These measures can be used to compare different aggregation approaches and help decide which aggregation produces the “best” final prediction. When experts’ performance can be scored on similar questions ahead of time, these scores can be translated into performance-based weights, and a performance-based weighted aggregation can then be used. When this is not possible, several other aggregation methods, informed by measurable proxies for good performance, can be formulated and compared. Here, we develop a suite of aggregation methods, informed by previous experience and the available literature. We differentially weight our experts’ estimates by measures of reasoning, engagement, openness to changing their mind, informativeness, prior knowledge, and the extremity, asymmetry, or granularity of estimates. Next, we investigate the relative performance of these aggregation methods using three datasets. The main goal of this research is to explore how measures of the knowledge and behaviour of individuals can be leveraged to produce a better-performing combined group judgement. Although the accuracy, calibration, and informativeness of the majority of methods are very similar, a couple of the aggregation methods consistently distinguish themselves as among the best or worst. Moreover, the majority of methods outperform the usual benchmarks provided by the simple average or the median of estimates.
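A minimal sketch of the core comparison: a weighted linear pool of expert probability estimates, with weights derived from a measurable proxy for performance, scored against the simple-average and median benchmarks using the Brier score. The simulated experts, the "engagement" proxy, and the scoring setup are illustrative assumptions, not the paper's datasets or its specific weighting schemes.

```python
# Hedged sketch: proxy-weighted linear pooling of expert probabilities vs.
# the simple average and the median, evaluated with the Brier score.
import numpy as np

rng = np.random.default_rng(2)
n_experts, n_questions = 10, 50
truth = rng.integers(0, 2, n_questions)                     # binary outcomes
skill = rng.uniform(0.55, 0.8, n_experts)                   # latent expert skill (simulated)
probs = np.clip(truth + rng.normal(0.0, 1.0 - skill[:, None],
                                   (n_experts, n_questions)), 0.01, 0.99)
proxy = skill + rng.normal(0.0, 0.05, n_experts)            # noisy proxy, e.g. engagement score

def brier(p, y):
    """Mean squared error between forecast probabilities and binary outcomes."""
    return np.mean((p - y) ** 2)

w = proxy / proxy.sum()                                     # proxy-based weights
pooled = {"proxy-weighted": w @ probs,                      # weighted linear combination
          "simple average": probs.mean(axis=0),
          "median": np.median(probs, axis=0)}
for name, p in pooled.items():
    print(name, round(brier(p, truth), 4))
```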

