majority consensus
Recently Published Documents

TOTAL DOCUMENTS: 36 (FIVE YEARS: 15)
H-INDEX: 7 (FIVE YEARS: 1)

Conservation ◽  
2021 ◽  
Vol 1 (2) ◽  
pp. 121-138
Author(s):  
Spartaco Gippoliti ◽  
Jan Robovský ◽  
Francesco M. Angelici

Ecotourism can provide an important source of financial income for African countries and can therefore help biodiversity policies on the continent. Translocations can be a powerful tool to spread economic benefits among countries and communities; yet, to be positive for biodiversity conservation, they require a basic knowledge of conservation units through appropriate taxonomic research. This is not always the case, as taxonomy was considered an outdated discipline for almost a century, and some plurality in taxonomic approaches is incorrectly considered a disadvantage for conservation work. As an example, the diversity of the genus Giraffa and its recent taxonomic history illustrate the importance of such knowledge for a sound conservation policy that includes translocations. We argue that a fine-grained conservation perspective that prioritizes all remaining populations along the Nile Basin is needed. Translocations are important tools for the conservation of giraffe diversity, but more discussion is needed, especially about moving giraffes to regions where the autochthonous taxa/populations are no longer extant. As the current discussion about giraffe taxonomy is too focused on the number of giraffe species, we argue that a plurality of taxonomic and conservation approaches might be beneficial, e.g., for defining the number of units requiring separate management using a (majority) consensus across different concepts (e.g., MU, management unit; ESU, evolutionarily significant unit; and ECU, elemental conservation unit). A taxonomically sensitive translocation policy/strategy would be important for the preservation of current diversity, while also supporting the ecological restoration of some regions through rewilding. A summary table of the main translocation operations of African mammals with underlying problems is included.
Therefore, we call for increased attention toward the taxonomy of African mammals not only as the basis for sound conservation but also as a further opportunity to enlarge the geographic scope of ecotourism in Africa.


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Qing Ning ◽  
Dali Wang ◽  
Fei Cheng ◽  
Yuheng Zhong ◽  
Qi Ding ◽  
...  

Background: Mutations in an enzyme target are one of the most common mechanisms whereby antibiotic resistance arises. Identification of resistance mutations in bacteria is essential for understanding the structural basis of antibiotic resistance and for the design of new drugs. However, the experimental approaches traditionally used to identify resistance mutations are usually labor-intensive and costly. Results: We present a machine learning (ML)-based classifier for predicting rifampicin (Rif) resistance mutations in bacterial RNA polymerase subunit β (RpoB). A total of 186 mutations were gathered from the literature for developing the classifier, using 80% of the data as the training set and the rest as the test set. The features of the mutated RpoB and their binding energies with Rif were calculated through computational methods and used as the mutation attributes for modeling. Classifiers based on five ML algorithms, i.e., decision tree, k-nearest neighbors, naïve Bayes, probabilistic neural network, and support vector machine, were first built, and a majority consensus (MC) approach was then used to obtain a new classifier based on the classifications of the five individual ML algorithms. The MC classifier comprehensively improved the predictive performance, with accuracy, F-measure, and AUC of 0.78, 0.83, and 0.81 for the training set and 0.84, 0.87, and 0.83 for the test set, respectively. Conclusion: The MC classifier provides an alternative methodology for rapid identification of resistance mutations in bacteria, which may help with early detection of antibiotic resistance and new drug discovery.
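The abstract does not spell out the voting rule itself; a minimal sketch of a majority-consensus combiner over the five base classifiers' labels might look like the following (the "resistant"-favoring tie-break is an illustrative assumption, not from the paper):

```python
from collections import Counter

def majority_consensus(predictions):
    """Combine per-classifier labels for one sample by majority vote.

    predictions: list of labels, one per base classifier (e.g. from the
    decision tree, kNN, naive Bayes, PNN and SVM models).
    Ties are broken in favour of 'resistant' here, a conservative
    choice made for illustration only.
    """
    counts = Counter(predictions).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return "resistant"  # tie-break (assumption)
    return counts[0][0]

# Five base classifiers vote on one candidate RpoB mutation:
votes = ["resistant", "resistant", "susceptible", "resistant", "susceptible"]
print(majority_consensus(votes))  # -> resistant
```

With an odd number of binary voters, as here, a strict tie cannot occur; the tie-break only matters if the scheme is reused with an even ensemble or more than two classes.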


Endocrines ◽  
2021 ◽  
Vol 2 (2) ◽  
pp. 91-98
Author(s):  
Maria I. Linares Valderrama ◽  
Marilyn A. Arosemena ◽  
Anu Thekkumkattil ◽  
Russ A. Kuker ◽  
Rosa P. Castillo ◽  
...  

Background: Substantial inter-observer variation has been documented in the recognition and description of specific sonographic features as well as for ATA sonographic risk (ASR). This raises the question of whether the risk stratification proposed by the ATA guidelines is reproducible and applicable for nodules with indeterminate cytology. The aim of the study was to determine the inter-reader agreement (IRR) among radiologists using the 2015 ASR stratification in indeterminate thyroid nodules. Methods: Three board-certified radiologists, blinded to clinical data and to each other, interpreted the ultrasound findings of 179 nodules with Bethesda III cytology. The nodules were classified as high suspicion (HS), intermediate suspicion (IS), low suspicion (LS), or very low suspicion (VLS). Echogenicity, composition, taller-than-wide shape, vascularity, type of margins, and the presence and type of calcifications were also described. Results: By majority consensus, 28%, 27%, 39%, and 5% of nodules were described as high, intermediate, low, and very low ASR, respectively. The inter-reader agreement was near perfect (κ = 0.82, 95% CI 0.77–0.87). Nodules were then paired into higher-risk (HS + IS) and lower-risk (LS + VLS) categories, with substantial agreement (κ = 0.7) in both categories. Conclusion: A near-perfect agreement among readers was observed when stratifying indeterminate cytology nodules for ASR.
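The study reports a single κ across three readers (presumably a multi-rater statistic such as Fleiss' kappa); as a simpler illustration of chance-corrected agreement, Cohen's kappa for two readers over hypothetical ASR labels can be computed as:

```python
def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two readers rating the same set of nodules.

    ratings_a, ratings_b: equal-length lists of categorical labels
    (e.g. 'HS', 'IS', 'LS', 'VLS').
    """
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    categories = set(ratings_a) | set(ratings_b)
    # Observed agreement: fraction of nodules both readers label alike
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Expected agreement if the two readers rated independently
    p_e = sum((ratings_a.count(c) / n) * (ratings_b.count(c) / n)
              for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Illustrative (not study) data:
reader1 = ["HS", "IS", "LS", "LS", "VLS", "HS"]
reader2 = ["HS", "IS", "LS", "IS", "VLS", "HS"]
print(round(cohens_kappa(reader1, reader2), 2))  # -> 0.78
```

Kappa values near 0.8, as reported in the study, are conventionally read as near-perfect agreement.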


2020 ◽  
Author(s):  
Qing Ning ◽  
Dali Wang ◽  
Fei Cheng ◽  
Yuheng Zhong ◽  
Qi Ding ◽  
...  

Background: Mutations in an enzyme target are one of the most common mechanisms whereby antibiotic resistance arises. Identification of resistance mutations in bacteria is essential for understanding the structural basis of antibiotic resistance and for the design of new drugs. However, the experimental approaches traditionally used to identify resistance mutations are usually labor-intensive and costly. Results: We present a machine learning (ML)-based classifier for predicting rifampicin (Rif) resistance mutations in bacterial RNA polymerase subunit β (RpoB). A total of 66 resistance mutations were gathered from the literature to form the positive dataset, while 53 residue variations of RpoB among a series of naturally occurring species were obtained as the negative dataset. The features of the mutated RpoB and their binding energies with Rif were calculated through computational methods and used as the mutation attributes for modeling. Classifiers based on four ML algorithms, i.e., decision tree, k-nearest neighbors, naïve Bayes, and support vector machine, were developed, showing accuracy ranging from 0.69 to 0.76. A majority consensus approach was then used to obtain a new classifier based on the classifications of the four individual ML algorithms. The majority consensus classifier significantly improved the predictive performance, with accuracy, precision, recall, and specificity of 0.83, 0.84, 0.86, and 0.83, respectively. Conclusion: The majority consensus classifier provides an alternative methodology for rapid identification of resistance mutations in bacteria, which may help with early detection of antibiotic resistance and new drug discovery.


2020 ◽  
pp. 174077452097512
Author(s):  
Ethan Basch ◽  
Claus Becker ◽  
Lauren J Rogak ◽  
Deborah Schrag ◽  
Bryce B Reeve ◽  
...  

Background: The Patient-Reported Outcomes version of the Common Terminology Criteria for Adverse Events (PRO-CTCAE) is an item library designed for eliciting patient-reported adverse events in oncology. For each adverse event, up to three individual items are scored for frequency, severity, and interference with daily activities. To align the PRO-CTCAE with other standardized tools for adverse event assessment, including the Common Terminology Criteria for Adverse Events (CTCAE), an algorithm for mapping the individual items for any given adverse event to a single composite numerical grade was developed and tested. Methods: A five-step process was used: (1) All 179 possible PRO-CTCAE score combinations were presented to 20 clinical investigators, who subjectively mapped combinations to single numerical grades ranging from 0 to 3. (2) Combinations with <75% agreement were presented to investigator committees at a National Clinical Trials Network cooperative group meeting to reach majority consensus via anonymous voting. (3) The resulting algorithm was refined via graphical and tabular approaches to ensure directional consistency. (4) Validity, reliability, and sensitivity were assessed in a national study dataset. (5) Accuracy for delineating adverse events between study arms was measured in two Phase III clinical trials (NCT02066181 and NCT01522443). Results: In Step 1, 12/179 score combinations had <75% initial agreement. In Step 2, majority consensus was reached for all combinations. In Step 3, five grades were adjusted to ensure directional consistency. In Steps 4 and 5, composite grades performed well, comparably to individual item scores, on validity, reliability, sensitivity, and between-arm delineation.
Conclusion: A composite grading algorithm has been developed that yields single numerical grades for adverse events assessed via the PRO-CTCAE and can be useful in analyses and reporting.
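The published mapping is a consensus lookup table over all 179 score combinations and is not reproduced in the abstract; the sketch below is a purely hypothetical stand-in (a capped-maximum rule) that only illustrates the shape of such a mapping and the directional-consistency property the investigators enforced:

```python
def composite_grade(frequency, severity, interference):
    """Hypothetical composite-grade rule for one adverse event.

    Each item is scored 0-4 by the patient; the real algorithm is a
    consensus-built lookup table, not this formula. This stand-in takes
    the maximum item score and caps it at grade 3, which preserves
    directional consistency: raising any single item score can never
    lower the composite grade.
    """
    return min(max(frequency, severity, interference), 3)

print(composite_grade(2, 1, 0))  # -> 2
print(composite_grade(4, 3, 4))  # -> 3 (capped at the maximum grade)
```

Directional consistency is exactly the property checked in Step 3 above: the five adjusted grades were those where the subjective consensus would otherwise have let a higher item score produce a lower composite grade.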


2020 ◽  
Vol 16 (4) ◽  
pp. 285-295
Author(s):  
Fatima Zohra Ennaji ◽  
Abdelaziz El Fazziki ◽  
Hasna El Alaoui El Abdallaoui ◽  
Hamada El Kabtane

As social networking has spread, people have started widely sharing their personal opinions and thoughts via these online platforms. The resulting vast and valuable data represent a rich source from which companies can gauge their products' reputation using both social media and crowd judgments. To exploit this wealth of data, a framework is proposed that collects opinions from social media and rating scores from a crowdsourcing platform to perform sentiment analysis, provide insights about a product, and reveal consumer tendencies. During the analysis process, one consumer category (strict consumers) tends to be excluded from the process of reaching a majority consensus. To overcome this, fuzzy clustering is used to compute consumers' credibility. The key novelty of this approach is a new layer of validity checking using a crowdsourcing component, which ensures that the results obtained from social media are supported by opinions extracted directly from real-life consumers. Finally, experiments were carried out to validate the model, with Twitter and Facebook used as data sources. The results show that this approach is more efficient and accurate than existing solutions, thanks to its two-layer validity-check design.
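The abstract does not detail how the fuzzy-clustering credibility scores enter the consensus; a minimal sketch of a credibility-weighted vote, with purely illustrative labels and weights, might look like:

```python
def weighted_consensus(opinions, credibility):
    """Majority consensus over consumer sentiment labels, where each
    consumer's vote counts with a credibility weight in [0, 1].

    In the paper these weights come from fuzzy clustering; here they
    are illustrative inputs. Returns the label with the largest total
    credibility-weighted support.
    """
    totals = {}
    for label, weight in zip(opinions, credibility):
        totals[label] = totals.get(label, 0.0) + weight
    return max(totals, key=totals.get)

labels = ["positive", "negative", "positive", "negative"]
weights = [0.9, 0.3, 0.8, 0.4]   # low-credibility votes count less
print(weighted_consensus(labels, weights))  # -> positive
```

An unweighted vote over the same four labels would be a tie; the weighting is what lets the consensus reflect credibility rather than raw headcount.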


Author(s):  
Baharan Kamousi ◽  
Suganya Karunakaran ◽  
Kapil Gururangan ◽  
Matthew Markert ◽  
Barbara Decker ◽  
...  

Introduction: Current electroencephalography (EEG) practice relies on interpretation by expert neurologists, which introduces diagnostic and therapeutic delays that can impact patients' clinical outcomes. As EEG practice expands, these experts are becoming increasingly limited resources. A highly sensitive and specific automated seizure detection system would streamline practice and expedite appropriate management for patients with possible nonconvulsive seizures. We aimed to test the performance of a recently FDA-cleared machine learning method (Claritγ, Ceribell Inc.) that measures the burden of seizure activity in real time and generates bedside alerts for possible status epilepticus (SE). Methods: We retrospectively identified adult patients (n = 353) who underwent evaluation of possible seizures with the Rapid Response EEG system (Rapid-EEG, Ceribell Inc.). Automated detection of seizure activity and seizure burden throughout a recording (calculated as the percentage of ten-second epochs with seizure activity in any 5-min EEG segment) was performed with Claritγ, and various thresholds of seizure burden were tested (≥ 10% indicating ≥ 30 s of seizure activity in the last 5 min, ≥ 50% indicating ≥ 2.5 min of seizure activity, and ≥ 90% indicating ≥ 4.5 min of seizure activity and triggering an SE alert). The sensitivity and specificity of Claritγ's real-time seizure burden measurements and SE alerts were compared to the majority consensus of at least two expert neurologists. Results: The majority consensus of neurologists labeled the 353 EEGs as normal or slow activity (n = 249), highly epileptiform patterns (HEP, n = 87), or seizures (n = 17; nine longer than 5 min, i.e., SE, and eight shorter than 5 min). The algorithm generated an SE alert (≥ 90% seizure burden) with 100% sensitivity and 93% specificity.
The sensitivity and specificity of various seizure-burden thresholds for detecting patients with seizures were 100% and 82% for ≥ 50% seizure burden and 88% and 60% for ≥ 10% seizure burden. Of the 179 EEG recordings in which the algorithm detected no seizures, seizures were identified by the expert reviewers in only two cases, indicating a negative predictive value of 99%. Discussion: Claritγ detected SE events with high sensitivity and specificity, and it demonstrated a high negative predictive value for distinguishing nonepileptiform activity from seizure and highly epileptiform activity. Conclusions: Accurately ruling out seizures in a large proportion of cases can help prevent unnecessary or aggressive over-treatment in critical care settings, where empiric treatment with antiseizure medications is currently prevalent. Claritγ's high sensitivity for SE and high negative predictive value for cases without epileptiform activity make it a useful tool for triaging treatment and the need for urgent neurological consultation.
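The seizure-burden metric as defined in the abstract (fraction of flagged ten-second epochs within a 5-minute segment) can be sketched as follows; the boolean per-epoch flags and the sliding-window maximum are assumed implementation details, not the vendor's actual code:

```python
def max_seizure_burden(epoch_flags, window_epochs=30):
    """Maximum seizure burden over a recording.

    epoch_flags: booleans, one per 10-second epoch (True = seizure
    activity detected in that epoch). Burden in one 5-minute window
    (30 epochs) is the fraction of flagged epochs; this returns the
    highest burden over all sliding windows, which can then be
    compared against the 10% / 50% / 90% alert thresholds.
    """
    if len(epoch_flags) < window_epochs:
        window_epochs = len(epoch_flags)  # short recording: one window
    best = 0.0
    for start in range(len(epoch_flags) - window_epochs + 1):
        window = epoch_flags[start:start + window_epochs]
        best = max(best, sum(window) / window_epochs)
    return best

# 60 epochs (10 minutes) with seizure activity in epochs 10-39:
flags = [10 <= i < 40 for i in range(60)]
print(max_seizure_burden(flags))  # -> 1.0, i.e. a >= 90% SE alert
```

Under this reading, a ≥ 90% burden corresponds to at least 27 of 30 epochs (≥ 4.5 min) flagged within some 5-minute window, matching the SE alert threshold described above.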


2020 ◽  
Vol 89 (4) ◽  
pp. 471-501
Author(s):  
Andreas Glöckner ◽  
Baiba Renerte ◽  
Ulrich Schmidt

The majority consensus in the empirical literature is that probability weighting functions are typically inverse-S shaped; that is, people tend to overweight small and underweight large probabilities. A separate stream of literature has reported event-splitting effects (also called violations of coalescing) and shown that they can explain violations of expected utility. This raises two questions: (1) whether the observed shape of weighting functions is a mere consequence of the coalesced presentation and, more generally, (2) whether preference elicitation should rely on presenting lotteries in a canonical split form instead of the commonly used coalesced form. We analyze data from a binary choice experiment in which all lottery pairs are presented in both split and coalesced forms. Our results show that presentation in split form leads to a better fit of expected utility theory and to probability weighting functions that are closer to linear. We thus provide some evidence that the extent of probability weighting is not an ingrained feature, but rather a result of processing difficulties.
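The inverse-S shape referred to here is commonly parameterized with the Tversky-Kahneman (1992) weighting function; a minimal illustration (the γ value used is a conventional literature estimate, not one from this study):

```python
def tk_weight(p, gamma=0.61):
    """Tversky-Kahneman probability weighting function:
    w(p) = p^g / (p^g + (1-p)^g)^(1/g).

    With gamma < 1 the curve is inverse-S shaped: small probabilities
    are overweighted and large ones underweighted. gamma = 1 recovers
    linear (expected-utility) weighting, the shape the split-form
    results above move toward.
    """
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

print(tk_weight(0.05) > 0.05)  # small p overweighted -> True
print(tk_weight(0.95) < 0.95)  # large p underweighted -> True
```

A finding that elicited weighting functions are "closer to linear" under split presentation corresponds, in this parameterization, to fitted γ values closer to 1.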

