Evaluation Of Commercial Test Sets For Use In EMC Surveys At Quiet Rural Sites

Author(s):  
V.P. Arafiles
Diagnostica ◽  
2019 ◽  
Vol 65 (4) ◽  
pp. 193-204
Author(s):  
Johannes Baltasar Hessler ◽  
David Brieber ◽  
Johanna Egle ◽  
Georg Mandler ◽  
Thomas Jahn

Abstract. The Auditory Word List Learning Test (Auditiver Wortlisten Lerntest, AWLT) is part of the test set Cognitive Functions Dementia (CFD; Kognitive Funktionen Demenz) within the Wiener Testsystem (WTS; Vienna Test System). The AWLT was developed along neurolinguistic criteria in order to reduce interactions between the cognitive status of test takers and the linguistic properties of the learning list. Using a sample of healthy participants (N = 44) and patients with Alzheimer's dementia (N = 44), matched for age, education, and sex, repeated-measures ANOVAs were used to examine the extent to which this design goal was achieved. In addition, the ability of the main AWLT variables to differentiate between these groups was examined. Interactions of small effect size occurred between linguistic properties and diagnosis. The main variables separated patients from healthy controls with large effect sizes. With comparable differential validity, the AWLT appears to be linguistically fairer than similar instruments.
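The design described above (a between-subjects diagnosis factor crossed with within-subject linguistic list properties) can be analyzed as a mixed repeated-measures ANOVA. Below is a minimal sketch, assuming a long-format table with hypothetical column names (`subject`, `group`, `list_property`, `recall`); the actual AWLT variables and data are not reproduced here.

```python
# Minimal sketch of a mixed (between x within) repeated-measures ANOVA,
# as used to test the diagnosis x linguistic-property interaction.
# Column names and the input file are hypothetical placeholders.
import pandas as pd
import pingouin as pg

df = pd.read_csv("awlt_long_format.csv")  # hypothetical long-format file

aov = pg.mixed_anova(
    data=df,
    dv="recall",             # dependent variable: items recalled
    within="list_property",  # within-subject factor: linguistic property of the list
    between="group",         # between-subjects factor: Alzheimer's dementia vs. healthy
    subject="subject",
)
print(aov[["Source", "F", "p-unc", "np2"]])  # np2 = partial eta squared (effect size)
```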


2018 ◽  
Vol 21 (5) ◽  
pp. 381-387 ◽  
Author(s):  
Hossein Atabati ◽  
Kobra Zarei ◽  
Hamid Reza Zare-Mehrjardi

Aim and Objective: Human dihydroorotate dehydrogenase (DHODH) catalyzes the fourth step of pyrimidine biosynthesis in cells. Hence, it is important to identify suitable inhibitors of DHODH to prevent virus replication. In this study, a quantitative structure-activity relationship (QSAR) analysis was performed to predict the activity of a group of newly synthesized halogenated pyrimidine derivatives as DHODH inhibitors. Materials and Methods: The molecular structures of the halogenated pyrimidine derivatives were drawn in HyperChem, and molecular descriptors were then calculated with the DRAGON software. Finally, the most effective descriptors for the 32 halogenated pyrimidine derivatives were selected using the bee algorithm. Results: The descriptors selected by the bee algorithm were used for modeling. The mean relative error and correlation coefficient were 2.86% and 0.9627, respectively, while the corresponding values for the leave-one-out cross-validation method were 4.18% and 0.9297. External validation was also conducted using separate training and test sets; the correlation coefficients for the training and test sets were 0.9596 and 0.9185, respectively. Conclusion: The results of the present work show that the bee algorithm performs well for variable selection in QSAR studies, and its results were better than those of the model constructed with descriptors selected by the genetic algorithm.
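As a rough illustration of the validation statistics reported above (correlation coefficient and mean relative error, in calibration and under leave-one-out cross-validation), the sketch below fits a multiple linear regression on a descriptor matrix and computes both quantities with scikit-learn. The descriptor matrix `X`, activity vector `y`, and file names are placeholders; descriptor selection itself (the bee algorithm) is not shown, and this is not the paper's implementation.

```python
# Sketch of the reported QSAR validation statistics: Pearson correlation
# and mean relative error, with and without leave-one-out cross-validation.
# X and y are placeholders for the selected DRAGON descriptors and the
# measured inhibitory activities of the 32 compounds.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

X = np.loadtxt("selected_descriptors.csv", delimiter=",")  # hypothetical file
y = np.loadtxt("activities.csv", delimiter=",")            # hypothetical file

model = LinearRegression().fit(X, y)

# Calibration (fit on all compounds)
y_fit = model.predict(X)
r_fit = np.corrcoef(y, y_fit)[0, 1]
mre_fit = 100 * np.mean(np.abs(y - y_fit) / np.abs(y))

# Leave-one-out cross-validation
y_loo = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
r_loo = np.corrcoef(y, y_loo)[0, 1]
mre_loo = 100 * np.mean(np.abs(y - y_loo) / np.abs(y))

print(f"calibration: r = {r_fit:.4f}, MRE = {mre_fit:.2f}%")
print(f"LOO-CV:      r = {r_loo:.4f}, MRE = {mre_loo:.2f}%")
```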


2020 ◽  
Vol 24 (6) ◽  
pp. 1311-1328
Author(s):  
Jozsef Suto

There are hundreds of thousands of known plant species on Earth, and many are still undescribed. Plant classification can be performed in different ways, but the most popular approach is based on leaf characteristics, since most plant species have distinctive leaf shape, color, and texture. Because machine learning and computer vision have developed considerably over the past decade, automatic plant species (or leaf) recognition has become possible. Automated leaf classification is now a standalone research area within machine learning, and several shallow and deep methods have been proposed to recognize leaf types. From 2007 to the present, numerous research papers have been published on this topic. In older studies the classifier was typically a shallow method, whereas current works often apply deep networks for classification. While reviewing the plant leaf classification literature, we found a notable deficiency (the lack of hyper-parameter search) and a key difference between studies (the use of different test sets). This work gives an overall review of the efficiency of shallow and deep methods under different test conditions and can serve as a basis for further research.
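To make the hyper-parameter-search deficiency concrete, here is a minimal sketch of the practice the review calls for: tuning a shallow classifier by cross-validated grid search on the training split and touching a fixed, held-out test set only once. The feature matrix, labels, and file names are placeholders for any extracted leaf descriptors; this is an illustration, not a method from the reviewed papers.

```python
# Sketch of a proper evaluation protocol for a shallow leaf classifier:
# hyper-parameters are tuned by cross-validation on the training split,
# and the fixed test set is evaluated exactly once. X and y are
# placeholders for leaf feature vectors (shape/color/texture) and labels.
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

X = np.load("leaf_features.npy")  # hypothetical feature matrix
y = np.load("leaf_labels.npy")    # hypothetical species labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

search = GridSearchCV(
    SVC(),
    param_grid={"C": [0.1, 1, 10, 100], "gamma": ["scale", 0.01, 0.001]},
    cv=5,
)
search.fit(X_train, y_train)

print("best params:", search.best_params_)
print("test accuracy:", search.score(X_test, y_test))
```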


Entropy ◽  
2021 ◽  
Vol 23 (1) ◽  
pp. 126
Author(s):  
Sharu Theresa Jose ◽  
Osvaldo Simeone

Meta-learning, or “learning to learn”, refers to techniques that infer an inductive bias from data corresponding to multiple related tasks with the goal of improving the sample efficiency for new, previously unobserved, tasks. A key performance measure for meta-learning is the meta-generalization gap, that is, the difference between the average loss measured on the meta-training data and on a new, randomly selected task. This paper presents novel information-theoretic upper bounds on the meta-generalization gap. Two broad classes of meta-learning algorithms are considered: those that use separate within-task training and test sets, like model-agnostic meta-learning (MAML), and those that use joint within-task training and test sets, like Reptile. Extending existing work for conventional learning, an upper bound on the meta-generalization gap is derived for the former class that depends on the mutual information (MI) between the output of the meta-learning algorithm and its input meta-training data. For the latter, the derived bound includes an additional MI between the output of the per-task learning procedure and the corresponding data set to capture within-task uncertainty. Tighter bounds are then developed for the two classes via novel individual-task MI (ITMI) bounds. Applications of the derived bounds are finally discussed, including a broad class of noisy iterative algorithms for meta-learning.
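For orientation, the conventional-learning result that this line of work extends is Xu and Raginsky's mutual-information bound for a σ-sub-Gaussian loss and N training samples, shown first below. The second display is only a schematic of the shape described in the abstract (the meta-learner's output and the meta-training data of M tasks replacing the learner's output and data set), not the paper's exact statement; in the joint train/test case an additional per-task MI term appears.

```latex
% Conventional-learning MI bound (Xu & Raginsky, 2017): W is the learner's
% output, S the N-sample training set, and the loss is sigma-sub-Gaussian.
\[
\bigl|\mathbb{E}[\operatorname{gen}(W,S)]\bigr|
  \;\le\; \sqrt{\frac{2\sigma^{2}}{N}\, I(W;S)}.
\]
% Schematic shape only (not the paper's exact bound): U denotes the
% meta-learner's output and D_{1:M} the meta-training data of M tasks.
\[
\bigl|\mathbb{E}[\Delta_{\mathrm{meta}}]\bigr|
  \;\lesssim\; \sqrt{\frac{2\sigma^{2}}{M}\, I\bigl(U; D_{1:M}\bigr)}.
\]
```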


2020 ◽  
Vol 22 (Supplement_2) ◽  
pp. ii79-ii79
Author(s):  
Kathryn Nevel ◽  
Samuel Capouch ◽  
Lisa Arnold ◽  
Katherine Peters ◽  
Nimish Mohile ◽  
...  

Abstract BACKGROUND Patients in rural communities have less access to optimal cancer care and clinical trials. For GBM, access to experimental therapies and consideration of a clinical trial are embedded in national guidelines. Still, the availability of clinical trials to rural communities, representing 20% of the US population, has not been described. METHODS We queried ClinicalTrials.gov for glioblastoma interventional treatment trials opened between 1/2010 and 1/2020 in the United States. We created a Structured Query Language database and leveraged the Google Places application programming interface (API) to find names and street addresses for the sites, and Google's Geocode API to determine the county location. Counties were classified by US Department of Agriculture Rural-Urban Continuum Codes (RUCC 1–3 = urban, RUCC 4–9 = rural). We used z-ratios for rural-urban statistical comparisons. RESULTS We identified 406 interventional treatment trials for GBM at 1491 unique sites; 8.7% of unique sites were in rural settings. Rural sites opened an average of 1.7 trials/site and urban sites 2.8 trials/site from 1/2010–1/2020. Rural sites offered more phase II trials (63% vs 57%, p = 0.03) and fewer phase I trials (22% vs 28%, p = 0.01) than urban sites. Rural locations were more likely to offer federally-sponsored trials (p < 0.002). There were no investigator-initiated or single-institution trials offered at rural locations, and only 1% of industry trials were offered rurally. DISCUSSION Clinical trials for GBM were rarely open in rural areas and were more dependent on federal funding. Clinical trials are likely difficult for rural patients to access, and this has important implications for the generalizability of research as well as for how we engage the field of neuro-oncology and patient advocacy groups in improving patient access to trials. Increasing the number of clinical trials in rural locations may enable more rural patients to access and enroll in GBM studies.
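The rural-urban comparisons above (e.g., 63% vs 57% of trials in phase II) rely on z-ratios for proportions. Below is a minimal sketch of a pooled two-proportion z-test of that kind; the counts in the example are hypothetical placeholders, not the study's actual denominators.

```python
# Pooled two-proportion z-test ("z-ratio") for comparing the share of
# phase II trials at rural vs. urban sites. Counts are hypothetical
# placeholders, not the study's actual numbers.
from math import sqrt
from scipy.stats import norm

def two_proportion_z(x1, n1, x2, n2):
    """Return (z, two-sided p) for H0: p1 == p2, using the pooled estimate."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, 2 * norm.sf(abs(z))

# Hypothetical example: 82/130 rural-site trials vs. 2170/3800 urban-site trials in phase II.
z, p = two_proportion_z(82, 130, 2170, 3800)
print(f"z = {z:.2f}, p = {p:.3f}")
```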


2021 ◽  
Vol 13 (10) ◽  
pp. 1877
Author(s):  
Ukkyo Jeong ◽  
Hyunkee Hong

Since April 2018, the TROPOspheric Monitoring Instrument (TROPOMI) has provided tropospheric NO2 column concentrations (CTROPOMI) with unprecedented spatial resolution. This study assesses the capability of TROPOMI to provide high-spatial-resolution information on surface NO2 mixing ratios. In general, the instrument effectively detected major and moderate NO2 sources over South Korea, with a clear weekday–weekend distinction. We compared CTROPOMI with surface NO2 mixing ratio measurements for 2019 from an extensive ground-based network over South Korea operated by the Korean Ministry of Environment (SKME; more than 570 sites). Spatiotemporally collocated CTROPOMI and SKME showed a moderate correlation (correlation coefficient, r = 0.67), whereas their annual mean values at each site showed a higher correlation (r = 0.84). CTROPOMI and SKME were well correlated around the Seoul metropolitan area, where significant amounts of NO2 prevailed throughout the year, whereas they showed lower correlation at rural sites. For quantitative comparison with SKME, we converted the tropospheric NO2 column from TROPOMI to a surface mixing ratio (STROPOMI) using the EAC4 (ECMWF Atmospheric Composition Reanalysis 4) profile shape. The estimated STROPOMI generally underestimated the in situ values (SKME; slope = 0.64), as reported in previous studies.
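A minimal sketch of the collocation statistics reported above (Pearson r for all collocated pairs, r for site-level annual means, and the regression slope of STROPOMI against SKME). The column names and input file are hypothetical placeholders, and the spatiotemporal collocation criteria themselves are not reproduced.

```python
# Sketch of the comparison statistics between collocated TROPOMI column
# and surface (SKME) NO2 data. Column names and the input file are
# hypothetical placeholders.
import pandas as pd
from scipy.stats import linregress, pearsonr

df = pd.read_csv("collocated_no2_2019.csv")  # one row per collocated pair

# Correlation of all spatiotemporally collocated pairs (column vs. surface)
r_all, _ = pearsonr(df["c_tropomi"], df["skme"])

# Correlation of annual mean values at each site
site_means = df.groupby("site_id")[["c_tropomi", "skme"]].mean()
r_annual, _ = pearsonr(site_means["c_tropomi"], site_means["skme"])

# Slope of the estimated surface mixing ratio (STROPOMI) against SKME
fit = linregress(df["skme"], df["s_tropomi"])

print(f"r (all pairs) = {r_all:.2f}, r (annual means) = {r_annual:.2f}")
print(f"STROPOMI vs SKME slope = {fit.slope:.2f}")
```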


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Bo Sun ◽  
Fei Zhang ◽  
Jing Li ◽  
Yicheng Yang ◽  
Xiaolin Diao ◽  
...  

Abstract Background With the development and application of medical information systems, semantic interoperability is essential for accurate and advanced health-related computing and for electronic health record (EHR) information sharing. The openEHR approach can improve semantic interoperability; one of its key advantages is that it allows existing archetypes to be reused. The crucial problem is how to improve precision and resolve ambiguity in archetype retrieval. Method Based on query expansion technology and the Word2Vec model from Natural Language Processing (NLP), we propose finding synonyms as substitutes for the original search terms in archetype retrieval. Test sets at different medical professional levels were used to verify the feasibility of the approach. Result Applying the approach to each original search term (n = 120) in the test sets, a total of 69,348 substitutes were constructed. Precision at 5 (P@5) was improved by 0.767 on average; the best result reached a P@5 of 0.975. Conclusions We introduce a novel approach that uses NLP technology and a corpus to find synonyms as substitutes for the original search terms. Compared with simply mapping the elements contained in openEHR to an external dictionary, this approach can greatly improve precision and resolve ambiguity in retrieval tasks. This helps promote the application of openEHR and advance EHR information sharing.
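A minimal sketch of the two pieces described in the Method: query expansion via Word2Vec synonyms and evaluation by precision at 5 (P@5). The corpus, search terms, and retrieval backend are hypothetical placeholders; this is an illustration of the technique, not the paper's implementation.

```python
# Sketch of Word2Vec-based query expansion and P@5 evaluation for
# archetype retrieval. Corpus, terms, and the retrieval backend are
# hypothetical placeholders.
from gensim.models import Word2Vec

# Train (or load) a Word2Vec model on a tokenized medical corpus.
sentences = [["electronic", "health", "record"],
             ["blood", "pressure", "measurement"]]  # placeholder corpus
model = Word2Vec(sentences, vector_size=100, window=5, min_count=1, workers=2)

def expand_query(term, topn=5):
    """Return the original term plus its nearest Word2Vec neighbours as substitutes."""
    if term not in model.wv:
        return [term]
    return [term] + [w for w, _ in model.wv.most_similar(term, topn=topn)]

def precision_at_5(retrieved_archetypes, relevant_archetypes):
    """P@5: fraction of the top-5 retrieved archetypes that are relevant."""
    top5 = retrieved_archetypes[:5]
    return sum(a in relevant_archetypes for a in top5) / 5.0

# Usage: retrieve archetypes with each substitute term, then score the ranking.
print(expand_query("pressure"))
```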

