Evaluation of Unsupervised Entity and Event Salience Estimation

Author(s):  
Jiaying Lu ◽  
Jinho D. Choi

Salience estimation aims to predict term importance in documents. Because human-annotated datasets are scarce and the notion of salience is subjective, previous studies typically generate pseudo-ground truth for evaluation. However, our investigation reveals that the evaluation protocol proposed by prior work is difficult to replicate, which explains why few follow-up studies exist. Moreover, the evaluation process is problematic: the entity linking tool used for entity matching is very noisy, and ignoring event arguments during event evaluation artificially boosts performance. In this work, we propose a lightweight yet practical entity and event salience estimation evaluation protocol that incorporates a more reliable syntactic dependency parser. We also conduct a comprehensive analysis of popular entity and event definition standards and present our own definitions for the salience estimation task to reduce noise during pseudo-ground-truth generation. Furthermore, we construct dependency-based heterogeneous graphs to capture the interactions of entities and events. The empirical results show that both baseline methods and a novel GNN method operating on the heterogeneous graph consistently outperform the previous SOTA model on all proposed metrics.
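The dependency-based candidate extraction the abstract alludes to can be pictured with a minimal sketch. The token representation, tag set, and `extract_candidates` helper below are illustrative assumptions, not the paper's actual protocol: entity candidates are taken from noun tokens, and event candidates from verbs together with their core dependency arguments.

```python
# Sketch of salience-candidate extraction from a dependency parse,
# assuming tokens are (text, pos, dep, head_index) tuples (hypothetical
# representation; a real pipeline would come from a dependency parser).
from collections import Counter

def extract_candidates(parsed_doc):
    """parsed_doc: list of sentences; each token is (text, pos, dep, head_idx)."""
    entities, events = Counter(), []
    for sent in parsed_doc:
        for i, (text, pos, dep, head) in enumerate(sent):
            if pos in ("NOUN", "PROPN"):
                # Noun tokens become entity candidates.
                entities[text.lower()] += 1
            elif pos == "VERB":
                # A verb plus its core arguments becomes an event candidate,
                # so evaluation does not ignore event arguments.
                args = [t[0].lower() for t in sent
                        if t[3] == i and t[2] in ("nsubj", "obj", "dobj")]
                events.append((text.lower(), tuple(args)))
    return entities, events

sent = [("Researchers", "NOUN", "nsubj", 1),
        ("propose", "VERB", "ROOT", 1),
        ("a", "DET", "det", 4),
        ("new", "ADJ", "amod", 4),
        ("protocol", "NOUN", "obj", 1)]
entities, events = extract_candidates([sent])
print(entities)  # Counter({'researchers': 1, 'protocol': 1})
print(events)    # [('propose', ('researchers', 'protocol'))]
```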

2021 ◽  
Vol 11 (22) ◽  
pp. 10966
Author(s):  
Hsiang-Chieh Chen ◽  
Zheng-Ting Li

This article introduces an automated data-labeling approach for generating crack ground truths (GTs) within concrete images. The main algorithm includes generating first-round GTs, pre-training a deep learning-based model, and generating second-round GTs. On the basis of the second-round GTs of the training data, a learning-based crack detection model can be trained in a self-supervised manner. The pre-trained deep learning-based model is effective for crack detection after it is re-trained using the second-round GTs. The main contribution of this study is an automated GT generation process for training a pixel-level crack detection model. Experimental results show that the second-round GTs are similar to manually marked labels. Accordingly, the cost of implementing learning-based methods is reduced significantly because manual data labeling is not required.
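The two-round GT loop can be sketched in a few lines. Everything below is a toy stand-in: a simple intensity threshold plays the role of both the rule-based first-round labeler and the "trained" model, whereas the article uses a deep crack-detection network.

```python
# Toy sketch of the two-round ground-truth (GT) generation pipeline.
# Images are lists of rows of pixel intensities in [0, 1]; crack pixels
# are assumed dark. All functions are hypothetical stand-ins.

def first_round_gt(image, thresh=0.5):
    # Round 1: rule-based labeling marks dark pixels as crack candidates.
    return [[1 if px < thresh else 0 for px in row] for row in image]

def train_model(images, labels):
    # Stand-in "training": learn a decision threshold from labeled crack
    # pixels (a real system would fit a deep segmentation network here).
    crack_px = [px for img, lab in zip(images, labels)
                for row_i, row_l in zip(img, lab)
                for px, l in zip(row_i, row_l) if l == 1]
    return max(crack_px) if crack_px else 0.5

def second_round_gt(image, model_thresh):
    # Round 2: re-label the training data with the "trained" model,
    # yielding refined GTs for self-supervised re-training.
    return [[1 if px <= model_thresh else 0 for px in row] for row in image]

images = [[[0.1, 0.8], [0.9, 0.2]]]
gt1 = [first_round_gt(img) for img in images]
thresh = train_model(images, gt1)
gt2 = [second_round_gt(img, thresh) for img in images]
print(gt2[0])  # [[1, 0], [0, 1]]
```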


2020 ◽  
Vol 77 (4) ◽  
pp. 1609-1622
Author(s):  
Franziska Mathies ◽  
Catharina Lange ◽  
Anja Mäurer ◽  
Ivayla Apostolova ◽  
Susanne Klutmann ◽  
...  

Background: Positron emission tomography (PET) of the brain with 2-[F-18]-fluoro-2-deoxy-D-glucose (FDG) is widely used for the etiological diagnosis of clinically uncertain cognitive impairment (CUCI). Acute full-blown delirium can cause reversible alterations of FDG uptake that mimic neurodegenerative disease. Objective: This study tested whether delirium in remission affects the performance of FDG PET for differentiation between neurodegenerative and non-neurodegenerative etiology of CUCI. Methods: The study included 88 patients (82.0±5.7 y) with newly detected CUCI during hospitalization in a geriatric unit. Twenty-seven (31%) of the patients were diagnosed with delirium during their current hospital stay; however, the delirium was in remission at the time of enrollment, so it was not considered the primary cause of the CUCI. Cases were categorized as neurodegenerative or non-neurodegenerative etiology based on visual inspection of FDG PET. The diagnosis at clinical follow-up after ≥12 months served as ground truth to evaluate the diagnostic performance of FDG PET. Results: FDG PET was categorized as neurodegenerative in 51 (58%) of the patients. Follow-up after 16±3 months was obtained in 68 (77%) of the patients. The clinical follow-up diagnosis confirmed the FDG PET-based categorization in 60 patients (88%; 4 false negative and 4 false positive cases with respect to detection of neurodegeneration). The fraction of correct PET-based categorizations did not differ between patients with delirium in remission and patients without delirium (86% versus 89%, p = 0.666). Conclusion: Brain FDG PET is useful for the etiological diagnosis of CUCI in hospitalized geriatric patients, including patients with delirium in remission.
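The headline accuracy figure follows directly from the reported counts (68 patients with follow-up, 4 false negatives, 4 false positives):

```python
# Recompute the reported diagnostic accuracy from the abstract's counts.
followed_up = 68          # patients with clinical follow-up
false_neg, false_pos = 4, 4
correct = followed_up - false_neg - false_pos
accuracy = correct / followed_up
print(correct, round(accuracy * 100))  # 60 88
```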


Infection ◽  
2021 ◽  
Author(s):  
Ali Hamady ◽  
JinJu Lee ◽  
Zuzanna A. Loboda

Abstract Objectives The coronavirus disease 2019 (COVID-19), caused by the novel betacoronavirus severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), was declared a pandemic in March 2020. Due to the continuing surge in incidence and mortality globally, determining whether protective, long-term immunity develops after initial infection or vaccination has become critical. Methods/Results In this narrative review, we evaluate the latest understanding of antibody-mediated immunity to SARS-CoV-2 and to other coronaviruses (SARS-CoV, Middle East respiratory syndrome coronavirus, and the four endemic human coronaviruses) in order to predict the consequences of antibody waning on long-term immunity against SARS-CoV-2. We summarise their antibody dynamics, including the potential effects of cross-reactivity and antibody waning on vaccination and other public health strategies. At present, based on our comparison with other coronaviruses, we estimate that natural antibody-mediated protection against SARS-CoV-2 is likely to last for 1–2 years; therefore, if vaccine-induced antibodies follow a similar course, booster doses may be required. However, other factors such as memory B- and T-cells and new viral strains will also affect the duration of both natural and vaccine-mediated immunity. Conclusion Overall, the antibody titres required for protection are yet to be established, and inaccuracies of serological methods may be affecting this. We expect that with standardisation of serological testing and studies with longer follow-up, the implications of antibody waning will become clearer.


FACE ◽  
2021 ◽  
pp. 273250162097932
Author(s):  
Naikhoba C. O. Munabi ◽  
Eric S. Nagengast ◽  
Gary Parker ◽  
Shaillendra A. Magdum ◽  
Mirjam Hamer ◽  
...  

Background: Large frontoencephaloceles, more common in low- and middle-income countries, require complex reconstruction of cerebral herniation, elongated nose, telecanthus, and cephalic frontal bone rotation. Previously described techniques involve multiple osteotomies, often fail to address cephalad brow rotation, and have high complication rates, including up to 35% mortality. This study presents a novel, modified, single-staged technique for frontoencephalocele reconstruction performed by Mercy Ships. This technique, which addresses functional and aesthetic concerns with minimal osteotomies, may help improve outcomes in low-resource settings. Methods: A retrospective review was performed of patients who underwent frontoencephalocele reconstruction through Mercy Ships using the technique described. Patient data including country, age, gender, associated diagnoses, and prior interventions were reviewed. Intraoperative and post-operative complications were recorded. Results: Eight patients with frontoencephalocele (ages 4-14 years) underwent surgery with the novel technique in 4 countries. Average surgical time was 6.0 ± 0.9 hours. No intraoperative complications occurred. Post-operatively, 1 patient experienced lumbar drain dislodgement requiring replacement, and a second had an early post-operative fall requiring reoperation for hardware replacement. In-person follow-up to 2.4 months showed no additional complications. Follow-up via phone at 1 to 2 years post-op revealed that all patients were satisfied with their surgical outcomes. Conclusions: Reconstruction of large frontoencephaloceles can be challenging due to the need for functional closure of the defect and craniofacial reconstruction to correct medial hypertelorism, long nose deformity, and cephalad forehead rotation. The novel surgical technique presented in this paper allows for reliable reconstruction of functional and aesthetic needs with simplified incision design, osteotomies, and bandeau manipulation.


1997 ◽  
Vol 52 (7) ◽  
pp. 851-858 ◽  
Author(s):  
Gunther Seitz ◽  
Johanna Siegl

The anomeric imido esters 5 and 6, appropriate precursors for C-nucleoside synthesis, were prepared and utilized as heterodienophiles in a Diels-Alder reaction with inverse electron demand to yield the novel, protected 1,2,4-triazine C-nucleosides 8 and 9. These could be deprotected by treatment with 70% trifluoroacetic acid to furnish the free C-nucleosides 10 and 11. The triazine "aglycon" of 8 contains an electron-deficient diazadiene system, highly activated to react with various electron-rich dienophiles such as enamines, enol ethers, and several cyclic ketene acetals in an "inverse" [4+2]-cycloaddition reaction. The Diels-Alder adducts spontaneously eliminate N2, and after follow-up reactions the O-TBDPS-protected pyridine C-nucleosides 13, 15, 17, 19, 21, and 23 are formed. Removal of the protecting group by treatment with CF3CO2H/H2O leads to the corresponding 2′,3′-dideoxy-β-D-ribofuranosylpyridines.


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Heidi Luise Schulte ◽  
José Diego Brito-Sousa ◽  
Marcus Vinicius Guimarães Lacerda ◽  
Luciana Ansaneli Naves ◽  
Eliana Teles de Gois ◽  
...  

Abstract Background Since the novel coronavirus disease outbreak, over 179.7 million people have been infected by SARS-CoV-2 worldwide, including populations living in dengue-endemic regions, particularly Latin America and Southeast Asia, raising concern about the impact of possible co-infections. Methods Thirteen SARS-CoV-2/DENV co-infection cases reported in Midwestern Brazil between April and September of 2020 are described. Information was gathered from hospital medical records regarding the most relevant clinical and laboratory findings, the diagnostic process, and therapeutic interventions, together with clinician-assessed outcomes and follow-up. Results Of the 13 cases, seven patients presented with acute undifferentiated febrile syndrome, and six had pre-existing co-morbidities such as diabetes, hypertension, and hypopituitarism. Two patients were pregnant. The most common symptoms and clinical signs at first evaluation were myalgia, fever, and dyspnea. In six cases, the initial diagnosis was dengue fever, which delayed the diagnosis of the concomitant infection. The most frequently applied therapeutic interventions were antibiotics and analgesics. In total, four patients were hospitalized. None were transferred to the intensive care unit or died. Clinical improvement was verified in all patients after a maximum of 21 days. Conclusions The cases reported here highlight the challenges of differential diagnosis and the importance of considering concomitant infections, especially to improve clinical management and possible prevention measures. Failure to consider a SARS-CoV-2/DENV co-infection may have an impact at both the individual and community levels, especially in endemic areas.


Open Medicine ◽  
2021 ◽  
Vol 16 (1) ◽  
pp. 749-753
Author(s):  
Wenyuan Li ◽  
Beibei Huang ◽  
Qiang Shen ◽  
Shouwei Jiang ◽  
Kun Jin ◽  
...  

Abstract In recent months, the novel coronavirus disease 2019 (COVID-19) pandemic has become a major public health crisis, claiming more than 1 million lives worldwide. Long-lasting persistence of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has not yet been reported. Herein, we report a case of SARS-CoV-2 infection with intermittently positive viral polymerase chain reaction (PCR) tests for >4 months after clinical rehabilitation. A 35-year-old male was diagnosed with COVID-19 pneumonia with fever but without other specific symptoms. Treatment with lopinavir-ritonavir, oxygen inhalation, and other symptomatic supportive care facilitated recovery, and the patient was discharged. However, his viral PCR tests of oropharyngeal swabs remained intermittently positive for >4 months thereafter. At the end of June 2020, he was still under quarantine and observation. The contribution of current antivirus therapy might be limited, and the prognosis of COVID-19 patients might be unrelated to virus status. Thus, further investigation to evaluate the contagiousness of convalescent patients and the mechanism underlying the persistence of SARS-CoV-2 after recovery is essential. A new disease-control strategy, especially an extended follow-up period for recovered COVID-19 patients, is necessary to adapt to the current pandemic situation.


2021 ◽  
Vol 16 (1) ◽  
Author(s):  
Julie C. Lauffenburger ◽  
Thomas Isaac ◽  
Lorenzo Trippa ◽  
Punam Keller ◽  
Ted Robertson ◽  
...  

Abstract Background The prescribing of high-risk medications to older adults remains extremely common and results in potentially avoidable health consequences. Efforts to reduce prescribing have had limited success, in part because they have been sub-optimally timed, poorly designed, or have not provided actionable information. Electronic health record (EHR)-based tools are commonly used but have had limited application in facilitating deprescribing in older adults. The objective is to determine whether designing EHR tools using behavioral science principles reduces inappropriate prescribing and improves clinical outcomes in older adults. Methods The Novel Uses of Designs to Guide provider Engagement in Electronic Health Records (NUDGE-EHR) project uses a two-stage, 16-arm adaptive randomized pragmatic trial with a “pick-the-winner” design to identify the most effective of many potential EHR tools among primary care providers and their patients ≥ 65 years chronically using benzodiazepines, sedative hypnotics (“Z-drugs”), or anticholinergics in a large integrated delivery system. In stage 1, we randomized providers and their patients to usual care (n = 81 providers) or one of 15 EHR tools (n = 8 providers per arm) designed using behavioral principles including salience, choice architecture, or defaulting. After 6 months of follow-up, we will rank order the arms based upon their impact on the trial’s primary outcome (for both stages): reduction in inappropriate prescribing (via discontinuation or tapering). In stage 2, we will randomize (a) stage 1 usual care providers in a 1:1 ratio to one of the up to 5 most promising stage 1 interventions or continued usual care and (b) stage 1 providers in the unselected arms in a 1:1 ratio to one of the 5 most promising interventions or usual care. Secondary and tertiary outcomes include quantities of medication prescribed and utilized and clinically significant adverse outcomes. Discussion Stage 1 launched in October 2020. 
We plan to complete stage 2 follow-up in December 2021. These results will advance understanding about how behavioral science can optimize EHR decision support to improve prescribing and health outcomes. Adaptive trials have rarely been used in implementation science, so these findings also provide insight into how trials in this field could be more efficiently conducted. Trial registration Clinicaltrials.gov (NCT04284553, registered: February 26, 2020)
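The "pick-the-winner" ranking step can be illustrated with a small sketch; the arm names and effect sizes below are invented for illustration, and the trial's actual stage-1 analysis is more involved than a simple sort:

```python
# Hypothetical sketch of "pick-the-winner" arm selection: rank stage-1
# intervention arms by observed reduction in inappropriate prescribing
# and carry the top k forward to stage 2.

def pick_winners(arm_effects, k=5):
    """arm_effects: {arm_name: reduction in inappropriate prescribing}."""
    ranked = sorted(arm_effects.items(), key=lambda kv: kv[1], reverse=True)
    return [name for name, _ in ranked[:k]]

# Made-up effect sizes for 15 illustrative EHR-tool arms.
arm_effects = {f"tool_{i}": effect
               for i, effect in enumerate([0.02, 0.11, 0.05, 0.09, 0.01,
                                           0.07, 0.03, 0.12, 0.04, 0.06,
                                           0.08, 0.00, 0.10, 0.05, 0.02])}
winners = pick_winners(arm_effects)
print(winners)  # ['tool_7', 'tool_1', 'tool_12', 'tool_3', 'tool_10']
```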


2021 ◽  
Vol 7 (2) ◽  
pp. 21
Author(s):  
Roland Perko ◽  
Manfred Klopschitz ◽  
Alexander Almer ◽  
Peter M. Roth

Many scientific studies deal with person counting and density estimation from single images. Recently, convolutional neural networks (CNNs) have been applied to these tasks. Even though better results are often reported, it is frequently unclear where the improvements come from and whether the proposed approaches would generalize. Thus, the main goal of this paper was to identify the critical aspects of these tasks and to show how they limit state-of-the-art approaches. Based on these findings, we show how to mitigate these limitations. To this end, we implemented a CNN-based baseline approach, which we extended to deal with the identified problems: bias in the reference data sets, ambiguity in ground truth generation, and a mismatch between the evaluation metrics and the training loss function. The experimental results show that our modifications significantly outperform the baseline in terms of the accuracy of person counts and density estimation. In this way, we gain a deeper understanding of CNN-based person density estimation beyond the network architecture. Furthermore, our insights can help advance the field of person density estimation in general by highlighting current limitations in its evaluation protocols.
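The metric/loss mismatch mentioned above can be made concrete: density-map models are commonly trained with a pixel-wise loss but evaluated on the integrated person count, and the two can disagree. The toy maps below (my own construction, not the paper's data) show a prediction with low pixel MSE but a large count error, and one with higher pixel MSE but a perfect count.

```python
# Pixel-wise MSE (a typical training loss) vs. count MAE (a typical
# evaluation metric) on tiny 2x2 density maps.

def pixel_mse(pred, gt):
    n = sum(len(r) for r in gt)
    return sum((p - g) ** 2 for rp, rg in zip(pred, gt)
               for p, g in zip(rp, rg)) / n

def count_mae(pred, gt):
    # The person count is the integral (sum) over the density map.
    return abs(sum(map(sum, pred)) - sum(map(sum, gt)))

gt = [[0.0, 1.0], [1.0, 0.0]]     # ground truth: 2 people
over = [[0.5, 1.5], [1.5, 0.5]]   # +0.5 everywhere -> count 4
shift = [[1.0, 0.0], [0.0, 1.0]]  # mass relocated, count preserved

print(pixel_mse(over, gt), count_mae(over, gt))    # 0.25 2.0
print(pixel_mse(shift, gt), count_mae(shift, gt))  # 1.0 0.0
```

The "over" prediction has the smaller pixel loss yet doubles the count, while the "shift" prediction has four times the pixel loss yet a perfect count, so optimizing one does not guarantee the other.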


2019 ◽  
Vol 53 (1) ◽  
pp. 38-39
Author(s):  
Anjie Fang

Recently, political events, such as elections, have generated extensive discussion on social media networks, in particular Twitter. This brings new opportunities for social scientists to address social science tasks, such as understanding what communities said or identifying whether one community has an influence on another. However, identifying these communities and extracting what they said from social media data are challenging and non-trivial tasks. We aim to make progress towards understanding 'who' (i.e. communities) said 'what' (i.e. discussed topics) and 'when' (i.e. time) during political events on Twitter. While identifying the 'who' can benefit from Twitter user community classification approaches, 'what' they said and 'when' can be effectively addressed by extracting their discussed topics using topic modelling approaches that also account for the importance of time on Twitter. To evaluate the quality of these topics, it is necessary to investigate how coherent they are to humans. Accordingly, we propose a series of approaches in this thesis. First, we investigate how to effectively evaluate the coherence of topics generated using a topic modelling approach. A topic coherence metric evaluates topical coherence by examining the semantic similarity among the words in a topic. We argue that the semantic similarity of words in tweets can be effectively captured by using word embeddings trained on a Twitter background dataset. Through a user study, we demonstrate that our proposed word embedding-based topic coherence metric assesses topic coherence in line with human judgment [1, 2]. In addition, inspired by the precision at k metric, we propose to evaluate the coherence of a topic model (containing many topics) by averaging the coherence of its top-ranked topics [3]. Our proposed metrics can not only evaluate the coherence of topics and topic models, but can also help users to choose the most coherent topics. 
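A word embedding-based coherence score in the spirit described above can be sketched as the average pairwise cosine similarity between a topic's top words. The tiny hand-made vectors below stand in for Twitter-trained embeddings; this is an illustration of the idea, not the thesis's exact metric.

```python
# Average pairwise cosine similarity of a topic's top words as a
# coherence score, using toy 2-d "embeddings" (illustrative only).
from itertools import combinations
from math import sqrt

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def topic_coherence(top_words, emb):
    pairs = list(combinations(top_words, 2))
    return sum(cosine(emb[a], emb[b]) for a, b in pairs) / len(pairs)

emb = {"election": [1.0, 0.1], "vote": [0.9, 0.2],
       "ballot": [0.95, 0.15], "banana": [0.0, 1.0]}

coherent = topic_coherence(["election", "vote", "ballot"], emb)
mixed = topic_coherence(["election", "vote", "banana"], emb)
print(coherent > mixed)  # True: related words score higher
```

A model-level score, as sketched in the text, would simply average this quantity over the k top-ranked topics of the model.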
Second, we aim to extract topics with high coherence from Twitter data. Such topics can be easily interpreted by humans, and they help examine 'what' has been discussed and 'when'. Indeed, we argue that topics can be discussed in different time periods (see [4]) and therefore can be effectively identified and distinguished by considering their time periods. Hence, we propose an effective time-sensitive topic modelling approach that integrates the time dimension of tweets (i.e. 'when') [5]. We show that the time dimension helps to generate topics with high coherence. Hence, we argue that 'what' has been discussed and 'when' can be effectively addressed by our proposed time-sensitive topic modelling approach. Next, to identify 'who' participated in the topic discussions, we propose approaches to identify the community affiliations of Twitter users, including automatic ground-truth generation approaches and a user community classification approach. We show that the hashtags and entities mentioned in users' tweets can indicate which community a Twitter user belongs to. Hence, we argue that they can be used to generate ground-truth data for classifying users into communities. On the other hand, we argue that different communities favour different topic discussions, and their community affiliations can be identified by leveraging the discussed topics. Accordingly, we propose a Topic-Based Naive Bayes (TBNB) classification approach to classify Twitter users based on their words and discussed topics [6]. We demonstrate that our TBNB classifier, together with the ground-truth generation approaches, can effectively identify the community affiliations of Twitter users. Finally, to show the generalisation of our approaches, we apply them to analyse 3.6 million tweets related to the US Election 2016 on Twitter [7]. We show that our TBNB approach can effectively identify the 'who', i.e. classify Twitter users into communities. 
To investigate 'what' these communities have discussed, we apply our time-sensitive topic modelling approach to extract coherent topics. We finally analyse the community-related topics evaluated and selected using our proposed topic coherence metrics. Overall, we contribute effective approaches that assist social scientists in analysing political events on Twitter. These approaches include topic coherence metrics, a time-sensitive topic modelling approach, and approaches for classifying the community affiliations of Twitter users. Together, they make progress towards studying and understanding the connections and dynamics among communities on Twitter. Supervisors: Iadh Ounis, Craig Macdonald, Philip Habel. The thesis is available at http://theses.gla.ac.uk/41135/

