On comprehending a computer manual: analysis of variables affecting performance

1988 ◽  
Vol 19 (3) ◽  
pp. 247
2020 ◽  
Vol 0 (0) ◽  
Author(s):  
Gema Alcaraz-Mármol ◽  
Jorge Soto-Almela

Abstract: The dehumanization of migrants and refugees in the media has been the object of numerous critical discourse analyses and metaphor-based studies, which have primarily dealt with news articles written in English. This paper, however, addresses the dehumanizing language used to refer to refugees in a 1.8-million-word corpus of Spanish news articles collected from the digital libraries of El Mundo and El País, the two most widely read Spanish newspapers. Our research particularly aims to explore how the dehumanization of the lemma refugiado is constructed through the identification of semantic preferences. It is concerned with both synchronic and diachronic aspects, offering results on the evolution of refugees’ dehumanization from 2010 to 2016. The dehumanizing collocates are determined via a corpus-based analysis, followed by a detailed manual analysis conducted in order to label the different collocates of refugiado semantically and classify them into more specific semantic subsets. The results show that the lemma refugiado usually collocates with dehumanizing words that express, in order of frequency, quantification, out-of-control phenomenon, objectification, and economic burden. The analysis also demonstrates that the collocates corresponding to these four semantic subsets are unusually frequent in the 2015–16 period, giving rise to seasonal collocates strongly related to the Syrian civil war and other Middle Eastern armed conflicts.
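The corpus-based step described above amounts to counting which lemmas co-occur with refugiado within a context window and ranking them by frequency before the manual semantic labelling. Below is a minimal Python sketch of that step; the window size, file name, and raw-frequency ranking are illustrative assumptions, since the abstract does not specify the exact association measure used.

```python
# Minimal sketch of a corpus-based collocate search, assuming a lemmatised
# plain-text corpus with whitespace-separated tokens. Window size and the
# raw-frequency ranking are illustrative choices, not the authors' setup.
from collections import Counter

WINDOW = 4  # tokens considered on each side of the node lemma (assumed)

def collocates(tokens, node="refugiado", window=WINDOW):
    """Count every lemma occurring within +/-window of the node lemma."""
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok == node:
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            counts.update(t for j, t in enumerate(tokens[lo:hi], lo)
                          if j != i)
    return counts

# Usage: rank candidate collocates by frequency, then label them manually
# into semantic subsets (quantification, objectification, ...).
with open("corpus_lemmas.txt", encoding="utf-8") as f:  # hypothetical file
    tokens = f.read().split()
for lemma, freq in collocates(tokens).most_common(20):
    print(lemma, freq)
```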


2021 ◽  
Vol 22 (Supplement_1) ◽  
Author(s):  
D Zhao ◽  
E Ferdian ◽  
GD Maso Talou ◽  
GM Quill ◽  
K Gilbert ◽  
...  

Abstract

Funding Acknowledgements: Type of funding sources: Public grant(s) – National budget only. Main funding source(s): National Heart Foundation (NHF) of New Zealand; Health Research Council (HRC) of New Zealand.

Artificial intelligence shows considerable promise for automated analysis and interpretation of medical images, particularly in the domain of cardiovascular imaging. While application to cardiac magnetic resonance (CMR) has demonstrated excellent results, automated analysis of 3D echocardiography (3D-echo) remains challenging, due to the lower signal-to-noise ratio (SNR), signal dropout, and greater interobserver variability in manual annotations. As 3D-echo is becoming increasingly widespread, robust analysis methods will substantially benefit patient evaluation.

We sought to leverage the high SNR of CMR to provide training data for a convolutional neural network (CNN) capable of analysing 3D-echo. We imaged 73 participants (53 healthy volunteers, 20 patients with non-ischaemic cardiac disease) under both CMR and 3D-echo (<1 hour between scans). 3D models of the left ventricle (LV) were independently constructed from CMR and 3D-echo, and used to spatially align the image volumes using least squares fitting to a cardiac template. The resultant transformation was used to map the CMR mesh to the 3D-echo image. Alignment of mesh and image was verified through volume slicing and visual inspection (Fig. 1) for 120 paired datasets (including 47 rescans), each at end-diastole and end-systole. 100 datasets (80 for training, 20 for validation) were used to train a shallow CNN for mesh extraction from 3D-echo, optimised with a composite loss function consisting of normalised Euclidean distance (for 290 mesh points) and volume. Data augmentation was applied in the form of rotations and tilts (<15 degrees) about the long axis. The network was tested on the remaining 20 datasets (different participants) of varying image quality (Tab. 1). For comparison, corresponding LV measurements from conventional manual analysis of 3D-echo and associated interobserver variability (for two observers) were also estimated.

Initial results indicate that the use of embedded CMR meshes as training data for 3D-echo analysis is a promising alternative to manual analysis, with improved accuracy and precision compared with conventional methods. Further optimisations and a larger dataset are expected to improve network performance.

(n = 20)              LV EDV (ml)     LV ESV (ml)     LV EF (%)     LV mass (g)
Ground truth CMR      150.5 ± 29.5    57.9 ± 12.7     61.5 ± 3.4    128.1 ± 29.8
Algorithm error       -13.3 ± 15.7    -1.4 ± 7.6      -2.8 ± 5.5    0.1 ± 20.9
Manual error          -30.1 ± 21.0    -15.1 ± 12.4    3.0 ± 5.0     Not available
Interobserver error   19.1 ± 14.3     14.4 ± 7.6      -6.4 ± 4.8    Not available

Tab. 1. LV mass and volume differences (means ± standard deviations) for 20 test cases. Algorithm: CNN – CMR (as ground truth).

Fig. 1. CMR mesh registered to 3D-echo.
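The abstract specifies a composite loss combining a normalised Euclidean distance over the 290 mesh points with a volume term, but not its exact form. The following is a minimal NumPy sketch of one plausible formulation; the blend weight alpha and the normalisation scheme are assumptions, not the authors' published choices.

```python
import numpy as np

def composite_loss(pred_mesh, true_mesh, pred_vol, true_vol, alpha=0.5):
    """Composite loss: normalised Euclidean distance over the 290 LV mesh
    points plus a relative volume error, blended by an assumed weight alpha."""
    # pred_mesh, true_mesh: (290, 3) arrays of mesh point coordinates
    point_err = np.linalg.norm(pred_mesh - true_mesh, axis=1).mean()
    point_err /= np.linalg.norm(true_mesh, axis=1).mean()  # scale-normalise
    vol_err = abs(pred_vol - true_vol) / true_vol
    return alpha * point_err + (1.0 - alpha) * vol_err

# Usage with dummy data, only to show the expected interfaces:
pred = np.random.rand(290, 3)
true = np.random.rand(290, 3)
print(composite_loss(pred, true, pred_vol=140.0, true_vol=150.5))
```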


Author(s):  
Dominika Kováříková ◽  
Michal Škrabal ◽  
Václav Cvrček ◽  
Lucie Lukešová ◽  
Jiří Milička

Abstract: When compiling a list of headwords, every lexicographer comes across words whose representative dictionary form is unattested in the data. This study focuses on how to distinguish between the cases when this form is missing due to a lack of data and when there are systemic or linguistic reasons. We have formulated lexicographic recommendations for different types of such ‘lacunas’ based on our research carried out on Czech written corpora. As a prerequisite, we calculated a frequency threshold to find words that should have the representative form attested in the data. Based on a manual analysis of 2,700 nouns, adjectives and verbs that do not have it attested, we drew up a classification of lacunas. The reasons for a missing dictionary form are often associated with limited collocability and non-preference for the representative grammatical category. Findings on unattested word forms also have significant implications for language potentiality.
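The screening step described here, finding lemmas frequent enough that their representative form should be attested yet is not, can be sketched as a single pass over a corpus frequency list. The sketch below is a hypothetical illustration: the threshold value, file format, and column names are assumptions, not those of the Czech corpora used in the study.

```python
# Minimal sketch of screening for lacuna candidates: lemmas above a
# frequency threshold whose representative dictionary form (e.g. the
# nominative singular for nouns) has zero attested occurrences.
# THRESHOLD and the TSV column names are assumed for illustration.
import csv

THRESHOLD = 500  # minimum lemma frequency (assumed value)

def find_lacuna_candidates(path):
    """Yield (lemma, freq) pairs where the representative form is missing."""
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f, delimiter="\t"):
            lemma_freq = int(row["lemma_freq"])
            rep_form_freq = int(row["rep_form_freq"])
            if lemma_freq >= THRESHOLD and rep_form_freq == 0:
                yield row["lemma"], lemma_freq

# Candidates would then go to manual classification of lacuna types.
for lemma, freq in find_lacuna_candidates("freq_list.tsv"):
    print(lemma, freq)
```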


2014 ◽  
Vol 644-650 ◽  
pp. 2952-2956
Author(s):  
Jian Guo Jiang ◽  
Xin Jian Ma ◽  
Xin Liang Qiu ◽  
Min Yu ◽  
Chao Liu

Automatic analysis of malware has been a hot topic in recent years. Although many methods have been proposed, automatic identification of malware remains a challenge. For example, scoring is commonly used to indicate the threat scale of samples, but in most cases this metric is produced by manual processing. In this paper, a method to automatically generate the score of an analyzed sample is proposed. Applying this method to a practical problem, we tested 639 samples and achieved an accuracy of 97.3%. The experimental results show that the method can correctly indicate the threat scale of samples. The results of this paper can also offer useful guidance for manual analysis.
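The abstract does not detail how the score is generated, so the sketch below illustrates only the general idea of automatic threat scoring: mapping behaviours observed during analysis to a numeric scale. All feature names, weights, and the cutoff are invented for illustration and are not the authors' model.

```python
# Hypothetical sketch of automatic threat scoring: a weighted sum of
# behavioural features observed during sandbox analysis, capped at 100.
# Feature names, weights, and the 50-point cutoff are invented.
WEIGHTS = {
    "creates_autorun_key": 25,
    "injects_into_process": 30,
    "contacts_known_c2": 35,
    "modifies_hosts_file": 10,
}

def threat_score(features):
    """Map a dict of observed behaviours to a 0-100 threat score."""
    raw = sum(w for name, w in WEIGHTS.items() if features.get(name))
    return min(raw, 100)

sample = {"creates_autorun_key": True, "contacts_known_c2": True}
print(threat_score(sample))  # 60 -> above the assumed 50-point cutoff
```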


ChemPhysChem ◽  
2021 ◽  
Author(s):  
Sebastian Günther ◽  
Patrick Zeller ◽  
Bernhard Böller ◽  
Joost Wintterlin

Membranes ◽  
2021 ◽  
Vol 11 (11) ◽  
pp. 860
Author(s):  
Zvonimir Boban ◽  
Ivan Mardešić ◽  
Witold Karol Subczynski ◽  
Marija Raguz

Since its inception more than thirty years ago, electroformation has become the most commonly used method for growing giant unilamellar vesicles (GUVs). Although the method seems quite straightforward at first, researchers must consider the interplay of a large number of parameters, different lipid compositions, and internal solutions in order to avoid artifactual results or reproducibility problems. These issues motivated us to write a short review of the most recent methodological developments and possible pitfalls. Additionally, since traditional manual analysis can lead to biased results, we have included a discussion of methods for automatic analysis of GUVs. Finally, we discuss possible improvements in the preparation of GUVs with a high cholesterol content in order to avoid the formation of artifactual cholesterol crystals. We intend this review to be a reference for those trying to decide which parameters to use, as well as an overview providing insight into problems not yet addressed or solved.
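As an illustration of the automatic GUV analysis discussed above, the sketch below detects circular vesicle outlines in a single microscopy frame with a circular Hough transform from scikit-image. The radii range, edge-detection parameters, and file name are assumptions; published pipelines differ in their exact detection and filtering steps.

```python
# Minimal sketch of automatic GUV detection in one microscopy frame using
# a circular Hough transform (scikit-image). Radii range, canny sigma, and
# the file name are illustrative; real pipelines usually add intensity
# filtering and tracking across frames. Assumes an RGB input image.
import numpy as np
from skimage import io, color, feature, transform

image = color.rgb2gray(io.imread("guv_frame.png"))  # hypothetical frame
edges = feature.canny(image, sigma=2.0)

radii = np.arange(15, 80, 2)                 # candidate GUV radii (pixels)
hough = transform.hough_circle(edges, radii)
_, cx, cy, rad = transform.hough_circle_peaks(hough, radii,
                                              total_num_peaks=20)

for x, y, r in zip(cx, cy, rad):
    print(f"GUV candidate at ({x}, {y}), radius {r} px")
```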

