automatic methods
Recently Published Documents


TOTAL DOCUMENTS: 395 (FIVE YEARS: 133)
H-INDEX: 25 (FIVE YEARS: 5)

2022 · Vol 11 (1)
Author(s): Yuelun Zhang, Siyu Liang, Yunying Feng, Qing Wang, Feng Sun, ...

Abstract
Background: Systematic review is an indispensable tool for optimal evidence collection and evaluation in evidence-based medicine. However, the explosive growth of the primary literature makes it difficult to complete critical appraisal and keep reviews up to date. Artificial intelligence (AI) algorithms have been applied to automate the literature screening step of medical systematic reviews. These studies used different algorithms and reported results with great variance. It is therefore imperative to systematically review the automatic methods developed for literature screening and the effectiveness reported in current studies.
Methods: An electronic search on automatic methods for literature screening in systematic reviews will be conducted in PubMed, Embase, the ACM Digital Library, and the IEEE Xplore Digital Library, supplemented by a search in Google Scholar. Two reviewers will independently conduct the primary screening of the articles and the data extraction; disagreements will be resolved by discussion with a methodologist. Data extracted from eligible studies will include the basic characteristics of the study, the composition of the training and validation sets, and the function and performance of the AI algorithms, and will be summarised in a table. The risk of bias and applicability of the eligible studies will be assessed independently by the two reviewers using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. Quantitative analyses will also be performed where appropriate.
Discussion: Automating the systematic review process can greatly reduce the workload of evidence-based practice. The results of this systematic review will provide an essential summary of the current development of AI algorithms for automatic literature screening in medical evidence synthesis and help to inspire further studies in this field.
Systematic review registration: PROSPERO CRD42020170815 (28 April 2020).
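The protocol above reviews such screening algorithms rather than proposing one. As a rough illustration of what an automatic screening step can look like, here is a minimal sketch, assuming scikit-learn, a small manually labelled seed set, and an unscreened pool to prioritize; the records below are invented placeholders, not any algorithm evaluated in the protocol:

```python
"""Sketch: TF-IDF + logistic regression for title/abstract screening."""
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Placeholder labelled records: 1 = include, 0 = exclude.
texts = [
    "Deep learning for automated citation screening in systematic reviews",
    "A cohort study of dietary salt intake and hypertension",
]
labels = [1, 0]

vectorizer = TfidfVectorizer(stop_words="english", ngram_range=(1, 2))
X = vectorizer.fit_transform(texts)
clf = LogisticRegression(class_weight="balanced").fit(X, labels)

# Rank an unscreened pool by predicted inclusion probability, so reviewers
# can read the most likely relevant records first.
pool = ["Machine learning methods to accelerate study selection"]
scores = clf.predict_proba(vectorizer.transform(pool))[:, 1]
for text, score in sorted(zip(pool, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {text}")
```

In practice such classifiers are trained on hundreds of labelled records and tuned for very high recall, since missing a relevant study is far more costly than screening an irrelevant one.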


2022 · Vol 38 (1)
Author(s): Maria Elisa Quinteros, Carola Blazquez, Felipe Rosas, Salvador Ayala, Ximena Marcela Ossa García, ...

Abstract: Automatic geocoding methods have become popular in recent years, facilitating the study of associations between health outcomes and place of residence. However, rather few studies have evaluated geocoding quality, and most were performed in the US and Europe. This article compares the quality of three automatic online geocoding tools against a reference method. A subsample of 300 handwritten addresses from hospital records was geocoded using Bing, Google Earth, and Google Maps. Match rates were higher (> 80%) for Google Maps and Google Earth than for Bing. However, positional accuracy was better for Bing, with a larger proportion (> 70%) of addresses having positional errors below 20 m. In general, performance did not vary across socioeconomic strata for any of the methods. Overall, the methods showed acceptable but heterogeneous performance, which warns against using automatic methods without assessing their quality in other municipalities, particularly in Chile and Latin America.
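For readers unfamiliar with how such comparisons are scored, the sketch below computes a match rate and per-address positional error against reference coordinates, using the haversine great-circle distance. It assumes plain Python; the coordinate pairs are invented placeholders, not data from the study:

```python
"""Sketch: scoring a geocoder against reference coordinates."""
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in metres between two WGS84 points.
    r = 6371000.0
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * r * asin(sqrt(a))

# (geocoded, reference) pairs; None marks an address the tool failed to match.
results = [
    ((-35.4264, -71.6554), (-35.4263, -71.6553)),
    (None, (-35.4300, -71.6600)),
]
matched = [(g, ref) for g, ref in results if g is not None]
errors = [haversine_m(*g, *ref) for g, ref in matched]
print(f"match rate: {len(matched) / len(results):.0%}")
print(f"errors < 20 m: {sum(e < 20 for e in errors) / len(errors):.0%}")
```

Note that match rate and positional accuracy are independent axes, which is exactly the trade-off the study observed between Bing and the Google tools.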


2021 · Vol 12 (1) · pp. 10
Author(s): George P. Avramidis, Maria P. Avramidou, George A. Papakostas

Rheumatoid arthritis (RA) is a systemic autoimmune disease that preferentially affects small joints. As timely diagnosis is essential for treating the patient, several deep learning studies have sought to develop fast and accurate automatic methods for RA diagnosis. These works focus mainly on medical images, using X-ray and ultrasound images as model input. In this study, we review these works and compare the deep learning methods with the procedure a medical doctor commonly follows for RA diagnosis. The results show that 93% of the works use only image modalities as model input, in contrast to the medical procedure, in which additional patient data are taken into account. Moreover, only 15% of the works use direct explainability methods, meaning that efforts to address the trustworthiness of deep learning models have been limited. This work thus reveals the gap between deep learning approaches and traditional medical practice, and brings to light the weaknesses that current deep learning technology must overcome to be integrated in a trustworthy way into existing medical infrastructure.
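As a generic stand-in for the kind of image-only model the review describes (not any specific reviewed system), the sketch below fine-tunes a pretrained CNN for binary RA classification from radiographs, assuming PyTorch/torchvision; the two-class setup and input shape are assumptions for illustration:

```python
"""Sketch: fine-tuning a pretrained CNN for binary RA classification."""
import torch
import torch.nn as nn
from torchvision import models

# Start from an ImageNet-pretrained backbone and replace the classifier head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # RA vs. non-RA

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, labels):
    # images: (N, 3, 224, 224) tensor of preprocessed radiographs;
    # labels: (N,) tensor of 0/1 class indices.
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The review's point is precisely that models like this see only pixels; clinical variables (serology, symptom history) that a doctor would weigh have no input path here.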


Author(s): Nicolás José Fernández-Martínez, Ángel Miguel Felices-Lago

Abstract: Traditional corpus-based methods for studying selection preferences rely on manual inspection and extraction of lexical collocates, a costly, labor-intensive, and time-consuming task. Automatic methods for lexical collocate extraction are needed to handle this task and the immense corpora now available. Leveraging the Sketch Engine platform and its built-in corpora, we propose a working prototype of a Lexical Collocate Extractor (LeCoExt), a command-line tool that mines lexical collocates of any type of verb according to its syntactic constituents and a Collocate Frequency Score (CFS). To our knowledge, this is the first tool to perform comprehensive corpus-based studies of the selection preferences of individual verbs or verb groups by exploiting the capabilities of Sketch Engine, and it can extract rich lexico-semantic knowledge from diverse corpora in seconds. We test its performance on ontology building and refinement, building on the detailed analysis of stealing verbs by Fernández-Martínez & Faber (2020), and show how the tool extracts conceptual-cognitive knowledge from the THEFT scenario and implements it in the FunGramKB Core Ontology through the creation and modification of theft-related conceptual units.
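The sketch below illustrates the underlying idea of slot-specific collocate extraction on a local corpus with spaCy. It is a simplified analogue only: it does not reproduce LeCoExt, the Sketch Engine API, or the exact CFS metric, and uses a raw frequency count in place of the latter:

```python
"""Sketch: counting direct-object collocates of a verb via dependency parses."""
from collections import Counter
import spacy

nlp = spacy.load("en_core_web_sm")

def object_collocates(texts, verb_lemma):
    # Count direct-object lemmas of the target verb across a corpus.
    counts = Counter()
    for doc in nlp.pipe(texts):
        for tok in doc:
            if tok.lemma_ == verb_lemma and tok.pos_ == "VERB":
                for child in tok.children:
                    if child.dep_ == "dobj":
                        counts[child.lemma_] += 1
    return counts

sample = ["The burglar stole a car.", "Someone stole her wallet yesterday."]
print(object_collocates(sample, "steal").most_common(5))
```

On a large corpus, the ranked object lemmas of "steal" (car, wallet, money, ...) are exactly the kind of selection-preference evidence that feeds ontology refinement.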


2021 · Vol 4 · pp. 1-5
Author(s): Gergely Vassányi, Mátyás Gede

Abstract. Archive topographic maps are a key source of geographical information from past ages and can be valuable for several fields of science. Since manual digitization is slow and labor-intensive, automatic methods such as deep learning algorithms are preferred. Although automatic vectorization is a common problem, few approaches have addressed point symbols. In this paper, a point symbol vectorization method is proposed and tested on Third Military Survey map sheets using a Mask Region-based Convolutional Neural Network (Mask R-CNN). The implementation uses a ResNet101 backbone extended with the Feature Pyramid Network architecture and was developed in a Google Colab environment. The pretrained network was trained on four point symbol categories simultaneously. Results show 90% overall accuracy, with 94% of symbols detected for some categories on the complete test sheet.
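A minimal sketch of this setup follows, using torchvision's Mask R-CNN. Note the assumptions: torchvision ships a ResNet50+FPN variant rather than the paper's ResNet101+FPN backbone, and the input tile is a random placeholder for a scanned map sheet; only the class count (4 symbol categories + background) comes from the abstract:

```python
"""Sketch: adapting a pretrained Mask R-CNN to map point symbol categories."""
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

num_classes = 5  # 4 point symbol categories + background
model = maskrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the box and mask heads so the pretrained network predicts the
# map symbol classes instead of the COCO classes.
in_feat = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_feat, num_classes)
in_feat_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_feat_mask, 256, num_classes)

model.eval()
with torch.no_grad():
    tile = torch.rand(3, 512, 512)  # placeholder for a scanned map tile
    detections = model([tile])[0]   # dict of boxes, labels, scores, masks
```

Each detected instance's box center then gives the vectorized point symbol coordinate, which is what makes instance detection a natural fit for this task.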


2021 · Vol 2021 · pp. 1-12
Author(s): Ke Zeng, Yingqi Hua, Jing Xu, Tao Zhang, Zhuoying Wang, ...

Knee osteoarthritis (OA) is one of the most common musculoskeletal disorders. OA is currently diagnosed by assessing symptoms and evaluating plain radiographs, a process that suffers from inter-observer subjectivity. In this study, we retrospectively compared five commonly used machine learning methods, in particular a CNN, for predicting the Kellgren-Lawrence (K-L) grade of knee OA from real-world X-ray images of knee joints collected at two different hospitals, to help doctors choose suitable auxiliary tools. Furthermore, we present CNN attention maps that highlight the radiological features driving the network's decision. Such information makes the decision process transparent for practitioners, which builds trust in these automatic methods and reduces the workload of clinicians, especially in remote areas without enough medical staff.
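The study's exact attention-map method is not specified in this abstract; Grad-CAM is shown below as one common way to produce such maps, assuming PyTorch and using a random tensor as a placeholder radiograph:

```python
"""Sketch: a Grad-CAM-style attention map for a CNN image classifier."""
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights="DEFAULT").eval()
feats, grads = {}, {}

# Hook the last convolutional block to capture activations and gradients.
layer = model.layer4
layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

x = torch.rand(1, 3, 224, 224)      # placeholder for a knee radiograph
score = model(x)[0].max()           # top-class logit (the predicted grade)
score.backward()

weights = grads["a"].mean(dim=(2, 3), keepdim=True)  # channel importance
cam = F.relu((weights * feats["a"]).sum(dim=1))      # coarse heat map
cam = F.interpolate(cam.unsqueeze(1), size=x.shape[2:], mode="bilinear")
```

Overlaying `cam` on the radiograph highlights regions such as joint-space narrowing or osteophytes that pushed the network toward its K-L grade, which is the transparency the authors argue for.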


2021 · Vol 11 (12) · pp. 1280
Author(s): Xenia Butova, Sergey Shayakhmetov, Maxim Fedin, Igor Zolotukhin, Sergio Gianesini

Consultation prioritization is fundamental to optimal healthcare management, and its performance can be improved by artificial intelligence (AI)-dedicated software and by digital medicine in general. The need for remote consultation has been demonstrated not only during pandemic-induced lockdowns but also in rural areas where access to health centers is constantly limited. The term "AI" denotes the use of a computer to simulate human intellectual behavior with minimal human intervention. AI is based on a machine learning process or on an artificial neural network. It provides accurate diagnostic algorithms and personalized treatments in many fields, including oncology, ophthalmology, traumatology, and dermatology. AI can help vascular specialists diagnose peripheral artery disease, cerebrovascular disease, and deep vein thrombosis by analyzing contrast-enhanced magnetic resonance imaging or ultrasound data, and diagnose pulmonary embolism on multi-slice computed tomography angiograms. Automatic methods based on AI may also be applied to detect the presence and determine the clinical class of chronic venous disease. Nevertheless, data on the use of AI in this field are still scarce. In this narrative review, the authors discuss the available data on AI implementation in arterial and venous disease diagnostics and care.


2021 · Vol 22 (2) · pp. 47-61
Author(s): Joanna Kruyt, Štefan Beňuš

Abstract: Entrainment is the tendency of people to behave similarly during an interaction. It occurs at different levels of behaviour, including speech, and has been associated with pro-social behaviour and increased rapport. This review paper outlines the current understanding of linguistic entrainment, particularly at the speech level, in individuals with autism spectrum disorder (ASD), a disorder associated with social difficulties and unusual prosody. Aberrant entrainment patterns in individuals with ASD could thus contribute both to their perceived unusual prosody and to their social difficulties. Studying the relationship between speech entrainment and ASD holds great potential for applied benefits, such as using this knowledge for pre-screening or diagnosis, longitudinal progress monitoring, and intervention practices. Our findings suggest that research on entrainment in ASD is sparse and exploratory, and that the ecological validity of experimental paradigms varies. Moreover, there is little consistency in methodology, and results vary between studies, which highlights the need for standardized methods in entrainment research. A promising way to standardize methods, facilitate their use, and extend them to everyday clinical practice is to implement automatic methods for speech analysis and adhere to open-science principles.
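To make "automatic methods for speech analysis" concrete, here is a minimal sketch of one simple acoustic entrainment measure: pitch proximity between adjacent conversational turns. It assumes librosa; the per-turn file names and the segmentation into turns are hypothetical, and real studies use many more features and careful alignment:

```python
"""Sketch: pitch proximity between adjacent turns as an entrainment measure."""
import numpy as np
import librosa

def median_f0(path):
    # Median fundamental frequency (Hz) over the voiced frames of one turn.
    y, sr = librosa.load(path, sr=16000)
    f0, voiced, _ = librosa.pyin(y, fmin=60, fmax=400, sr=sr)
    return float(np.nanmedian(f0[voiced]))

# Hypothetical per-turn audio files, alternating speakers A and B.
turns = ["turn_A1.wav", "turn_B1.wav", "turn_A2.wav", "turn_B2.wav"]
f0s = [median_f0(t) for t in turns]

# Proximity: negated absolute f0 difference between each adjacent turn pair;
# values closer to zero indicate speakers converging in pitch.
proximity = [-abs(a - b) for a, b in zip(f0s, f0s[1:])]
print(np.mean(proximity))
```

Standardizing on openly shared feature extractors like this is one route to the methodological consistency the review calls for.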


2021
Author(s): Luis Antonio González-Montaña

The production of semantic annotations has gained renewed attention due to the development of anatomical ontologies and the documentation of morphological data. Two methods, differing in their methodological and philosophical approaches, have been proposed for this production: the class-based method and the instance-based method. In the first, semantic annotations are established as class expressions, while in the second, the annotations incorporate individuals. An empirical evaluation of these methods was carried out on the morphological description of Neotropical species of the genus Lepidocyrtus (Collembola: Entomobryidae: Lepidocyrtinae). The semantic annotations are expressed as RDF triples, a representation more flexible than the Entity-Quality syntax commonly used to describe phenotypes. The morphological descriptions were built in Protégé 5.4.0 and stored in an RDF store created with Jena Fuseki. Semantic annotations based on RDF triples increase the interoperability and integration of data from diverse sources, e.g., museum data. However, computational challenges remain, related to the development of semi-automatic methods for generating RDF triples, converting between text and RDF triples, and providing access for non-expert users.
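The sketch below shows what an instance-based annotation looks like as RDF triples, using Python's rdflib. The namespace, identifiers, and the anatomical property are invented for illustration and do not reproduce the author's actual annotation scheme:

```python
"""Sketch: an instance-based morphological annotation as RDF triples."""
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/lepidocyrtus#")  # hypothetical namespace
g = Graph()
g.bind("ex", EX)

# The annotation attaches properties to an individual specimen, not to a
# class expression; that is the instance-based method's distinguishing trait.
specimen = EX["specimen_001"]
g.add((specimen, RDF.type, EX.Lepidocyrtus))
g.add((specimen, EX.hasPart, EX.mesothorax))
g.add((specimen, EX.scaleCount, Literal(8)))

print(g.serialize(format="turtle"))
```

A store such as Jena Fuseki then exposes these triples over SPARQL, which is what enables the cross-source integration (e.g., with museum data) that the abstract highlights.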


2021
Author(s): Zhe Li, Jiayu Yang, Xinghua Li, Kunzheng Wang, Jungang Han, ...

Abstract
Background: Accurate measurement of the femoral neck-shaft angle (NSA) is of great significance for diagnosing hip joint diseases and for preoperative planning of total hip arthroplasty. However, the fitted axes of the femoral neck and femoral shaft do not always intersect in 3D space, so it is unclear whether 2D and 3D methods of measuring the NSA differ.
Methods: Femoral point cloud datasets from 310 subjects were segmented into three regions (femoral head, femoral neck, and femoral shaft) using PointNet++. For the 2D measurement, we created a projection plane simulating the anteroposterior hip radiograph and fitted the femoral neck axis and femoral shaft axis in that plane; for the 3D measurement, we fitted the two axes directly in space. We also measured the NSA manually. We verified the segmentation accuracy and compared the results of the two automatic methods and the manual method.
Results: The Dice coefficient of the femoral segmentation reached 0.9746, and the MIoU was 0.9165. No significant difference was found between any two of the three methods. Comparing the 2D and 3D methods, the average accuracy was 98.00% and the average error was 2.58°.
Conclusion: This paper proposes two accurate, automatic methods for measuring the NSA, based on a 2D projection and a 3D model, respectively. Although the femoral neck and femoral shaft axes do not always intersect in 3D space, the NSAs obtained by the 2D and 3D methods were essentially consistent.
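The 3D measurement reduces to fitting a line to each segmented region and taking the angle between the two directions. A minimal sketch follows, assuming NumPy and using random point arrays as placeholders for the neck and shaft segments that PointNet++ would produce; the axis fit and angle convention are generic, not the paper's exact procedure:

```python
"""Sketch: 3D neck-shaft angle from PCA-fitted axes of two point clouds."""
import numpy as np

def fit_axis(points):
    # First principal component of the point cloud = fitted axis direction.
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[0] / np.linalg.norm(vt[0])

def neck_shaft_angle(neck_pts, shaft_pts):
    a, b = fit_axis(neck_pts), fit_axis(shaft_pts)
    # Fitted axes are unsigned directions, so take the acute angle between
    # them and report its supplement, matching the conventional NSA range
    # of roughly 120-140 degrees.
    acute = np.degrees(np.arccos(np.clip(abs(a @ b), 0.0, 1.0)))
    return 180.0 - acute

neck = np.random.rand(200, 3)   # placeholder segmented neck points
shaft = np.random.rand(500, 3)  # placeholder segmented shaft points
print(f"NSA: {neck_shaft_angle(neck, shaft):.1f} deg")
```

Because the angle between two directions is well defined even for skew lines, this 3D formulation sidesteps the non-intersection issue the abstract raises.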

