evaluation dataset
Recently Published Documents

TOTAL DOCUMENTS: 65 (last five years: 38)
H-INDEX: 7 (last five years: 2)
Agriculture, 2021, Vol 12 (1), pp. 26
Author(s): Di Zhang, Feng Pan, Qi Diao, Xiaoxue Feng, Weixing Li, ...

With the development of unmanned aerial vehicles (UAVs), obtaining high-resolution aerial images has become easier. Identifying and locating specific crops in aerial images is a valuable task, since the location and quantity of crops are important for agricultural insurance businesses. This paper addresses the problem of locating chili seedling crops in large-field UAV images. Two difficulties arise: only a small number of labeled samples are available, and objects in UAV images look similar at small scales, which increases the difficulty of localization. A detection framework based on a prototypical network is proposed to detect crops in UAV aerial images. In particular, a subcategory-slicing method is applied to handle the small-scale similarity of objects in aerial images. The framework has two stages: training and detection. During training, crop images are sliced into subcategories, and these subcategory patches together with background-category images are used to train the prototypical network. During detection, simple linear iterative clustering (SLIC) superpixel segmentation generates candidate regions in the UAV image, and the prototypical network then recognizes nine patch images extracted simultaneously. To train and evaluate the proposed method, we constructed an evaluation dataset by collecting images of chilies at the seedling stage with a UAV, and achieved a location accuracy of 96.46%. The proposed few-shot seedling crop detection framework does not require labeled bounding boxes, which reduces the manual annotation workload while meeting the localization needs of seedling crops.
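As a rough illustration of the prototype-matching step described above (a sketch under assumed names, not the authors' released code), the following PyTorch snippet classifies candidate patches by their distance to class prototypes computed from subcategory and background support patches:

```python
# Minimal prototypical-network classification sketch (PyTorch).
# `encoder`, `support_x`, `support_y`, and `query_x` are illustrative names.
import torch
import torch.nn.functional as F

def classify_with_prototypes(encoder, support_x, support_y, query_x, n_classes):
    """Assign each query patch to the nearest class prototype in embedding space.

    support_x: (N_s, C, H, W) labeled patches (subcategory slices + background)
    support_y: (N_s,) integer class labels in [0, n_classes)
    query_x:   (N_q, C, H, W) candidate-region patches from the UAV image
    """
    with torch.no_grad():
        z_support = encoder(support_x)   # (N_s, D) embeddings
        z_query = encoder(query_x)       # (N_q, D)

    # Class prototype = mean embedding of that class's support patches.
    prototypes = torch.stack([
        z_support[support_y == c].mean(dim=0) for c in range(n_classes)
    ])                                                # (n_classes, D)

    # Negative squared Euclidean distance serves as the classification logit.
    dists = torch.cdist(z_query, prototypes) ** 2     # (N_q, n_classes)
    return F.softmax(-dists, dim=1).argmax(dim=1)     # predicted class per patch
```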


2021
Author(s): Pairash Saiviroonporn, Suwimon Wonglaksanapimon, Warasinee Chaisangmongkon, Isarun Chamveha, Pakorn Yodprom, ...

Abstract Background: Artificial intelligence, particularly deep learning (DL) models, can provide reliable results for automated cardiothoracic ratio (CTR) measurement on chest X-ray (CXR) images. In everyday clinical use, however, this technology is usually implemented in a non-automated (AI-assisted) capacity because the results still require approval from radiologists. We investigated the performance and efficiency of our recently proposed models for the AI-assisted method intended for clinical practice. Methods: We validated four proposed DL models (AlbuNet, SegNet, VGG-11, and VGG-16) on a dataset of 7,517 CXR images from manual operations to find the best model for clinical implementation. The models were investigated in single-model and combined-model modes to identify the configuration with the highest percentage of results that the user could accept without further interaction (excellent grade) and with measurement variation within ±1.8% of the human-operating range. The best model from the validation study was then tested on an evaluation dataset of 9,386 CXR images using the AI-assisted method with two radiologists, measuring the yield of excellent-grade results, observer variation, and operating time. A Bland-Altman plot with the coefficient of variation (CV) was used to evaluate agreement between measurements. Results: VGG-16 gave the highest excellent-grade yield of any single-model mode (68.9%), with a CV comparable to manual operation (2.12% vs. 2.13%). No DL model produced a failure-grade result. The combined AlbuNet + VGG-11 model yielded excellent grades for 82.7% of images with a CV of 1.36%. On the evaluation dataset, AlbuNet + VGG-11 produced excellent-grade results for 77.8% of images, a CV of 1.55%, and reduced operating time almost ten-fold compared with manual operation (1.07 ± 2.62 s vs. 10.6 ± 1.5 s). Conclusion: Given its accuracy and speed, the AlbuNet + VGG-11 model could be clinically implemented to assist radiologists with CTR measurement.
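For readers unfamiliar with the measurement itself, the sketch below shows one plausible way to derive a CTR value from heart and thorax segmentation masks; the mask-based formulation and function names are assumptions, not the paper's pipeline:

```python
# Hedged sketch of computing a cardiothoracic ratio (CTR) from binary masks.
import numpy as np

def widest_horizontal_extent(mask: np.ndarray) -> float:
    """Return the largest left-to-right extent (in pixels) over all rows of a binary mask."""
    rows = np.where(mask.any(axis=1))[0]
    widths = []
    for r in rows:
        cols = np.where(mask[r])[0]
        widths.append(cols.max() - cols.min() + 1)
    return float(max(widths)) if widths else 0.0

def cardiothoracic_ratio(heart_mask: np.ndarray, thorax_mask: np.ndarray) -> float:
    """CTR = maximal horizontal cardiac diameter / maximal horizontal thoracic diameter."""
    cardiac = widest_horizontal_extent(heart_mask)
    thoracic = widest_horizontal_extent(thorax_mask)
    return cardiac / thoracic if thoracic > 0 else float("nan")
```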


2021, Vol 11 (1)
Author(s): Ji Young Yoo, Se Yoon Kang, Jong Sun Park, Young-Jae Cho, Sung Yong Park, ...

Abstract Anesthesiologists commonly use video bronchoscopy to facilitate intubation or to confirm the location of the endotracheal tube; however, depth and orientation in the bronchial tree are easily confused because the anesthesiologist cannot trace the airway from the oropharynx when the scope is passed through an endotracheal tube. Moreover, the decubitus position is often used in certain surgeries. Although rare, misinterpretation of the tube location can cause accidental extubation or endobronchial intubation, which can lead to hyperinflation. A video bronchoscopy decision-support system using artificial intelligence would therefore be useful in the anesthesiologic process. In this study, we aimed to develop an artificial intelligence model robust to rotation and covering using video bronchoscopy images. We collected video bronchoscopic images from an institutional database, and the collected images were automatically labeled as the carina or the left/right main bronchus by an optical character recognition engine. Apart from 180 images reserved for the evaluation dataset, 80% of the images were randomly allocated to the training dataset, and the remaining images were assigned to the validation and test datasets in a 7:3 ratio. Random image rotation and circular cropping were applied. Ten pretrained models with fewer than 25 million parameters were trained on the training and validation datasets, and the model with the best prediction accuracy on the test dataset was selected as the final model. Six human experts reviewed the evaluation dataset and inferred the anatomical locations, to compare their performance with that of the final model. In the experiments, 8,688 images were prepared and assigned to the evaluation (180), training (6,806), validation (1,191), and test (511) datasets. The EfficientNetB1 model showed the highest accuracy (0.86) and was selected as the final model. On the evaluation dataset, the final model performed better (accuracy 0.84) than almost all human experts (0.38, 0.44, 0.51, 0.68, and 0.63); only the most experienced pulmonologist showed comparable performance (0.82). The performance of the human experts was generally proportional to their experience, and the difference between anesthesiologists and pulmonologists was most marked in discriminating the right main bronchus. Using bronchoscopic images, our model could distinguish the carina and both main bronchi under random rotation and covering, with performance comparable to that of the most experienced human expert. This model could form the basis of a clinical decision-support system for video bronchoscopy.
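The augmentation and transfer-learning setup described above could look roughly like the following sketch (the parameter values, the circular-crop implementation, and the torchvision weights identifier are assumptions, not the authors' code):

```python
# Illustrative rotation + circular-crop augmentation and EfficientNet-B1 fine-tuning
# for a 3-class problem (carina, left main bronchus, right main bronchus).
# Assumes a recent torchvision; all values here are illustrative.
import torch
import torch.nn as nn
from torchvision import models, transforms

class CircularCrop:
    """Zero out pixels outside a centered circle, mimicking the scope's circular field of view."""
    def __call__(self, img: torch.Tensor) -> torch.Tensor:
        _, h, w = img.shape
        yy, xx = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
        radius = min(h, w) / 2
        mask = ((yy - h / 2) ** 2 + (xx - w / 2) ** 2) <= radius ** 2
        return img * mask

train_transform = transforms.Compose([
    transforms.Resize((240, 240)),
    transforms.RandomRotation(degrees=180),   # random rotation about the image center
    transforms.ToTensor(),
    CircularCrop(),
])

model = models.efficientnet_b1(weights="IMAGENET1K_V1")
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 3)  # 3 anatomical classes
```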


Author(s): Weidong Liu, Shuo Liu, Donghui Gao, Rui Wang, Xuanfei Duan, ...

2021, Vol 8 (1)
Author(s): Michael Rutherford, Seong K. Mun, Betty Levine, William Bennett, Kirk Smith, ...

Abstract We developed a DICOM dataset that can be used to evaluate the performance of de-identification algorithms. DICOM objects (a total of 1,693 CT, MRI, PET, and digital X-ray images) were selected from datasets published in The Cancer Imaging Archive (TCIA). Synthetic Protected Health Information (PHI) was generated and inserted into selected DICOM attributes to mimic typical clinical imaging exams. The DICOM Standard and TCIA curation audit logs guided the insertion of synthetic PHI into standard and non-standard DICOM data elements, and a TCIA curation team tested the utility of the evaluation dataset. With this publication, the evaluation dataset (containing synthetic PHI) and the de-identified evaluation dataset (the result of TCIA curation) are released on TCIA in advance of a competition, sponsored by the National Cancer Institute (NCI), for algorithmic de-identification of medical image datasets. The competition will use a much larger evaluation dataset constructed in the same manner. This paper describes the creation of the evaluation datasets and provides guidelines for their use.
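As an illustration of what inserting synthetic PHI into standard and non-standard DICOM attributes can involve, here is a minimal pydicom sketch; the file name, attribute values, and private tag are purely illustrative assumptions:

```python
# Minimal pydicom sketch: write synthetic PHI into standard and private DICOM elements.
import pydicom

ds = pydicom.dcmread("example_ct_slice.dcm")   # hypothetical input file

# Synthetic PHI in standard attributes
ds.PatientName = "DOE^JANE"
ds.PatientID = "SYN-000123"
ds.PatientBirthDate = "19700101"
ds.InstitutionName = "Synthetic General Hospital"

# Synthetic PHI in a non-standard (private) data element; tag and value are illustrative
ds.add_new(0x00091001, "LO", "Referred by Dr. J. Smith, MRN 998877")

ds.save_as("example_ct_slice_with_synthetic_phi.dcm")
```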


Author(s): Nankai Lin, Boyu Chen, Xiaotian Lin, Kanoksak Wattanachote, Shengyi Jiang

Grammatical Error Correction (GEC) is a challenging task in Natural Language Processing research. Although many researchers have focused on GEC for widely studied, high-resource languages such as English and Chinese, few studies address Indonesian, a low-resource language. In this article, we propose a GEC framework that can serve as a baseline method for Indonesian GEC tasks. The framework treats GEC as a multi-class classification task and integrates different language embedding models and deep learning models to correct 10 types of part-of-speech (POS) errors in Indonesian text. In addition, we constructed an Indonesian corpus that can be used as an evaluation dataset for Indonesian GEC research, and our framework was evaluated on this dataset. The results show that a Long Short-Term Memory model based on word embeddings achieved the best performance, with an overall macro-average F0.5 of 0.551 across the 10 POS error types. The results also show that the framework can be trained on a low-resource dataset.
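A minimal sketch of the word-embedding + LSTM classification idea (an assumed architecture, not necessarily the paper's exact configuration) might look like this:

```python
# Sketch: treat GEC as multi-class classification with a word-embedding + BiLSTM encoder.
# The class count (10 POS error types plus a "no error" class) is an assumption.
import torch
import torch.nn as nn

class LSTMErrorClassifier(nn.Module):
    def __init__(self, vocab_size: int, embed_dim: int = 300,
                 hidden_dim: int = 256, n_classes: int = 11):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, n_classes)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, seq_len) integer word indices
        embedded = self.embedding(token_ids)
        _, (hidden, _) = self.lstm(embedded)
        # Concatenate the final forward and backward hidden states.
        sentence_repr = torch.cat([hidden[-2], hidden[-1]], dim=1)
        return self.classifier(sentence_repr)     # logits over error classes
```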


Author(s): J A Hall, R J Harris, A Zaidi, S C Woodhall, G Dabrera, ...

Abstract Background: Household transmission of SARS-CoV-2 is an important component of the community spread of the pandemic. Little is known about the factors associated with household transmission at the level of the case, the contact, or the household, or about how these have varied over the course of the pandemic. Methods: The Household Transmission Evaluation Dataset (HOSTED) is a passive surveillance system linking laboratory-confirmed COVID-19 cases to individuals living in the same household in England. We explored the risk of household transmission according to age of case and contact, sex, region, deprivation, month, and household composition between April and September 2020, and built a multivariate model. Results: In the period studied, on average 5.5% of household contacts in England were diagnosed as cases. Household transmission was most common between adult cases and contacts of a similar age. There was some evidence of lower transmission rates to under-16s [adjusted odds ratio (aOR) 0.70, 95% confidence interval (CI) 0.66–0.74]. There were clear regional differences, with higher rates of household transmission in the north of England and the Midlands. Less deprived areas had a lower risk of household transmission, but after controlling for region there was no independent effect of deprivation; houses of multiple occupancy had lower rates of household transmission [aOR 0.74 (95% CI 0.66–0.83)]. Conclusions: Children are less likely to acquire SARS-CoV-2 via household transmission, and consequently there was no difference in the risk of transmission in households with children. Households in which cases could isolate effectively, such as houses of multiple occupancy, had lower rates of household transmission. Policies to support the effective isolation of cases from their household contacts could lower the level of household transmission.
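For context, adjusted odds ratios of this kind are typically estimated with a logistic model over contact-level records; the sketch below (with assumed column names and data file, not the HOSTED analysis code) shows one way to obtain them:

```python
# Illustrative estimation of adjusted odds ratios for household transmission.
# The DataFrame columns and CSV file are assumptions for the sketch.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# One row per household contact, with a binary outcome `became_case` (0/1).
df = pd.read_csv("hosted_contacts.csv")   # hypothetical extract

model = smf.logit(
    "became_case ~ C(contact_age_group) + C(case_age_group) + C(sex)"
    " + C(region) + C(imd_quintile) + C(month) + C(household_composition)",
    data=df,
).fit()

odds_ratios = pd.DataFrame({
    "aOR": np.exp(model.params),
    "2.5%": np.exp(model.conf_int()[0]),
    "97.5%": np.exp(model.conf_int()[1]),
})
print(odds_ratios)
```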


2021
Author(s): Tianqing Fang, Weiqi Wang, Sehyun Choi, Shibo Hao, Hongming Zhang, ...

2021
Author(s): Matan Orbach, Orith Toledo-Ronen, Artem Spector, Ranit Aharonov, Yoav Katz, ...
