classification test
Recently Published Documents

TOTAL DOCUMENTS: 98 (FIVE YEARS: 16)
H-INDEX: 9 (FIVE YEARS: 1)

2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Pairash Saiviroonporn ◽  
Kanchanaporn Rodbangyang ◽  
Trongtum Tongdee ◽  
Warasinee Chaisangmongkon ◽  
Pakorn Yodprom ◽  
...  

Abstract Background Artificial Intelligence (AI) is a promising tool for cardiothoracic ratio (CTR) measurement that has been technically validated but not clinically evaluated on a large dataset. We observed and validated AI and manual methods for CTR measurement using a large dataset and investigated the clinical utility of the AI method. Methods Five thousand normal chest x-rays and 2,517 images with cardiomegaly and CTR values were analyzed using manual, AI-assisted, and AI-only methods. The AI-only method obtained CTR values from a VGG-16 U-Net model. In-house software was used to aid the manual and AI-assisted measurements and to record operating time. Intra- and inter-observer experiments were performed on the manual and AI-assisted methods, and the averages were used in a method-variation study. In the AI-assisted method, AI outcomes were graded as excellent (accepted by both users independently), good (required adjustment), or poor (failed outcome). A Bland–Altman plot with coefficient of variation (CV) and the coefficient of determination (R-squared) were used to evaluate agreement and correlation between measurements. Finally, the performance of a cardiomegaly classification test was evaluated using a CTR cutoff at the standard (0.5), optimum, and maximum-sensitivity values. Results Manual CTR measurements on cardiomegaly data were comparable to previous radiologist reports (CV of 2.13% vs 2.04%). The observer and method variations from the AI-only method were about three times higher than those from the manual method (CV of 5.78% vs 2.13%). AI assistance resulted in 40% excellent, 56% good, and 4% poor grading. AI assistance significantly improved agreement on inter-observer measurement compared to the manual method (CV; bias: 1.72%; −0.61% vs 2.13%; −1.62%) and was faster to perform (2.2 ± 2.4 secs vs 10.6 ± 1.5 secs). The R-squared and classification-test results were not reliable indicators for verifying that the AI-only method could replace manual operation.
Conclusions AI alone is not yet suitable to replace manual operation due to its high variation, but it is useful for assisting the radiologist because it can reduce observer variation and operation time. Agreement of measurement should be used to compare the AI and manual methods, rather than R-squared or classification performance tests.
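For reference, the two quantities this abstract leans on can be sketched in a few lines: the CTR cutoff rule behind the cardiomegaly classification test, and a coefficient of variation over paired measurements. This is an illustrative sketch, not the authors' code; the CV formula here is one common within-subject form, and the function names and widths are toy assumptions.

```python
import statistics

def cardiothoracic_ratio(cardiac_width, thoracic_width):
    """CTR = maximal cardiac width / maximal thoracic width."""
    return cardiac_width / thoracic_width

def is_cardiomegaly(ctr, cutoff=0.5):
    """Standard screening rule: a CTR above the cutoff flags cardiomegaly."""
    return ctr > cutoff

def coefficient_of_variation(pairs):
    """CV (%) of paired measurements: SD of the paired differences
    relative to the grand mean of all measurements (one common
    Bland-Altman-style, within-subject formulation)."""
    diffs = [a - b for a, b in pairs]
    grand_mean = statistics.mean([v for pair in pairs for v in pair])
    return 100 * statistics.stdev(diffs) / grand_mean

ctr = cardiothoracic_ratio(14.2, 26.0)  # widths in cm (toy values)
print(round(ctr, 3), is_cardiomegaly(ctr))
```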


2021 ◽  
Vol 11 (9) ◽  
pp. 3863
Author(s):  
Ali Emre Öztürk ◽  
Ergun Erçelebi

A large amount of training image data is required to solve image classification problems with deep learning (DL) networks. In this study, we aimed to train DL networks with synthetic images generated using a game engine and to determine how the networks then perform on real-image classification problems. The study presents the results of using corner detection and nearest-three-point selection (CDNTS) layers to classify bird and rotary-wing unmanned aerial vehicle (RW-UAV) images, provides a comprehensive comparison of two experimental setups, and emphasizes the significant performance improvements in deep-learning-based networks due to the inclusion of a CDNTS layer. Experiment 1 corresponds to training commonly used deep-learning-based networks with synthetic data and running an image classification test on real data. Experiment 2 corresponds to training the CDNTS layer together with those networks on synthetic data and running the same image classification test on real data. In experiment 1, the best area under the curve (AUC) value for image classification test accuracy was measured as 72%. In experiment 2, using the CDNTS layer, the AUC value was measured as 88.9%. A total of 432 different training combinations were investigated across the experimental setups: the networks were trained with various DL architectures using four different optimizers, considering all combinations of the batch size, learning rate, and dropout hyperparameters. The test accuracy AUC values for the networks in experiment 1 ranged from 55% to 74%, whereas those for the experiment 2 networks with a CDNTS layer ranged from 76% to 89.9%. The CDNTS layer was thus observed to have a considerable effect on the image classification accuracy of deep-learning-based networks. AUC, F-score, and test accuracy measures were used to validate the success of the networks.
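Both experiments score classifiers by the area under the ROC curve. As a reminder of what that metric computes, here is a minimal pure-Python AUC using the rank interpretation (the probability that a randomly chosen positive outranks a randomly chosen negative); the labels and scores below are made up for illustration, not data from the paper.

```python
def roc_auc(labels, scores):
    """AUC = P(score of a random positive > score of a random negative),
    counting ties as 1/2 (equivalent to the trapezoidal ROC area)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: 1 = bird, 0 = RW-UAV; scores = model confidence for "bird"
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]
print(roc_auc(labels, scores))
```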


2021 ◽  
Vol 2 (2) ◽  
pp. 53-63
Author(s):  
Ina Kurnia Sari ◽  
Nur Muniroh

Tomatoes are fruits that grow in many tropical and subtropical areas. They ripen very quickly, so improper handling can cause them to rot quickly, and long-distance distribution can degrade their quality and nutritional value. Manual identification of tomato ripeness by farmers is unreliable due to factors such as fatigue, lack of motivation, and limited experience and proficiency. To solve this problem, developments in information technology allow fruit maturity, and even fruit type, to be identified with the help of computers. With digital images, technology-based classification of tomato maturity can be carried out. Therefore, in this study, tomato maturity classification was implemented by applying the RGB-average method to make it easier to determine the ripeness level of tomatoes. The application performs several steps: image reading, cropping, segmentation, and RGB-average calculation. A total of 24 images of ripe tomatoes and 25 images of unripe tomatoes were used in the classification test for tomato maturity, and the success rate was 95%.
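The abstract does not state the decision rule applied to the RGB averages, so the sketch below assumes a simple one: a tomato is classified as ripe when the mean red channel of the cropped region exceeds the mean green channel. Function names and pixel values are illustrative assumptions, not the authors' implementation.

```python
def channel_means(pixels):
    """Average R, G, B over a cropped region given as (r, g, b) tuples."""
    n = len(pixels)
    r = sum(p[0] for p in pixels) / n
    g = sum(p[1] for p in pixels) / n
    b = sum(p[2] for p in pixels) / n
    return r, g, b

def classify_tomato(pixels):
    """Assumed rule: a ripe tomato's mean red channel dominates its mean
    green channel; unripe (green) tomatoes show the opposite pattern."""
    r, g, _ = channel_means(pixels)
    return "ripe" if r > g else "raw"

ripe_patch = [(200, 60, 40), (190, 70, 50), (210, 55, 45)]
raw_patch = [(90, 160, 60), (80, 170, 70), (100, 150, 65)]
print(classify_tomato(ripe_patch), classify_tomato(raw_patch))
```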


2020 ◽  
Author(s):  
Irune Fernandez-Prieto ◽  
Ferran Pons ◽  
Jordi Navarra

Crossmodal correspondences between auditory pitch and spatial elevation have been demonstrated extensively in adults. High- and low-pitched sounds tend to be mapped onto upper and lower spatial positions, respectively. We hypothesised that this crossmodal link could be influenced by the development of spatial and linguistic abilities during childhood. To explore this possibility, 70 children (9-12 years old) divided into three groups (4th, 5th and 6th grade of primary school) completed a crossmodal test evaluating the perceptual correspondence between pure tones and spatial elevation. Additionally, we examined possible correlations between the students' performance in this crossmodal task and other auditory, spatial and linguistic measures. The participants' auditory pitch performance was measured in a frequency classification test. The participants also completed three subtests of the Wechsler Intelligence Scale for Children-IV (WISC-IV): (1) Vocabulary, to assess verbal intelligence, (2) Matrix Reasoning, to measure visuospatial reasoning, and (3) Block Design, to analyse visuospatial/motor skills. The results revealed crossmodal effects between pitch and spatial elevation. Additionally, we found a correlation between performance on the Block Design subtest and both the pitch-elevation crossmodal correspondence and the auditory frequency classification test. No correlation was observed between the auditory tasks and the Matrix Reasoning or Vocabulary subtests. This suggests (1) that the crossmodal correspondence between pitch and spatial elevation is already consolidated at the age of 9 and (2) that good performance in a pitch-based auditory task is mildly associated, in childhood, with good performance in visuospatial/motor tasks.


2020 ◽  
Author(s):  
Aram Ter-Sarkisov

Abstract We introduce a lightweight model based on Mask R-CNN with ResNet18 and ResNet34 backbone models that segments lesions and predicts COVID-19 from chest CT scans in a single shot. The model requires a small dataset to train: 650 images for the segmentation branch and 3000 for the classification branch, and it is evaluated on 21292 images to achieve a 42.45% average precision (main MS COCO criterion) on the segmentation test split (100 images), 93.00% COVID-19 sensitivity and F1-score of 96.76% on the classification test split (21192 images) across 3 classes: COVID-19, Common Pneumonia and Control/Negative. The full source code, models and pretrained weights are available on https://github.com/AlexTS1980/COVID-Single-Shot-Model.
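The reported sensitivity and F1-score follow directly from confusion-matrix counts. A minimal sketch of both definitions follows; the counts below are invented for illustration and are not the paper's results.

```python
def sensitivity(tp, fn):
    """Recall for the positive (e.g. COVID-19) class: TP / (TP + FN)."""
    return tp / (tp + fn)

def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Invented counts for illustration only
tp, fp, fn = 930, 20, 70
print(round(sensitivity(tp, fn), 2), round(f1_score(tp, fp, fn), 4))
```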


2020 ◽  
Vol 44 (7-8) ◽  
pp. 499-514
Author(s):  
Yi Zheng ◽  
Hyunjung Cheon ◽  
Charles M. Katz

This study explores advanced techniques in machine learning to develop a short tree-based adaptive classification test based on an existing lengthy instrument. A case study was carried out for an assessment of risk for juvenile delinquency. Two distinctive features of this case are that (a) the items in the original instrument measure a large number of distinct constructs, and (b) the target outcomes are of low prevalence, which results in imbalanced training data. Due to the high dimensionality of the items, traditional item response theory (IRT)-based adaptive testing approaches may not work well, whereas decision trees, developed in the machine learning discipline, present a promising alternative for adaptive tests. A cross-validation study was carried out to compare eight tree-based adaptive test constructions with five benchmark methods using data from a sample of 3,975 subjects. The findings reveal that the best-performing tree-based adaptive tests yielded better classification accuracy than the benchmark method of IRT scoring with optimal cutpoints, and comparable or better classification accuracy than the best benchmark method, random forest with balanced sampling. The competitive classification accuracy of the tree-based adaptive tests also comes with an over 30-fold reduction in the length of the instrument, administering only between 3 and 6 items to any individual. This study suggests that tree-based adaptive tests have enormous potential for shortening instruments that measure a large variety of constructs.
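The mechanics of a tree-based adaptive test are easy to sketch: each internal node administers one item, the branch taken depends on the answer, and a leaf yields the classification, so any individual sees only the items on a single root-to-leaf path (3 to 6 in the study). The tree, item wordings, and risk labels below are a made-up illustration, not the actual instrument from the paper.

```python
# Each node is ("item text", branch_if_yes, branch_if_no); a bare string is
# a leaf holding the classification label.
TREE = (
    "Item 1: prior contact with the justice system?",
    (
        "Item 2: negative peer influence?",
        "high risk",
        ("Item 3: school engagement problems?", "moderate risk", "low risk"),
    ),
    ("Item 4: family supervision problems?", "moderate risk", "low risk"),
)

def administer(tree, answer):
    """Walk the tree, asking only the items on one root-to-leaf path.
    `answer` maps an item prompt to True/False; returns (label, items_asked)."""
    asked = []
    node = tree
    while not isinstance(node, str):
        item, yes_branch, no_branch = node
        asked.append(item)
        node = yes_branch if answer[item] else no_branch
    return node, asked

answers = {
    "Item 1: prior contact with the justice system?": True,
    "Item 2: negative peer influence?": False,
    "Item 3: school engagement problems?": True,
}
label, asked = administer(TREE, answers)
print(label, len(asked))  # only 3 of the tree's 4 items were administered
```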

