Advanced Kidney Volume Measurement Method Using Ultrasonography with Artificial Intelligence-Based Hybrid Learning in Children

Sensors ◽  
2021 ◽  
Vol 21 (20) ◽  
pp. 6846
Author(s):  
Dong-Wook Kim ◽  
Hong-Gi Ahn ◽  
Jeeyoung Kim ◽  
Choon-Sik Yoon ◽  
Ji-Hong Kim ◽  
...  

In this study, we aimed to develop a new automated method for kidney volume measurement in children using ultrasonography (US) with image pre-processing and hybrid learning, and to formulate an equation for the expected kidney volume. The volumes of 282 kidneys (141 subjects, <19 years old) with normal function and structure were measured using US. The volumes of 58 kidneys in 29 subjects who underwent both US and computed tomography (CT) were determined by image segmentation and compared to those calculated by the conventional ellipsoidal method and by CT using intraclass correlation coefficients (ICCs). An expected kidney volume equation was developed using multivariate regression analysis. Manual image segmentation was then automated using hybrid learning to calculate the kidney volume. The ICCs for volumes determined by image segmentation and by the ellipsoidal method differed significantly, and the ICC for volume calculated by hybrid learning was significantly higher than that for the ellipsoidal method. Volume determined by image segmentation was significantly correlated with weight, body surface area, and height. Expected kidney volume was calculated as (2.22 × weight (kg) + 0.252 × height (cm) + 5.138). This method will be valuable in establishing an age-matched normal kidney growth chart through the accumulation and analysis of large-scale data.
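The reported regression equation can be applied directly. A minimal sketch; the function name is ours, and the output unit (ml, inferred from the volume context) is an assumption:

```python
def expected_kidney_volume(weight_kg: float, height_cm: float) -> float:
    """Expected kidney volume from the regression equation reported in the
    abstract: 2.22 * weight (kg) + 0.252 * height (cm) + 5.138.
    The unit (ml) is assumed from context, not stated in the abstract."""
    return 2.22 * weight_kg + 0.252 * height_cm + 5.138

# Example: a 20 kg, 110 cm child
print(round(expected_kidney_volume(20, 110), 2))  # → 77.26
```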

2020 ◽  
Vol 12 (3) ◽  
pp. 485 ◽  
Author(s):  
Xuecheng Wang ◽  
Xing Gao ◽  
Xiaoyan Zhang ◽  
Wei Wang ◽  
Fei Yang

Surface ice/snow is a vital resource and is sensitive to climate change in many parts of the world. Accurate and timely measurement of the spatial distribution of ice/snow is critical for managing water resources. Object-oriented and pixel-oriented methods often have limitations related to the image segmentation scale, the determination of the optimal threshold, and background heterogeneity. Therefore, this study proposes a method for automatically extracting large-scale surface ice/snow from Landsat series images, which combines image segmentation, the watershed algorithm, and a series of ice/snow indices. We tested our method in three different regions in the Karakoram Mountains, and the experimental results show that the produced ice/snow map achieved a user’s accuracy greater than 90%, a producer’s accuracy greater than 97%, an overall accuracy greater than 98%, and a kappa coefficient greater than 0.93. Comparing the extraction results under segmentation scales of 10, 15, 20, and 25, the user’s and producer’s accuracies from the proposed method are very similar, indicating that the proposed method is more reliable and stable for extracting ice/snow objects than the object-oriented method. Because snow and water differ in near-infrared reflectivity, the normalized difference forest snow index (NDFSI) is suitable for Landsat TM and ETM+ images. This study can serve as a reliable, scientific reference for rapidly and accurately extracting ice/snow objects.
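The ice/snow indices the method relies on (such as the NDFSI mentioned above) are all normalized band differences. A generic sketch follows; the abstract does not specify which bands feed the NDFSI, so the visible/NIR pairing in the example is an assumption for illustration only:

```python
import numpy as np

def normalized_difference(band_a: np.ndarray, band_b: np.ndarray) -> np.ndarray:
    """Generic normalized-difference index (a - b) / (a + b), the form shared
    by snow/ice indices. Zero-denominator pixels are returned as 0."""
    a = band_a.astype(np.float64)
    b = band_b.astype(np.float64)
    denom = a + b
    out = np.zeros_like(a)
    np.divide(a - b, denom, out=out, where=denom != 0)
    return out

# Toy reflectances: a snow-like pixel and a water-like pixel.
# Snow stays bright in the NIR, so its index is near 0; water absorbs NIR,
# pushing its index higher — the separation the abstract exploits.
visible = np.array([0.9, 0.1])
nir = np.array([0.8, 0.02])
print(normalized_difference(visible, nir))
```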


2021 ◽  
pp. 1-19
Author(s):  
Maria Tamoor ◽  
Irfan Younas

Medical image segmentation is a key step in assisting the diagnosis of several diseases, and the accuracy of a segmentation method matters for subsequent treatment. Different medical imaging modalities pose different challenges, such as intensity inhomogeneity, noise, low contrast, and ill-defined boundaries, which make automated segmentation a difficult task. To handle these issues, we propose a new fully automated method for medical image segmentation that combines the advantages of thresholding and an active contour model. In this study, a Harris Hawks optimizer is applied to determine the optimal thresholding value, which is used to obtain the initial contour for segmentation. The obtained contour is further refined using a spatially varying Gaussian kernel in the active contour model. The proposed method is then validated using a standard skin dataset (ISBI 2016), which consists of variable-sized lesions and challenging artifacts, and a standard cardiac magnetic resonance dataset (ACDC, MICCAI 2017) with a wide spectrum of normal hearts, congenital heart diseases, and cardiac dysfunction. Experimental results show that the proposed method effectively segments the region of interest and produces superior segmentation results for the skin dataset (overall Dice score 0.90) and the cardiac dataset (overall Dice score 0.93) compared to other state-of-the-art algorithms.
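The Dice score used to report the results above is straightforward to compute from binary masks. A minimal sketch (our own helper, not the authors' implementation):

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treated as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / total

# Toy masks: overlap of 2 pixels, 3 foreground pixels in each mask
pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_score(pred, truth))  # 2*2 / (3+3)
```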


2016 ◽  
Vol 40 (6) ◽  
pp. 500-525 ◽  
Author(s):  
Ben Kelcey ◽  
Zuchao Shen ◽  
Jessaca Spybrook

Objective: Over the past two decades, the lack of reliable empirical evidence concerning the effectiveness of educational interventions has motivated a new wave of research in education in sub-Saharan Africa (and across most of the world) that focuses on impact evaluation through rigorous research designs such as experiments. These experiments often draw on the random assignment of entire clusters, such as schools, to accommodate the multilevel structure of schooling and the theory of action underlying many school-based interventions. Planning effective and efficient school-randomized studies, however, requires plausible values of the intraclass correlation coefficient (ICC) and the variance explained by covariates during the design stage. The purpose of this study was to improve the planning of two-level school-randomized studies in sub-Saharan Africa by providing empirical estimates of the ICC and the variance explained by covariates for education outcomes in 15 countries. Method: Our investigation drew on large-scale representative samples of sixth-grade students in 15 countries in sub-Saharan Africa and included over 60,000 students across 2,500 schools. We examined two core education outcomes: standardized achievement in reading and mathematics. We estimated a series of two-level hierarchical linear models with students nested within schools to inform the design of two-level school-randomized trials. Results: The analyses suggested that outcomes were substantially clustered within schools but that the magnitude of the clustering varied considerably across countries. Similarly, the results indicated that covariance adjustment generally reduced clustering but that the prognostic value of such adjustment varied across countries.


Author(s):  
Young Hyun Kim ◽  
Eun-Gyu Ha ◽  
Kug Jin Jeon ◽  
Chena Lee ◽  
Sang-Sun Han

Objectives: This study aimed to develop a fully automated human identification method based on a convolutional neural network (CNN) with a large-scale dental panoramic radiograph (DPR) dataset. Methods: In total, 2,760 DPRs from 746 subjects who had 2 to 17 DPRs each, with various changes in image characteristics due to dental treatments (tooth extraction, oral surgery, prosthetics, orthodontics, or tooth development), were collected. The test dataset comprised the latest DPR of each subject (746 images), and the remaining DPRs (2,014 images) were used for model training. A modified VGG16 model with two fully connected layers was applied for human identification. The proposed model was evaluated with rank-1, rank-3, and rank-5 accuracies, running time, and gradient-weighted class activation mapping (Grad-CAM)-applied images. Results: The model achieved rank-1, rank-3, and rank-5 accuracies of 82.84%, 89.14%, and 92.23%, respectively. Rank-1 accuracy remained above 80% regardless of changes in image characteristics. The average running time to train the proposed model was 60.9 sec per epoch, and the prediction time for the 746 test DPRs was short (3.2 sec/image). The Grad-CAM technique verified that the model identified humans by focusing on identifiable dental information. Conclusion: The proposed model showed good performance in fully automatic human identification despite differing image characteristics of DPRs acquired from the same patients. The model is expected to assist experts in fast, accurate identification by comparing large numbers of images and proposing identification candidates at high speed.
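The rank-k accuracies used for evaluation can be computed from a matrix of identification scores for each query against the enrolled subjects. A minimal sketch; the scores below are toy values, not outputs of the authors' model:

```python
import numpy as np

def rank_k_accuracy(scores: np.ndarray, true_ids: np.ndarray, k: int) -> float:
    """Fraction of queries whose true identity appears among the k
    highest-scoring candidates (rank-k identification accuracy)."""
    topk = np.argsort(-scores, axis=1)[:, :k]   # top-k candidates per query
    hits = (topk == true_ids[:, None]).any(axis=1)
    return float(hits.mean())

# Toy example: 3 query images scored against 4 enrolled subjects
scores = np.array([[0.90, 0.05, 0.03, 0.02],   # true id 0 ranked 1st
                   [0.10, 0.60, 0.20, 0.10],   # true id 2 ranked 2nd
                   [0.40, 0.30, 0.20, 0.10]])  # true id 3 ranked last
true_ids = np.array([0, 2, 3])
print(rank_k_accuracy(scores, true_ids, 1))  # hit for 1 of 3 queries
print(rank_k_accuracy(scores, true_ids, 3))  # hits for 2 of 3 queries
```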


2011 ◽  
Vol 90-93 ◽  
pp. 2836-2839 ◽  
Author(s):  
Jian Cui ◽  
Dong Ling Ma ◽  
Ming Yang Yu ◽  
Ying Zhou

To extract ground information more accurately, it is important to find an image segmentation method that makes the segmented features match the ground objects. We proposed an image segmentation method based on mean shift and region merging. With this method, we first segmented the image using the mean shift method with small-scale parameters. According to a region-merging homogeneity rule, image features were then merged and large-scale image layers were generated. In addition, multi-level image object layers were created through a scaling method. A test of segmenting remote sensing images showed that the method was effective and feasible, which laid a foundation for object-oriented information extraction.
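The merging step can be sketched as a greedy union of adjacent regions whose mean intensities are similar. This is a simplified stand-in for the paper's homogeneity rule, which is not fully specified in the abstract; for brevity, chained merges compare the original region means rather than recomputing them:

```python
import numpy as np

def merge_regions(labels: np.ndarray, image: np.ndarray, threshold: float) -> np.ndarray:
    """Greedily fuse 4-adjacent regions whose original mean intensities
    differ by less than `threshold`. Returns a relabeled array."""
    mean = {i: image[labels == i].mean() for i in np.unique(labels)}
    parent = {i: i for i in mean}

    def find(i):
        # union-find lookup with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # collect label pairs that touch horizontally or vertically
    pairs = set(zip(labels[:, :-1].ravel(), labels[:, 1:].ravel()))
    pairs |= set(zip(labels[:-1, :].ravel(), labels[1:, :].ravel()))
    for a, b in sorted(pairs):
        ra, rb = find(a), find(b)
        # simplification: homogeneity is judged on the original region means
        if ra != rb and abs(mean[a] - mean[b]) < threshold:
            parent[rb] = ra
    return np.vectorize(find)(labels)

# Toy label image: regions 0 and 1 are similar in intensity, region 2 is not
labels = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1],
                   [2, 2, 2, 2]])
image = np.array([[10, 10, 12, 12],
                  [10, 10, 12, 12],
                  [60, 60, 60, 60]], dtype=float)
merged = merge_regions(labels, image, threshold=5.0)
print(np.unique(merged))  # regions 0 and 1 fused; region 2 preserved
```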


2018 ◽  
Vol 2018 ◽  
pp. 1-8 ◽  
Author(s):  
Raj Bridgelall ◽  
Pan Lu ◽  
Denver D. Tolliver ◽  
Tai Xu

On-demand shared mobility services such as Uber and microtransit are steadily penetrating the worldwide market for traditional dispatched taxi services. Hence, taxi companies are seeking ways to compete. This study mined large-scale mobility data from connected taxis to discover beneficial patterns that may inform strategies to improve the dispatch taxi business. It is not practical to manually clean and filter large-scale mobility data that contains GPS information. Therefore, this research contributes and demonstrates an automated method of data cleaning and filtering suited to such datasets. The cleaning method defines three filter variables and applies a layered statistical filtering technique to eliminate outlier records that do not contribute to distributions matching the expected theoretical distributions of the variables. Chi-squared statistical tests evaluate the quality of the cleaned data by comparing the distribution of the three variables with their expected distributions. The overall cleaning method removed approximately 5% of the data, which consisted of obvious errors and poor-quality outliers. Subsequently, mining the cleaned data revealed that trip production in Dubai peaks when the same two drivers, and only those two, operate the same taxi. This finding would not have been possible without access to proprietary data that contains unique identifiers for both drivers and taxis; datasets that identify individual drivers are not publicly available.
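The chi-squared check on the cleaned distributions can be sketched directly. A minimal example against a uniform expectation; the hour-of-day binning below is an illustrative assumption, not one of the paper's actual filter variables:

```python
import numpy as np

def chi_square_statistic(observed: np.ndarray, expected: np.ndarray) -> float:
    """Pearson chi-squared statistic sum((O - E)^2 / E), comparing observed
    bin counts against expected counts. Smaller values mean a closer fit."""
    observed = observed.astype(float)
    expected = expected.astype(float)
    return float(((observed - expected) ** 2 / expected).sum())

# Toy example: trip counts in four time bins vs. a flat expectation
observed = np.array([48, 52, 55, 45])
expected = np.full(4, observed.sum() / 4)  # 50 per bin under uniformity
print(round(chi_square_statistic(observed, expected), 2))  # → 1.16
```

In practice, the statistic would be compared against a chi-squared critical value (degrees of freedom = bins − 1) to decide whether the cleaned variable matches its expected theoretical distribution.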


2021 ◽  
pp. 1-7
Author(s):  
Ido Ben Zvi ◽  
Oren Shaia Harel ◽  
Amos Douvdevani ◽  
Penina Weiss ◽  
Chen Cohen ◽  
...  

OBJECTIVE Mild traumatic brain injury (mTBI) is a major cause of emergency room (ER) admission. Thirty percent of mTBI patients have postconcussion syndrome (PCS), and 15% have symptoms for over a year. This population is underdiagnosed and does not receive appropriate care. The authors proposed a fast and inexpensive fluorometric measurement of circulating cell-free DNA (cfDNA) as a biomarker for PCS. cfDNA is a proven, useful marker of a variety of acute pathological conditions such as trauma and acute illness. METHODS Thirty mTBI patients were recruited for this prospective single-center trial. At admission, patients completed questionnaires and blood was drawn to obtain cfDNA. At 3–4 months after injury, 18 patients returned for cognitive assessments with questionnaires and the Color Trails Test (CTT). The fast SYBR Gold assay was used to measure cfDNA. RESULTS Seventeen men and 13 women participated in this trial. The mean ± SD age was 50.9 ± 13.9 years. Of the 18 patients who returned for cognitive assessment, one-third reported working fewer hours, 4 (22.2%) changed their driving patterns, and 5 (27.7%) reduced or stopped performing physical activity. The median cfDNA level of the mTBI group was greater than that of the matched healthy control group (730.5 vs 521.5 ng/ml, p = 0.0395). Admission cfDNA concentration was negatively correlated with performance on the CTT1 and CTT2 standardized tests (r = −0.559 and −0.599), meaning that greater cfDNA level was correlated with decreased cognitive performance status. The performance of the patients with normal cfDNA level included in the mTBI group was similar to that of the healthy participants. In contrast, the increased cfDNA group (> 800 ng/ml) had lower scores on the CTT tests than the normal cfDNA group (p < 0.001). 
Furthermore, patients with moderate/severe cognitive impairment according to CTT1 results had a greater median cfDNA level than the patients with scores indicating mild impairment or normal function (1186 vs 473.5 ng/ml, p = 0.0441, area under the receiver operating characteristic curve = 0.8393). CONCLUSIONS The data from this pilot study show the potential to use cfDNA, as measured with a fast test, as a biomarker to screen for PCS in the ER. A large-scale study is required to establish the value of cfDNA as an early predictor of PCS.


2019 ◽  
Vol 19 (1) ◽  
Author(s):  
George Crowley ◽  
Sophia Kwon ◽  
Erin J. Caraher ◽  
Syed Hissam Haider ◽  
Rachel Lam ◽  
...  

Abstract Background Quantifying morphologic changes is critical to our understanding of the pathophysiology of the lung. Mean linear intercept (MLI) measures are important in the assessment of clinically relevant pathology, such as emphysema. However, qualitative measures are prone to error and bias, while quantitative methods such as MLI are time consuming when performed manually. Furthermore, a fully automated, reliable method of assessment is nontrivial and resource-intensive. Methods We propose a semi-automated method to quantify MLI that does not require specialized computer knowledge and uses a free, open-source image processor (Fiji). We tested the method with a computer-generated, idealized dataset, derived an MLI usage guide, and successfully applied the method to a murine model of particulate matter (PM) exposure. Fields of randomly placed, uniform-radius circles were analyzed. Optimal numbers of chords to assess based on MLI were found via receiver operating characteristic (ROC)-area under the curve (AUC) analysis. The intraclass correlation coefficient (ICC) measured reliability. Results We demonstrate high accuracy (ROC AUC > 0.8 for actual MLI > 63.83 pixels) and excellent reliability (ICC = 0.9998, p < 0.0001). We provide a guide to optimize the number of chords to sample based on MLI. Processing time was 0.03 s/image. We showed elevated MLI in PM-exposed mice compared to PBS-exposed controls. We have also provided the macros that were used and have made an ImageJ plugin available free for academic research use at https://med.nyu.edu/nolanlab. Conclusions Our semi-automated method is reliable, as fast as fully automated methods, and uses free, open-source software. Additionally, we quantified the optimal number of chords that should be measured per lung field.
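The chord-based MLI computation can be sketched on a binary airspace mask: lay test lines across the field and average the lengths of the contiguous airspace runs they cross. A minimal horizontal-line version; the authors' Fiji macros may sample lines differently:

```python
import numpy as np

def mean_linear_intercept(airspace: np.ndarray, rows=None) -> float:
    """Mean linear intercept: average length of contiguous airspace runs
    (chords) along horizontal test lines through a binary mask
    (True = airspace, False = tissue)."""
    if rows is None:
        rows = range(airspace.shape[0])  # use every row as a test line
    chords = []
    for r in rows:
        line = airspace[r].astype(int)
        # pad with zeros so runs touching the border are closed off
        diff = np.diff(np.concatenate(([0], line, [0])))
        starts = np.flatnonzero(diff == 1)
        ends = np.flatnonzero(diff == -1)
        chords.extend(ends - starts)  # run lengths = chord lengths
    return float(np.mean(chords)) if chords else 0.0

# Toy mask: airspace runs of lengths 3 and 2 in row 0, length 4 in row 1
mask = np.array([[1, 1, 1, 0, 1, 1, 0],
                 [0, 1, 1, 1, 1, 0, 0]], dtype=bool)
print(mean_linear_intercept(mask))  # (3 + 2 + 4) / 3 = 3.0
```

Larger airspaces (as in emphysema) yield longer chords and hence a larger MLI, which is the quantity the study's ROC analysis and chord-count guide are built around.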

