A Deep Learning Methodology for Automatic Assessment of Portrait Image Aesthetic Quality

Author(s):  
Poom Wettayakorn ◽  
Siripong Traivijitkhun ◽  
Ponpat Phetchai ◽  
Suppawong Tuarob
2021 ◽  
Vol 2021 ◽  
pp. 1-6
Author(s):  
Xue Chen ◽  
Yuanyuan Shi ◽  
Yanjun Wang ◽  
Yuanjuan Cheng

This paper introduces the automatic assessment of upper limb mobility after stroke, covering the relevant knowledge of clinical assessment of upper limb mobility, the use of a Kinect sensor to track the spatial locations of upper limb skeletal points, and the construction process of the GCRNN model. Based on a detailed analysis of all FMA evaluation items, a dedicated experimental data-acquisition environment and set of evaluation tasks were designed, and FMA predictions were obtained from the skeletal point data of each evaluation task. Across different numbers and combinations of tasks, the best coefficient of determination was achieved when tasks 1, 2, and 5 were used together as input for FMA prediction. To verify the performance of the proposed method, a comparative experiment was set up against LSTM, CNN, and other widely used deep learning algorithms. Conclusion: GCRNN was able to extract the motion features of the upper limb during movement in both the spatial and temporal dimensions, and achieved the best prediction performance with a coefficient of determination of 0.89.


Sensors ◽  
2022 ◽  
Vol 22 (2) ◽  
pp. 497
Author(s):  
Sébastien Villon ◽  
Corina Iovan ◽  
Morgan Mangeas ◽  
Laurent Vigliola

With the availability of low-cost and efficient digital cameras, ecologists can now survey the world's biodiversity through image sensors, especially in the previously rather inaccessible marine realm. However, the data rapidly accumulate, and ecologists face a data-processing bottleneck. While computer vision has long been used as a tool to speed up image processing, it is only since the breakthrough of deep learning (DL) algorithms that a revolution in the automatic assessment of biodiversity from video recordings has become conceivable. However, current applications of DL models to biodiversity monitoring do not consider some universal rules of biodiversity, especially rules on the distribution of species abundance, species rarity, and ecosystem openness. These rules imply three issues for deep learning applications: the imbalance of long-tail datasets biases the training of DL models; scarce data greatly lessen the performance of DL models for classes with few examples; and the open-world issue means that objects absent from the training dataset are incorrectly classified in the application dataset. Promising solutions to these issues are discussed, including data augmentation, data generation, cross-entropy modification, few-shot learning, and open set recognition. At a time when biodiversity faces the immense challenges of climate change and the Anthropocene defaunation, stronger collaboration between computer scientists and ecologists is urgently needed to unlock the automatic monitoring of biodiversity.
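One common form of the cross-entropy modification mentioned above is to weight each class inversely to its frequency, so that rare species contribute more to the training loss. A minimal sketch of such weighting (the function name and the toy labels are illustrative, not from the paper):

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Per-class loss weights inversely proportional to class frequency,
    normalized so a perfectly balanced dataset gets weight 1.0 per class."""
    counts = Counter(labels)
    n_classes = len(counts)
    total = len(labels)
    return {c: total / (n_classes * n) for c, n in counts.items()}

# Long-tailed toy dataset: the common class dominates 9:1.
labels = ["common"] * 90 + ["rare"] * 10
weights = inverse_frequency_weights(labels)
# The rare class receives a 9x larger weight than the common class.
```

These weights would then scale the per-sample cross-entropy terms during training, counteracting the long-tail bias.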


PLoS ONE ◽  
2020 ◽  
Vol 15 (12) ◽  
pp. e0243253
Author(s):  
Qiang Lin ◽  
Mingyang Luo ◽  
Ruiting Gao ◽  
Tongtong Li ◽  
Zhengxing Man ◽  
...  

SPECT imaging has been identified as an effective medical modality for diagnosis, treatment, evaluation, and prevention of a range of serious diseases and medical conditions. Bone SPECT scans have the potential to provide a more accurate assessment of disease stage and severity. Segmenting hotspots in bone SPECT images plays a crucial role in calculating metrics like tumor uptake and metabolic tumor burden. Deep learning techniques, especially convolutional neural networks, have been widely exploited for reliable segmentation of hotspots or lesions, organs, and tissues in traditional structural medical images (i.e., CT and MRI) due to their ability to automatically learn features from images in an optimal way. In order to segment hotspots in bone SPECT images for automatic assessment of metastasis, in this work we develop several deep learning-based segmentation models. Specifically, each original whole-body bone SPECT image is processed to extract the thorax area, followed by image mirror, translation, and rotation operations, which augment the original dataset. We then build segmentation models based on two well-known deep networks, U-Net and Mask R-CNN, by fine-tuning their structures. Experimental evaluation conducted on a group of real-world bone SPECT images reveals that the built segmentation models are workable for identifying and segmenting hotspots of metastasis, achieving values of 0.9920, 0.7721, 0.6788, and 0.6103 for PA (accuracy), CPA (precision), Rec (recall), and IoU, respectively. Finally, we conclude that deep learning technology has huge potential to identify and segment hotspots in bone SPECT images.
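The reported PA, CPA, Rec, and IoU values can all be derived from the per-pixel confusion counts of a predicted mask against a ground-truth mask. A minimal pure-Python sketch (the function name and toy masks are illustrative, not from the paper):

```python
def segmentation_metrics(pred, truth):
    """Pixel-level metrics for flat binary (0/1) masks:
    PA (pixel accuracy), CPA (precision), Rec (recall), IoU."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    tn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 0)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    pa = (tp + tn) / len(pred)
    cpa = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    iou = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    return pa, cpa, rec, iou

# Toy 6-pixel masks: 2 true positives, 1 false positive, 1 false negative.
pred  = [1, 1, 0, 0, 1, 0]
truth = [1, 0, 0, 0, 1, 1]
pa, cpa, rec, iou = segmentation_metrics(pred, truth)
```

Note that IoU penalizes both false positives and false negatives in its denominator, which is why it is always the smallest of the four values here, mirroring the ordering in the reported results.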


2019 ◽  
Author(s):  
Alexander Rakhlin ◽  
Aleksei Tiulpin ◽  
Alexey A. Shvets ◽  
Alexandr A. Kalinin ◽  
Vladimir I. Iglovikov ◽  
...  

Abstract. Breast cancer is one of the main causes of death worldwide. Histopathological cellularity assessment of residual tumors in post-surgical tissues is used to analyze a tumor's response to therapy. Correct cellularity assessment increases the chances of receiving appropriate treatment and facilitates the patient's survival. In current clinical practice, tumor cellularity is manually estimated by pathologists; this process is tedious and prone to errors or low agreement rates between assessors. In this work, we evaluated three novel deep learning-based approaches for automatic assessment of tumor cellularity from post-treated breast surgical specimens stained with hematoxylin and eosin. We validated the proposed methods on the BreastPathQ SPIE challenge dataset, which consisted of 2395 image patches selected from whole slide images acquired from 64 patients. Compared to expert pathologist scoring, our best performing method yielded a Cohen's kappa coefficient of 0.69 (vs. 0.42 previously reported in the literature) and an intra-class correlation coefficient of 0.89 (vs. 0.83). Our results suggest that deep learning-based methods have significant potential to alleviate the burden on pathologists, enhance the diagnostic workflow, and thereby facilitate better clinical outcomes in breast cancer treatment.
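Cohen's kappa, the agreement metric reported above, corrects the raw rate of rater agreement for the agreement expected by chance given each rater's label frequencies. A minimal sketch for two raters' categorical scores (the function name and toy ratings are illustrative, not from the paper):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: (observed - expected) / (1 - expected), where
    'expected' is the chance agreement implied by each rater's marginals."""
    n = len(rater_a)
    observed = sum(1 for a, b in zip(rater_a, rater_b) if a == b) / n
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two raters agree on 7 of 8 toy binary scores.
a = [0, 1, 1, 0, 1, 0, 1, 1]
b = [0, 1, 0, 0, 1, 0, 1, 1]
kappa = cohens_kappa(a, b)
```

Because chance agreement is subtracted out, kappa is lower than raw agreement, which is why a kappa of 0.69 against expert scoring is a meaningfully strong result.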


2021 ◽  
Vol 12 ◽  
Author(s):  
Chuancheng Zhu ◽  
Yusong Hu ◽  
Hude Mao ◽  
Shumin Li ◽  
Fangfang Li ◽  
...  

The stomatal index of a leaf is the ratio of the number of stomata to the total number of stomata and epidermal cells. Compared with stomatal density, the stomatal index is relatively constant across environmental conditions and leaf age and is therefore a diagnostic characteristic for a given genotype or species. Traditional assessment methods involve manually counting the stomata and epidermal cells in microphotographs, which is labor-intensive and time-consuming. Although several automatic measurement algorithms for stomatal density have been proposed, no stomatal index pipelines are currently available. The main aim of this research is to develop an automated stomatal index measurement pipeline. The proposed method employed Faster Regions with Convolutional Neural Networks (Faster R-CNN), U-Net, and image-processing techniques to count stomata and epidermal cells, and subsequently calculate the stomatal index. To improve the labeling speed, a semi-automatic strategy was employed for epidermal cell annotation in each micrograph. When the pipeline was benchmarked on 1,000 microscopic images of leaf epidermis in a wheat (Triticum aestivum L.) dataset, average counting accuracies of 98.03% and 95.03% were achieved for stomata and epidermal cells, respectively, and the final measurement accuracy of the stomatal index was 95.35%. R2 values between automatic and manual measurement of stomata, epidermal cells, and stomatal index were 0.995, 0.983, and 0.895, respectively. The average running time (ART) for the entire pipeline could be as short as 0.32 s per microphotograph. Using transfer learning, the pipeline also achieved good transferability to other plant families, with mean counting accuracies of 94.36% and 91.13% for stomata and epidermal cells and a stomatal index accuracy of 89.38% across seven plant families.
The pipeline is an automatic, rapid, and accurate tool for stomatal index measurement, enabling high-throughput phenotyping and facilitating further understanding of stomatal and epidermal development for the plant physiology community. To the best of our knowledge, this is the first deep learning-based microphotograph analysis pipeline for stomatal index assessment.
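Once the two counts are available from the detection networks, the stomatal index itself is a simple ratio. A minimal sketch following the definition above (the counts shown are hypothetical):

```python
def stomatal_index(n_stomata, n_epidermal_cells):
    """Stomatal index: stomata as a percentage of all epidermal units
    (stomata + epidermal cells), per the standard definition."""
    return 100.0 * n_stomata / (n_stomata + n_epidermal_cells)

# Hypothetical counts from one micrograph: 42 stomata, 158 epidermal cells.
si = stomatal_index(42, 158)  # -> 21.0 (%)
```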


Cancers ◽  
2021 ◽  
Vol 13 (23) ◽  
pp. 6138
Author(s):  
Pritesh Mehta ◽  
Michela Antonelli ◽  
Saurabh Singh ◽  
Natalia Grondecka ◽  
Edward W. Johnston ◽  
...  

Multiparametric magnetic resonance imaging (mpMRI) of the prostate is used by radiologists to identify, score, and stage abnormalities that may correspond to clinically significant prostate cancer (CSPCa). Automatic assessment of prostate mpMRI using artificial intelligence algorithms may facilitate a reduction in missed cancers and unnecessary biopsies, an increase in inter-observer agreement between radiologists, and an improvement in reporting quality. In this work, we introduce AutoProstate, a deep learning-powered framework for automatic MRI-based prostate cancer assessment. AutoProstate comprises three modules: Zone-Segmenter, CSPCa-Segmenter, and Report-Generator. Zone-Segmenter segments the prostatic zones on T2-weighted imaging, CSPCa-Segmenter detects and segments CSPCa lesions using biparametric MRI, and Report-Generator generates an automatic web-based report containing four sections: Patient Details, Prostate Size and PSA Density, Clinically Significant Lesion Candidates, and Findings Summary. In our experiment, AutoProstate was trained using the publicly available PROSTATEx dataset and externally validated using the PICTURE dataset. Moreover, the performance of AutoProstate was compared to that of an experienced radiologist who prospectively read the PICTURE dataset cases. In comparison to the radiologist, AutoProstate showed statistically significant improvements in prostate volume and prostate-specific antigen density estimation. Furthermore, AutoProstate matched the CSPCa lesion detection sensitivity of the radiologist, which is paramount, but produced more false positive detections.
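The PSA density figure in such a report is, in general, serum PSA divided by the prostate volume estimated from segmentation. A minimal sketch with hypothetical patient values (the ~0.15 ng/mL/mL suspicion threshold is a commonly cited rule of thumb, not a value from this paper):

```python
def psa_density(psa_ng_ml, prostate_volume_ml):
    """PSA density: serum PSA (ng/mL) divided by prostate volume (mL)."""
    return psa_ng_ml / prostate_volume_ml

# Hypothetical patient: PSA 6.0 ng/mL, segmented prostate volume 40 mL.
density = psa_density(6.0, 40.0)
# 0.15 ng/mL/mL; values above roughly 0.15 are often considered suspicious.
```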


Diagnostics ◽  
2020 ◽  
Vol 10 (10) ◽  
pp. 803
Author(s):  
Luu-Ngoc Do ◽  
Byung Hyun Baek ◽  
Seul Kee Kim ◽  
Hyung-Jeong Yang ◽  
Ilwoo Park ◽  
...  

The early detection and rapid quantification of acute ischemic lesions play pivotal roles in stroke management. We developed a deep learning algorithm for the automatic binary classification of the Alberta Stroke Program Early Computed Tomographic Score (ASPECTS) using diffusion-weighted imaging (DWI) in acute stroke patients. Three hundred and ninety DWI datasets with acute anterior circulation stroke were included. A classifier algorithm utilizing a recurrent residual convolutional neural network (RRCNN) was developed for classification between low (1–6) and high (7–10) DWI-ASPECTS groups. The model performance was compared with a pre-trained VGG16, Inception V3, and a 3D convolutional neural network (3DCNN). The proposed RRCNN model demonstrated higher performance than the pre-trained models and 3DCNN with an accuracy of 87.3%, AUC of 0.941, and F1-score of 0.888 for classification between the low and high DWI-ASPECTS groups. These results suggest that the deep learning algorithm developed in this study can provide a rapid assessment of DWI-ASPECTS and may serve as an ancillary tool that can assist physicians in making urgent clinical decisions.


Proceedings ◽  
2019 ◽  
Vol 21 (1) ◽  
pp. 28
Author(s):  
Alejandro Puente-Castro ◽  
Cristian Robert Munteanu ◽  
Enrique Fernandez-Blanco

Automatic detection of Alzheimer's disease is a very active area of research, owing to its usefulness in starting the protocol to slow the otherwise inevitable progression of this neurodegenerative disease. This paper proposes a system for detecting the disease by means of deep learning techniques applied to magnetic resonance imaging (MRI). As a solution, an artificial neural network (ANN) model and two reference datasets for training are proposed. Finally, the performance of the system is verified within the application domain.


Author(s):  
Bruno Silva ◽  
Ines Pessanha ◽  
Jorge Correia-Pinto ◽  
Jaime C. Fonseca ◽  
Sandro Queiros

2021 ◽  
Author(s):  
Ammar Hoori ◽  
Tao Hu ◽  
Juhwan Lee ◽  
Sadeer Al-Kindi ◽  
Sanjay Rajagopalan ◽  
...  

Abstract. Epicardial adipose tissue (EAT) volume has been linked to coronary artery disease and the risk of major adverse cardiac events. As manual quantification of EAT is time-consuming, requires specialized training, and is prone to human error, we developed a method (DeepFat) for the automatic assessment of EAT on non-contrast, low-dose CT calcium score images using deep learning. We segmented the tissue enclosed by the pericardial sac on axial slices, using two innovations. First, we applied an HU-attention window with a window/level of 350/40 HU to draw attention to the sac and reduce numerical errors. Second, we applied a look-ahead slab-of-slices with bisection ("bisect"), in which we split the heart into halves and sequenced the lower half from bottom to middle and the upper half from top to middle, thereby presenting an always-increasing curvature of the sac to the network. EAT volume was obtained by thresholding voxels within the sac in the fat window (-190 to -30 HU). Compared to manual segmentation, our algorithm gave excellent results, with volume Dice = 88.52% ± 3.3, slice Dice = 87.70% ± 7.5, EAT error = 0.5% ± 8.1, and R = 98.52% (p < 0.001). The HU-attention window and bisect improved volume Dice scores by 0.49% and 3.2% absolute, respectively. Extensive augmentation further improved results. Variability between analysts was comparable to variability with DeepFat. Results compared favorably to those of previous publications.
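The final volume step described above, thresholding sac voxels in the fat HU window and converting the voxel count to a volume, can be sketched as follows (the function name, voxel size, and toy HU values are illustrative, not from the paper):

```python
def eat_volume_ml(sac_hu_values, voxel_volume_mm3, lo=-190, hi=-30):
    """Count voxels inside the pericardial sac whose Hounsfield value
    falls in the fat window [-190, -30] HU, then convert to millilitres
    (1 mL = 1000 mm^3)."""
    n_fat = sum(1 for hu in sac_hu_values if lo <= hu <= hi)
    return n_fat * voxel_volume_mm3 / 1000.0

# Toy sac voxels: only -100 and -50 HU fall inside the fat window.
voxels = [-250, -100, -50, 10, 40]
vol = eat_volume_ml(voxels, voxel_volume_mm3=2.0)  # 2 voxels * 2 mm^3
```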

