A deep learning-based method for grip strength prediction: Comparison of multilayer perceptron and polynomial regression approaches

PLoS ONE ◽  
2021 ◽  
Vol 16 (2) ◽  
pp. e0246870
Author(s):  
Jaejin Hwang ◽  
Jinwon Lee ◽  
Kyung-Sun Lee

The objective of this study was to accurately predict grip strength using a deep learning-based method (e.g., multi-layer perceptron [MLP] regression). The maximal grip strength under varying postures (upper arm, forearm, and lower body) of 164 young adults (100 males and 64 females) was collected. The data set was divided into a training set (90% of the data) and a test set (10% of the data). Different combinations of variables, including demographic and anthropometric information of individual participants and postures, were tested and compared to find the most predictive model. MLP regression and three polynomial regressions (linear, quadratic, and cubic) were conducted, and their performance was compared. The results showed that including all variables yielded better performance than other combinations of variables. In general, MLP regression showed higher performance than the polynomial regressions. In particular, MLP regression considering all variables achieved the highest performance for grip strength prediction (RMSE = 69.01 N, R = 0.88, ICC = 0.92). This deep learning-based regression (MLP) would be useful for predicting on-site and individual-specific grip strength in the workplace to reduce the risk of musculoskeletal disorders in the upper extremity.
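As an illustration of the comparison described in this abstract, below is a minimal sketch assuming scikit-learn and purely synthetic stand-in data; the six predictors, the generated grip-strength values, and the MLP hyperparameters are assumptions for demonstration, not the study's data or code.

```python
# Minimal sketch: MLP regression vs. linear/quadratic/cubic polynomial regression.
# All data below are synthetic placeholders for the study's predictors and grip strength.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(164, 6))   # hypothetical demographic/anthropometric/posture features
y = 300 + X @ rng.normal(size=6) * 40 + rng.normal(scale=30, size=164)  # grip strength in N (synthetic)

# 90% / 10% train-test split, as in the study
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.1, random_state=0)

models = {
    "linear": make_pipeline(PolynomialFeatures(1), LinearRegression()),
    "quadratic": make_pipeline(PolynomialFeatures(2), LinearRegression()),
    "cubic": make_pipeline(PolynomialFeatures(3), LinearRegression()),
    "mlp": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=5000, random_state=0)),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    rmse = mean_squared_error(y_te, pred) ** 0.5
    print(f"{name}: RMSE = {rmse:.1f} N, R2 = {r2_score(y_te, pred):.2f}")
```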

2019 ◽  
Vol 1 ◽  
pp. 1-1
Author(s):  
Tee-Ann Teo

Abstract. Deep learning is a machine learning technology that uses deep neural networks to learn a promising model from a large training data set. The Convolutional Neural Network (CNN) has been successfully applied to image segmentation and classification with highly accurate results. A CNN applies multiple kernels (also called filters) to extract image features via image convolution, and it can determine multiscale features through multiple layers of convolution and pooling. The variety of the training data plays an important role in determining a reliable CNN model. Benchmark training data for road mark extraction, such as the KITTI Vision Benchmark Suite, mainly focus on close-range imagery because a close-range image is easier to obtain than an airborne image. This study aims to transfer road mark training data from a mobile lidar system to aerial orthoimagery in Fully Convolutional Networks (FCN). Transferring the training data from a ground-based system to an airborne system may reduce the effort of producing a large training data set.

This study uses FCN technology and aerial orthoimagery to localize road marks in road regions. The road regions are first extracted from a 2-D large-scale vector map. The input aerial orthoimage has a 10 cm spatial resolution, and the non-road regions are masked out before road mark localization. The training data are road mark polygons, originally digitized from ground-based mobile lidar and prepared for road mark extraction with a mobile mapping system. This study reuses these training data and applies them to road mark extraction from aerial orthoimagery. The digitized training road marks are transformed to road polygons based on mapping coordinates. Because the detail of ground-based lidar is much better than that of the airborne system, the partially occluded parking lot in the aerial orthoimage can also be obtained from the ground-based system. The labels (also called annotations) for the FCN include road region, non-road region, and road mark. The size of a training batch is 500 pixels by 500 pixels (50 m by 50 m on the ground), and 75 batches are used for training. After the FCN training stage, an independent aerial orthoimage (Figure 1a) is used to predict the road marks. The FCN results provide initial regions for road marks (Figure 1b). Road marks usually show higher reflectance than road asphalt, so this study uses that characteristic to refine the road marks (Figure 1c) by a binary classification inside each initial road mark region.

Comparing the automatically extracted road marks (Figure 1c) with the manually digitized road marks (Figure 1d), most road marks can be extracted using the training set from the ground-based system. This study also selects an area of 600 m × 200 m for quantitative analysis. Of the 371 reference road marks, 332 were extracted by the proposed scheme, a completeness of 89%. The preliminary experiment demonstrated that most road marks can be successfully extracted by the proposed scheme. Therefore, training data from a ground-based mapping system can be utilized for airborne orthoimagery of similar spatial resolution.
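The reflectance-based refinement step described above could look roughly like the following sketch; the use of Otsu's threshold and the NumPy/scikit-image helpers are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch (assumed): refine initial FCN road-mark regions by keeping only
# the brighter pixels, exploiting the higher reflectance of paint over asphalt.
import numpy as np
from skimage.filters import threshold_otsu

def refine_road_marks(orthoimage_gray: np.ndarray, fcn_mask: np.ndarray) -> np.ndarray:
    """orthoimage_gray: grayscale orthoimage; fcn_mask: boolean initial road-mark regions."""
    refined = np.zeros_like(fcn_mask, dtype=bool)
    candidate = orthoimage_gray[fcn_mask]
    if candidate.size == 0:
        return refined
    t = threshold_otsu(candidate)        # binary split: bright road marks vs. darker asphalt
    refined[fcn_mask] = candidate > t
    return refined

# Toy usage with random data standing in for a 500 x 500 pixel training batch
img = np.random.rand(500, 500)
mask = np.zeros((500, 500), dtype=bool)
mask[100:150, 100:300] = True            # a hypothetical initial FCN region
marks = refine_road_marks(img, mask)
```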


2018 ◽  
pp. 1-8 ◽  
Author(s):  
Okyaz Eminaga ◽  
Nurettin Eminaga ◽  
Axel Semjonow ◽  
Bernhard Breil

Purpose: The recognition of cystoscopic findings remains challenging for young colleagues and depends on the examiner's skills. Computer-aided diagnosis tools using feature extraction and deep learning show promise as instruments for diagnostic classification. Materials and Methods: Our study considered 479 patient cases that represented 44 urologic findings. Image color was linearly normalized and equalized by applying contrast-limited adaptive histogram equalization. Because these findings can be viewed via cystoscopy from every possible angle and side, we generated images rotated in 10-degree increments and flipped them vertically or horizontally, resulting in 18,681 images. After image preprocessing, we developed deep convolutional neural network (CNN) models (ResNet50, VGG-19, VGG-16, InceptionV3, and Xception) and evaluated these models using F1 scores. Furthermore, we proposed two CNN concepts: 90%-previous-layer filter size and harmonic-series filter size. A training set (60%), a validation set (10%), and a test set (30%) were randomly generated from the study data set. All models were trained on the training set, validated on the validation set, and evaluated on the test set. Results: The Xception-based model achieved the highest F1 score (99.52%), followed by models based on ResNet50 (99.48%) and the harmonic-series concept (99.45%). All images with cancer lesions were correctly identified by these models. Focusing on the images misclassified by the best-performing model, 7.86% of images showing bladder stones with an indwelling catheter and 1.43% of images showing bladder diverticulum were falsely classified. Conclusion: The results of this study show the potential of deep learning for the diagnostic classification of cystoscopic images. Future work will focus on integrating artificial intelligence-aided cystoscopy into clinical routines and possibly expanding it to other clinical endoscopy applications.
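A minimal sketch of an Xception-based classifier for the 44 findings, assuming a TensorFlow/Keras transfer-learning setup; the frozen ImageNet base, the head layers, and the hyperparameters are illustrative assumptions, not the authors' exact models.

```python
# Minimal sketch (assumed): Xception backbone with a 44-class softmax head.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 44  # urologic findings reported in the study

base = tf.keras.applications.Xception(include_top=False, weights="imagenet",
                                      input_shape=(299, 299, 3), pooling="avg")
base.trainable = False                     # start from frozen ImageNet features

model = models.Sequential([
    base,
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```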


2018 ◽  
Vol 7 (4.11) ◽  
pp. 198 ◽  
Author(s):  
Mohamad Hazim Johari ◽  
Hasliza Abu Hassan ◽  
Ahmad Ihsan Mohd Yassin ◽  
Nooritawati Md Tahir ◽  
Azlee Zabidi ◽  
...  

This project presents a method to detect diabetic retinopathy in fundus images using a deep learning neural network. The AlexNet Convolutional Neural Network (CNN) was used to ease the training process. The data set was retrieved from the MESSIDOR database and contains 1,200 fundus images. The images were filtered based on the project's needs. After filtering, 580 .tif images were used, divided into two classes: exudate images and normal images. For training and testing, the 580 mixed exudate and normal fundus images were divided into a training set and a testing set. The results of training and testing were summarized in a confusion matrix. The results show that the accuracy of the CNN on the training and testing sets was 99.3% and 88.3%, respectively.
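A minimal sketch of adapting a pre-trained AlexNet to the two classes used here (exudate vs. normal fundus images), assuming PyTorch/torchvision; the optimizer, batch, and image sizes are illustrative assumptions, not the project's code.

```python
# Minimal sketch (assumed): fine-tune AlexNet for a two-class fundus image problem.
import torch
import torch.nn as nn
from torchvision import models

model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
model.classifier[6] = nn.Linear(4096, 2)   # replace the 1000-class head with exudate/normal

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

# One illustrative training step on a dummy batch of 224x224 RGB images
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```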


2021 ◽  
Vol 9 ◽  
Author(s):  
Noa Rotman-Nativ ◽  
Natan T. Shaked

We present an analysis method that can automatically classify live cancer cells from cell lines based on a small data set of quantitative phase imaging data without cell staining. The method includes spatial image analysis to extract the cell phase spatial fluctuation map, derived from the quantitative phase map of the cell measured without cell labeling, and thus without prior knowledge of a biomarker. The spatial fluctuations are indicative of cell stiffness, which changes as cancer progresses. In this paper, the quantitative phase spatial fluctuations are used as the basis for a deep-learning classifier that evaluates the metastatic potential of cells. Performing the spatial fluctuation analysis on the quantitative phase profiles before inputting them to the neural network was shown to improve the classification results compared to inputting the quantitative phase profiles directly, as done previously. We classified primary versus metastatic cancer cells and obtained 92.5% accuracy despite using a small training set, demonstrating the method's potential for objective, automatic clinical diagnosis of cancer cells in vitro.
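One plausible way to compute a spatial phase-fluctuation map is as the local standard deviation of the quantitative phase image; the sketch below is an assumption for illustration (the window size, the SciPy helpers, and the random placeholder image are not from the paper).

```python
# Minimal sketch (assumed): spatial fluctuation map as the local standard deviation
# of the quantitative phase map, used as input to the deep-learning classifier.
import numpy as np
from scipy.ndimage import uniform_filter

def phase_fluctuation_map(phase: np.ndarray, window: int = 7) -> np.ndarray:
    """Local standard deviation of the phase map within a sliding window."""
    mean = uniform_filter(phase, size=window)
    mean_sq = uniform_filter(phase ** 2, size=window)
    return np.sqrt(np.clip(mean_sq - mean ** 2, 0.0, None))

phase_map = np.random.rand(256, 256)        # placeholder quantitative phase image
fluct = phase_fluctuation_map(phase_map)    # feature map fed to the classifier
```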


2019 ◽  
Vol 2019 (1) ◽  
pp. 360-368
Author(s):  
Mekides Assefa Abebe ◽  
Jon Yngve Hardeberg

Different whiteboard image degradations greatly reduce the legibility of pen-stroke content as well as the overall quality of the images. Consequently, various researchers have addressed the problem with different image enhancement techniques. Most state-of-the-art approaches apply common image processing techniques such as background-foreground segmentation, text extraction, contrast and color enhancement, and white balancing. However, such conventional enhancement methods are incapable of recovering severely degraded pen-stroke content and produce artifacts in the presence of complex pen-stroke illustrations. To overcome these problems, the authors propose a deep learning-based solution. They contribute a new whiteboard image data set and adapt two deep convolutional neural network architectures for whiteboard image quality enhancement. Their evaluations of the trained models demonstrate superior performance over the conventional methods.


2020 ◽  
Vol 17 (3) ◽  
pp. 299-305 ◽  
Author(s):  
Riaz Ahmad ◽  
Saeeda Naz ◽  
Muhammad Afzal ◽  
Sheikh Rashid ◽  
Marcus Liwicki ◽  
...  

This paper presents a deep learning benchmark on a complex dataset known as KFUPM Handwritten Arabic TexT (KHATT). The KHATT data set consists of complex patterns of handwritten Arabic text lines. This paper contributes in three main aspects: (1) pre-processing, (2) a deep learning-based approach, and (3) data augmentation. The pre-processing step includes pruning extra white space and de-skewing the skewed text lines. We deploy a deep learning approach based on Multi-Dimensional Long Short-Term Memory (MDLSTM) networks and Connectionist Temporal Classification (CTC). The MDLSTM has the advantage of scanning the Arabic text lines in all directions (horizontal and vertical) to cover dots, diacritics, strokes, and fine inflections. Data augmentation combined with the deep learning approach achieves a promising improvement in results, raising the Character Recognition (CR) rate to 80.02% from the 75.08% baseline.
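A minimal sketch of CTC-based text-line recognition in PyTorch; since PyTorch has no built-in multi-dimensional LSTM, a 1-D bidirectional LSTM over column features stands in for the MDLSTM here, and the character-set size, feature height, and sequence lengths are assumptions, not the paper's configuration.

```python
# Minimal sketch (assumed): BLSTM + CTC recognizer as a simplified stand-in for MDLSTM + CTC.
import torch
import torch.nn as nn

NUM_CHARS = 100   # assumed Arabic character-set size, index 0 reserved for the CTC blank
FEATURES = 48     # assumed height of the normalized text-line image

class CTCRecognizer(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(FEATURES, 128, num_layers=2,
                            bidirectional=True, batch_first=True)
        self.fc = nn.Linear(256, NUM_CHARS)

    def forward(self, x):                       # x: (batch, width, FEATURES)
        out, _ = self.lstm(x)
        return self.fc(out).log_softmax(-1)     # per-time-step character log-probabilities

model = CTCRecognizer()
ctc = nn.CTCLoss(blank=0)

x = torch.randn(4, 200, FEATURES)                        # 4 text lines, 200 time steps
log_probs = model(x).permute(1, 0, 2)                    # CTCLoss expects (T, N, C)
targets = torch.randint(1, NUM_CHARS, (4, 30))           # dummy character labels
input_lengths = torch.full((4,), 200, dtype=torch.long)
target_lengths = torch.full((4,), 30, dtype=torch.long)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
```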


Author(s):  
Kyungkoo Jun

Background & Objective: This paper proposes a Fourier transform-inspired method to classify human activities from time series sensor data. Methods: Our method begins by decomposing the 1D input signal into 2D patterns, motivated by the Fourier transform. The decomposition is aided by a Long Short-Term Memory (LSTM) network, which captures the temporal dependency of the signal and produces encoded sequences. The sequences, once arranged into a 2D array, represent fingerprints of the signals. The benefit of this transformation is that we can exploit recent advances in deep learning models for image classification, such as the Convolutional Neural Network (CNN). Results: The proposed model is therefore a combination of LSTM and CNN. We evaluate the model over two data sets. For the first data set, which is more standardized than the other, our model outperforms or at least equals previous work. For the second data set, we devise schemes to generate training and testing data by varying the window size, the sliding size, and the labeling scheme. Conclusion: The evaluation results show that the accuracy is over 95% in some cases. We also analyze the effect of the parameters on performance.
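A minimal sketch of the LSTM-then-CNN idea described above, assuming PyTorch; the layer sizes, window length, and number of activity classes are illustrative assumptions, not the author's model.

```python
# Minimal sketch (assumed): an LSTM encodes the 1-D sensor window into a sequence of
# vectors, which is treated as a 2-D "fingerprint" image and classified by a small CNN.
import torch
import torch.nn as nn

NUM_CLASSES = 6                 # assumed number of activity classes
SEQ_LEN, ENC_DIM = 64, 64       # window length and LSTM encoding size (assumed)

class LSTMCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=ENC_DIM, batch_first=True)
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
        )
        self.fc = nn.Linear(32 * 4 * 4, NUM_CLASSES)

    def forward(self, x):                       # x: (batch, SEQ_LEN) raw 1-D signal
        enc, _ = self.lstm(x.unsqueeze(-1))     # (batch, SEQ_LEN, ENC_DIM) encoded sequence
        img = enc.unsqueeze(1)                  # stack into a 1-channel 2-D fingerprint
        return self.fc(self.cnn(img).flatten(1))

model = LSTMCNN()
logits = model(torch.randn(8, SEQ_LEN))         # 8 windows of sensor data
```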


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Yahya Albalawi ◽  
Jim Buckley ◽  
Nikola S. Nikolov

Abstract. This paper presents a comprehensive evaluation of data pre-processing and word embedding techniques in the context of Arabic document classification in the domain of health-related communication on social media. We evaluate 26 text pre-processing techniques applied to Arabic tweets within the process of training a classifier to identify health-related tweets. For this task we use the (traditional) machine learning classifiers KNN, SVM, Multinomial NB, and Logistic Regression. Furthermore, we report experimental results with the deep learning architectures BLSTM and CNN for the same text classification problem. Since word embeddings are more typically used as the input layer in deep networks, in the deep learning experiments we evaluate several state-of-the-art pre-trained word embeddings with the same text pre-processing applied. To achieve these goals, we use two data sets: one for both training and testing, and another for testing the generality of our models only. Our results point to the conclusion that only four of the 26 pre-processing techniques improve classification accuracy significantly. For the first data set of Arabic tweets, we found that Mazajak CBOW pre-trained word embeddings as the input to a BLSTM deep network led to the most accurate classifier, with an F1 score of 89.7%. For the second data set, Mazajak Skip-Gram pre-trained word embeddings as the input to a BLSTM led to the most accurate model, with an F1 score of 75.2% and accuracy of 90.7%, compared to an F1 score of 90.8% achieved by Mazajak CBOW for the same architecture but with a lower accuracy of 70.89%. Our results also show that the performance of the best traditional classifier we trained is comparable to the deep learning methods on the first data set, but significantly worse on the second data set.
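A minimal sketch of the deep-learning branch, assuming PyTorch, with a random placeholder matrix standing in for the pre-trained Mazajak word vectors; vocabulary size, sequence length, and layer sizes are assumptions, not the authors' configuration.

```python
# Minimal sketch (assumed): frozen pre-trained embeddings feeding a BLSTM that
# classifies whether a tweet is health-related.
import torch
import torch.nn as nn

VOCAB_SIZE, EMB_DIM, MAX_LEN = 50_000, 300, 60   # illustrative sizes
pretrained = torch.randn(VOCAB_SIZE, EMB_DIM)    # placeholder for Mazajak CBOW/Skip-Gram vectors

class TweetBLSTM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding.from_pretrained(pretrained, freeze=True)
        self.blstm = nn.LSTM(EMB_DIM, 128, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(256, 1)

    def forward(self, token_ids):                     # token_ids: (batch, MAX_LEN)
        emb = self.embed(token_ids)
        _, (h, _) = self.blstm(emb)                   # final hidden states of both directions
        features = torch.cat([h[0], h[1]], dim=1)
        return torch.sigmoid(self.fc(features))       # probability of "health-related"

model = TweetBLSTM()
probs = model(torch.randint(0, VOCAB_SIZE, (8, MAX_LEN)))   # 8 example tweets
```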


2021 ◽  
Vol 99 (Supplement_1) ◽  
pp. 218-219
Author(s):  
Andres Fernando T Russi ◽  
Mike D Tokach ◽  
Jason C Woodworth ◽  
Joel M DeRouchey ◽  
Robert D Goodband ◽  
...  

Abstract. The swine industry has been constantly evolving to select animals with improved performance traits and to minimize variation in body weight (BW) in order to meet packer specifications. Therefore, understanding variation presents an opportunity for producers to find strategies that could help reduce, manage, or deal with variation in pigs in a barn. A systematic review and meta-analysis was conducted by collecting data from multiple studies and available data sets in order to develop prediction equations for the coefficient of variation (CV) and standard deviation (SD) as a function of BW. Information regarding BW variation from 16 papers was recorded, providing approximately 204 data points. Together, these data included 117,268 individually weighed pigs, with sample sizes ranging from 104 to 4,108 pigs. A random-effects model with study as a random effect was developed. Observations were weighted using sample size as an estimate of precision in the analysis, so that larger data sets accounted for greater accuracy in the model. Regression equations were developed using the nlme package of R to determine the relationship between BW and its variation. Polynomial regression analysis was conducted separately for each variation measurement. When CV was reported in the data set, SD was calculated, and vice versa. The resulting prediction equations were: CV (%) = 20.04 − 0.135 × BW + 0.00043 × BW², R² = 0.79; SD = 0.41 + 0.150 × BW − 0.00041 × BW², R² = 0.95. These equations suggest that there is evidence for a decreasing quadratic relationship between the mean CV of a population and the BW of pigs, whereby the rate of decrease is smaller as mean pig BW increases from birth to market. Conversely, the rate of increase of the SD of a population of pigs is smaller as mean pig BW increases from birth to market.
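The reported prediction equations can be applied directly; the short sketch below evaluates them at a few mean body weights (BW is in the units used by the authors, assumed to be kg here).

```python
# Reported meta-analysis equations for pig BW variation as a function of mean BW.
def predicted_cv(bw: float) -> float:
    """Coefficient of variation (%) of BW: CV = 20.04 - 0.135*BW + 0.00043*BW^2."""
    return 20.04 - 0.135 * bw + 0.00043 * bw ** 2

def predicted_sd(bw: float) -> float:
    """Standard deviation of BW: SD = 0.41 + 0.150*BW - 0.00041*BW^2."""
    return 0.41 + 0.150 * bw - 0.00041 * bw ** 2

for bw in (10, 30, 60, 90, 120):
    print(f"BW = {bw:>3}: CV = {predicted_cv(bw):5.2f}%   SD = {predicted_sd(bw):5.2f}")
```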


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Xiaoguo Zhang ◽  
Dawei Wang ◽  
Jiang Shao ◽  
Song Tian ◽  
Weixiong Tan ◽  
...  

Abstract. Since its first outbreak, Coronavirus Disease 2019 (COVID-19) has been rapidly spreading worldwide and has caused a global pandemic. Rapid and early detection is essential to contain COVID-19. Here, we first developed a deep learning (DL) integrated radiomics model for end-to-end identification of COVID-19 using CT scans and then validated its clinical feasibility. We retrospectively collected CT images of 386 patients (129 with COVID-19 and 257 with other community-acquired pneumonia) from three medical centers to train and externally validate the developed models. A pre-trained DL algorithm was utilized to automatically segment infected lesions (ROIs) on the CT images, which were then used for feature extraction. Five feature selection methods and four machine learning algorithms were utilized to develop the radiomics models. Trained with features selected by L1-regularized logistic regression, the multi-layer perceptron (MLP) classifier demonstrated the optimal performance, with AUCs of 0.922 (95% CI 0.856–0.988) and 0.959 (95% CI 0.910–1.000), the same sensitivity of 0.879, and specificities of 0.900 and 0.887 on the internal and external testing datasets, which was equivalent to the senior radiologist in a reader study. Additionally, the diagnostic time of DL-MLP was shorter than that of the radiologists (38 s vs 5.15 min). With adequate performance for identifying COVID-19, DL-MLP may help in the screening of suspected cases.
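A minimal sketch of the winning pipeline shape (L1-regularized logistic regression for feature selection followed by an MLP classifier), assuming scikit-learn and a synthetic radiomics feature matrix; the feature count, hyperparameters, and labels are placeholders, not the study's data or code.

```python
# Minimal sketch (assumed): L1-based feature selection + MLP classifier on radiomics features.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(386, 100))      # 386 patients x 100 radiomics features (synthetic placeholder)
y = rng.integers(0, 2, size=386)     # COVID-19 vs. other community-acquired pneumonia (placeholder)

pipeline = make_pipeline(
    StandardScaler(),
    SelectFromModel(LogisticRegression(penalty="l1", solver="liblinear", C=1.0)),
    MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000, random_state=0),
)
print("cross-validated AUC:", cross_val_score(pipeline, X, y, cv=5, scoring="roc_auc").mean())
```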

