Machine learning-based approach for segmentation of intervertebral disc degeneration from lumbar section of spine using MRI images

2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Jayashri V. Shinde ◽  
Yashwant V. Joshi ◽  
Ramchandra R. Manthalkar ◽  
Joshi

Abstract Objectives Intervertebral disc segmentation is one approach to diagnosing spinal disease through disc degeneration in asymptomatic and symptomatic patients. Although numerous intervertebral disc segmentation techniques are available, classifying the degeneration grade of a disc remains a difficult challenge for existing segmentation methods. Therefore, a Whale Spine-Generative Adversarial Network (WSpine-GAN) method is proposed to segment the intervertebral disc for effective grade classification. Methods The proposed WSpine-GAN method performs the disc segmentation, with the weights of Spine-GAN optimally tuned using the Whale Optimization Algorithm (WOA). Refined disc features, namely pixel-based features and connectivity features, are then extracted. Finally, a K-Nearest Neighbor (KNN) classifier based on the Pfirrmann grading system performs the grade classification. Results The grade classification strategy based on the proposed WSpine-GAN and KNN was implemented on a real-time database, and the evaluation yielded accuracy, true positive rate (TPR), and false positive rate (FPR) values of 97.778%, 97.83%, and 0.586% for the training-percentage evaluation and 92.382%, 90.580%, and 1.972% for the K-fold evaluation. Conclusions The proposed WSpine-GAN method performs the disc segmentation effectively by integrating the Spine-GAN method and WOA. The spine MRI images are segmented using the proposed WSpine-GAN method with optimally tuned weights to enhance the performance of the disc segmentation.
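As a rough illustration of the final grading step described in this abstract, the sketch below fits a KNN classifier to features extracted from segmented discs. The feature matrix, grade labels, and dimensions are synthetic placeholders, not data from the paper; only the use of KNN with Pfirrmann-style grade labels follows the abstract.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# Placeholder feature matrix: one row per segmented disc, columns standing in for
# the pixel-based and connectivity features extracted from the WSpine-GAN masks.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))          # 200 discs, 12 illustrative features
y = rng.integers(1, 6, size=200)        # Pfirrmann grades I-V encoded as 1-5

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# KNN assigns each disc the majority grade among its k nearest neighbours.
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)

print("grade-classification accuracy:", accuracy_score(y_test, knn.predict(X_test)))
```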

The Internet of Things (IoT) is a new paradigm in network technology. It has vast applications in almost every field, such as retail, industry, and healthcare. It also faces challenges such as security and privacy, robustness, weak links, and limited power. A major challenge among these is security. Because of their weak connectivity links, IoT networks are vulnerable to many attacks at the network layer. RPL is a routing protocol that establishes paths specifically for constrained nodes in IoT-based networks. RPL-based networks are exposed to many attacks, such as the black hole attack, wormhole attack, sinkhole attack, and rank attack. This paper proposes a detection technique for the rank attack based on a machine learning approach called MLTKNN, built on the K-nearest neighbor algorithm. The proposed technique was simulated in the Cooja simulator with 30 motes, and the true positive rate and false positive rate of the proposed detection mechanism were calculated. The results show that the proposed technique performs efficiently in terms of delay, packet delivery rate, and detection of the rank attack.
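The following is a minimal sketch of a KNN-based rank-attack detector in the spirit of MLTKNN. The per-mote features (e.g., advertised rank, rank change rate, parent switches) and labels are assumptions for illustration and are not taken from the paper or from Cooja output.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix

# Assumed per-mote features collected from the RPL network (e.g. advertised rank,
# rank change rate, parent switches, packet counts); label 1 marks a rank attacker.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))
y = (rng.random(300) < 0.2).astype(int)   # 1 = rank attacker, 0 = benign mote

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
clf = KNeighborsClassifier(n_neighbors=3).fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te), labels=[0, 1]).ravel()
print("TPR:", tp / (tp + fn), "FPR:", fp / (fp + tn))
```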


2021 ◽  
Author(s):  
Jianlin Han ◽  
Dan Wang ◽  
Zairan Li ◽  
Nilanjan Dey ◽  
Fuqian Shi

Abstract As the number of layers in deep learning (DL) models increases and the performance of computing nodes improves, the output accuracy of deep neural networks (DNNs) faces a bottleneck. The resident network (RN)-based DNN model has recently been applied to address this issue. This paper improves the RN and develops a rectified linear unit (ReLU)-based conditional generative adversarial network (cGAN) to classify plantar pressure images. A foot scan system collected the plantar pressure images, from which normal (N), planus (PL), and talipes equinovarus (TE) foot datasets were acquired. Nine foot types, named N, PL, TE, N-PL, N-TE, PL-N, PL-TE, TE-N, and TE-PL, were classified using the proposed DNN model, named resident network-based conditional generative adversarial nets (RNcGAN), which first improves the RN structure and then the cGAN system. In the classification of plantar pressure images, the pixel-level state matrix can be used directly as input, which differs from previous image classification tasks that rely on image reduction and feature extraction; the cGAN can output the pixels of the image directly without any simplification. Finally, the model achieved better results on the evaluation indicators of accuracy (AC), sensitivity (SE), and F1-measure (F1) compared with artificial neural networks (ANN), k-nearest neighbor (kNN), Fast Region-based Convolutional Neural Network (Fast R-CNN), visual geometry group (VGG16), scaled-conjugate-gradient convolutional neural networks (SCG-CNN), GoogleNet, AlexNet, ResNet-50-177, and Inception-v3. The final class prediction accuracy is 95.17%. Foot type classification is vital for producing comfortable shoes in industry.
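To illustrate the conditioning idea behind a cGAN with ReLU activations (not the paper's RNcGAN architecture, which also incorporates an improved resident-network structure), here is a minimal PyTorch skeleton; the nine class labels follow the abstract, while image size, layer widths, and noise dimension are placeholders.

```python
import torch
import torch.nn as nn

NUM_CLASSES, NOISE_DIM, IMG_PIXELS = 9, 100, 64 * 64  # 9 foot types, flattened image

class Generator(nn.Module):
    """Maps (noise, foot-type label) to a flattened plantar-pressure image."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_CLASSES, NUM_CLASSES)
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + NUM_CLASSES, 256), nn.ReLU(),
            nn.Linear(256, IMG_PIXELS), nn.Tanh())

    def forward(self, z, labels):
        return self.net(torch.cat([z, self.embed(labels)], dim=1))

class Discriminator(nn.Module):
    """Scores (image, foot-type label) pairs as real or generated."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_CLASSES, NUM_CLASSES)
        self.net = nn.Sequential(
            nn.Linear(IMG_PIXELS + NUM_CLASSES, 256), nn.ReLU(),
            nn.Linear(256, 1), nn.Sigmoid())

    def forward(self, img, labels):
        return self.net(torch.cat([img, self.embed(labels)], dim=1))

z = torch.randn(8, NOISE_DIM)
labels = torch.randint(0, NUM_CLASSES, (8,))
fake = Generator()(z, labels)
score = Discriminator()(fake, labels)
print(fake.shape, score.shape)   # torch.Size([8, 4096]) torch.Size([8, 1])
```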


2020 ◽  
Vol 5 (7) ◽  
pp. 61 ◽  
Author(s):  
Nicholas Fiorentini ◽  
Massimo Losa

Crash severity is undoubtedly a fundamental aspect of a crash event. Although machine learning algorithms for predicting crash severity have recently gained interest from the academic community, there is a significant tendency to neglect the fact that crash datasets are acutely imbalanced. Overlooking this fact generally leads to weak classifiers for the minority class (crashes of higher severity). In this paper, in order to handle imbalanced accident datasets and provide better predictions for the minority class, the random undersampling of the majority class (RUMC) technique is used. Using both an imbalanced and a RUMC-based balanced training set, we propose the calibration, validation, and evaluation of four different crash severity predictive models: random tree, k-nearest neighbor, logistic regression, and random forest. Accuracy, true positive rate (recall), false positive rate, true negative rate, precision, F1-score, and the confusion matrix were calculated to assess performance. The outcomes show that RUMC-based models are more reliable classifiers for detecting fatal and injury crashes. Indeed, in the imbalanced models, the true positive rate for predicting fatal and injury crashes spans from 0% (logistic regression) to 18.3% (k-nearest neighbor), while for the RUMC-based models it spans from 52.5% (RUMC-based logistic regression) to 57.2% (RUMC-based k-nearest neighbor). Organizations and decision-makers could make use of RUMC and machine learning algorithms to predict the severity of a crash occurrence, manage the present, and plan the future of their work.
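A minimal sketch of random undersampling of the majority class and its effect on minority-class detection is shown below. The data are synthetic (a stand-in for an imbalanced crash dataset), only a random forest is shown, and the class proportions and seeds are arbitrary assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix

# Synthetic, heavily imbalanced stand-in for a crash dataset
# (class 1 = fatal/injury crash, the minority class).
X, y = make_classification(n_samples=5000, weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

def rumc(X, y, rng=np.random.default_rng(0)):
    """Randomly undersample the majority class down to the minority-class count."""
    minority, majority = np.flatnonzero(y == 1), np.flatnonzero(y == 0)
    keep = rng.choice(majority, size=minority.size, replace=False)
    idx = np.concatenate([minority, keep])
    return X[idx], y[idx]

for name, (Xb, yb) in {"imbalanced": (X_tr, y_tr), "RUMC": rumc(X_tr, y_tr)}.items():
    clf = RandomForestClassifier(random_state=0).fit(Xb, yb)
    tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te), labels=[0, 1]).ravel()
    print(f"{name:10s} TPR={tp / (tp + fn):.3f}  FPR={fp / (fp + tn):.3f}")
```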


2018 ◽  
Author(s):  
Darian H. Hadjiabadi

Abstract Dendritic size and branching patterns are important features of neural form and function. However, current computational models of neuronal networks use simplistic cylindrical geometries to mimic dendritic arborizations, in part because current methods for generating dendritic trees impose rigid a priori constraints. To address this, a deep convolutional generative adversarial network (DCGAN) was trained on images of rodent hippocampal granule and pyramidal dendritic trees, and the image features learned by the network were used to generate realistic dendritic morphologies. Results show that DCGANs achieved greater stability∗ and high generalization, as quantified by kernel maximum mean discrepancy, when exposed to instance noise and/or label smoothing during training. Trained models successfully generated realistic morphologies for both neuron types, with a high false positive rate reported by expert reviewers. Collectively, DCGANs offer a unique opportunity to advance the geometry of neural modeling and, therefore, to propel our understanding of neuronal function. ∗ A "stable/stabilized DCGAN", as used throughout this work, is a DCGAN that remained stable throughout training.
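The abstract uses kernel maximum mean discrepancy (MMD) to quantify generalization of generated morphologies. Below is a minimal sketch of a squared-MMD estimate with a Gaussian (RBF) kernel; the feature vectors, bandwidth, and sample sizes are placeholders, not the paper's data, and this is the biased (V-statistic) estimator rather than any specific implementation used in the work.

```python
import numpy as np

def mmd2_rbf(X, Y, sigma=1.0):
    """Biased (V-statistic) estimate of squared MMD between two samples using a
    Gaussian kernel -- here applied to feature vectors standing in for real and
    DCGAN-generated dendritic morphology images."""
    def k(A, B):
        d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-d2 / (2 * sigma**2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
real = rng.normal(size=(100, 64))                 # stand-in for real morphology features
generated = rng.normal(0.1, 1.0, size=(100, 64))  # stand-in for DCGAN samples
print("squared MMD:", mmd2_rbf(real, generated))
```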


Electronics ◽  
2020 ◽  
Vol 9 (11) ◽  
pp. 1894
Author(s):  
Chun Guo ◽  
Zihua Song ◽  
Yuan Ping ◽  
Guowei Shen ◽  
Yuhei Cui ◽  
...  

Remote Access Trojans (RATs) are among the most serious security threats organizations face today. At present, the two major RAT detection approaches are host-based and network-based detection. To complement one another's strengths, this article proposes a phased RAT detection method combining double-side features (PRATD). In PRATD, both host-side and network-side features are combined to build the detection models, which helps distinguish RATs from benign programs because RATs not only generate traffic on the network but also leave traces on the host at run time. In addition, PRATD trains two different detection models for the two runtime states of RATs to improve the True Positive Rate (TPR). Experiments on network and host records collected from five kinds of benign programs and 20 well-known RATs show that PRATD can effectively detect RATs: it achieves a TPR as high as 93.609% with a False Positive Rate (FPR) as low as 0.407% for known RATs, and a TPR of 81.928% with an FPR of 0.185% for unknown RATs, which suggests it is a competitive candidate for RAT detection.
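The central idea of combining host-side and network-side ("double-side") features can be sketched as concatenating the two feature vectors for each program sample before training a single detector. The sketch below uses synthetic placeholder features and a random forest; it does not reproduce PRATD's phased, two-model design.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

# Assumed feature vectors: host-side traces (e.g. process/file activity counts)
# and network-side traffic statistics for the same program sample, concatenated
# into one "double-side" feature vector. All values are synthetic placeholders.
rng = np.random.default_rng(0)
host_feats = rng.normal(size=(400, 6))
net_feats = rng.normal(size=(400, 8))
X = np.hstack([host_feats, net_feats])
y = (rng.random(400) < 0.3).astype(int)   # 1 = RAT, 0 = benign program

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te), labels=[0, 1]).ravel()
print(f"TPR={tp / (tp + fn):.3f}  FPR={fp / (fp + tn):.3f}")
```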


2021 ◽  
pp. 103985622110286
Author(s):  
Tracey Wade ◽  
Jamie-Lee Pennesi ◽  
Yuan Zhou

Objective: Currently, eligibility for expanded Medicare items for eating disorders (excluding anorexia nervosa) requires a score ⩾ 3 on the 22-item Eating Disorder Examination-Questionnaire (EDE-Q). We compared these EDE-Q "cases" with continuous scores on a validated 7-item version of the EDE-Q (EDE-Q7) to identify an EDE-Q7 cut-off commensurate with 3 on the EDE-Q. Methods: We utilised the EDE-Q scores of female university students (N = 337) at risk of developing an eating disorder. We used a receiver operating characteristic (ROC) curve to assess the relationship between the true-positive rate (sensitivity) and the false-positive rate (1 − specificity) for identifying cases scoring ⩾ 3. Results: The area under the curve showed outstanding discrimination of 0.94 (95% CI: .92–.97). We examined two specific cut-off points on the EDE-Q7, which captured 100% and 87% of true cases, respectively. Conclusion: Given that the EDE-Q cut-off for Medicare is used in conjunction with other criteria, we suggest using the more permissive EDE-Q7 cut-off (⩾2.5) in place of the EDE-Q cut-off (⩾3) in eligibility assessments.
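A generic sketch of the ROC mechanics described here is shown below: continuous EDE-Q7 scores are compared against case status defined by the full EDE-Q cut-off, and a candidate cut-off is read off the curve. The data are synthetic placeholders, and Youden's J is used only as one common cut-off rule, not the paper's selection criterion.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Placeholder data: edeq7 holds continuous EDE-Q7 scores; case marks whether the
# full 22-item EDE-Q global score is >= 3 (the current Medicare criterion).
rng = np.random.default_rng(0)
case = (rng.random(337) < 0.3).astype(int)
edeq7 = np.clip(rng.normal(1.5 + 1.8 * case, 0.8), 0, 6)   # synthetic scores

fpr, tpr, thresholds = roc_curve(case, edeq7)
print("AUC:", roc_auc_score(case, edeq7))

# One common way to pick a cut-off: maximise Youden's J = sensitivity - (1 - specificity).
j = tpr - fpr
print("candidate EDE-Q7 cut-off:", thresholds[np.argmax(j)])
```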


2016 ◽  
Vol 24 (2) ◽  
pp. 263-272 ◽  
Author(s):  
Kosuke Imai ◽  
Kabir Khanna

In both political behavior research and voting rights litigation, turnout and vote choice for different racial groups are often inferred using aggregate election results and racial composition. Over the past several decades, many statistical methods have been proposed to address this ecological inference problem. We propose an alternative method to reduce aggregation bias by predicting individual-level ethnicity from voter registration records. Building on the existing methodological literature, we use Bayes's rule to combine the Census Bureau's Surname List with various information from geocoded voter registration records. We evaluate the performance of the proposed methodology using approximately nine million voter registration records from Florida, where self-reported ethnicity is available. We find that it is possible to reduce the false positive rate among Black and Latino voters to 6% and 3%, respectively, while maintaining the true positive rate above 80%. Moreover, we use our predictions to estimate turnout by race and find that our estimates yield substantially less bias and lower root mean squared error than standard ecological inference estimates. We provide open-source software to implement the proposed methodology.
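The Bayes's rule combination at the heart of this approach can be illustrated with a toy calculation: a surname-based prior over racial groups is multiplied by a geography-based likelihood and renormalised. All numbers below are made-up placeholders, not Census Surname List or Florida values.

```python
import numpy as np

# Toy illustration of combining a surname prior with geographic information via
# Bayes's rule. All probabilities are illustrative placeholders.
races = ["white", "black", "hispanic", "asian", "other"]

# P(race | surname) from a surname list, e.g. for a hypothetical surname "Garcia".
p_race_given_surname = np.array([0.05, 0.01, 0.90, 0.02, 0.02])

# P(geolocation | race): share of each racial group's statewide population that
# lives in the voter's census block (illustrative values).
p_geo_given_race = np.array([0.10, 0.05, 0.60, 0.15, 0.10])

# Bayes's rule: posterior proportional to surname prior times geographic likelihood.
posterior = p_race_given_surname * p_geo_given_race
posterior /= posterior.sum()

for race, p in zip(races, posterior):
    print(f"P({race} | surname, geolocation) = {p:.3f}")
```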


Author(s):  
Yosef S. Razin ◽  
Jack Gale ◽  
Jiaojiao Fan ◽  
Jaznae’ Smith ◽  
Karen M. Feigh

This paper evaluates Banks et al.'s Human-AI Shared Mental Model theory by examining how a self-driving vehicle's hazard assessment facilitates shared mental models. Participants were asked to affirm the vehicle's assessment of road objects as either hazards or mistakes in real time while behavioral and subjective measures were collected. The baseline performance of the AI was purposefully low (<50%) to examine how the human's shared mental model might lead to inappropriate compliance. Results indicated that while the participants' true positive rate was high, overall performance was reduced by a large false positive rate, indicating that participants were indeed being influenced by the AI's faulty assessments despite full transparency as to the ground truth. Both performance and compliance were directly affected by frustration and by mental and even physical demands. Dispositional factors such as faith in other people's cooperativeness and in technology companies were also significant. Thus, our findings strongly support the theory that shared mental models play a measurable role in performance and compliance, in a complex interplay with trust.


2021 ◽  
Author(s):  
James Howard ◽  
◽  
Joe Tracey ◽  
Mike Shen ◽  
Shawn Zhang ◽  
...  

Borehole image logs are used to identify the presence and orientation of fractures, both natural and induced, found in reservoir intervals. The contrast in electrical or acoustic properties between the rock matrix and fluid-filled fractures is sufficiently large that sub-resolution features can be detected by these image logging tools. The resolution of these image logs depends on the design and operation of the tools and is generally in the millimeter-per-pixel range, so the quantitative measurement of actual fracture width remains problematic. An artificial intelligence (AI)-based workflow combines the statistical information obtained from a machine-learning (ML) segmentation process with a multiple-layer neural network that defines a deep learning process to enhance fractures in a borehole image. These new images allow for a more robust analysis of fracture widths, especially those that are sub-resolution. The images from a BHTV log were first segmented into rock and fluid-filled fractures using an ML segmentation tool that applied multiple image processing filters to capture information describing patterns in the fracture-rock distribution based on nearest-neighbor behavior. The ML analysis was trained by users to identify these two components over a short interval in the well, and the regression-model coefficients were then applied to the remaining log. Based on the training, each pixel was assigned a probability value between 1.0 (fracture) and 0.0 (pure rock), with most pixels assigned one of these two values. Intermediate probabilities represented pixels on the edge of a rock-fracture interface or the presence of one or more sub-resolution fractures within the rock. The probability matrix produced a map, or image, of the distribution of probabilities that determined whether a given pixel was a fracture or partially filled with a fracture. The deep learning neural network was based on a conditional generative adversarial network (cGAN) approach in which the probability map was first encoded and combined with a noise vector that acted as a seed for diverse feature generation; this combination was used to generate new images representing the BHTV response. The second part of the network, the adversarial or discriminator portion, determined whether the generated images were representative of the actual BHTV response by comparing them with actual images from the log and producing an output probability of real or fake. This probability was then used to train the generator and discriminator models, which were then applied to the entire log. Several scenarios were run with different probability maps. The enhanced BHTV images brought out fractures observed in the core photos that were less obvious in the original BHTV log, through enhanced continuity and improved resolution of fracture widths.
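To illustrate the generator-side idea described here (encode the fracture-probability map, combine it with a noise vector, and decode a synthetic image), the PyTorch sketch below shows one plausible wiring. Patch size, channel counts, and layer choices are assumptions for illustration, not the workflow's actual architecture.

```python
import torch
import torch.nn as nn

class ProbMapGenerator(nn.Module):
    """Sketch of a cGAN generator that encodes a fracture-probability map and
    concatenates it with a noise vector before decoding a synthetic BHTV patch.
    Layer sizes and patch dimensions are illustrative placeholders."""
    def __init__(self, noise_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(            # encode the probability map
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(            # decode map features + noise
            nn.ConvTranspose2d(32 + noise_dim, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Tanh())

    def forward(self, prob_map, noise):
        feats = self.encoder(prob_map)                       # (B, 32, H/4, W/4)
        noise = noise[:, :, None, None].expand(-1, -1, *feats.shape[2:])
        return self.decoder(torch.cat([feats, noise], dim=1))

prob_map = torch.rand(4, 1, 64, 64)      # ML-segmentation probabilities in [0, 1]
noise = torch.randn(4, 32)               # seed for diverse feature generation
print(ProbMapGenerator()(prob_map, noise).shape)   # torch.Size([4, 1, 64, 64])
```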


2014 ◽  
Author(s):  
Andreas Tuerk ◽  
Gregor Wiktorin ◽  
Serhat Güler

Quantification of RNA transcripts with RNA-Seq is inaccurate due to positional fragment bias, which is not represented appropriately by current statistical models of RNA-Seq data. This article introduces the Mix² (read "mix square") model, which uses a mixture of probability distributions to model the transcript-specific positional fragment bias. The parameters of the Mix² model can be trained efficiently with the Expectation-Maximization (EM) algorithm, resulting in simultaneous estimates of the transcript abundances and transcript-specific positional biases. Experiments are conducted on synthetic data and on the Universal Human Reference (UHR) and Human Brain Reference (HBR) samples from the MicroArray Quality Control (MAQC) data set. Comparing the correlation between qPCR and FPKM values with the state-of-the-art methods Cufflinks and PennSeq, we obtain an increase in R² from 0.44 to 0.6 and from 0.34 to 0.54, respectively. In the detection of differential expression between UHR and HBR, the true positive rate increases from 0.44 to 0.71 at a false positive rate of 0.1. Finally, the Mix² model is used to investigate biases present in the MAQC data. This reveals five dominant biases that deviate from the common assumption of a uniform fragment distribution. The Mix² software is available at http://www.lexogen.com/fileadmin/uploads/bioinfo/mix2model.tgz.
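The EM training of a mixture model can be sketched generically as below: a plain 1-D Gaussian mixture is fitted to relative fragment positions along a transcript. This is not the Mix² model itself (which uses its own mixture family and jointly estimates transcript abundances); the data, component count, and Gaussian assumption are placeholders chosen for illustration.

```python
import numpy as np

def em_gaussian_mixture(x, n_components=3, n_iter=100):
    """Plain EM for a 1-D Gaussian mixture, as a stand-in for fitting a mixture
    of distributions to relative fragment positions along a transcript."""
    rng = np.random.default_rng(0)
    w = np.full(n_components, 1.0 / n_components)           # mixture weights
    mu = rng.choice(x, n_components)                         # component means
    var = np.full(n_components, np.var(x))                   # component variances
    for _ in range(n_iter):
        # E-step: responsibility of each component for each fragment position
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = w * dens
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        nk = resp.sum(axis=0)
        w = nk / x.size
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

# Synthetic relative fragment positions in [0, 1] with a 3' bias (placeholder data).
rng = np.random.default_rng(1)
positions = np.clip(np.concatenate([rng.normal(0.8, 0.1, 700),
                                    rng.uniform(0, 1, 300)]), 0, 1)
print(em_gaussian_mixture(positions))
```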

