Chronic Pain Diagnosis Using Machine Learning, Questionnaires, and QST: A Sensitivity Experiment

Diagnostics ◽  
2020 ◽  
Vol 10 (11) ◽  
pp. 958
Author(s):  
Alex Novaes Santana ◽  
Charles Novaes de Santana ◽  
Pedro Montoya

In the last decade, machine learning has been widely used in different fields, especially because of its capacity to work with complex data. With the support of machine learning techniques, different studies have used data-driven approaches to better understand syndromes such as mild cognitive impairment, Alzheimer’s disease, schizophrenia, and chronic pain. Chronic pain is a complex disease that is frequently misdiagnosed because of its comorbidities with other syndromes with which it shares symptoms. In that context, several studies have proposed machine learning algorithms to classify or predict chronic pain conditions. Those algorithms were fed with a diversity of data types, from self-report data based on questionnaires to the most advanced brain imaging techniques. In this study, we assessed the sensitivity of different algorithms and datasets in classifying chronic pain syndromes. Alongside this assessment, we highlight important methodological steps that should be taken into account when conducting a machine learning experiment. The best results were obtained by ensemble-based algorithms and by the dataset containing the greatest diversity of information, with values for the area under the receiver operating characteristic curve (AUC) of around 0.85. In addition, the performance of the algorithms is strongly related to their hyper-parameters, so a good hyper-parameter optimization strategy should be used to extract the most from each algorithm. These findings support the notion that machine learning can be a powerful tool to better understand chronic pain conditions.
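
As a hedged illustration of the tuning step the abstract emphasizes, the sketch below uses scikit-learn with synthetic stand-in data (not the study's questionnaire/QST dataset) to select hyper-parameters for an ensemble classifier by AUC; the grid values and data shapes are assumptions for illustration only.

```python
# Minimal sketch (not the study's code): tuning an ensemble classifier
# with AUC as the selection criterion, since the abstract stresses that
# performance depends strongly on the hyper-parameters.
from sklearn.datasets import make_classification  # synthetic stand-in data
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import roc_auc_score

# Placeholder for a chronic-pain dataset (features = questionnaire + QST items).
X, y = make_classification(n_samples=300, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Hyper-parameter grid chosen for illustration only.
grid = GridSearchCV(
    GradientBoostingClassifier(random_state=0),
    param_grid={"n_estimators": [100, 300], "learning_rate": [0.05, 0.1], "max_depth": [2, 3]},
    scoring="roc_auc",  # select hyper-parameters by AUC, the metric reported in the study
    cv=5,
)
grid.fit(X_train, y_train)
auc = roc_auc_score(y_test, grid.predict_proba(X_test)[:, 1])
print(f"best params: {grid.best_params_}, held-out AUC: {auc:.2f}")
```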

2018 ◽  
Vol 7 (2.8) ◽  
pp. 684 ◽  
Author(s):  
V V. Ramalingam ◽  
Ayantan Dandapath ◽  
M Karthik Raja

Heart-related diseases, or cardiovascular diseases (CVDs), have been the main cause of a huge number of deaths in the world over the last few decades and have emerged as the most life-threatening diseases, not only in India but in the whole world. There is therefore a need for a reliable, accurate, and feasible system to diagnose such diseases in time for proper treatment. Machine learning algorithms and techniques have been applied to various medical datasets to automate the analysis of large and complex data. Many researchers, in recent times, have been using several machine learning techniques to help the health care industry and professionals in the diagnosis of heart-related diseases. This paper presents a survey of various models based on such algorithms and techniques and analyzes their performance. Models based on supervised learning algorithms such as Support Vector Machines (SVM), K-Nearest Neighbour (KNN), Naïve Bayes, Decision Trees (DT), Random Forest (RF), and ensemble models are found to be very popular among researchers.
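
A minimal sketch of the kind of comparison the survey describes, assuming scikit-learn and a synthetic stand-in for a heart-disease dataset (the 13-feature shape mirrors the common UCI heart dataset; this is not code from any surveyed model):

```python
# Illustrative comparison of the supervised models the survey highlights,
# on a synthetic placeholder dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=13, random_state=1)
models = {
    "SVM": SVC(),
    "KNN": KNeighborsClassifier(),
    "Naive Bayes": GaussianNB(),
    "Decision Tree": DecisionTreeClassifier(random_state=1),
    "Random Forest": RandomForestClassifier(random_state=1),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10)  # 10-fold CV accuracy
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```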


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Razvan V. Ababei ◽  
Matthew O. A. Ellis ◽  
Ian T. Vidamour ◽  
Dhilan S. Devadasan ◽  
Dan A. Allwood ◽  
...  

Machine learning techniques are commonly used to model complex relationships, but implementations on digital hardware are relatively inefficient due to poor matching between conventional computer architectures and the structures of the algorithms they are required to simulate. Neuromorphic devices, and in particular reservoir computing architectures, utilize the inherent properties of physical systems to implement machine learning algorithms and so have the potential to be much more efficient. In this work, we demonstrate that the dynamics of individual domain walls in magnetic nanowires are suitable for implementing the reservoir computing paradigm in hardware. We modelled the dynamics of a domain wall placed between two anti-notches in a nickel nanowire using both a 1D collective coordinates model and micromagnetic simulations. When driven by an oscillating magnetic field, the domain wall exhibits non-linear dynamics within the potential well created by the anti-notches that are analogous to those of the Duffing oscillator. We exploit the domain wall dynamics for reservoir computing by modulating the amplitude of the applied magnetic field to inject time-multiplexed input signals into the reservoir, and show how this allows us to perform machine learning tasks including the classification of (1) sine and square waves; (2) spoken digits; and (3) non-temporal 2D toy data and handwritten digits. Our work lays the foundation for the creation of nanoscale neuromorphic devices in which individual magnetic domain walls are used to perform complex data analysis tasks.
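
A conceptual sketch of this reservoir computing scheme under strong simplifying assumptions: a single Duffing oscillator integrated with symplectic Euler stands in for the domain wall dynamics, input samples modulate the drive amplitude (time multiplexing), and a ridge-regression readout performs a toy sine-versus-square classification. The parameters and task setup are illustrative, not the paper's micromagnetic simulation.

```python
# Single-node reservoir based on a driven Duffing oscillator (a sketch, not
# the authors' code). Each input sample sets the drive amplitude for a short
# window; the oscillator state sampled at "virtual nodes" forms the features.
import numpy as np

delta, alpha, beta, omega = 0.3, -1.0, 1.0, 1.2  # illustrative Duffing parameters
dt, n_virtual = 0.02, 20                          # time step, virtual nodes per input

def reservoir(u_seq):
    """Integrate x'' + d x' + a x + b x^3 = (1+u) cos(w t), sampling x."""
    x, v, t = 0.1, 0.0, 0.0
    states = []
    for u in u_seq:
        nodes = []
        for _ in range(n_virtual):
            acc = -delta * v - alpha * x - beta * x**3 + (1.0 + u) * np.cos(omega * t)
            v += acc * dt   # symplectic Euler: update velocity, then position
            x += v * dt
            t += dt
            nodes.append(x)
        states.append(nodes)
    return np.array(states)  # shape: (len(u_seq), n_virtual)

# Toy task: classify samples of a sine wave vs. a square wave.
tt = np.linspace(0, 4 * np.pi, 200)
sine, square = np.sin(tt), np.sign(np.sin(tt))
X = np.vstack([reservoir(sine), reservoir(square)])
y = np.array([0] * len(tt) + [1] * len(tt))

# Ridge-regression readout (closed form), the standard reservoir training step.
lam = 1e-3
W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
pred = (X @ W > 0.5).astype(int)
print("training accuracy:", (pred == y).mean())
```

The design point the paper exploits is that only the linear readout W is trained; the non-linear transformation is provided "for free" by the physical dynamics.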


Author(s):  
Dr. E. Baraneetharan

Machine learning is capable of providing real-time solutions that maximize the utilization of resources in the network, thereby increasing the lifetime of the network. It is able to process data automatically without being externally programmed, making the process easier, more efficient, cost-effective, and reliable. ML algorithms can handle complex data quickly and accurately, and machine learning is used to enhance the capabilities of the Wireless Sensor Network environment. A Wireless Sensor Network (WSN) is a combination of several networks and is decentralized and distributed in nature. A WSN consists of sensor nodes and sink nodes, which have the properties of self-organizing and self-healing. WSNs are used in applications such as biodiversity and ecosystem protection, surveillance, climate change tracking, and military applications. Nowadays, WSNs have developed enormously due to advances in electronics and wireless communication technologies; nevertheless, drawbacks such as low computational capacity, small memory, limited energy resources, and the physical vulnerability of the infrastructure demand security measures, in which privacy plays a key role. WSNs are used to monitor dynamic environments, and to adapt to such situations sensor networks need machine learning techniques to avoid unnecessary redesign. This survey of machine learning techniques for WSNs covers a wide range of applications in which security is given top priority. To secure data from attackers, a WSN system should be able to delete instructions if hackers or attackers are trying to steal data.


2020 ◽  
Vol 12 (2) ◽  
pp. 84-99
Author(s):  
Li-Pang Chen

In this paper, we investigate the analysis and prediction of time-dependent data. We focus our attention on four different stocks selected from the Yahoo Finance historical database. To build models and predict future stock prices, we consider three different machine learning techniques: Long Short-Term Memory (LSTM), Convolutional Neural Networks (CNN), and Support Vector Regression (SVR). By treating the close price, open price, daily low, daily high, adjusted close price, and volume of trades as predictors in the machine learning methods, we show that prediction accuracy is improved.
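
A hedged sketch of one of the three compared methods, Support Vector Regression, using the six quoted predictors to forecast the next day's close price; the data below is synthetic, since the paper's Yahoo Finance series is not reproduced here.

```python
# Illustrative SVR forecast (not the paper's code) with the six predictors
# named in the abstract, on a synthetic price series.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_days = 500
close = 100 + np.cumsum(rng.normal(0, 1, n_days))  # synthetic close-price walk
features = np.column_stack([
    close,                               # close price
    close + rng.normal(0, 0.5, n_days),  # open price
    close - rng.uniform(0, 1, n_days),   # daily low
    close + rng.uniform(0, 1, n_days),   # daily high
    close,                               # adjusted close (no splits assumed)
    rng.uniform(1e5, 1e6, n_days),       # volume of trades
])
target = close[1:]        # next day's close
features = features[:-1]

split = int(0.8 * len(target))  # chronological train/test split
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(features[:split], target[:split])
print("held-out R^2:", round(model.score(features[split:], target[split:]), 3))
```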


Author(s):  
Anantvir Singh Romana

Accurate diagnostic detection of disease in a patient is critical and may alter the subsequent treatment and increase the chances of survival. Machine learning techniques have been instrumental in disease detection and are currently being used in various classification problems due to their accurate prediction performance. Different techniques may provide different accuracies, and it is therefore imperative to use the most suitable method to obtain the best results. This research provides a comparative analysis of Support Vector Machine, Naïve Bayes, J48 Decision Tree, and neural network classifiers on breast cancer and diabetes datasets.
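
A minimal sketch of such a comparison, assuming scikit-learn's built-in breast-cancer dataset as a stand-in for the paper's data; J48 is approximated here by a CART decision tree and the neural network by an MLP:

```python
# Comparing the four classifier families named above (sketch, not the
# paper's experiment) with 10-fold cross-validation.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

X, y = load_breast_cancer(return_X_y=True)
classifiers = {
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "Naive Bayes": GaussianNB(),
    "Decision Tree (J48-like)": DecisionTreeClassifier(random_state=0),
    "Neural Network": make_pipeline(StandardScaler(),
                                    MLPClassifier(max_iter=1000, random_state=0)),
}
for name, clf in classifiers.items():
    acc = cross_val_score(clf, X, y, cv=10).mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")
```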


2020 ◽  
Vol 21 ◽  
Author(s):  
Sukanya Panja ◽  
Sarra Rahem ◽  
Cassandra J. Chu ◽  
Antonina Mitrofanova

Background: In recent years, the availability of high-throughput technologies, the establishment of large molecular patient data repositories, and advances in computing power and storage have allowed the elucidation of complex mechanisms implicated in therapeutic response in cancer patients. The breadth and depth of such data, alongside experimental noise and missing values, require a sophisticated human-machine interaction that would allow effective learning from complex data and accurate forecasting of future outcomes, ideally embedded in the core of machine learning design. Objective: In this review, we discuss machine learning techniques utilized for modeling treatment response in cancer, including random forests, support vector machines, neural networks, and linear and logistic regression. We overview their mathematical foundations and discuss their limitations and alternative approaches, all in light of their application to therapeutic response modeling in cancer. Conclusion: We hypothesize that the increase in the number of patient profiles and the potential temporal monitoring of patient data will establish even more complex techniques, such as deep learning and causal analysis, as central players in therapeutic response modeling.
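
For concreteness, here is a small sketch of one of the surveyed techniques, logistic regression, applied to synthetic stand-in "molecular profile" features for responder versus non-responder prediction; all names, shapes, and the five-informative-gene setup are illustrative assumptions, not data from the review.

```python
# Toy responder/non-responder model with logistic regression (sketch only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)
n_patients, n_genes = 200, 50
X = rng.normal(size=(n_patients, n_genes))  # stand-in expression profiles
w = np.zeros(n_genes)
w[:5] = 1.0  # five informative "genes" (an assumption of this toy setup)
y = (X @ w + rng.normal(0, 1, n_patients) > 0).astype(int)  # response label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)
clf = LogisticRegression(penalty="l2", C=1.0).fit(X_tr, y_tr)  # L2 helps with many features
print(classification_report(y_te, clf.predict(X_te)))
```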


Author(s):  
Aaishwarya Sanjay Bajaj ◽  
Usha Chouhan

Background: This paper endeavors to identify an expedient approach for the detection of brain tumors in MRI images, based on (i) a review of machine learning approaches for the identification of brain tumors and (ii) a review of suitable approaches for brain tumor detection. Discussion: This review focuses on different imaging techniques such as X-rays, PET, CT scans, and MRI, and identifies approaches with better accuracy for tumor detection, including image processing methods. In most applications, machine learning shows better performance than manual segmentation of brain tumors from MRI images, which is a difficult and time-consuming task. For fast and better computational results, radiology uses different approaches with MRI, CT scan, X-ray, and PET. Furthermore, in summarizing the literature, this paper provides a critical evaluation of the surveyed work that reveals new facets of research. Conclusion: The problems faced by researchers in brain tumor detection techniques and in machine learning applications for clinical settings are also discussed.


2021 ◽  
pp. 1-17
Author(s):  
Ahmed Al-Tarawneh ◽  
Ja’afer Al-Saraireh

Twitter is one of the most popular platforms used to share and post ideas. Hackers and anonymous attackers use these platforms maliciously, and their behavior can be used to predict the risk of future attacks by gathering and classifying hackers’ tweets using machine learning techniques. Previous approaches for detecting infected tweets are based on human effort or text analysis, and are thus limited in capturing the hidden text between tweet lines. The main aim of this research paper is to enhance the efficiency of hacker detection on the Twitter platform using complex network techniques with adapted machine learning algorithms. This work presents a methodology that collects a list of users, together with their followers, who share posts with similar interests from a hackers’ community on Twitter. The list is built from a set of suggested keywords that are terms commonly used by hackers in their tweets. A complex network is then generated for all users to find relations among them in terms of network centrality, closeness, and betweenness. After extracting these values, a dataset of the most influential users in the hacker community is assembled. Subsequently, tweets belonging to users in the extracted dataset are gathered and classified into positive and negative classes, and the output of this process is used in a machine learning process applying different algorithms. This research builds and investigates an accurate dataset containing real users who belong to a hackers’ community. Correctly classified instances were measured for accuracy using the average values of the K-nearest neighbor, Naive Bayes, Random Tree, and support vector machine techniques, demonstrating about 90% and 88% accuracy for cross-validation and percentage split, respectively. Consequently, the proposed network cyber Twitter model is able to detect hackers and determine whether tweets pose a risk to institutions and individuals, providing early warning of possible attacks.
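
A conceptual sketch of the network stage of this pipeline, assuming networkx and a hypothetical follower edge list; the simple averaging of the three centrality scores is an illustrative choice, not the authors' exact weighting:

```python
# Build a follower graph, score users by the three centrality measures
# named in the abstract, and keep the most influential users (sketch only).
import networkx as nx

# Hypothetical follower edges: (user, follower).
edges = [("alice", "bob"), ("alice", "carol"), ("bob", "carol"),
         ("carol", "dave"), ("dave", "alice"), ("eve", "alice")]
G = nx.DiGraph(edges)

degree = nx.degree_centrality(G)
closeness = nx.closeness_centrality(G)
betweenness = nx.betweenness_centrality(G)

# Combine the three scores (simple average; the real weighting is a choice).
influence = {u: (degree[u] + closeness[u] + betweenness[u]) / 3 for u in G}
top_users = sorted(influence, key=influence.get, reverse=True)[:3]
print("most influential users:", top_users)
# Tweets from these users would then be labeled positive/negative and fed
# to KNN, Naive Bayes, Random Tree, and SVM classifiers, per the abstract.
```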


Materials ◽  
2021 ◽  
Vol 14 (5) ◽  
pp. 1089
Author(s):  
Sung-Hee Kim ◽  
Chanyoung Jeong

This study aims to demonstrate the feasibility of applying eight machine learning algorithms to predict the classification of the surface characteristics of titanium oxide (TiO2) nanostructures produced with different anodization processes. We produced a total of 100 samples and assessed the changes in the thicknesses of TiO2 nanostructures induced by anodization. We successfully grew TiO2 films with different thicknesses by one-step anodization in ethylene glycol containing NH4F and H2O, at applied voltages ranging from 10 V to 100 V and various anodization durations. We found that the thicknesses of the TiO2 nanostructures depend on the anodization voltage and duration. We therefore tested the feasibility of applying machine learning algorithms to predict the deformation of TiO2. As the characteristics of TiO2 changed under the different experimental conditions, we classified its surface pore structure into two categories and four groups. For the classification based on granularity, we assessed layer creation, roughness, pore creation, and pore height. We applied eight machine learning techniques to predict these classifications, both binary and multiclass. For binary classification, the random forest and gradient boosting algorithms had relatively high performance; moreover, all eight algorithms scored higher than 0.93, which signifies high accuracy in estimating the presence of pores. In contrast, the decision tree and three ensemble methods had relatively higher performance for multiclass classification, with accuracy rates greater than 0.79. The weakest algorithm was k-nearest neighbors for both binary and multiclass classification. We believe these results show that machine learning techniques can be applied to predict surface quality improvements, leading to smart manufacturing technology that better controls color appearance, super-hydrophobicity, super-hydrophilicity, or battery efficiency.
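
A hedged sketch of the binary and four-group classification setup on synthetic placeholder features (not the experimental measurements), using two of the stronger algorithms reported:

```python
# Binary vs. multiclass classification of surface characteristics
# (sketch with synthetic data standing in for anodization measurements).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Placeholder features: anodization voltage, duration, and derived measures.
X_bin, y_bin = make_classification(n_samples=100, n_features=6,
                                   n_classes=2, random_state=0)
X_multi, y_multi = make_classification(n_samples=100, n_features=6,
                                       n_classes=4, n_informative=4,
                                       random_state=0)

for name, clf in [("Random Forest", RandomForestClassifier(random_state=0)),
                  ("Gradient Boosting", GradientBoostingClassifier(random_state=0))]:
    acc_bin = cross_val_score(clf, X_bin, y_bin, cv=5).mean()
    acc_multi = cross_val_score(clf, X_multi, y_multi, cv=5).mean()
    print(f"{name}: binary={acc_bin:.2f}, multiclass={acc_multi:.2f}")
```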

