The revolution of personalized psychiatry: will technology make it happen sooner?

2017 ◽  
Vol 48 (5) ◽  
pp. 705-713 ◽  
Author(s):  
G. Perna ◽  
M. Grassi ◽  
D. Caldirola ◽  
C. B. Nemeroff

Personalized medicine (PM) aims to establish a new approach to clinical decision-making, one based on a patient's individual profile so that treatment can be tailored to each patient's characteristics. Although PM has also become a focus of discussion in psychiatry, with evidence of its high potential coming from several proof-of-concept studies, almost no tools ready for application in clinical practice have been developed to date. In this paper, we discuss recent technological advances that could enable a shift toward clinical application of the PM paradigm. We focus specifically on technologies that allow the collection of both massive and real-time data, i.e., electronic medical records and smart wearable devices, and on the means to derive relevant predictions from these data, i.e., machine learning techniques.
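As a purely illustrative sketch of the paradigm described above, the snippet below trains a machine learning model on hypothetical EMR- and wearable-derived features to predict treatment response; all feature names and data are invented for demonstration and are not drawn from the study.

```python
# Hypothetical sketch: predicting individual treatment response from
# EMR- and wearable-derived features with a machine learning model.
# Feature names and data are illustrative, not from the cited study.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_patients = 200

# Illustrative feature matrix: age and baseline symptom score (EMR),
# mean nightly sleep hours and daily step count (wearable).
X = np.column_stack([
    rng.integers(18, 80, n_patients),      # age
    rng.normal(25, 6, n_patients),         # baseline symptom score
    rng.normal(6.5, 1.2, n_patients),      # sleep hours
    rng.normal(6000, 2500, n_patients),    # daily steps
])
y = rng.integers(0, 2, n_patients)         # 1 = responded to treatment

model = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"Cross-validated AUC: {scores.mean():.2f}")
```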

2021 ◽  
Vol 11 (9) ◽  
pp. 893
Author(s):  
Francesca Bottino ◽  
Emanuela Tagliente ◽  
Luca Pasquini ◽  
Alberto Di Napoli ◽  
Martina Lucignani ◽  
...  

More than a year has passed since the report of the first case of coronavirus disease 2019 (COVID-19), and deaths continue to mount. Minimizing the time required for resource allocation and clinical decision making, such as triage, choice of ventilation mode, and admission to the intensive care unit, is important. Machine learning techniques are playing an increasingly sought-after role in predicting the outcomes of COVID-19 patients. In particular, the use of baseline machine learning techniques for COVID-19 mortality prediction is developing rapidly, since a mortality prediction model could quickly and effectively support clinical decision-making for patients at imminent risk of death. Recent studies have reviewed predictive models for SARS-CoV-2 diagnosis, severity, length of hospital stay, intensive care unit admission, and mechanical ventilation outcomes; however, systematic reviews focused on machine learning prediction of COVID-19 mortality are lacking in the literature. The present review examined studies that applied machine learning, including deep learning, methods to COVID-19 mortality prediction, aiming to survey the existing published literature and to offer possible explanations for the best results the studies obtained. The review also discusses challenging aspects of current studies and provides suggestions for future developments.
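The kind of baseline machine learning model this review surveys can be sketched as follows; the clinical variables, synthetic data, and choice of gradient boosting are illustrative assumptions, not taken from any reviewed study.

```python
# Illustrative sketch only: a baseline machine learning model for
# binary mortality prediction from routine clinical variables.
# Variable names and synthetic data are assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 500
X = np.column_stack([
    rng.integers(20, 95, n),    # age
    rng.normal(94, 4, n),       # oxygen saturation (%)
    rng.normal(8, 5, n),        # C-reactive protein (mg/dL)
    rng.normal(1.1, 0.6, n),    # lymphocyte count (10^9/L)
])
y = rng.integers(0, 2, n)       # 1 = in-hospital death

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y)

clf = GradientBoostingClassifier(random_state=42)
clf.fit(X_train, y_train)
auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
print(f"Held-out AUC: {auc:.2f}")
```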


2016 ◽  
Vol 18 (12) ◽  
pp. 1680-1687 ◽  
Author(s):  
Ken Chang ◽  
Biqi Zhang ◽  
Xiaotao Guo ◽  
Min Zong ◽  
Rifaquat Rahman ◽  
...  

Background: Bevacizumab is a humanized antibody against vascular endothelial growth factor approved for treatment of recurrent glioblastoma. There is a need to discover imaging biomarkers that can aid in the selection of patients who will likely derive the most survival benefit from bevacizumab. Methods: The aim of the study was to examine whether pre- and posttherapy multimodal MRI features could predict progression-free survival and overall survival (OS) for patients with recurrent glioblastoma treated with bevacizumab. The patient population included 84 patients in a training cohort and 42 patients in a testing cohort, separated based on pretherapy imaging date. Tumor volumes of interest were segmented from contrast-enhanced T1-weighted and fluid-attenuated inversion recovery images and were used to derive volumetric, shape, texture, parametric, and histogram features. A total of 2293 pretherapy and 9811 posttherapy features were used to generate the model. Results: Using standard radiographic assessment criteria, the hazard ratio for predicting OS was 3.38 (P < .001). The hazard ratios for pre- and posttherapy features predicting OS were 5.10 (P < .001) and 3.64 (P < .005) for the training and testing cohorts, respectively. Conclusion: With the use of machine learning techniques to analyze imaging features derived from pre- and posttherapy multimodal MRI, we were able to develop a predictive model for patient OS that could potentially assist clinical decision making.
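A minimal sketch of how imaging features can be related to overall survival is given below; it assumes the lifelines library and synthetic radiomic features, and it does not reproduce the study's actual modeling pipeline.

```python
# Minimal sketch (assumptions: synthetic radiomic features, lifelines
# library) of relating pretherapy imaging features to overall survival
# with a Cox proportional hazards model.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 126  # roughly the combined cohort size reported above

df = pd.DataFrame({
    "tumor_volume_cc": rng.normal(30, 10, n),    # volumetric feature
    "texture_entropy": rng.normal(5.0, 0.8, n),  # texture feature
    "os_months": rng.exponential(9, n),          # overall survival
    "event": rng.integers(0, 2, n),              # 1 = death observed
})

cph = CoxPHFitter()
cph.fit(df, duration_col="os_months", event_col="event")
cph.print_summary()  # hazard ratios for each imaging feature
```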


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Majid Amirfakhrian ◽  
Mahboub Parhizkar

In the next decade, machine vision technology will have an enormous impact on industrial work because of the latest technological advances in this field. These advances are so significant that use of the technology is now essential. Machine vision is the use of a wide range of technologies and methods to provide automated inspection in an industrial setting based on imaging, process control, and robot guidance. One application of machine vision is the assessment of traffic accidents; in particular, car vision is used to detect the extent of damage to vehicles involved in accidents. In this article, a new method based on image processing and machine learning techniques is presented to improve the accuracy of detecting damaged areas in traffic accidents. Evaluation of the proposed method and comparison with previous works showed that it identifies damaged areas more accurately and has a shorter execution time.
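A rough sketch of an image-processing plus machine learning pipeline of the general kind described above is shown below; the file paths, hand-crafted features, and SVM classifier are illustrative assumptions rather than the article's proposed method.

```python
# Illustrative sketch (assumed file paths and labels) of a simple
# image-processing + machine learning pipeline for classifying image
# patches as damaged or undamaged.
import cv2
import numpy as np
from sklearn.svm import SVC

def patch_features(path):
    """Grayscale histogram + edge density as crude texture descriptors."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, (128, 128))
    hist = cv2.calcHist([img], [0], None, [32], [0, 256]).flatten()
    hist /= hist.sum() + 1e-9
    edges = cv2.Canny(img, 100, 200)
    edge_density = edges.mean() / 255.0
    return np.append(hist, edge_density)

# Hypothetical labelled patches: 1 = damaged area, 0 = intact bodywork.
paths = ["patch_damaged_01.jpg", "patch_intact_01.jpg"]  # placeholders
labels = [1, 0]

X = np.array([patch_features(p) for p in paths])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X))
```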


Equine Health ◽  
2014 ◽  
Vol 2014 (16) ◽  
pp. 34-37
Author(s):  
Thilo Pfau ◽  
Andrew Fiske-Jackson ◽  
Susanne Troster

Author(s):  
Niddal Imam ◽  
Biju Issac ◽  
Seibu Mary Jacob

Twitter has changed the way people get information by allowing them to express their opinions and comment on daily tweets. Unfortunately, because of its high popularity, Twitter has become very attractive to spammers. Unlike other types of spam, Twitter spam has become a serious issue only within the last few years. The large number of users and the high volume of information shared on Twitter play an important role in accelerating the spread of spam. To protect users, Twitter and the research community have been developing spam detection systems that apply various machine learning techniques. However, a recent study showed that current machine learning-based detection systems cannot detect spam accurately because spam tweet characteristics vary over time, an issue known as "Twitter spam drift". In this paper, a semi-supervised learning approach (SSLA) is proposed to tackle this issue. The new approach uses unlabeled data to learn the structure of the domain. Experiments were performed on English and Arabic datasets to test and evaluate the proposed approach, and the results show that the proposed SSLA can reduce the effect of Twitter spam drift and outperform existing techniques.
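A minimal sketch of the semi-supervised idea, self-training on unlabeled tweets, is given below; the example tweets and the choice of scikit-learn's SelfTrainingClassifier are illustrative assumptions, not the paper's exact SSLA.

```python
# Minimal sketch of a semi-supervised approach to spam drift: a
# self-training classifier that also uses unlabeled tweets.
# Tweets, labels, and model choice are illustrative assumptions.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

tweets = [
    "Win a free iPhone now, click this link!",    # spam (labelled)
    "Great seeing everyone at the meetup today",  # ham  (labelled)
    "Limited offer!!! claim your prize here",     # unlabeled
    "Traffic is terrible on the bridge again",    # unlabeled
]
# -1 marks unlabeled samples for SelfTrainingClassifier.
labels = np.array([1, 0, -1, -1])

X = TfidfVectorizer().fit_transform(tweets)
model = SelfTrainingClassifier(LogisticRegression(), threshold=0.7)
model.fit(X, labels)
print(model.predict(X))
```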


2019 ◽  
Vol 892 ◽  
pp. 274-283
Author(s):  
Mohammed Ashikur Rahman ◽  
Afidalina Tumian

Nowadays, clinical decision support systems (CDSS) are widely used in cardiac care because of the complexity of cardiac disease. The objective of this systematic literature review (SLR) is to identify the most common variables and machine learning techniques used to build machine learning-based clinical decision support systems for cardiac care. The SLR follows the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) format. Out of 530 papers, only 21 met the inclusion criteria. Among the 22 most common variables are age, gender, heart rate, respiration rate, systolic blood pressure, and medical information variables. In addition, the results show that the Simplified Acute Physiology Score (SAPS), Sequential Organ Failure Assessment (SOFA), and Acute Physiology and Chronic Health Evaluation (APACHE) are among the most common assessment scales used in CDSS for cardiac care. Logistic regression and support vector machines are the most common machine learning techniques applied in CDSS to predict mortality and other cardiac conditions such as sepsis, cardiac arrest, heart failure, and septic shock. These variables and assessment tools can be used to build a machine learning-based CDSS.
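As a sketch of how the two techniques the review found most common could be applied, the snippet below compares logistic regression and a support vector machine on a synthetic dataset built from some of the listed variables; all data are invented for illustration.

```python
# Sketch only: comparing logistic regression and an SVM on synthetic
# cardiac data using some of the common variables named above.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n = 300
X = np.column_stack([
    rng.integers(30, 90, n),   # age
    rng.integers(0, 2, n),     # gender (0/1)
    rng.normal(80, 15, n),     # heart rate (bpm)
    rng.normal(18, 4, n),      # respiration rate (breaths/min)
    rng.normal(120, 20, n),    # systolic blood pressure (mmHg)
])
y = rng.integers(0, 2, n)      # 1 = mortality

for name, model in [
    ("logistic regression", LogisticRegression(max_iter=1000)),
    ("support vector machine", SVC()),
]:
    pipe = make_pipeline(StandardScaler(), model)
    auc = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean AUC = {auc:.2f}")
```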


Energies ◽  
2020 ◽  
Vol 13 (20) ◽  
pp. 5504
Author(s):  
Hyang-A Park ◽  
Gilsung Byeon ◽  
Wanbin Son ◽  
Hyung-Chul Jo ◽  
Jongyul Kim ◽  
...  

Owing to recent developments in information and communication technology (ICT), various studies using real-time data are now being conducted. The microgrid research field is also evolving toward intelligent operation of energy management through digitalization. Operating an actual microgrid raises problems such as difficult decision making and system abnormalities. Digital twin technology, one of the technologies representative of the fourth industrial revolution, can overcome these problems by allowing the microgrid configuration and operating algorithms to be varied in a virtual space and tested in real time. In this study, we propose an energy storage system (ESS) operation scheduling model to be applied in the virtual space when constructing a microgrid with digital twin technology. An optimal ESS charging/discharging schedule was established to minimize electricity bills and was implemented with supervised learning techniques, namely the decision tree, NARX, and MARS models, instead of existing optimization techniques. NARX and decision trees are machine learning techniques, while MARS is a nonparametric regression model whose application has been increasing. The performance of each model was analyzed by deriving performance evaluation indicators. In a case study using the proposed model, the electricity bill savings achieved when operating the ESS were greater than those obtained in the actual ESS operation. The suitability of the model was evaluated through a comparative analysis with the optimization-based ESS charging/discharging scheduling pattern.
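An illustrative sketch of the supervised-learning idea, training a decision tree to reproduce an optimization-derived ESS charging/discharging schedule, is given below; the inputs, prices, and target schedule are synthetic assumptions, not the study's data.

```python
# Illustrative sketch (all data synthetic): a decision tree, one of the
# supervised models mentioned above, learns a charge/discharge schedule
# from hour, load, and electricity price inputs.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(3)
n = 24 * 60  # 60 days of hourly samples

hour = rng.integers(0, 24, n)
load_kw = rng.normal(50, 10, n)
price = np.where(hour < 8, 0.08, np.where(hour < 18, 0.15, 0.20))

# Assumed target: the charge(+)/discharge(-) power an optimizer would
# choose, approximated here as charging when prices are low.
target_kw = np.where(price < 0.1, 20.0, -15.0) + rng.normal(0, 2, n)

X = np.column_stack([hour, load_kw, price])
X_tr, X_te, y_tr, y_te = train_test_split(X, target_kw, random_state=3)

tree = DecisionTreeRegressor(max_depth=5, random_state=3).fit(X_tr, y_tr)
print(f"MAE vs. optimizer schedule: "
      f"{mean_absolute_error(y_te, tree.predict(X_te)):.2f} kW")
```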

