Machine learning enabled identification and real-time prediction of living plants’ stress using terahertz waves

2022 ◽ Author(s): Adnan Zahid, Kia Dashtipour, Hasan T. Abbas, Ismail Ben Mabrouk, Muath Al-Hasan, ...
2019 ◽ Vol 34 (5) ◽ pp. 1437-1451 ◽ Author(s): Amy McGovern, Christopher D. Karstens, Travis Smith, Ryan Lagerquist

Abstract: Real-time prediction of storm longevity is a critical challenge for National Weather Service (NWS) forecasters. These predictions can guide forecasters when they issue warnings and implicitly inform them about the potential severity of a storm. This paper presents a machine-learning (ML) system that was used for real-time prediction of storm longevity in the Probabilistic Hazard Information (PHI) tool, making it a Research-to-Operations (R2O) project. Currently, PHI provides forecasters with real-time storm variables and severity predictions from the ProbSevere system, but these predictions do not include storm longevity. We specifically designed our system to be tested in PHI during the 2016 and 2017 Hazardous Weather Testbed (HWT) experiments, which provide a quasi-operational, naturalistic environment. We considered three ML methods that prior work has shown to perform well on many weather-prediction tasks: elastic nets, random forests, and gradient-boosted regression trees. We present experiments comparing the three ML methods with different types of input data, discuss trade-offs between forecast quality and requirements for real-time deployment, and present both subjective (human-based) and objective evaluations of real-time deployment in the HWT. Results demonstrate that the ML system has lower error than human forecasters, which suggests that it could be used to guide future storm-based warnings, enabling forecasters to focus on other aspects of the warning system.
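The three model families named in this abstract are all available in scikit-learn, so the comparison can be sketched minimally. The data below are synthetic stand-ins, not the PHI/ProbSevere storm attributes, and the hyperparameters are illustrative assumptions.

```python
# Compare elastic net, random forest, and gradient-boosted regression trees
# on a synthetic regression task (a proxy for predicting storm lifetime).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import ElasticNet
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                                 # stand-in storm features
y = X @ rng.normal(size=5) + rng.normal(scale=0.5, size=1000)  # stand-in lifetime target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
models = {
    "elastic net": ElasticNet(alpha=0.1),
    "random forest": RandomForestRegressor(n_estimators=100, random_state=0),
    "gradient boosting": GradientBoostingRegressor(random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: MAE = {mean_absolute_error(y_te, model.predict(X_te)):.3f}")
```

In practice the trade-off the paper discusses applies here too: the linear elastic net is cheapest to evaluate in real time, while the tree ensembles typically buy accuracy at the cost of latency and memory.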


2019 ◽ Author(s): Mina Chookhachizadeh Moghadam, Ehsan Masoumi, Nader Bagherzadeh, Davinder Ramsingh, Guann-Pyng Li, ...

Abstract
Purpose: Predicting hypotension well in advance provides physicians with enough time to respond with proper therapeutic measures. However, real-time prediction of hypotension with a high positive predictive value (PPV) is a challenge due to the dynamic changes in patients' physiological status under drug administration, which limits the amount of useful data available to the algorithm.
Methods: To mimic real-time monitoring, we developed a machine learning algorithm that uses most of the available data points from patients' records to train and test the algorithm. The algorithm predicts hypotension up to 30 minutes in advance based on only 5 minutes of a patient's physiological history. A novel evaluation method is proposed to assess the algorithm's performance as a function of time at every timestamp within the 30 minutes prior to hypotension. This evaluation approach provides statistical tools to find the best possible prediction window.
Results: During 181,000 minutes of monitoring of about 400 patients, the algorithm demonstrated 94% accuracy, 85% sensitivity, and 96% specificity in predicting hypotension within 30 minutes of the events. A high PPV of 81% was obtained, and the algorithm predicted 80% of the events 25 minutes prior to their onsets. Choosing a classification threshold that maximizes the F1 score during the training phase was shown to contribute to a high PPV and sensitivity.
Conclusion: This study reveals the promising potential of machine learning algorithms for real-time prediction of hypotensive events in the ICU setting based on short-term physiological history.
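The threshold-selection idea in the Results section (choosing the cutoff that maximizes F1 on training-phase predictions) can be sketched as follows. The classifier and imbalanced synthetic data are placeholders, not the ICU features or the paper's model.

```python
# Pick the classification threshold that maximizes F1 on training scores,
# the strategy the abstract credits for its high PPV and sensitivity.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_curve

# Imbalanced toy problem: ~10% positives, like rare hypotensive events.
X, y = make_classification(n_samples=2000, weights=[0.9], random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X, y)
scores = clf.predict_proba(X)[:, 1]

precision, recall, thresholds = precision_recall_curve(y, scores)
f1 = 2 * precision * recall / np.clip(precision + recall, 1e-12, None)
# The last precision/recall pair has no associated threshold, hence f1[:-1].
best_threshold = thresholds[np.argmax(f1[:-1])]
print(f"F1-optimal threshold: {best_threshold:.3f}")
```

Maximizing F1 rather than accuracy is the natural choice here: with rare events, a high-accuracy model can still have a poor PPV, while F1 balances precision (PPV) against sensitivity directly.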


Author(s): Bochun Wang, Xuanyu Yi, Jiandong Gao, Yanru Li, Wen Xu, ...

Author(s): K. H. Hellton, M. Tveten, M. Stakkeland, S. Engebretsen, O. Haug, ...

2019 ◽ Vol 68 (12) ◽ pp. 4756-4764 ◽ Author(s): Ke Huang, Xinqiao Zhang, Naghmeh Karimi

2016 ◽ Vol 40 (5) ◽ pp. 573-581 ◽ Author(s): Ann L Edwards, Michael R Dawson, Jacqueline S Hebert, Craig Sherstan, Richard S Sutton, ...

Background: Myoelectric prostheses currently used by amputees can be difficult to control. Machine learning, and in particular learned predictions about user intent, could help to reduce the time and cognitive load required by amputees while operating their prosthetic device. Objectives: The goal of this study was to compare two switching-based methods of controlling a myoelectric arm: non-adaptive (or conventional) control and adaptive control (involving real-time prediction learning). Study design: Case series study. Methods: We compared non-adaptive and adaptive control in two different experiments. In the first, one amputee and one non-amputee subject controlled a robotic arm to perform a simple task; in the second, three able-bodied subjects controlled a robotic arm to perform a more complex task. For both tasks, we calculated the mean time and total number of switches between robotic arm functions over three trials. Results: Adaptive control significantly decreased the number of switches and the total switching time for both tasks compared with the conventional control method. Conclusion: Real-time prediction learning was successfully used to improve the control interface of a myoelectric robotic arm during uninterrupted use by an amputee subject and able-bodied subjects. Clinical relevance: Adaptive control using real-time prediction learning has the potential to help decrease both the time and the cognitive load required by amputees in real-world functional situations when using myoelectric prostheses.
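The core of adaptive switching is that the controller continually learns to predict which arm function the user will want next and reorders the switch list accordingly. A toy sketch of that idea, assuming a simple incremental update rule; the contexts, function names, and learning rate below are illustrative, not the paper's exact algorithm:

```python
# Adaptive switching sketch: maintain a running prediction of how likely
# each arm function is to be selected in the current context, and present
# functions in order of predicted use so fewer switches are needed.
from collections import defaultdict

ALPHA = 0.3                    # learning rate (assumed value)
pred = defaultdict(float)      # pred[(context, function)] -> estimated use

def update(context, chosen, functions):
    """Incremental update: nudge the chosen function toward 1, others toward 0."""
    for f in functions:
        target = 1.0 if f == chosen else 0.0
        pred[(context, f)] += ALPHA * (target - pred[(context, f)])

def switch_order(context, functions):
    """Adaptive control: order the switch list by predicted use."""
    return sorted(functions, key=lambda f: -pred[(context, f)])

functions = ["hand open/close", "wrist rotate", "elbow flex"]
# The user repeatedly selects wrist rotation when near an object...
for _ in range(10):
    update("near object", "wrist rotate", functions)
# ...so the adaptive controller now offers it first in that context.
print(switch_order("near object", functions)[0])  # prints "wrist rotate"
```

With a fixed (non-adaptive) ordering, the user would have to cycle past unwanted functions every time; reordering by learned predictions is what reduces both switch count and switching time in the study.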


2020 ◽ Author(s): Yuanyuan Peng, Xinjian Chen, Yibiao Rong, Chi Pui Pang, Xinjian Chen, ...

BACKGROUND: Advance prediction of the daily incidence of COVID-19 can aid policy making on the prevention of disease spread, which can profoundly affect people's livelihoods. In previous studies, predictions were investigated for single or several countries and territories. OBJECTIVE: We aimed to develop models that can be applied for real-time prediction of COVID-19 activity in all individual countries and territories worldwide. METHODS: Data on previous daily incidence and infoveillance data (search volume data via Google Trends) from 215 individual countries and territories were collected. A random forest regression algorithm was used to train models to predict the daily new confirmed cases 7 days ahead. Several methods were used to optimize the models, including clustering the countries and territories, selecting features according to their importance scores, performing multiple-step forecasting, and upgrading the models at regular intervals. The performance of the models was assessed using the mean absolute error (MAE), root mean square error (RMSE), Pearson correlation coefficient, and Spearman correlation coefficient. RESULTS: Our models can accurately predict the daily new confirmed cases of COVID-19 in most countries and territories. Of the 215 countries and territories under study, 198 (92.1%) had MAEs <10 and 187 (87.0%) had Pearson correlation coefficients >0.8. Across the 215 countries and territories, the mean MAE was 5.42 (range 0.26-15.32), the mean RMSE was 9.27 (range 1.81-24.40), the mean Pearson correlation coefficient was 0.89 (range 0.08-0.99), and the mean Spearman correlation coefficient was 0.84 (range 0.2-1.00). CONCLUSIONS: By integrating previous incidence and Google Trends data, our machine learning algorithm was able to accurately predict the incidence of COVID-19 in most individual countries and territories 7 days ahead.
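The forecasting setup described (lagged daily incidence feeding a random forest that predicts 7 days ahead, scored with MAE and RMSE) can be sketched as below. The incidence series is synthetic, and the lag window is an assumption; the real models also took Google Trends search volumes as features.

```python
# Random forest regression for 7-day-ahead forecasting from lagged counts.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(0)
series = rng.poisson(lam=50, size=300).astype(float)  # toy daily new-case counts

LAGS, HORIZON = 14, 7  # 14-day history window (assumed), 7-day-ahead target
X = np.array([series[t - LAGS:t] for t in range(LAGS, len(series) - HORIZON)])
y = np.array([series[t + HORIZON] for t in range(LAGS, len(series) - HORIZON)])

split = int(0.8 * len(X))  # time-ordered split: never train on the future
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:split], y[:split])

pred = model.predict(X[split:])
mae = mean_absolute_error(y[split:], pred)
rmse = mean_squared_error(y[split:], pred) ** 0.5
print(f"MAE={mae:.2f}  RMSE={rmse:.2f}")
```

The time-ordered split matters for this kind of evaluation: shuffling before splitting would leak future incidence into training and overstate accuracy, which is also why the paper retrains ("upgrades") its models at regular intervals as new days arrive.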

