Transformer machine learning language model for auto-alignment of long-term and short-term plans in construction

2021 ◽  
Vol 132 ◽  
pp. 103929
Author(s):  
Fouad Amer ◽  
Yoonhwa Jung ◽  
Mani Golparvar-Fard
2021 ◽  
Author(s):  
Yongmin Cho ◽  
Rachael A Jonas-Closs ◽  
Lev Y Yampolsky ◽  
Marc W Kirschner ◽  
Leonid Peshkin

We present a novel platform for testing the effect of interventions on the life- and health-span of a short-lived, semi-transparent freshwater organism with complex behaviour and physiology that is sensitive to drugs: the planktonic crustacean Daphnia magna. Within this platform, dozens of complex behavioural features of both routine motion and response to stimuli are continuously and accurately quantified for large homogeneous cohorts via an automated phenotyping pipeline. We build predictive machine learning models calibrated on chronological age and extrapolate them to phenotypic age. We further apply the model to estimate phenotypic age under pharmacological perturbation. Our platform provides a scalable framework for drug screening and characterization in both life-long and instant assays, as illustrated by a long-term dose-response profile of metformin and short-term assays of well-studied substances such as caffeine and alcohol.
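The phenotypic-age idea described above maps onto a standard supervised-regression workflow: calibrate a model on behavioural features against chronological age in untreated animals, then apply it to a perturbed cohort and read the prediction as phenotypic age. The sketch below is only an illustration of that workflow under assumed data shapes; the feature values, cohort sizes, and choice of GradientBoostingRegressor are placeholders, not the authors' pipeline.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical behavioural feature matrix: rows = individual Daphnia,
# columns = motion / stimulus-response features from the phenotyping pipeline.
rng = np.random.default_rng(0)
X_control = rng.normal(size=(200, 30))        # untreated cohort features
age_control = rng.uniform(5, 60, size=200)    # chronological age in days

# Calibrate a regressor on the chronological age of the untreated cohort.
model = GradientBoostingRegressor().fit(X_control, age_control)

# Apply the calibrated model to a drug-treated cohort: the prediction is read
# as "phenotypic age", and its shift relative to chronological age indicates
# whether the intervention slows or accelerates ageing.
X_treated = rng.normal(size=(50, 30))
phenotypic_age = model.predict(X_treated)
print(phenotypic_age.mean())
```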


2021 ◽  
Author(s):  
Rahel Vortmeyer-Kley ◽  
Pascal Nieters ◽  
Gordon Pipa

Ecological systems can typically exhibit various states, ranging from extinction to coexistence of different species in oscillatory states. The switch from one state to another is called a bifurcation. All these behaviours of a specific system are hidden in a set of describing differential equations (DE) depending on different parametrisations. Modelling such a system as a DE requires full knowledge of all possible interactions of the system components. In practice, modellers can end up with terms in the DE that do not fully describe the interactions, or, in the worst case, with missing terms.

The framework of universal differential equations (UDE) for scientific machine learning (SciML) [1] makes it possible to reconstruct the incomplete or missing term from a partial specification of the DE and a short-term time series of the system, and to make long-term predictions of the system's behaviour. However, the approach in [1] has difficulty reconstructing the incomplete or missing term in systems with bifurcations. We developed a trajectory-based loss metric for UDE and SciML to tackle this problem and tested it successfully on a system mimicking algal blooms in the ocean.

[1] Rackauckas, Christopher, et al. "Universal differential equations for scientific machine learning." arXiv preprint arXiv:2001.04385 (2020).
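A trajectory-based loss can be sketched as follows: a neural network stands in for the missing interaction term inside otherwise known dynamics, the system is integrated forward, and the loss is computed over the whole simulated trajectory against the observed time series rather than pointwise on derivatives. The dynamics, parameters, and the PyTorch/Euler setup below are assumptions for illustration only; the UDE framework of [1] itself is built on the Julia SciML stack.

```python
import torch
import torch.nn as nn

# Neural network standing in for the unknown / missing interaction term.
missing_term = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 1))

def simulate(u0, v0, n_steps, dt=0.05):
    """Forward-Euler integration of a predator-prey-like UDE:
    du/dt = a*u - NN(u, v),  dv/dt = -c*v + NN(u, v)."""
    a, c = 1.3, 1.0                      # the "known" part of the model (assumed)
    u, v = u0, v0
    states = []
    for _ in range(n_steps):
        interaction = missing_term(torch.stack([u, v])).squeeze()
        u = u + dt * (a * u - interaction)
        v = v + dt * (-c * v + interaction)
        states.append(torch.stack([u, v]))
    return torch.stack(states)

# Short observed time series of the two state variables (synthetic placeholder here).
observed = torch.rand(100, 2)

optimizer = torch.optim.Adam(missing_term.parameters(), lr=1e-2)
for epoch in range(200):
    optimizer.zero_grad()
    pred = simulate(observed[0, 0], observed[0, 1], n_steps=len(observed) - 1)
    loss = ((pred - observed[1:]) ** 2).mean()   # loss taken over the whole trajectory
    loss.backward()
    optimizer.step()
```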


Author(s):  
Prashanth Gurunath Shivakumar ◽  
Haoqi Li ◽  
Kevin Knight ◽  
Panayiotis Georgiou

Automatic speech recognition (ASR) systems often make unrecoverable errors due to subsystem pruning (acoustic, language and pronunciation models); for example, pruning words on acoustic evidence using short-term context, prior to rescoring with long-term linguistic context. In this work, we model ASR as a phrase-based noisy transformation channel and propose an error correction system that can learn from the aggregate errors of all the independent modules constituting the ASR and attempt to invert them. The proposed system can exploit long-term context using a neural network language model and can better choose between existing ASR output possibilities as well as re-introduce previously pruned or unseen (out-of-vocabulary) phrases. It provides corrections under poorly performing ASR conditions without degrading accurate transcriptions; such corrections are larger for out-of-domain and mismatched-data ASR. Our system consistently provides improvements over the baseline ASR, even when the baseline is further optimized through Recurrent Neural Network (RNN) language model rescoring. This demonstrates that ASR improvements can be exploited independently and that our proposed system can potentially still provide benefits on highly optimized ASR. Finally, we present an extensive analysis of the type of errors corrected by our system.
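The "choosing between existing ASR output possibilities" part can be illustrated with a small rescoring sketch: candidate transcripts are scored by log-linearly combining the ASR posterior with an external language-model score, and the best-scoring hypothesis is kept. The ToyLM class, weights, and example hypotheses below are placeholders for illustration; they are not the authors' phrase-based noisy-channel system or its neural language model.

```python
import math
from collections import Counter

class ToyLM:
    """Stand-in for a long-context neural LM: a unigram model with add-one
    smoothing, exposing a log-probability score for a candidate transcript."""
    def __init__(self, corpus):
        tokens = " ".join(corpus).split()
        self.counts, self.total = Counter(tokens), len(tokens)

    def score(self, text):
        return sum(math.log((self.counts[t] + 1) / (self.total + len(self.counts)))
                   for t in text.split())

def rerank(candidates, lm, asr_weight=0.5):
    """Choose among ASR output possibilities by log-linearly combining the
    ASR log-posterior with an external language-model score."""
    return max(candidates,
               key=lambda c: asr_weight * c[1] + (1 - asr_weight) * lm.score(c[0]))[0]

# Usage: hypotheses from the recognizer paired with their log-posteriors.
lm = ToyLM(["the long term context helps pick the right phrase"])
n_best = [("the wrong term context", -1.2), ("the long term context", -1.5)]
print(rerank(n_best, lm))
```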


2020 ◽  
pp. 49-57
Author(s):  
IURI ANANIASHVILI ◽  
LEVAN GAPRINDASHVILI

In this article we present forecasts of the spread of the COVID-19 virus, obtained by econometric and machine learning methods. Furthermore, by employing modelling methods, we estimate the effectiveness of the preventive measures implemented by the government. Each of the models discussed in this article captures a different characteristic of the COVID-19 epidemic's trajectory: peak and end date, number of daily infections over different forecasting horizons, and total number of infection cases. Together these provide the interested reader with a fairly clear picture of the future threats posed by COVID-19. In terms of existing models and data, our research indicates that phenomenological models do well in forecasting the trend, duration and total infections of the COVID-19 epidemic, but make serious mistakes in forecasting the number of daily infections. Machine learning models deliver more accurate short-term forecasts of daily infections but, due to data limitations, struggle to make long-term forecasts. Compartmental models are the best choice for modelling the measures implemented by the government to prevent the spread of COVID-19 and for determining the optimal level of restrictions. These models show that until herd immunity is achieved (i.e. without any epidemiological or government-implemented measures), the approximate number of people infected with COVID-19 would be 3 million, but due to preventive measures the expected total number of infections has been reduced to several thousand (1555-3189) people. This unequivocally indicates the effectiveness of the preventive measures.
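As an illustration of the compartmental approach, a minimal SIR model can represent government restrictions as a factor scaling the transmission rate and compare cumulative infections with and without measures. The parameters and population size below are illustrative assumptions, not the values fitted in the article.

```python
import numpy as np

def sir(pop, beta, gamma, days, restriction=1.0):
    """Minimal discrete-time SIR compartmental model. `restriction` scales
    the transmission rate to mimic preventive measures (1.0 = no measures)."""
    s, i, r = pop - 1.0, 1.0, 0.0
    totals = []
    for _ in range(days):
        new_inf = restriction * beta * s * i / pop
        new_rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        totals.append(pop - s)            # cumulative infections so far
    return np.array(totals)

# Illustrative parameters only (beta/gamma give R0 = 3 without measures).
no_measures = sir(pop=3.7e6, beta=0.30, gamma=0.10, days=365)
with_measures = sir(pop=3.7e6, beta=0.30, gamma=0.10, days=365, restriction=0.30)
print(no_measures[-1], with_measures[-1])   # restrictions cut cumulative infections sharply
```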


10.29007/mbb7 ◽  
2020 ◽  
Author(s):  
Maher Selim ◽  
Ryan Zhou ◽  
Wenying Feng ◽  
Omar Alam

Many statistical and machine learning models for prediction use historical data as input and produce a single value or a small number of output values. To forecast over many timesteps, it is necessary to run the model recursively. This leads to a compounding of errors, which has adverse effects on accuracy over long forecast periods. In this paper, we show this can be mitigated by adding features that have an "anchoring" effect on recursive forecasts, limiting the amount of compounded error in the long term. This is studied experimentally on a benchmark energy dataset using two machine learning models, LSTM and XGBoost. Prediction accuracy over differing forecast lengths is compared using the mean absolute percentage error (MAPE). We find that for the LSTM model, the accuracy of short-term energy forecasting is higher when a past energy consumption value is used as a feature than when it is not; the opposite holds for long-term forecasting. For the XGBoost model, the accuracy of both short- and long-term energy forecasting is higher when past values are not used as a feature.
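The recursive (iterated) forecasting loop at the heart of this comparison can be sketched as follows: each prediction is appended to the history and fed back as a lag feature for the next step, which is where errors compound over long horizons, and whether such lag features are used at all is exactly the design choice studied above. The toy data, lag count, and GradientBoostingRegressor stand-in below are illustrative assumptions rather than the paper's experimental setup.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor  # stand-in for XGBoost / LSTM

def recursive_forecast(model, history, exog, horizon, lags=24):
    """Iterated multi-step forecasting: each prediction is appended to the
    history and re-used as a lag feature for the next step."""
    history = list(history)
    preds = []
    for t in range(horizon):
        features = np.concatenate([history[-lags:], exog[t]])
        y_hat = model.predict(features.reshape(1, -1))[0]
        preds.append(y_hat)
        history.append(y_hat)            # predictions feed back into the lag features
    return np.array(preds)

# Toy hourly-consumption series: lagged values plus one exogenous feature (hour of day).
rng = np.random.default_rng(1)
series = np.sin(np.arange(500) * 2 * np.pi / 24) + 0.1 * rng.normal(size=500)
lags = 24
X = np.array([np.concatenate([series[i - lags:i], [i % 24]]) for i in range(lags, 500)])
y = series[lags:500]
model = GradientBoostingRegressor().fit(X, y)

future_exog = [[(500 + t) % 24] for t in range(48)]
print(recursive_forecast(model, series[-lags:], future_exog, horizon=48))
```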

