Computational Model for Behavior Shaping as an Adaptive Health Intervention Strategy

2017
Author(s):
Vincent Berardi

Background: Adaptive behavioral interventions that automatically adjust in real time to participants’ changing behavior, environmental contexts, and individual history are becoming more feasible as the use of real-time sensing technology expands. This development is expected to address shortcomings of traditional behavioral interventions, such as reliance on imprecise intervention procedures and limited, short-lived effects. Purpose: The adaptation strategies of just-in-time adaptive interventions (JITAIs) often lack a theoretical foundation, yet increasing the theoretical fidelity of a trial has been shown to increase its effectiveness. This research explores the use of shaping, a well-known process from behavioral theory for engendering or maintaining a target behavior, as a JITAI adaptation strategy. Methods: A computational model of behavior dynamics and operant conditioning was modified to incorporate behavior shaping by adding the ability to vary, over time, the range of behaviors that were reinforced when emitted. Digital experiments were performed with this updated model over a range of parameters in order to identify the behavior-shaping features that optimally generated target behavior. Results: Narrowing the range of reinforced behaviors continuously in time led to better outcomes than discrete narrowing of the reinforcement window. Rapid narrowing followed by more moderate decreases in window size was more effective in generating target behavior than the inverse scenario. Conclusions: The computational shaping model is an effective tool for investigating JITAI adaptation strategies. Model parameters must now be translated from the digital domain to real-world experiments so that these findings can be validated.
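The shaping mechanism can be illustrated with a toy simulation. This is a minimal sketch, not the paper's actual computational model of behavior dynamics; the drift and reinforcement rules and all parameter values below are invented for illustration. It only shows the core idea: a reinforcement window around a target behavior that narrows over time, either continuously or in a discrete step.

```python
import random

def run_shaping(window_schedule, steps=2000, target=1.0, seed=0):
    """Crude shaping loop: the agent's behavioral tendency (`anchor`)
    shifts toward any emitted behavior that is reinforced, and a behavior
    is reinforced only if it falls inside a window around the target
    whose half-width is window_schedule(progress), progress in [0, 1)."""
    rng = random.Random(seed)
    anchor, reinforced = 0.0, 0
    for t in range(steps):
        behavior = anchor + rng.gauss(0.0, 0.3)      # emitted behavior
        if abs(behavior - target) <= window_schedule(t / steps):
            anchor += 0.5 * (behavior - anchor)      # reinforcement shifts tendency
            reinforced += 1
    return anchor, reinforced

continuous = lambda f: 1.0 - 0.9 * f            # window shrinks smoothly 1.0 -> 0.1
discrete = lambda f: 1.0 if f < 0.5 else 0.1    # window drops abruptly at midpoint

anchor_c, n_c = run_shaping(continuous)
anchor_d, n_d = run_shaping(discrete)
```

Comparing how close `anchor_c` and `anchor_d` end up to the target, over many seeds and schedules, is the digital-experiment pattern the abstract describes.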

2018
Vol 8 (2)
pp. 183-194
Author(s):
Vincent Berardi
Ricardo Carretero-González
Neil E Klepeis
Sahar Ghanipoor Machiani
Arash Jahangiri
...

2021
Vol 73 (1)
Author(s):
Keitaro Ohno
Yusaku Ohta
Satoshi Kawamoto
Satoshi Abe
Ryota Hino
...

Abstract. Rapid estimation of the coseismic fault model for medium-to-large earthquakes is key for disaster response. To estimate the coseismic fault model for large earthquakes, the Geospatial Information Authority of Japan and Tohoku University have jointly developed a real-time GEONET analysis system for rapid deformation monitoring (REGARD). REGARD can estimate a single rectangular fault model and the slip distribution along the assumed plate interface. The single rectangular fault model is useful as a first-order approximation of a medium-to-large earthquake. However, it is difficult to obtain accurate estimates of its parameters because of the strong effect of the initial values. To solve this problem, this study proposes a new method to estimate the coseismic fault model and its uncertainties in real time, based on Bayesian inversion with the Markov chain Monte Carlo (MCMC) method. The MCMC approach is computationally expensive, and its hyperparameters must normally be defined in advance by trial and error. Here, sampling efficiency was improved using a parallel tempering method, and an automatic hyperparameter-definition method was developed for real-time use. The calculation time was within 30 s for 1 × 10⁶ samples on a typical single Linux server, which permits real-time analysis similar to REGARD. The reliability of the developed method was evaluated using data from recent earthquakes (the 2016 Kumamoto and 2019 Yamagata-Oki earthquakes). Simulations of earthquakes in the Sea of Japan were also conducted exhaustively. The results showed an advantage over the maximum likelihood approach with a priori information, which suffers from initial-value dependence in nonlinear problems. Regarding application to data with a small signal-to-noise ratio, the results suggest the possibility of using several conjugate fault models. There is a tradeoff between fault area and slip amount, especially for offshore earthquakes, which means that quantifying the uncertainty enables the reliability of the fault model estimation to be evaluated in real time.
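The parallel tempering idea can be sketched in miniature: several Metropolis chains run at different temperatures, and adjacent chains occasionally swap states so the cold (target-temperature) chain can escape local modes. The sketch below uses a toy bimodal 1-D target, not the study's fault-model posterior; temperatures, proposal scale, and iteration count are all illustrative.

```python
import math
import random

def log_target(x):
    """Toy bimodal posterior: mixture of Gaussians centered at -2 and +2."""
    return math.log(math.exp(-0.5 * ((x + 2) / 0.5) ** 2)
                    + math.exp(-0.5 * ((x - 2) / 0.5) ** 2))

def parallel_tempering(n_iter=5000, temps=(1.0, 2.0, 5.0), seed=1):
    rng = random.Random(seed)
    x = [-2.0 for _ in temps]              # one chain per temperature
    cold_samples = []
    for _ in range(n_iter):
        # Metropolis update within each tempered chain (target^(1/T))
        for i, T in enumerate(temps):
            prop = x[i] + rng.gauss(0.0, 0.8)
            if math.log(rng.random()) < (log_target(prop) - log_target(x[i])) / T:
                x[i] = prop
        # propose swapping the states of a random adjacent temperature pair
        i = rng.randrange(len(temps) - 1)
        a = (1 / temps[i] - 1 / temps[i + 1]) * (log_target(x[i + 1]) - log_target(x[i]))
        if math.log(rng.random()) < a:
            x[i], x[i + 1] = x[i + 1], x[i]
        cold_samples.append(x[0])          # keep only the T = 1 chain
    return cold_samples

samples = parallel_tempering()
```

Without the swap moves, the cold chain started at -2 would rarely cross the low-probability barrier at 0; with them, both modes are visited, which is what makes the approach attractive for multimodal fault-model posteriors.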


2021
Vol 13 (12)
pp. 2405
Author(s):
Fengyang Long
Chengfa Gao
Yuxiang Yan
Jinling Wang

Precise modeling of the weighted mean temperature (Tm) is critical for real-time conversion from zenith wet delay (ZWD) to precipitable water vapor (PWV) in Global Navigation Satellite System (GNSS) meteorology applications. Empirical Tm models developed with neural network techniques have been shown to perform better on the global scale; they also have fewer model parameters and are thus easy to operate. This paper aims to deepen the research on Tm modeling with neural networks, expand the application scope of Tm models, and provide global users with more options for the real-time acquisition of Tm. An enhanced neural network Tm model (ENNTm) was developed with globally distributed radiosonde data. Compared with other empirical models, the ENNTm has several advanced features in both design and performance. First, the data used for modeling cover the whole troposphere rather than just the region near the Earth’s surface. Second, ensemble learning was employed to weaken the impact of sample disturbance on model performance, and elaborate data preprocessing, including up-sampling and down-sampling, was adopted to achieve better performance on the global scale. Furthermore, the ENNTm meets the requirements of three different application conditions by providing three sets of model parameters: Tm estimation without measured meteorological elements, Tm estimation with only measured temperature, and Tm estimation with both measured temperature and water vapor pressure. Validation against globally distributed radiosonde data shows that the ENNTm outperforms competing models from different perspectives under the same application conditions; the proposed model expands the application scope of Tm estimation and provides global users with more choices for real-time GNSS-PWV retrieval.
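The ensemble-learning ingredient can be sketched in miniature. The code below is not the ENNTm: it merely bags simple linear regressors on synthetic (Ts, Tm) pairs generated from the widely used Bevis et al. surface-temperature relation Tm ≈ 70.2 + 0.72·Ts, which stands in here for real radiosonde data; the noise level, sample counts, and ensemble size are all invented for illustration.

```python
import random

def fit_line(pts):
    """Ordinary least squares for y = a + b*x on a list of (x, y) pairs."""
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    b = (sum((x - mx) * (y - my) for x, y in pts)
         / sum((x - mx) ** 2 for x, _ in pts))
    return my - b * mx, b

rng = random.Random(42)
# synthetic "radiosonde" data: Bevis relation Tm = 70.2 + 0.72*Ts plus scatter
data = [(ts, 70.2 + 0.72 * ts + rng.gauss(0.0, 2.0))
        for ts in (250.0 + 60.0 * rng.random() for _ in range(300))]

# bagging: fit each ensemble member on a bootstrap resample of the data,
# then average the members' predictions
members = [fit_line([rng.choice(data) for _ in data]) for _ in range(25)]

def predict_tm(ts):
    return sum(a + b * ts for a, b in members) / len(members)
```

Averaging over bootstrap-trained members is the same "weaken the impact of sample disturbance" mechanism the abstract attributes to ensemble learning, just applied to a far simpler base model.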


2021
Vol 11 (15)
pp. 6701
Author(s):
Yuta Sueki
Yoshiyuki Noda

This paper discusses a real-time flow-rate estimation method for a tilting-ladle-type automatic pouring machine used in the casting industry. In most pouring machines, molten metal is poured into a mold by tilting the ladle. Precise pouring is required to improve productivity and ensure a safe pouring process. To achieve precise pouring, it is important to control the flow rate of the liquid outflow from the ladle. However, due to the high temperature of molten metal, directly measuring the flow rate for flow-rate feedback control is difficult. To solve this problem, specific flow-rate estimation methods have been developed. In a previous study by the present authors, a simplified flow-rate estimation method was proposed in which Kalman filters were decentralized over the motor systems and the pouring process, for implementation in the industrial controller of an automatic pouring machine using a ladle with a complicated shape. The effectiveness of this flow-rate estimation was verified in experiments under ideal conditions. In the present study, the appropriateness of real-time flow-rate estimation by decentralized Kalman filters is verified by comparing it with two existing real-time flow-rate estimation approaches: time differentiation of the weight of the outflowing liquid measured by a load cell, and time differentiation of the liquid volume in the ladle measured by a visible-light camera. In particular, we examined the estimation errors of the candidate methods in experiments with uncertain model parameters. The flow-rate estimation methods were applied to a laboratory-type automatic pouring machine to verify their performance.
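The load-cell baseline (differentiating a noisy weight signal) is exactly the kind of task a Kalman filter handles gracefully. Below is a minimal single-filter sketch, not the authors' decentralized design: a two-state model (accumulated weight and flow rate) with hand-picked noise variances, where the flow rate is recovered as the derivative state rather than by direct numerical differentiation.

```python
import random

def estimate_flow_rate(weights, dt, r_var=0.05 ** 2, q_rate=1e-5):
    """Two-state Kalman filter (state = [weight w, flow rate q]) driven by
    noisy load-cell readings; F = [[1, dt], [0, 1]], H = [1, 0]."""
    w, q = weights[0], 0.0
    P = [[1.0, 0.0], [0.0, 1.0]]
    rates = []
    for z in weights[1:]:
        # predict: w += q*dt, q unchanged; P = F P F^T + Q
        w += q * dt
        P00 = P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1]
        P01 = P[0][1] + dt * P[1][1]
        P10 = P[1][0] + dt * P[1][1]
        P11 = P[1][1] + q_rate
        # update with the weight measurement z
        S = P00 + r_var
        k0, k1 = P00 / S, P10 / S
        innov = z - w
        w += k0 * innov
        q += k1 * innov
        P = [[(1 - k0) * P00, (1 - k0) * P01],
             [P10 - k1 * P00, P11 - k1 * P01]]
        rates.append(q)
    return rates

# usage on a synthetic constant-flow pour (0.5 kg/s) with noisy weights
rng = random.Random(7)
dt, true_rate = 0.02, 0.5
meas = [true_rate * k * dt + rng.gauss(0.0, 0.05) for k in range(501)]
rates = estimate_flow_rate(meas, dt)
```

The `q_rate` variance sets the usual smoothness/lag trade-off: larger values track flow-rate changes faster but pass more measurement noise through to the estimate.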


Author(s):  
Xiangxue Zhao ◽  
Shapour Azarm ◽  
Balakumar Balachandran

Online prediction of dynamical system behavior based on a combination of simulation data and sensor measurement data has numerous applications. Examples include predicting safe flight configurations, forecasting storms and wildfire spread, and estimating railway track and pipeline health conditions. In such applications, high-fidelity simulations may be used to accurately predict a system’s dynamical behavior offline (“non-real time”). However, due to their computational expense, these simulations have limited usage for online (“real-time”) prediction of a system’s behavior. To remedy this, one possible approach is to allocate a significant portion of the computational effort to obtaining data through offline simulations. The offline data can then be combined with online sensor measurements to estimate the system’s behavior online with accuracy comparable to the offline high-fidelity simulation. The main contribution of this paper is the construction of a fast data-driven spatiotemporal prediction framework that can be used to estimate general parametric dynamical system behavior. This is achieved through three steps. First, high-order singular value decomposition is applied to map high-dimensional offline simulation datasets into a subspace. Second, Gaussian processes are constructed to approximate model parameters in the subspace. Finally, reduced-order particle filtering is used to assimilate sparsely located sensor data to further improve the prediction. The effectiveness of the proposed approach is demonstrated through a case study in which aeroelastic response data obtained for an aircraft through simulations is integrated with measurement data from a few sparsely located sensors. Through this case study, the authors show that along with dynamic enhancement of the state estimates, one can also realize a reduction in the uncertainty of the estimates.
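The first step of the pipeline, the high-order SVD, can be sketched as follows. The `hosvd` and `reconstruct` helpers are illustrative (they are not the paper's implementation), and the rank-(2, 2, 2) test tensor is synthetic: the left singular vectors of each mode-unfolding give the factor matrices, and contracting the tensor with their transposes gives the core.

```python
import numpy as np

def hosvd(tensor, ranks):
    """Truncated higher-order SVD: for each mode, take the leading left
    singular vectors of that mode's unfolding, then project the tensor
    onto the resulting factor matrices to obtain the core."""
    factors = []
    for mode, r in enumerate(ranks):
        unfolding = np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        factors.append(U[:, :r])
    core = tensor
    for mode, U in enumerate(factors):
        # contract mode with U^T, then put the reduced axis back in place
        core = np.moveaxis(np.tensordot(core, U, axes=(mode, 0)), -1, mode)
    return core, factors

def reconstruct(core, factors):
    """Multiply the core by each factor matrix along its mode."""
    out = core
    for mode, U in enumerate(factors):
        out = np.moveaxis(np.tensordot(out, U, axes=(mode, 1)), -1, mode)
    return out

# build a synthetic tensor with known multilinear rank (2, 2, 2)
rng = np.random.default_rng(0)
G = rng.standard_normal((2, 2, 2))
U_true = [np.linalg.qr(rng.standard_normal((n, 2)))[0] for n in (5, 6, 4)]
X = reconstruct(G, U_true)

core, factors = hosvd(X, (2, 2, 2))
```

In the paper's setting, the "tensor" is the stack of offline simulation snapshots, and the core coefficients are the low-dimensional quantities that the Gaussian processes and the reduced-order particle filter then operate on.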


2014
Vol 2014
pp. 1-15
Author(s):
Gergely Takács
Tomáš Polóni
Boris Rohal’-Ilkiv

This paper presents an adaptive-predictive vibration control system using extended Kalman filtering for the joint estimation of system states and model parameters. A fixed-free cantilever beam equipped with piezoceramic actuators serves as a test platform to validate the proposed control strategy. Deflection readings taken at the end of the beam have been used to reconstruct the position and velocity information for a second-order state-space model. In addition to the states, the dynamic system has been augmented by the unknown model parameters: stiffness, damping constant, and a voltage/force conversion constant, characterizing the actuating effect of the piezoceramic transducers. The states and parameters of this augmented system have been estimated in real time, using the hybrid extended Kalman filter. The estimated model parameters have been applied to define the continuous state-space model of the vibrating system, which in turn is discretized for the predictive controller. The model predictive control algorithm generates state predictions and dual-mode quadratic cost prediction matrices based on the updated discrete state-space models. The resulting cost function is then minimized using quadratic programming to find the sequence of optimal but constrained control inputs. The proposed active vibration control system is implemented and evaluated experimentally to investigate the viability of the control method.
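The joint state/parameter estimation idea can be sketched for the simplest related case: a unit-mass spring-damper whose stiffness is unknown and is appended to the state vector, so an extended Kalman filter estimates it alongside position and velocity from noisy position measurements. All dynamics and noise values below are invented for illustration; the paper's augmented model additionally estimates the damping constant and a voltage/force conversion constant.

```python
import numpy as np

def estimate_stiffness(k_true=4.0, c=0.1, dt=0.01, steps=2000, seed=3):
    """EKF on the augmented state s = [position, velocity, stiffness]
    of a unit-mass spring-damper, driven by noisy position readings."""
    rng = np.random.default_rng(seed)
    # simulate the "true" system (Euler discretization)
    x, v, meas = 1.0, 0.0, []
    for _ in range(steps):
        x, v = x + dt * v, v + dt * (-k_true * x - c * v)
        meas.append(x + rng.normal(0.0, 0.01))
    # EKF with the stiffness deliberately initialized far from the truth
    s = np.array([1.0, 0.0, 2.0])          # [x, v, k], k wrong on purpose
    P = np.diag([0.01, 0.01, 4.0])         # large initial uncertainty on k
    Q = np.diag([1e-9, 1e-9, 1e-9])
    R = 0.01 ** 2
    H = np.array([[1.0, 0.0, 0.0]])        # we measure position only
    for z in meas:
        xp, vp, kp = s
        # nonlinear predict and its Jacobian (k enters multiplicatively)
        s = np.array([xp + dt * vp, vp + dt * (-kp * xp - c * vp), kp])
        F = np.array([[1.0, dt, 0.0],
                      [-dt * kp, 1.0 - dt * c, -dt * xp],
                      [0.0, 0.0, 1.0]])
        P = F @ P @ F.T + Q
        # measurement update
        K = P @ H.T / (H @ P @ H.T + R)
        s = s + (K * (z - s[0])).ravel()
        P = (np.eye(3) - K @ H) @ P
    return s[2]

k_hat = estimate_stiffness()
```

As in the paper, the converged parameter estimates could then be used to refresh the state-space model handed to a predictive controller at each sampling instant.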


2012
Vol 12 (12)
pp. 3719-3732
Author(s):
L. Mediero
L. Garrote
A. Chavez-Jimenez

Abstract. Opportunities offered by high-performance computing hold significant promise for enhancing the performance of real-time flood forecasting systems. In this paper, a real-time framework for probabilistic flood forecasting through data assimilation is presented. The distributed rainfall-runoff real-time interactive basin simulator (RIBS) model is selected to simulate the hydrological processes in the basin. Although the RIBS model is deterministic, it is run in a probabilistic way using the results of a calibration, developed in previous work by the authors, that identifies the probability distribution functions that best characterise the most relevant model parameters. Adaptive techniques improve flood forecasts because the model can be adapted to observations in real time as new information becomes available. The new adaptive forecast model, which uses genetic programming as a data assimilation technique, is compared with the previously developed flood forecast model based on the calibration results. Both models are probabilistic in that they generate an ensemble of hydrographs, taking into account the different uncertainties inherent in any forecast process. The Manzanares River basin was selected as a case study; the process is computationally intensive, as it requires simulating many replicas of the ensemble in real time.
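The ensemble-of-hydrographs idea can be sketched with a toy model. The code below is not RIBS: it runs a single linear reservoir (an assumed stand-in for a rainfall-runoff model) once per parameter value drawn from a hypothetical calibrated distribution, then summarizes the ensemble with its columnwise median hydrograph.

```python
import random

def linear_reservoir(rain, k):
    """Toy rainfall-runoff model: storage update S += rain - S/k,
    with discharge Q = S/k recorded at each step."""
    s, q = 0.0, []
    for r in rain:
        s += r - s / k
        q.append(s / k)
    return q

rng = random.Random(11)
rain = [5.0] * 10 + [0.0] * 40   # a 10-step storm followed by recession

# probabilistic run: sample the recession parameter from its (assumed)
# calibrated distribution and simulate one hydrograph per sample
ensemble = [linear_reservoir(rain, rng.uniform(4.0, 10.0)) for _ in range(100)]

# one possible forecast summary: the ensemble-median hydrograph
median_q = [sorted(col)[50] for col in zip(*ensemble)]
```

Because each ensemble member is an independent model run, the replicas parallelize trivially, which is exactly where the high-performance computing mentioned in the abstract pays off.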

