control problem
Recently Published Documents
2022 · Vol 2022 · pp. 1-14
Linhong Li, Kaifan Huang, Xiaofan Yang

With the prevalence of online social networks, the potential threat posed by misinformation has greatly increased. It is therefore important to study how to control the spread of misinformation effectively. Publishing the truth to the public is the most effective approach to containing misinformation, and knowledge popularization and expert education are two complementary ways to achieve this. It has been shown that combining these two ways to speed up the release of the truth dramatically reduces the impact of misinformation spread. However, how to allocate resources between the two ways so as to achieve a better result at a lower cost remains an open challenge. This paper provides theoretical guidance for designing an effective collaborative resource-allocation strategy. First, a novel individual-level misinformation spread model is proposed; it characterizes the collaborative effect of the two truth-publishing ways on the containment of misinformation spread. On this basis, the expected cost of an arbitrary collaborative strategy is evaluated. Second, an optimal control problem is formulated to find effective strategies, with the expected cost as the performance index and the misinformation spread model as the constraint. Third, to solve the optimal control problem, an optimality system specifying the necessary conditions of an optimal solution is derived; solving this system yields a candidate optimal solution. Finally, the effectiveness of the candidate optimal solution is verified by a series of numerical experiments.
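The pipeline this abstract describes (state equation as constraint, optimality system from the necessary conditions, candidate solution by solving that system) is commonly implemented numerically with a forward-backward sweep. The sketch below applies that generic scheme to a hypothetical scalar spread model: the logistic dynamics, cost weights, and control bound are all assumed for illustration and are not the paper's individual-level model.

```python
import numpy as np

# Hypothetical toy model (NOT the paper's): logistic spread of misinformation
# x(t), damped by a bounded truth-publishing effort u(t).
beta, c, umax = 0.5, 1.0, 0.8   # assumed spread rate, control weight, effort bound
T, N = 10.0, 200
dt, x0 = T / N, 0.1

def forward_backward_sweep(iters=100):
    u = np.zeros(N + 1)
    x = np.zeros(N + 1)
    lam = np.zeros(N + 1)
    for _ in range(iters):
        # forward pass on the state:  x' = beta*x*(1-x) - u*x
        x[0] = x0
        for k in range(N):
            x[k + 1] = x[k] + dt * (beta * x[k] * (1 - x[k]) - u[k] * x[k])
        # backward pass on the adjoint of H = x + c*u^2 + lam*(beta*x*(1-x) - u*x)
        lam[N] = 0.0
        for k in range(N, 0, -1):
            dlam = -(1.0 + lam[k] * (beta * (1 - 2 * x[k]) - u[k]))
            lam[k - 1] = lam[k] - dt * dlam
        # stationarity: dH/du = 2*c*u - lam*x = 0, projected onto [0, umax]
        u_new = np.clip(lam * x / (2 * c), 0.0, umax)
        u = 0.5 * u + 0.5 * u_new   # relaxation keeps the sweep stable
    return x, u, lam
```

The relaxation step (averaging the old and updated controls) is a standard safeguard: updating `u` all at once can make the sweep oscillate between forward and backward passes.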

2022 · Vol 2022 · pp. 1-9
Jun Zhao, Qingliang Zeng

Although the robust control problem has been solved in an offline manner, solving it online is difficult, especially for uncertain systems. In this paper, a novel approach based on online data-driven learning is proposed to address the robust control problem for uncertain systems. To this end, the robust control problem for the uncertain system is first transformed into an optimal control problem for the nominal system by selecting an appropriate value function that accounts for the uncertainties, state regulation, and control effort. Then, a data-driven learning framework is constructed in which Kronecker products and vectorization operations are used to reformulate the derived algebraic Riccati equation (ARE). To obtain the solution of this ARE, an adaptive learning law is designed that ensures convergence of the estimated solutions. Closed-loop stability and convergence are proved. Finally, simulations illustrate the effectiveness of the method.
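The vectorization step mentioned here rests on the identity vec(AXB) = (Bᵀ ⊗ A) vec(X), which turns a matrix Lyapunov equation into an ordinary linear system. The sketch below uses that identity inside a standard Kleinman-style policy iteration for an ARE. It is a generic illustration of the Kronecker/vectorization technique, not the paper's adaptive learning law, and the double-integrator example system is assumed.

```python
import numpy as np

def lyap_via_kron(A, Q):
    """Solve A^T P + P A + Q = 0 using vec(AXB) = (B^T kron A) vec(X)."""
    n = A.shape[0]
    I = np.eye(n)
    M = np.kron(I, A.T) + np.kron(A.T, I)        # acts on column-stacked vec(P)
    p = np.linalg.solve(M, -Q.flatten(order="F"))
    return p.reshape((n, n), order="F")

def kleinman_are(A, B, Q, R, K, iters=20):
    """Policy iteration for A^T P + P A - P B R^-1 B^T P + Q = 0.
    K must be stabilizing; each step reduces the ARE to a Lyapunov equation."""
    for _ in range(iters):
        Ak = A - B @ K
        P = lyap_via_kron(Ak, Q + K.T @ R @ K)
        K = np.linalg.solve(R, B.T @ P)
    return P, K

# assumed example: double integrator, whose ARE solution is known in closed form
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)
P, K = kleinman_are(A, B, Q, R, K=np.array([[1.0, 1.0]]))
```

For this example the exact solution is P = [[√3, 1], [1, √3]], which the iteration reproduces to machine precision; the Kronecker system is n²-by-n², so this direct approach is only practical for modest state dimensions.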

2022 · Vol 12 (1)
David Hardman, Thomas George Thuruthel, Fumiya Iida

The ability to remotely control a free-floating object through surface flows on a fluid medium could facilitate numerous applications. Current studies of this problem have been limited to unidirectional motion control because of the challenging nature of the task: analytical modelling of the object dynamics is difficult owing to the high dimensionality and mixing of the surface flows, while the control problem is hard owing to the nonlinear, slow dynamics of the fluid medium, underactuation, and chaotic regions. This study presents a methodology for manipulating free-floating objects using large-scale physical experimentation and recent advances in deep reinforcement learning. We demonstrate the methodology through open-loop control of a free-floating object in water using a robotic arm. The learned control policy is relatively quick to obtain, highly data efficient, and easily scalable to higher-dimensional parameter spaces and/or experimental scenarios. Our results show the potential of data-driven approaches for solving and analyzing highly complex nonlinear control problems.
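As a toy illustration of optimizing an open-loop action sequence against slow, nonlinear dynamics (the paper itself uses deep reinforcement learning on a physical water tank), the sketch below runs the cross-entropy method on an assumed one-dimensional damped surrogate. Every model detail here, the dynamics, target, and cost, is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout(actions, dt=0.1):
    """Assumed surrogate: overdamped point nudged toward a target at x = 1.
    Returns terminal error plus a small effort penalty."""
    x, effort = 0.0, 0.0
    for a in actions:
        x += dt * (a - 0.5 * x)          # stand-in for slow fluid dynamics
        effort += 0.01 * dt * a ** 2
    return (x - 1.0) ** 2 + effort

def cem(horizon=20, pop=64, n_elite=8, iters=30):
    """Cross-entropy method over an open-loop action sequence."""
    mu, sigma = np.zeros(horizon), np.ones(horizon)
    for _ in range(iters):
        samples = rng.normal(mu, sigma, size=(pop, horizon))
        scores = np.array([rollout(s) for s in samples])
        elite = samples[np.argsort(scores)[:n_elite]]   # keep lowest-cost plans
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-3
    return mu, rollout(mu)

plan, cost = cem()
```

Because the whole action sequence is optimized in advance and replayed without feedback, this is open-loop control in the same sense as the paper's demonstration; the 1e-3 floor on sigma keeps the sampling distribution from collapsing prematurely.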

Mathematics · 2022 · Vol 10 (2) · pp. 184
Andrey Borisov, Alexey Bosov, Gregory Miller

The paper presents an optimal control problem for the partially observable stochastic differential system driven by an external Markov jump process. The available controlled observations are indirect and corrupted by some Wiener noise. The goal is to optimize a linear function of the state (output) given a general quadratic criterion. The separation principle, verified for the system at hand, allows examination of the control problem apart from the filter optimization. The solution to the latter problem is provided by the Wonham filter. The solution to the former control problem is obtained by formulating an equivalent control problem with a linear drift/nonlinear diffusion stochastic process and with complete information. This problem, in turn, is immediately solved by the application of the dynamic programming method. The applicability of the obtained theoretical results is illustrated by a numerical example, where an optimal amplification/stabilization problem is solved for an unstable externally controlled step-wise mechanical actuator.
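The filtering half of this setup, estimating a Markov jump process from Wiener-corrupted observations, is handled by the Wonham filter. The sketch below is a minimal Euler discretization of the Wonham filter for an assumed two-state chain; the generator, observation drifts, and noise level are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed two-state example (illustrative only)
LAM = np.array([[-0.05, 0.05],      # generator of the hidden Markov chain
                [0.05, -0.05]])
h = np.array([0.0, 1.0])            # observation drift in each state
sigma, dt, steps = 0.2, 0.01, 2000

# simulate the hidden chain and the noisy observation increments dY
theta = np.zeros(steps, dtype=int)
dY = np.zeros(steps)
for k in range(steps):
    if k > 0:
        stay = np.exp(LAM[theta[k - 1], theta[k - 1]] * dt)  # P(no jump in dt)
        theta[k] = theta[k - 1] if rng.random() < stay else 1 - theta[k - 1]
    dY[k] = h[theta[k]] * dt + sigma * np.sqrt(dt) * rng.normal()

# Euler discretization of the Wonham filter for pi_i = P(theta_t = i | Y)
pi = np.array([0.5, 0.5])
hist = np.zeros((steps, 2))
for k in range(steps):
    hbar = pi @ h                   # filtered estimate of the observation drift
    pi = pi + dt * (LAM.T @ pi) + pi * (h - hbar) / sigma**2 * (dY[k] - hbar * dt)
    pi = np.clip(pi, 1e-8, None)    # guard against discretization overshoot
    pi = pi / pi.sum()
    hist[k] = pi

accuracy = hist[np.arange(steps), theta].mean()  # mean prob. on the true state
```

The clip-and-renormalize step compensates for the Euler scheme occasionally pushing the probabilities off the simplex; the filter output `pi` is exactly the conditional distribution that, under the separation principle described above, would feed the control law.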
