initial conditions: Recently Published Documents

TOTAL DOCUMENTS: 9975 (FIVE YEARS: 3907)
H-INDEX: 127 (FIVE YEARS: 26)

2022 ◽  
Vol 34 (2) ◽  
pp. 1-18
Author(s):  
Lele Qin ◽  
Guojuan Zhang ◽  
Li You

Video command and dispatch systems have become essential communication safeguards in emergency rescue and in epidemic prevention and control command, and the security of their data has become especially important. While meeting the requirements of voice and video dispatch, this paper proposes an end-to-end encryption method for multimedia information that introduces a multiple-protection mechanism combining selective encryption and selective integrity protection. The method defines a network access authentication and service encryption workflow that embeds startup authentication and key distribution in the information control signaling procedure. It builds a key pool from the three-dimensional Lorenz system, the four-dimensional Cellular Neural Network (CNN) system, and the four-dimensional Chen system, where the key source system and its initial conditions are determined by the plaintext video frame itself. Finally, the method optimizes the chaotic sequences to further enhance system security.
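The idea of a plaintext-dependent chaotic key pool can be illustrated with a minimal single-system sketch. This is an assumption-laden toy, not the authors' scheme: it uses only the Lorenz system (the paper combines three systems and further optimizes the sequences), and the hash-based derivation of initial conditions and the byte quantization are invented here for illustration.

```python
import hashlib

def lorenz_keystream(frame_bytes, n_bytes, sigma=10.0, rho=28.0,
                     beta=8.0/3.0, dt=0.001, burn_in=1000):
    """Derive Lorenz initial conditions from the plaintext frame itself,
    then integrate (forward Euler) and quantize the trajectory into key
    bytes. Hash-based seeding and quantization are illustrative choices."""
    digest = hashlib.sha256(frame_bytes).digest()
    # Map three 8-byte chunks of the hash into the attractor's typical range.
    x, y, z = (int.from_bytes(digest[i:i+8], 'big') / 2**64 * 40 - 20
               for i in (0, 8, 16))
    stream = bytearray()
    steps = 0
    while len(stream) < n_bytes:
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dx*dt, y + dy*dt, z + dz*dt
        steps += 1
        if steps > burn_in:  # discard the transient before emitting key material
            stream.append(int(abs(x) * 1e6) % 256)
    return bytes(stream)

def selective_encrypt(frame, key):
    """Selective encryption: XOR only the first len(key) bytes of the frame
    (e.g. headers), leaving the rest untouched. XOR is its own inverse."""
    head = bytes(b ^ k for b, k in zip(frame, key))
    return head + frame[len(key):]
```

Because the initial conditions come from the frame itself, different frames yield different keystreams, which is the property the abstract highlights.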


Author(s):  
Yuchen Liao

Abstract. We study the one-dimensional discrete-time totally asymmetric simple exclusion process (TASEP) with parallel update rules on a spatially periodic domain. A multi-point space-time joint distribution formula is obtained for general initial conditions. The formula involves contour integrals of Fredholm determinants with kernels acting on certain discrete spaces. For a class of initial conditions satisfying certain technical assumptions, we derive the large-time, large-period limit of the joint distribution under the relaxation time scale $t = O(L^{3/2})$, when the height fluctuations are critically affected by the finite geometry. The assumptions are verified for the step and flat initial conditions. As a corollary, we obtain the multi-point distribution of discrete-time TASEP on the whole integer lattice $\mathbb{Z}$ by taking the period $L$ large enough that the finite-time distribution is not affected by the boundary. The large-time limit of the multi-time distribution of discrete-time TASEP on $\mathbb{Z}$ is then obtained for the step initial condition.
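The model itself is simple to simulate. The sketch below assumes the standard parallel-update convention (every particle whose right neighbour site is empty jumps with probability p, all decisions made simultaneously from the current configuration); the paper's analysis concerns exact distribution formulas, not simulation, so this is only a companion illustration.

```python
import random

def parallel_tasep_step(occ, p, rng):
    """One parallel update of discrete-time TASEP on a ring of L = len(occ)
    sites. occ[i] == 1 means site i is occupied. All jumps are decided
    simultaneously from the old configuration, so no conflicts can arise:
    only the single particle behind an empty site may enter it."""
    L = len(occ)
    new = occ[:]
    for i in range(L):
        if occ[i] == 1 and occ[(i + 1) % L] == 0 and rng.random() < p:
            new[i] = 0
            new[(i + 1) % L] = 1
    return new

# Step initial condition on a ring: particles fill half the sites.
def step_initial(L):
    return [1] * (L // 2) + [0] * (L - L // 2)
```

The flat initial condition would instead occupy every other site; both are the cases for which the abstract's technical assumptions are verified.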


2022 ◽  
Vol 5 (1) ◽  
pp. p7
Author(s):  
Hugh Ching (USA) ◽  
Chien Yi Lee (China) ◽  
Benjamin Li (Canada)

The P/E Ratio (Price/Earnings) is one of the most popular concepts in stock analysis, yet an exact interpretation of it has been lacking. Most stock investors know the P/E Ratio as a financial indicator with the useful characteristic of being relatively time-invariant. In this paper, a rigorous mathematical derivation of the P/E Ratio is presented. The derivation shows that, in addition to its assumptions, the P/E Ratio can be considered the zeroth-order solution to the rate of return on investment. The commonly used concept of the Capitalization Rate (Cap Rate = Net Income / Price) in real estate investment analysis can similarly be derived as the zeroth-order solution of the rate of return on real estate investment. This paper also derives the first-order solution to the rate of return (Return = Dividend/Price + Growth), together with its assumptions. Both the zeroth- and first-order solutions are derived from the exact future accounting equation (Cash Return = Sum of Cash Flow + Cash from Resale), which has also been used to derive the exact solution of the rate of return. Empirically, as an illustration of an actual case, the rates of return for a stock with a 70% growth rate are 3%, 73%, and 115% for, respectively, the zeroth-order, the first-order, and the exact solution; the stock doubled its price in 2004. This paper concludes that the zeroth-order, first-order, and exact solutions of the rate of return can all be derived mathematically from the same exact equation, which thus forms a rigorous mathematical foundation for investment analysis, and that the low-order solutions have a very practical use in providing analytically calculated initial conditions for the iterative numerical calculation of the exact solution.
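The zeroth- and first-order solutions quoted in the abstract can be written down directly. The numbers below are hypothetical inputs for illustration only, not the paper's 2004 case study.

```python
def zeroth_order_return(price, earnings):
    """Zeroth-order solution: the earnings yield, i.e. the inverse of the
    P/E ratio (Earnings / Price)."""
    return earnings / price

def first_order_return(price, dividend, growth):
    """First-order solution quoted in the abstract:
    Return = Dividend/Price + Growth."""
    return dividend / price + growth

# Hypothetical illustration: a stock at $100 with $3.33 of earnings,
# a $2 dividend, and an assumed 70% growth rate.
r0 = zeroth_order_return(100.0, 3.33)       # earnings yield, about 3.3%
r1 = first_order_return(100.0, 2.0, 0.70)   # 2% yield + 70% growth = 72%
```

The pattern matches the abstract's point: for a high-growth stock the zeroth-order answer (a few percent) is far below the first-order one, which is why the low-order solutions serve only as initial conditions for the iterative calculation of the exact solution.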
The solution of value belongs to the recently classified Culture Level Quotient CLQ = 10 and is in the process of being updated by fuzzy logic, with its range of tolerance for predicting market crashes, to advance to CLQ = 2.


Author(s):  
Svetlana Ratynskaia ◽  
Ladislas Vignitchouk ◽  
Panagiotis Tolias

Abstract The design, licensing and operation of magnetic confinement fusion reactors impose various limitations on the amount of metallic dust particles residing inside the plasma chamber. In this context, predictive studies of dust production and migration constitute one of the main sources of relevant data. These are mainly conducted using dust transport codes, which rely on coupled dust-plasma and dust-wall interaction models, and require external input on the dust and droplet initial conditions. Some particularities of dust modelling in reactor-relevant conditions are analyzed with an emphasis on dust generation mechanisms relevant for disruption scenarios and on dust remobilization mechanisms relevant for ramp-up scenarios. Emerging topics such as dust production by runaway electron impact and pre-plasma remobilization of magnetic dust are also discussed.


2022 ◽  
Vol 26 (1) ◽  
pp. 197-220
Author(s):  
Emixi Sthefany Valdez ◽  
François Anctil ◽  
Maria-Helena Ramos

Abstract. This study aims to decipher the interactions of a precipitation post-processor and several other tools for uncertainty quantification implemented in a hydrometeorological forecasting chain. We make use of four hydrometeorological forecasting systems that differ in how uncertainties are estimated and propagated. They consider the following sources of uncertainty: system A, forcing; system B, forcing and initial conditions; system C, forcing and model structure; and system D, forcing, initial conditions, and model structure. For each system configuration, we investigate the reliability and accuracy of post-processed precipitation forecasts in order to evaluate their ability to improve streamflow forecasts for up to 7 d of forecast horizon. The evaluation is carried out across 30 catchments in the province of Quebec (Canada) over the 2011–2016 period. Results are compared using a multicriteria approach, and the analysis is performed as a function of lead time and catchment size. The results indicate that the precipitation post-processor yielded large improvements in forecast quality relative to the raw precipitation forecasts, especially in terms of relative bias and reliability. However, its effectiveness in improving the quality of hydrological forecasts varied according to the configuration of the forecasting system, the forecast attribute, the forecast lead time, and the catchment size. The combination of the precipitation post-processor and the quantification of uncertainty from initial conditions showed the best results. When all sources of uncertainty were quantified, the contribution of the precipitation post-processor to better streamflow forecasts was not remarkable, and in some cases it even deteriorated the overall performance of the hydrometeorological forecasting system.
Our study provides an in-depth investigation of how the improvements a precipitation post-processor brings to the quality of the inputs of a hydrological forecasting model can be cancelled out along the forecasting chain, depending on how the hydrometeorological forecasting system is configured and on how the other sources of hydrological forecasting uncertainty (initial conditions and model structure) are accounted for. This has implications for the choices users might make when designing new or enhancing existing hydrometeorological ensemble forecasting systems.
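Multicriteria evaluations of ensemble forecasts of this kind commonly include the continuous ranked probability score (CRPS). As a hedged illustration only (the paper's actual criteria and implementation are not specified here), the empirical CRPS of a single ensemble forecast against one observation can be computed as follows.

```python
def crps_ensemble(members, obs):
    """Empirical CRPS for one ensemble forecast and one observation:
    CRPS = mean|X - y| - 0.5 * mean|X - X'|,
    where X, X' range over the ensemble members. Lower is better;
    it reduces to the absolute error for a one-member 'ensemble'."""
    m = len(members)
    term1 = sum(abs(x - obs) for x in members) / m
    term2 = sum(abs(a - b) for a in members for b in members) / (2 * m * m)
    return term1 - term2
```

Averaging this score over many forecast/observation pairs, per lead time and per catchment, is one way the reliability and accuracy comparisons described above are typically quantified.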


Author(s):  
А.А. Моисеенко ◽  
С.М. Фёдоров

Here we present a method that uses computational techniques and two-dimensional magnetic field modelling to find the high-frequency losses in the windings of wound components such as chokes and transformers. We analyzed the literature on this topic and addressed the optimization and adaptation of the analytical formulas for the case of round-section conductors and of windings with a non-uniform distribution of layers in the core window. We also discussed the analytical calculation of the length of the winding wire for windings with different numbers of layers and a variable number of turns.
To automate the calculation, we wrote a script that builds the dependence of the AC resistance on frequency using the analytical formulas. In addition, we wrote a program that automatically sets the initial conditions and boundary values of the modelling parameters, runs the electromagnetic field simulation itself, analyzes the data obtained, and forms an array for plotting the resulting resistance-versus-frequency curve. The method uses freely distributed software for both the mathematical calculations and the electromagnetic field modelling. The work concludes with a comparison of the results obtained, which showed good agreement and continuity between the stages of the method.
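Analytical formulas for high-frequency winding resistance of this kind are typically of the Dowell type. The sketch below is an assumption: it uses the classical Dowell factor for an m-layer foil winding, whereas the authors' adapted formulas for round conductors and non-uniform layer distributions will differ in the correction terms.

```python
import math

MU0 = 4e-7 * math.pi   # vacuum permeability, H/m
RHO_CU = 1.68e-8       # copper resistivity at 20 C, Ohm*m

def skin_depth(freq, rho=RHO_CU, mu=MU0):
    """Skin depth: delta = sqrt(rho / (pi * f * mu))."""
    return math.sqrt(rho / (math.pi * freq * mu))

def dowell_rac_over_rdc(freq, layer_thickness, n_layers, rho=RHO_CU):
    """Classical Dowell factor F_R = Rac/Rdc for an m-layer foil winding,
    with xi = layer thickness / skin depth. F_R -> 1 at low frequency and
    grows as skin and proximity effects set in."""
    xi = layer_thickness / skin_depth(freq, rho)
    a = (math.sinh(2*xi) + math.sin(2*xi)) / (math.cosh(2*xi) - math.cos(2*xi))
    b = (math.sinh(xi) - math.sin(xi)) / (math.cosh(xi) + math.cos(xi))
    return xi * (a + (2.0 * (n_layers**2 - 1) / 3.0) * b)
```

Sweeping `dowell_rac_over_rdc` over a frequency grid yields exactly the kind of resistance-versus-frequency curve that the script described above plots and that the field simulation is compared against.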


2022 ◽  
Vol 9 ◽  
Author(s):  
Renette Jones-Ivey ◽  
Abani Patra ◽  
Marcus Bursik

Probabilistic hazard assessments for overland pyroclastic flows or atmospheric ash clouds under the short timelines of an evolving crisis require using the best available science, unhampered by complicated and slow manual workflows. Although deterministic mathematical models are available, in most cases the parameters and initial conditions of the equations are known only within a prescribed range of uncertainty. To construct probabilistic hazard assessments, accurate outputs and the propagation of the inherent input uncertainty to the quantities of interest are needed in order to estimate the necessary probabilities from numerous runs of the underlying deterministic model. Characterizing the uncertainty in system states due to parametric and input uncertainty simultaneously requires ensemble-based methods that explore the full parameter and input spaces. Complex tasks, such as running thousands of instances of a deterministic model with parameter and input uncertainty, require a high-performance computing infrastructure and skilled personnel that may not be readily available to the policy makers responsible for making informed risk-mitigation decisions. For efficiency, the programming tasks required to execute ensemble simulations need to run in parallel, leading to the twin computational challenges of managing large amounts of data and performing CPU-intensive processing. The resulting flow of work requires complex sequences of tasks, interactions, and exchanges of data, so the automatic management of these workflows is essential. Here we discuss a computing infrastructure, methodology, and tools that enable scientists and other members of the volcanology research community to develop workflows for the construction of probabilistic hazard maps using remotely accessed computing through a web portal.
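The ensemble logic described above, sample the uncertain inputs, run the deterministic model once per member, and count exceedances, can be sketched minimally. Everything below is a toy: the stand-in "flow model", the parameter names, and the uncertainty ranges are invented for illustration and are not the workflows or models the article describes.

```python
import random

def toy_flow_model(volume, friction):
    """Stand-in for a deterministic flow model: a runout distance (km)
    that increases with erupted volume and decreases with friction."""
    return 5.0 * volume / friction

def exceedance_probability(n_runs, threshold_km, seed=0):
    """Propagate input uncertainty with a Monte Carlo ensemble: sample
    (volume, friction) within assumed ranges, run the deterministic model
    for each member, and estimate P(runout > threshold) by counting.
    In practice the thousands of runs would be dispatched in parallel
    on HPC resources; here they run sequentially for clarity."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_runs):
        volume = rng.uniform(0.5, 2.0)    # assumed uncertainty range
        friction = rng.uniform(0.8, 1.2)  # assumed uncertainty range
        if toy_flow_model(volume, friction) > threshold_km:
            hits += 1
    return hits / n_runs
```

Evaluating this probability on a grid of map locations (each with its own threshold behaviour) is, in essence, how a probabilistic hazard map is assembled from many deterministic runs.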

