computational implementation
Recently Published Documents

TOTAL DOCUMENTS: 291 (last five years: 78)
H-INDEX: 24 (last five years: 2)

Author(s):  
Mehmet Barlo ◽  
Nuh Aygün Dalkıran

2021 ◽  
Vol 5 (4) ◽  
pp. 70-78
Author(s):  
Lev Raskin ◽  
Larysa Sukhomlyn ◽  
Dmytro Sagaidachny ◽  
Roman Korsun

Known technologies for analyzing Markov systems rely on a well-developed mathematical apparatus based on the computational exploitation of the fundamental Markov property, and the resulting systems of linear algebraic equations are easily solved numerically. However, for many practical problems a purely numerical solution is insufficient: for instance, in problems of structural and parametric synthesis of systems, as well as in control problems. Such problems require analytical relations describing how the probabilities of the states of the analyzed system depend on the numerical values of its parameters. The complexity of solving the related systems of linear algebraic equations analytically grows rapidly with the dimensionality of the system; this is especially evident when analyzing multi-threaded queuing systems. Accordingly, the objective of this paper is to develop an effective computational method for obtaining analytical relations suitable for analyzing high-dimensional Markov systems. To this end, the paper proposes a decomposition method based on the idea of phase enlargement of system states. The proposed and substantiated method yields analytical relations for calculating the distribution of the states of a Markov system and can be effectively applied to problems of analysis and control in high-dimensional Markov systems. A worked example is considered.
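To make the contrast concrete, the purely numerical route the abstract starts from can be sketched in a few lines: the stationary distribution of a small continuous-time Markov chain is found by solving the balance equations pi·Q = 0 together with the normalization sum(pi) = 1. This is a minimal generic sketch (the 3-state generator matrix is made up for illustration), not the authors' phase-enlargement decomposition method.

```python
import numpy as np

# Generator matrix Q of a hypothetical 3-state continuous-time Markov
# chain (rows sum to zero); values are illustrative only.
Q = np.array([
    [-0.5,  0.3,  0.2],
    [ 0.4, -0.7,  0.3],
    [ 0.1,  0.6, -0.7],
])

# Balance equations pi @ Q = 0 are rank-deficient, so replace one of
# them with the normalization constraint sum(pi) = 1.
A = np.vstack([Q.T[:-1], np.ones(3)])
b = np.array([0.0, 0.0, 1.0])
pi = np.linalg.solve(A, b)
print(pi)  # stationary state probabilities, summing to 1
```

The point of the paper is precisely what this sketch cannot give: `np.linalg.solve` returns numbers for one fixed Q, whereas synthesis and control problems need the dependence of `pi` on the parameters inside Q in closed form.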


Author(s):  
Aleksandr Markovskiy ◽  
Olga Rusanova ◽  
Al-Mrayat Ghassan Abdel Jalil Halil ◽  
Olga Kot

A new approach is proposed to accelerate the computational implementation of exponentiation over Galois fields, an operation that is basic to a wide range of cryptographic data protection mechanisms. The approach is based on a specific property of the polynomial square and on Montgomery reduction. The new squaring method reduces the amount of computation by 25% compared to known methods. Based on the developed method, the exponentiation procedure over Galois fields has been modified, reducing the amount of computation by 20%.


2021 ◽  
Vol 24 ◽  
pp. 39-44
Author(s):  
Olha Chala ◽  
Yevgeniy Bodyanskiy

The paper proposes a 2D hybrid system of computational intelligence based on the generalized neo-fuzzy neuron. The system is characterized by high approximation abilities, simple computational implementation, and high learning speed. Its characteristic property is that the input signal is fed not in the traditional vector form but as an image matrix. This approach makes it possible to dispense with the additional convolution-pooling layers that deep neural networks use as an encoder. The main elements of the proposed system are a fuzzified multidimensional bilinear model, an additional softmax layer, and tuning of the multidimensional generalized neo-fuzzy neuron with a cross-entropy criterion. Compared to deep neural systems, the proposed matrix neo-fuzzy system contains considerably fewer tuning parameters (synaptic weights). The use of a time-optimal algorithm for tuning the synaptic weights allows learning to be implemented in an online mode.
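For readers unfamiliar with the building block, a classic single-input neo-fuzzy neuron computes its output as a weight mixture over triangular membership functions, and its weights can be tuned online with simple LMS-style updates. The sketch below is that textbook single-input version with an illustrative grid and learning rate; the paper's system is the generalized, matrix-input, multidimensional variant with a time-optimal tuning algorithm, which this does not reproduce.

```python
import numpy as np

centers = np.linspace(0.0, 1.0, 5)  # membership-function centers (illustrative)

def memberships(x: float) -> np.ndarray:
    """Complementary triangular memberships: at most two are nonzero, summing to 1."""
    return np.maximum(0.0, 1.0 - np.abs(x - centers) / (centers[1] - centers[0]))

def nfn_output(x: float, w: np.ndarray) -> float:
    """Neo-fuzzy neuron output: weighted sum of membership degrees."""
    return memberships(x) @ w

# Online learning demo: fit y = x^2 on [0, 1] with per-sample LMS updates.
rng = np.random.default_rng(0)
w = np.zeros_like(centers)
for _ in range(2000):
    x = rng.uniform(0.0, 1.0)
    err = nfn_output(x, w) - x ** 2
    w -= 0.1 * err * memberships(x)  # gradient step on squared error

print(nfn_output(0.5, w))  # close to x^2 at 0.5, i.e. about 0.25
```

Because only the memberships of the two nearest centers are nonzero per sample, each update touches just two weights, which is what makes online tuning of such neurons cheap.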


Author(s):  
Peng Wei

Medical imaging, including X-ray, computed tomography (CT), and magnetic resonance imaging (MRI), plays a critical role in the early detection, diagnosis, and treatment response prediction of cancer. To ease radiologists' workload and help with challenging cases, computer-aided diagnosis has developed rapidly in the past decade, pioneered early on by radiomics and, more recently, driven by deep learning. In this mini-review, I use breast cancer as an example and review how medical imaging and its quantitative modeling, including radiomics and deep learning, have improved the early detection and treatment response prediction of breast cancer. I also outline what radiomics and deep learning share in common and how they differ in terms of modeling procedure, sample size requirements, and computational implementation. Finally, I discuss the challenges and efforts required to integrate deep learning models and software into clinical practice.


2021 ◽  
Vol 61 (12) ◽  
pp. 2054-2067
Author(s):  
V. I. Vasil’ev ◽  
M. V. Vasil’eva ◽  
D. Ya. Nikiforov ◽  
N. I. Sidnyaev ◽  
S. P. Stepanov ◽  
...  

2021 ◽  
pp. 096228022110432
Author(s):  
Ricardo R Petterle ◽  
Henrique A Laureano ◽  
Guilherme P da Silva ◽  
Wagner H Bonat

We propose a multivariate regression model to handle multiple continuous bounded outcomes and adopt the maximum likelihood approach for parameter estimation and inference. The model is specified as the product of univariate probability distributions, and the correlation between the response variables is obtained through the correlation matrix of the random intercepts. For modeling continuous bounded variables on the interval (0, 1) we considered the beta and unit gamma distributions. The main advantage of the proposed model is that different marginal distributions can easily be combined for the response variable vector. The computational implementation is performed using Template Model Builder, which combines the Laplace approximation with automatic differentiation, allowing the model parameters to be estimated quickly and efficiently. We conducted a simulation study to evaluate the computational implementation and the properties of the maximum likelihood estimators under different scenarios, and we investigated the impact of distribution misspecification on the proposed model. Our model was motivated by a data set with multiple continuous bounded outcomes: the body fat percentage measured at five regions of the body. The simulation studies and data analysis showed that the proposed model provides a general and rich framework for dealing with multiple continuous bounded outcomes.


2021 ◽  
Author(s):  
Andreas Angourakis ◽  
Jonas Alcaina-Mateos ◽  
Marco Madella ◽  
Debora Zurro

The domestication of plants and the origin of agricultural societies have been the focus of much theoretical discussion on why, how, when, and where these happened. The 'when' and 'where' have been substantially addressed by bioarchaeology, thanks to advances in methodology and the broadening of the geographical and chronological scope of the evidence. However, the 'why' and 'how' have lagged behind, holding on to relatively old models with limited explanatory power. Armed with the evidence now available, we can return to theory by revisiting the mechanisms allegedly involved, disentangling their connection to the diversity of trajectories, and identifying the weight and role of the parameters involved. We present the Human-Plant Coevolution (HPC) model, which represents the dynamics of coevolution between a human and a plant population. The model consists of an ecological positive feedback system (mutualism), which can be reinforced by positive evolutionary feedback (coevolution). The model formulation is the result of wiring together relatively simple simulation models of population ecology and evolution, through a computational implementation in R. The HPC model captures a variety of potential scenarios, showing which conditions are linked to the degree and timing of population change and to the intensity of selective pressures. Our results confirm that the possible trajectories leading to neolithisation are diverse and involve multiple factors. However, the simulations also show how some of those factors are entangled, what their effects on human and plant populations are under different conditions, and what the main causes fostering agriculture and domestication might be.
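The ecological core of such a model, mutualism as a positive feedback between two logistically growing populations, can be sketched in a few lines: each population's carrying capacity is raised by the size of the other. This is a toy illustration with made-up parameters, not the HPC model itself, which is implemented in R and adds the evolutionary (coevolutionary) feedback on top of this ecological loop.

```python
def simulate(steps=300, r_h=0.15, r_p=0.25,
             k_h0=50.0, k_p0=100.0, c_h=1.0, c_p=0.5):
    """Discrete-time logistic growth of humans (h) and plants (p) with
    mutually raised carrying capacities; all parameters are illustrative."""
    h, p = 10.0, 10.0
    for _ in range(steps):
        k_h = k_h0 + c_h * p  # plants raise the human carrying capacity
        k_p = k_p0 + c_p * h  # humans raise the plant carrying capacity
        h += r_h * h * (1 - h / k_h)
        p += r_p * p * (1 - p / k_p)
    return h, p

h_end, p_end = simulate()
print(h_end, p_end)  # both settle well above their baseline capacities
```

With these coupling strengths the system has a stable joint equilibrium (h* = 300, p* = 250, versus baseline capacities of 50 and 100), showing how the mutualistic feedback alone can lift both populations far beyond what either could sustain independently.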


Symmetry ◽  
2021 ◽  
Vol 13 (11) ◽  
pp. 2164
Author(s):  
Héctor J. Gómez ◽  
Diego I. Gallardo ◽  
Karol I. Santoro

In this paper, we present an extension of the truncated positive normal (TPN) distribution for modeling positive data with high kurtosis. The new model is defined as the quotient of two random variables: a TPN-distributed numerator and a power of a standard uniform distribution in the denominator. The resulting model has greater kurtosis than the TPN distribution. We study some properties of the distribution, such as its moments, asymmetry, and kurtosis. Parameter estimation is based on the method of moments, and maximum likelihood estimation uses the expectation-maximization algorithm. We performed simulation studies to assess parameter recovery and illustrate the model with an application to real body-weight data. The computational implementation of this work is included in the tpn package for R.
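The tail-heavying construction described can be sampled directly, in the spirit of slash-type distributions: divide a truncated positive normal draw by a power of an independent standard uniform. The exponent parameterization below (u ** (1/q)) and the parameter values are assumptions for illustration, not necessarily the paper's exact definition; the reference implementation is the authors' tpn package for R.

```python
import numpy as np
from scipy.stats import truncnorm, kurtosis

rng = np.random.default_rng(7)
n = 200_000
mu, sigma = 1.0, 1.0

# Truncated positive normal numerator: normal(mu, sigma) truncated at 0.
# scipy's truncnorm takes standardized bounds (x - mu) / sigma.
a = (0.0 - mu) / sigma
x = truncnorm.rvs(a, np.inf, loc=mu, scale=sigma, size=n, random_state=rng)

# Slash-type denominator: a power of a standard uniform (assumed form).
q = 5.0                       # q > 4 keeps the first four moments finite
u = rng.uniform(size=n)
z = x / u ** (1.0 / q)        # heavier right tail than the TPN itself
```

Dividing by u ** (1/q) stretches a random fraction of draws far to the right, so the quotient keeps the positive support of the TPN while gaining the excess kurtosis the abstract describes; smaller q gives heavier tails.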

