An experimental test of a model for repeated Ca2+ spikes in osteoblastic cells

1991 ◽  
Vol 69 (7) ◽  
pp. 433-441 ◽  
Author(s):  
Jack Ferrier ◽  
Angela Kesthely ◽  
Eva Lagan ◽  
Conrad Richter

A model for cytosolic Ca2+ spikes is presented that incorporates continual influx of Ca2+, uptake into an intracellular compartment, and Ca2+-induced Ca2+ release from the compartment. Two versions are used. In one, release is controlled by explicit thresholds, while in the other, release is a continuous function of cytosolic and compartmental [Ca2+]. Some model predictions are as follows. Starting with low Ca2+ influx and no spikes: (1) induction of spiking when Ca2+ influx is increased. Starting with spikes: (2) increase in magnitude and decrease in frequency when influx is reduced; (3) inhibition of spiking if influx is greatly reduced; (4) decrease in the root-mean-square value when influx is increased; and (5) elimination of spiking if influx is greatly increased. Since there is good evidence that hyperpolarizing spikes reflect cytosolic Ca2+ spikes, we used electrophysiological measurements to test the model. Each model prediction was confirmed by experiments in which Ca2+ influx was manipulated. However, the original spike activity tended to return within 5–30 min, indicating a cellular resetting process. Key words: calcium, electrophysiology, mathematical modelling.
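
To make the structure of the threshold-controlled version concrete, the following is a minimal numerical sketch; the rate constants, thresholds, and variable names are hypothetical placeholders, not the authors' fitted parameters.

```python
# Minimal sketch of a threshold-controlled Ca2+ spike model (hypothetical parameters):
# constant influx, uptake into an intracellular store, and Ca2+-induced Ca2+ release
# that starts once both cytosolic and store [Ca2+] exceed thresholds.
import numpy as np

def simulate(j_in=0.5, t_end=200.0, dt=0.01):
    k_uptake, k_efflux, k_release = 0.5, 0.2, 5.0   # hypothetical rate constants
    c_thresh, s_thresh, s_reset = 0.6, 2.0, 0.3     # hypothetical trigger/stop levels
    c, s, releasing = 0.1, 0.5, False               # cytosolic and store [Ca2+] (a.u.)
    trace = []
    for _ in np.arange(0.0, t_end, dt):
        if not releasing and c > c_thresh and s > s_thresh:
            releasing = True                        # Ca2+-induced Ca2+ release begins
        elif releasing and s < s_reset:
            releasing = False                       # store nearly empty, release stops
        release = k_release * s if releasing else 0.0
        dc = j_in - (k_uptake + k_efflux) * c + release
        ds = k_uptake * c - release
        c, s = c + dc * dt, s + ds * dt
        trace.append(c)
    return np.array(trace)

# With a low influx the trace stays quiescent; raising j_in induces repeated spikes,
# in the spirit of prediction (1).
quiet, spiking = simulate(j_in=0.05), simulate(j_in=0.5)
```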

Symmetry ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 717
Author(s):  
Mariia Nazarkevych ◽  
Natalia Kryvinska ◽  
Yaroslav Voznyi

This article presents a new method of image filtering based on a new kind of image processing transformation, the wavelet-Ateb-Gabor transformation, which provides a wider basis than Gabor functions. Ateb functions are symmetric functions. The developed type of filtering makes it possible to perform image transformation and to obtain better biometric image recognition results than traditional filters allow. These results are possible because the developed functions can be constructed with curves of various shapes and sizes. Further, the wavelet transformation of Gabor filtering is investigated, and the time the system spends on the operation is evaluated. The filtering is applied to images taken from the publicly available NIST Special Database 302. The reliability of the proposed wavelet-Ateb-Gabor filtering method is demonstrated by calculating and comparing the peak signal-to-noise ratio (PSNR) and mean square error (MSE) between two biometric images, one filtered by the developed method and the other by the Gabor filter. The time characteristics of this filtering process are studied as well.
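
The comparison metrics named above are standard; as a minimal sketch (assuming two equally sized 8-bit grayscale images held as NumPy arrays, not the paper's own processing pipeline):

```python
# Minimal sketch of the MSE and PSNR comparison used to judge filtering quality,
# assuming two equally sized 8-bit grayscale images as NumPy arrays.
import numpy as np

def mse(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def psnr(a: np.ndarray, b: np.ndarray, peak: float = 255.0) -> float:
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)

# A higher PSNR (lower MSE) against the reference image indicates that a filter
# introduces less distortion.
```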


Sensors ◽  
2021 ◽  
Vol 21 (10) ◽  
pp. 3311
Author(s):  
Riccardo Ballarini ◽  
Marco Ghislieri ◽  
Marco Knaflitz ◽  
Valentina Agostini

In motor control studies, the 90% thresholding of variance accounted for (VAF) is the classical way of selecting the number of muscle synergies expressed during a motor task. However, the adoption of an arbitrary cut-off has evident drawbacks. The aim of this work is to describe and validate an algorithm for choosing the optimal number of muscle synergies (ChoOSyn), which can overcome the limitations of VAF-based methods. The proposed algorithm is built on the following principles: (1) muscle synergies should be highly consistent during the various motor task epochs (i.e., remaining stable in time), and (2) muscle synergies should constitute a base with low intra-level similarity (i.e., information-rich synergies that avoid redundancy). The algorithm's performance was evaluated against traditional approaches (threshold-VAF at 90% and 95%, elbow-VAF and plateau-VAF), using both a simulated dataset and a real dataset of 20 subjects. The performance evaluation was carried out by analyzing muscle synergies extracted from surface electromyographic (sEMG) signals collected during walking tasks lasting 5 min. On the simulated dataset, ChoOSyn showed performance comparable to that of VAF-based methods, while on the real dataset it clearly outperformed the other methods in terms of the fraction of correct classifications, mean error (ME), and root mean square error (RMSE). The proposed approach may help standardize the selection of the number of muscle synergies between different research laboratories, independently of arbitrary thresholds.
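
For context, the classical VAF-threshold baseline that ChoOSyn is compared against can be sketched as follows; the sEMG matrix and parameters are hypothetical, and this is not the authors' implementation.

```python
# Minimal sketch of the VAF-threshold baseline: extract 1..k synergies by non-negative
# matrix factorization and keep the smallest k whose reconstruction reaches the cut-off.
import numpy as np
from sklearn.decomposition import NMF

def n_synergies_by_vaf(emg: np.ndarray, vaf_threshold: float = 0.90, k_max: int = 10) -> int:
    """emg: (n_muscles, n_samples) non-negative sEMG envelope matrix."""
    for k in range(1, k_max + 1):
        model = NMF(n_components=k, init="nndsvda", max_iter=1000)
        w = model.fit_transform(emg)              # synergy weights (n_muscles, k)
        reconstruction = w @ model.components_    # activation coefficients in components_
        vaf = 1.0 - np.sum((emg - reconstruction) ** 2) / np.sum(emg ** 2)
        if vaf >= vaf_threshold:
            return k
    return k_max

rng = np.random.default_rng(0)
emg = np.abs(rng.normal(size=(12, 2000)))         # hypothetical 12-muscle recording
print(n_synergies_by_vaf(emg))
```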


1995 ◽  
Vol 74 (6) ◽  
pp. 2665-2684 ◽  
Author(s):  
Y. Kondoh ◽  
Y. Hasegawa ◽  
J. Okuma ◽  
F. Takahashi

1. A computational model accounting for motion detection in the fly was examined by comparing responses in motion-sensitive horizontal system (HS) and centrifugal horizontal (CH) cells in the fly's lobula plate with a computer simulation implemented on a motion detector of the correlation type, the Reichardt detector. First-order (linear) and second-order (quadratic nonlinear) Wiener kernels from intracellularly recorded responses to moving patterns were computed by cross correlating with the time-dependent position of the stimulus, and were used to characterize response to motion in those cells. 2. When the fly was stimulated with moving vertical stripes with a spatial wavelength of 5-40 degrees, the HS and CH cells showed basically a biphasic first-order kernel, having an initial depolarization that was followed by hyperpolarization. The linear model matched well with the actual response, with a mean square error of 27% at best, indicating that the linear component comprises a major part of responses in these cells. The second-order nonlinearity was insignificant. When stimulated at a spatial wavelength of 2.5 degrees, the first-order kernel showed a significant decrease in amplitude, and was initially hyperpolarized; the second-order kernel was, on the other hand, well defined, having two hyperpolarizing valleys on the diagonal with two off-diagonal peaks. 3. The blockage of inhibitory interactions in the visual system by application of 10⁻⁴ M picrotoxin, however, evoked a nonlinear response that could be decomposed into the sum of the first-order (linear) and second-order (quadratic nonlinear) terms with a mean square error of 30-50%. The first-order term, comprising 10-20% of the picrotoxin-evoked response, is characterized by a differentiating first-order kernel. It thus codes the velocity of motion. The second-order term, comprising 30-40% of the response, is defined by a second-order kernel with two depolarizing peaks on the diagonal and two off-diagonal hyperpolarizing valleys, suggesting that the nonlinear component represents the power of motion. 4. Responses in the Reichardt detector, consisting of two mirror-image subunits with spatiotemporal low-pass filters followed by a multiplication stage, were computer simulated and then analyzed by the Wiener kernel method. The simulated responses were linearly related to the pattern velocity (with a mean square error of 13% for the linear model) and matched well with the observed responses in the HS and CH cells. After the multiplication stage, the linear component comprised 15-25% and the quadratic nonlinear component comprised 60-70% of the simulated response, which was similar to the picrotoxin-induced response in the HS cells. The quadratic nonlinear components were balanced between the right and left sides, and could be eliminated completely by their contralateral counterpart via a subtraction process. On the other hand, the linear component on one side was the mirror image of that on the other side, as expected from the kernel configurations. 5. These results suggest that responses to motion in the HS and CH cells depend on the multiplication process in which both the velocity and power components of motion are computed, and that a putative subtraction process selectively eliminates the nonlinear components but amplifies the linear component. The nonlinear component is directionally insensitive because of its quadratic nonlinearity. Therefore the subtraction process allows the subsequent cells integrating motion (such as the HS cells) to tune the direction of motion more sharply.
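
The correlation-type (Reichardt) detector analyzed here can be illustrated with a minimal sketch; the filter time constant, sampling, and stimulus below are hypothetical and serve only to show the two mirror-image subunits, the low-pass (delay) stage, the multiplication, and the final subtraction.

```python
# Minimal sketch of a correlation-type (Reichardt) motion detector: each subunit
# multiplies one input by a low-pass-filtered copy of its neighbour, and the two
# subunit outputs are subtracted to give a direction-selective signal.
import numpy as np

def low_pass(x: np.ndarray, tau: float, dt: float) -> np.ndarray:
    y = np.zeros_like(x)
    a = dt / (tau + dt)                      # first-order low-pass coefficient
    for i in range(1, len(x)):
        y[i] = y[i - 1] + a * (x[i] - y[i - 1])
    return y

def reichardt(left: np.ndarray, right: np.ndarray, tau: float = 0.05, dt: float = 0.001) -> np.ndarray:
    # Mirror-image subunits followed by subtraction of their outputs.
    return low_pass(left, tau, dt) * right - low_pass(right, tau, dt) * left

# Drifting sinusoid sampled at two points a quarter-wavelength apart:
t = np.arange(0.0, 1.0, 0.001)
left = np.sin(2 * np.pi * 5 * t)
right = np.sin(2 * np.pi * 5 * t - np.pi / 2)
print(reichardt(left, right).mean())         # sign indicates the detected direction
```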


1. It is widely felt that any method of rejecting observations with large deviations from the mean is open to some suspicion. Suppose that by some criterion, such as Peirce’s and Chauvenet’s, we decide to reject observations with deviations greater than 4 σ, where σ is the standard error, computed from the standard deviation by the usual rule; then we reject an observation deviating by 4·5 σ, and thereby alter the mean by about 4·5 σ/n, where n is the number of observations, and at the same time we reduce the computed standard error. This may lead to the rejection of another observation deviating from the original mean by less than 4 σ, and if the process is repeated the mean may be shifted so much as to lead to doubt as to whether it is really sufficiently representative of the observations. In many cases, where we suspect that some abnormal cause has affected a fraction of the observations, there is a legitimate doubt as to whether it has affected a particular observation. Suppose that we have 50 observations. Then there is an even chance, according to the normal law, of a deviation exceeding 2·33 σ. But a deviation of 3 σ or more is not impossible, and if we make a mistake in rejecting it the mean of the remainder is not the most probable value. On the other hand, an observation deviating by only 2 σ may be affected by an abnormal cause of error, and then we should err in retaining it, even though no existing rule will instruct us to reject such an observation. It seems clear that the probability that a given observation has been affected by an abnormal cause of error is a continuous function of the deviation; it is never certain or impossible that it has been so affected, and a process that completely rejects certain observations, while retaining with full weight others with comparable deviations, possibly in the opposite direction, is unsatisfactory in principle.
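
The iterative rejection procedure being criticized can be made concrete with a minimal sketch; the data and the cut-off k are hypothetical, and the point is only that each rejection shrinks the computed scatter and can trigger further rejections, shifting the mean.

```python
# Minimal sketch of the iterative rejection rule discussed above: repeatedly discard
# observations deviating from the current mean by more than k standard deviations,
# then recompute the mean and scatter from the retained observations.
import numpy as np

def iterative_rejection(x: np.ndarray, k: float = 4.0):
    kept = x.copy()
    while True:
        mean, sigma = kept.mean(), kept.std(ddof=1)
        inliers = np.abs(kept - mean) <= k * sigma
        if inliers.all():
            return kept, mean, sigma
        kept = kept[inliers]       # rejection shrinks sigma, possibly triggering more

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0.0, 1.0, 48), [4.6, 5.2]])   # two large deviations
kept, mean, sigma = iterative_rejection(data)
print(len(data) - len(kept), mean)   # number of rejected points, and the shifted mean
```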


2012 ◽  
Vol 602-604 ◽  
pp. 776-780
Author(s):  
Zhi Qiang Li ◽  
Mei Li ◽  
Wei Jia Fan

Poly(3-hydroxybutyrate-co-4-hydroxybutyrate) copolymer [P(3HB-co-4HB)] is a kind of biodegradable high-molecular-weight polymer produced by bioaccumulation. Because of their good biodegradability and biocompatibility, P(3HB-co-4HB)s have attracted wide attention. First, the intrinsic viscosity [η] of P(3HB-co-4HB)s with varying 4HB contents was investigated in a good solvent at different temperatures. Second, the changes in the crystalline aggregation state caused by the varying 4HB contents were observed with a polarizing microscope. The results show that, for P(3HB-co-4HB)s of the same molecular weight, the intrinsic viscosity [η] in a good solvent barely changes as the mole fraction of 4HB increases. On the other hand, the mean-square end-to-end distances of the macromolecular flexible chains increase with the mole fraction of 4HB. At the same time, the state of aggregation changes from spherulites to dendrites. The reasons for these differences are discussed in depth.


1995 ◽  
Vol 10 (10) ◽  
pp. 845-852 ◽  
Author(s):  
M. CONSOLI ◽  
Z. HIOKI

We perform a detailed comparison of the present LEP data with the one-loop standard model predictions. It is pointed out that for m_t = 174 GeV the "bulk" of the data prefers a rather large value of the Higgs mass in the range of 500–1000 GeV, in agreement with the indications from the W mass. On the other hand, to accommodate a light Higgs it is crucial to include the more problematic data for the τ FB asymmetry. We discuss further improvements on the data which are required to obtain a firm conclusion.


2019 ◽  
Vol 11 (13) ◽  
pp. 1598 ◽  
Author(s):  
Hua Su ◽  
Xin Yang ◽  
Wenfang Lu ◽  
Xiao-Hai Yan

Retrieving multi-temporal and large-scale thermohaline structure information of the interior of the global ocean based on surface satellite observations is important for understanding the complex and multidimensional dynamic processes within the ocean. This study proposes a new ensemble learning algorithm, extreme gradient boosting (XGBoost), for retrieving subsurface thermohaline anomalies, including the subsurface temperature anomaly (STA) and the subsurface salinity anomaly (SSA), in the upper 2000 m of the global ocean. The model combines surface satellite observations and in situ Argo data for estimation, and uses root-mean-square error (RMSE), normalized root-mean-square error (NRMSE), and R² as accuracy metrics. The results show that the proposed XGBoost model can easily retrieve subsurface thermohaline anomalies and outperforms the gradient boosting decision tree (GBDT) model. The XGBoost model had good performance with average R² values of 0.69 and 0.54, and average NRMSE values of 0.035 and 0.042, for STA and SSA estimations, respectively. The thermohaline anomaly patterns presented obvious seasonal variation signals in the upper layers (the upper 500 m); however, these signals became weaker as the depth increased. The model performance fluctuated, with the best performance in October (autumn) for both STA and SSA, while the lowest accuracy occurred in January (winter) for STA and April (spring) for SSA. The STA estimation error mainly occurred in the El Niño-Southern Oscillation (ENSO) region in the upper ocean and at the boundaries of the ocean basins in the deeper ocean; meanwhile, the SSA estimation error presented a relatively even distribution. The wind speed anomalies, including the u and v components, contributed more to the XGBoost model for both STA and SSA estimations than the other surface parameters; however, their importance at deeper layers decreased and the contributions of the other parameters increased. This study provides an effective remote sensing technique for subsurface thermohaline estimations and further promotes long-term remote sensing reconstructions of internal ocean parameters.
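
As a minimal sketch of the kind of XGBoost regression and error metrics described above (synthetic placeholder features standing in for surface predictors; not the study's data or tuned hyperparameters):

```python
# Minimal sketch of an XGBoost regression evaluated with RMSE, NRMSE, and R².
# The synthetic features are placeholders for surface predictors (e.g., sea surface
# height, temperature, salinity, and wind anomaly components).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 6))                              # hypothetical surface predictors
y = X @ rng.normal(size=6) + 0.3 * rng.normal(size=5000)    # hypothetical STA target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = XGBRegressor(n_estimators=300, max_depth=6, learning_rate=0.05)
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
rmse = mean_squared_error(y_te, pred) ** 0.5
nrmse = rmse / (y_te.max() - y_te.min())                    # one common normalization choice
print(rmse, nrmse, r2_score(y_te, pred))
```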


Energies ◽  
2020 ◽  
Vol 13 (23) ◽  
pp. 6378
Author(s):  
S. M. Mahfuz Alam ◽  
Mohd. Hasan Ali

This work proposes two nonlinear and one linear equation-based systems for residential load forecasting that consider heating degree days, cooling degree days, occupancy, and day type, and that are applicable to any residential building with small sets of smart meter data. The coefficients of the proposed nonlinear and linear equations are tuned by particle swarm optimization (PSO) and the multiple linear regression method, respectively. For the purpose of comparison, a subtractive clustering based adaptive neuro-fuzzy inference system (ANFIS), random forests, gradient boosting trees, a long short-term memory (LSTM) neural network, and conventional and modified support vector regression methods were considered. Simulations were performed in the MATLAB environment, and all the methods were tested on 30 randomly chosen days of data from a residential building in Memphis City for energy consumption prediction. The absolute average error, root mean square error, and mean average percentage error are tabulated and considered as performance indices. The efficacy of the proposed systems for residential load forecasting over the other systems has been validated by both the simulation results and the performance indices, which indicate that the proposed equation-based systems have the lowest absolute average errors, root mean square errors, and mean average percentage errors compared to the other methods. In addition, the proposed systems can easily be implemented in practice.
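
The linear branch of such a forecaster can be sketched as follows; the 30 days of data and the fitted coefficients are hypothetical, and the PSO-tuned nonlinear equations are not reproduced here.

```python
# Minimal sketch of the linear branch: daily energy use modelled as a linear function
# of heating degree days, cooling degree days, occupancy, and a weekday/weekend flag,
# fitted by multiple linear regression (ordinary least squares).
import numpy as np

rng = np.random.default_rng(0)
n = 30                                                # 30 days of smart-meter data
hdd = rng.uniform(0, 15, n)                           # heating degree days
cdd = rng.uniform(0, 12, n)                           # cooling degree days
occupancy = rng.integers(1, 5, n)                     # persons at home
day_type = rng.integers(0, 2, n)                      # 0 = weekday, 1 = weekend
energy = 5 + 0.8 * hdd + 1.1 * cdd + 2.0 * occupancy + 3.0 * day_type + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), hdd, cdd, occupancy, day_type])
coeffs, *_ = np.linalg.lstsq(X, energy, rcond=None)   # least-squares coefficient fit
pred = X @ coeffs

rmse = float(np.sqrt(np.mean((energy - pred) ** 2)))
mape = float(np.mean(np.abs((energy - pred) / energy)) * 100)
print(coeffs, rmse, mape)
```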


1957 ◽  
Vol 103 (432) ◽  
pp. 656-660 ◽  
Author(s):  
H. J. Eysenck ◽  
H. Holland ◽  
D. S. Trouton

In the first paper of this series, it was pointed out that one of the reasons why McDougall's theory of drug action and personality was not accepted at all widely was connected with the fact that he failed to provide an objective, experimental test which could be used to diagnose extraversion-introversion, and to assess drug effects. This argument is not entirely correct; McDougall did in fact suggest one such test, namely the rate of fluctuation of so-called reversible perspective figures. Many varieties of these are known, and have been used experimentally; the Necker cube, the staircase, the vase-face, and the windmill patterns being probably the best known. In all of these, there is an ambiguity in the drawing which makes it possible to perceive two distinct patterns in the stimulus; on prolonged inspection these patterns alternate, and it is the rate of alternation, signalled verbally or by suitable mechanical arrangement, which constitutes the score on this test. It is known that different types of pattern give reasonably reliable scores, and also that rates of alternation on different patterns correlate quite highly together, thus demonstrating that one and the same tendency is being measured. That this tendency is of central rather than peripheral character is indicated by the fact that changes in the rate of reversal due to fatigue and other causes can be transferred from one eye to the other.

