logarithmic scale: Recently Published Documents

TOTAL DOCUMENTS: 302 (five years: 102)
H-INDEX: 24 (five years: 3)

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Costas D. Kalfountzos ◽  
George S.E. Bikakis ◽  
Efstathios E. Theotokoglou

Purpose: The purpose of this paper is to study the deterministic elastic buckling behavior of cylindrical fiber–metal laminate panels subjected to uniaxial compressive loading and to investigate GLAss fiber-REinforced aluminum laminate (GLARE) panels using probabilistic finite element method (FEM) analysis.

Design/methodology/approach: The FEM in combination with eigenvalue buckling analysis is used to construct buckling coefficient–curvature parameter diagrams for seven fiber–metal laminate grades, three glass-fiber composites and monolithic 2024-T3 aluminum. The influence of uncertainties in material properties and laminate dimensions on the buckling load is studied with sensitivity analyses.

Findings: It is found that aluminum has a stronger impact on the buckling behavior of the fiber–metal laminate panels than their constituent uni-directional or woven composites. For the classical simply supported boundary conditions, there is an approximately linear relation between the buckling coefficient and the curvature parameter when the diagrams are plotted on a double logarithmic scale. The probabilistic calculations demonstrate that there is a considerable probability of overestimating the buckling load of GLARE panels with deterministic calculations.

Originality/value: In this study, the deterministic and probabilistic buckling response of fiber–metal laminate panels is investigated. It is shown that realistic structural uncertainties can substantially affect the buckling strength of aerospace components.
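The reported near-linear relation between buckling coefficient and curvature parameter on a double logarithmic scale is equivalent to a power law k = C * Z**s. A minimal sketch of recovering such a power law by a linear fit in log-log space; the (Z, k) values below are hypothetical placeholders, not data from the paper.

```python
import numpy as np

# Hypothetical points read off a buckling coefficient (k) vs.
# curvature parameter (Z) diagram; a power law k = C * Z**s is a
# straight line on a double logarithmic scale.
Z = np.array([1.0, 3.0, 10.0, 30.0, 100.0])
k = np.array([4.1, 5.0, 7.9, 13.2, 25.6])

s, logC = np.polyfit(np.log10(Z), np.log10(k), 1)  # slope, intercept
print(f"k ~ {10**logC:.2f} * Z**{s:.2f}")
```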


2021 ◽  
Author(s):  
Predrag Jelenković ◽  
Jané Kondev ◽  
Lishibanya Mohapatra ◽  
Petar Momčilović

Single-class closed queueing networks, consisting of infinite-server and single-server queues with exponential service times and probabilistic routing, admit product-form solutions. Such solutions, although seemingly tractable, are difficult to characterize explicitly for practically relevant problems due to the exponential combinatorial complexity of their normalization constant (partition function). In “A Probabilistic Approach to Growth Networks,” Jelenković, Kondev, Mohapatra, and Momčilović develop a novel methodology, based on a probabilistic representation of product-form solutions and large-deviations concentration inequalities, which identifies distinct operating regimes and yields explicit expressions for the marginal distributions of queue lengths. From a methodological perspective, a fundamental feature of the proposed approach is that it provides exact results for order-one probabilities, even though the analysis involves large-deviations rate functions, which characterize only vanishing probabilities on a logarithmic scale.
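To make the normalization constant concrete: for a closed network of M single-server queues sharing N customers, the product-form solution must be normalized by a partition function G(N) that sums over every way of distributing the customers. The sketch below uses the classical convolution (Buzen) algorithm for this simple single-server case; it is meant only to illustrate the object being normalized under hypothetical utilizations, not the probabilistic method developed in the paper.

```python
import numpy as np

def buzen_G(rho, N):
    """Partition functions G(0), ..., G(N) of a closed product-form
    network of single-server queues with relative utilizations rho,
    via Buzen's convolution algorithm (O(M*N) work instead of
    enumerating all population vectors)."""
    g = np.zeros(N + 1)
    g[0] = 1.0
    for r in rho:                    # fold in one queue at a time
        for n in range(1, N + 1):
            g[n] += r * g[n - 1]     # g(n, m) = g(n, m-1) + r * g(n-1, m)
    return g

rho = [0.5, 0.8, 1.0]                # hypothetical relative utilizations
N = 20                               # customers in the closed network
G = buzen_G(rho, N)
# Tail of the marginal queue length at queue i (here i = 2):
# P(n_i >= k) = rho_i**k * G[N-k] / G[N]
print(rho[2] ** 5 * G[N - 5] / G[N])
```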


BMC Medicine ◽  
2021 ◽  
Vol 19 (1) ◽  
Author(s):  
Loukia M. Spineli ◽  
Chrysostomos Kalyvas ◽  
Katerina Papadimitropoulou

Abstract

Background: To investigate the prevalence of robust conclusions in systematic reviews addressing missing (participant) outcome data via a novel framework of sensitivity analyses, and to examine the agreement with the current sensitivity analysis standards.

Methods: We performed an empirical study on systematic reviews with two or more interventions. Pairwise meta-analyses (PMA) and network meta-analyses (NMA) were identified from empirical studies on the reporting and handling of missing outcome data in systematic reviews. PMAs with at least three studies and NMAs with at least three interventions on one primary outcome were considered eligible. We applied Bayesian methods to obtain the summary effect estimates whilst modelling missing outcome data under the missing-at-random assumption and different assumptions about the missingness mechanism in the compared interventions. The odds ratio on the logarithmic scale was used for binary outcomes and the standardised mean difference for continuous outcomes. We calculated the proportion of primary analyses with robust and frail conclusions, as quantified by our proposed metric, the robustness index (RI), and by the current sensitivity analysis standards. Cohen’s kappa statistic was used to measure the agreement between the conclusions derived by the RI and those derived by the current sensitivity analysis standards.

Results: One hundred and eight PMAs and 34 NMAs were considered. When studies with a substantial number of missing outcome data dominated the analyses, the number of frail conclusions increased. The RI indicated that 59% of the analyses failed to demonstrate robustness, compared to 39% under the current sensitivity analysis standards. Comparing the RI with the current standards revealed that two in five analyses yielded contradictory conclusions concerning the robustness of the primary analysis results.

Conclusions: Compared with the current sensitivity analysis standards, the RI offers an explicit definition of similar results and does not unduly rely on statistical significance. Hence, it may safeguard against spurious conclusions regarding the robustness of the primary analysis results.
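Cohen’s kappa, the agreement measure used here, corrects the observed agreement between two sets of verdicts for the agreement expected by chance. A minimal sketch with hypothetical robust/frail verdicts, not data from the study:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical verdicts on the same analyses, one from the robustness
# index (RI) and one from current sensitivity analysis standards.
ri_verdict = ["robust", "frail", "frail", "robust", "frail", "robust"]
std_verdict = ["robust", "robust", "frail", "robust", "frail", "frail"]

# kappa = (p_observed - p_chance) / (1 - p_chance)
print(cohen_kappa_score(ri_verdict, std_verdict))
```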


Author(s):  
Vladimir Vasin ◽  
Fabrice Toussaint

In this paper, the method suggested in [5] for solving the pressure–rate deconvolution problem is modified and implemented for synthetic (quasi-real) oil and gas data. The modification is based on using additional a priori information on the function v(t) = t g(t) in the logarithmic scale: on the initial time interval the function is concave, and on the final interval it is monotone. Here, g(t) is the solution of the basic equation (1). To take these properties into account in the Tikhonov algorithm, the penalty function method is used. This allowed us to increase the precision of the numerical solution and to improve the quality of identification of the wellbore–reservoir system. Numerical experiments are provided.
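A minimal sketch of the general idea, assuming a discretized linear model A x = b: a Tikhonov-regularized least-squares fit augmented with a quadratic penalty that discourages violations of a shape prior (here, concavity enforced by penalizing positive second differences). This is a generic stand-in, not the specific algorithm of [5]; A, b and the weights are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def solve_penalized(A, b, alpha, beta):
    """Minimize ||A x - b||^2 + alpha*||x||^2 + beta*||max(D2 x, 0)||^2.
    The last term is a penalty-function term: it is nonzero only where
    the reconstructed function has positive second differences, i.e.
    where the concavity prior is violated."""
    n = A.shape[1]
    D2 = np.diff(np.eye(n), 2, axis=0)      # second-difference operator
    def J(x):
        misfit = A @ x - b
        viol = np.maximum(D2 @ x, 0.0)
        return misfit @ misfit + alpha * (x @ x) + beta * (viol @ viol)
    return minimize(J, np.zeros(n), method="L-BFGS-B").x
```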


2021 ◽  
Vol 24 (3) ◽  
pp. 80-91
Author(s):  
Aleksey V. Kostin

The article proposes a refined method for calculating the width of the conductors of printed circuit boards on a metal base for onboard spacecraft devices, depending on the current flowing. The refined mathematical model of conductive heat exchange between the conductors and the metal base is described. The results of calculations for the most common layer arrangements of such printed circuit boards are presented. Based on the results obtained, an analysis was carried out and a refined methodology was developed. It allows the necessary conductor widths to be calculated easily, without complicated calculations. The technique is based on graphical methods but allows engineering calculations to be performed with sufficient accuracy. The accuracy is achieved by using special formulas that simplify the determination of the value of a physical quantity on a logarithmic scale. The disadvantages of the proposed method are also indicated.
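The abstract does not reproduce the special formulas, but reading a value off a logarithmic axis generally reduces to a single relation: a point lying a fraction f of a decade beyond a reference value v0 corresponds to v0 * 10**f. A hedged sketch with hypothetical chart geometry:

```python
def value_on_log_axis(x_mm, x0_mm, decade_mm, v0):
    """Value at position x_mm on a logarithmic axis, given that v0
    sits at x0_mm and one decade spans decade_mm millimetres.
    (Hypothetical helper illustrating log-scale interpolation, not
    the formulas from the article.)"""
    return v0 * 10 ** ((x_mm - x0_mm) / decade_mm)

# A point 12 mm past the 1.0 gridline on an axis with 40 mm decades:
print(value_on_log_axis(12.0, 0.0, 40.0, 1.0))  # ~2.0 (= 10**0.3)
```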


Author(s):  
A.K. Dorosh ◽  
N.M. Bilko ◽  
D.I. Bilko

The rheological properties of a gel-like material, whose monomer is a crosslinked and modified 2-propenamide of acrylic acid, were determined by relaxation rheometry. The values of its elastic (storage) modulus, loss modulus and complex viscosity were determined as functions of the deforming stress and its frequency, the relative deformation, and the temperature in the range 20-100 °C, and the regularities of these dependences are noted. It is established that: 1) the dependences of the storage modulus G', the loss modulus G'' and the complex viscosity on relative deformation, stress, temperature and frequency are nonlinear on a linear scale, and on a logarithmic scale they contain plateau-like regions; 2) analytical dependences of the above parameters on stress, strain rate and temperature are complex and difficult to establish; 3) in the range 20-80 °C and at relative deformations of 10-100%, the hydrogel has a virtually unchanged value of G' ten times greater than G'', which determines the uniqueness of its rheological and biophysical properties; 4) in the range 20-80 °C, in terms of its elastic modulus and loss tangent, the hydrogel is close to a perfectly elastic body; 5) when the frequency of the deforming stress exceeds 15.8 Hz and the relative deformation is ≥100%, the gel deforms in a brittle manner: its elastic modulus decreases abruptly and its loss modulus increases rapidly with increasing frequency of the deforming stress; 6) the elastic-viscous characteristics of gel samples washed and unwashed in saline differ little in the temperature range 20-80 °C and indicate that the equilibrium structure of the 2-propenamide acrylic acid hydrogel belongs to the typical colloidal dispersed structures of gelatinous substances.


Author(s):  
Mikhail Korpusenko ◽  
Farshid Manoocheri ◽  
Olli-Pekka Kilpi ◽  
Aapo Varpula ◽  
Markku Kainlauri ◽  
...  

Abstract We investigate the Predictable Quantum Efficient Detector (PQED) in the visible and near-infrared wavelength range. The PQED consists of two n-type induced junction photodiodes with an Al2O3 entrance window. Measurements are performed at the wavelengths of 488 nm and 785 nm with incident power levels ranging from 100 µW to 1000 µW. A new way of presenting the normalized photocurrents on a logarithmic scale as a function of bias voltage reveals two distinct negative-slope regions and allows a direct comparison of charge-carrier losses at different wavelengths. The comparison points to mechanisms that can be understood on the basis of the different penetration depths at the two wavelengths (0.77 µm at 488 nm and 10.2 µm at 785 nm). The difference in penetration depths also leads to a larger difference in the charge-carrier losses at low bias voltages than at high voltages, due to the voltage dependence of the depletion region.
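A minimal sketch of this style of presentation, assuming the normalized quantity is the fractional charge-carrier loss 1 - I/I_ideal (the paper's exact normalization is not given here); the bias sweep and currents are hypothetical placeholders.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical bias-voltage sweep and measured photocurrent for one
# wavelength; i_ideal is the loss-free reference current.
v_bias = np.linspace(0.0, 10.0, 50)
i_ideal = 1.0e-4
i_meas = i_ideal * (1.0 - 1.0e-4 * np.exp(-v_bias / 2.0))

loss = 1.0 - i_meas / i_ideal     # fractional charge-carrier loss
plt.semilogy(v_bias, loss)        # log scale exposes the slope regions
plt.xlabel("Bias voltage (V)")
plt.ylabel("1 - I / I_ideal")
plt.show()
```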


2021 ◽  
Author(s):  
Rui Cao ◽  
John H Bladon ◽  
Stephen J Charczynski ◽  
Michael Hasselmo ◽  
Marc Howard

The Weber-Fechner law proposes that perceived sensory magnitude increases with physical input on a logarithmic scale. Hippocampal "time cells" carry a record of recent experience by firing sequentially during a circumscribed period of time after a triggering stimulus. Different cells have "time fields" at different delays, up to at least tens of seconds. Past studies suggest that time cells represent a compressed timeline by demonstrating that fewer time cells fire late in the delay and that their time fields are wider. This paper asks whether the compression of time cells obeys the Weber-Fechner law. Time cells were studied with a hierarchical Bayesian model that simultaneously accounts for the firing pattern at the trial, cell, and population levels. This procedure allows separate estimates of the within-trial receptive field width and the across-trial variability. The analysis at the trial level suggests that the time cells as a group represent an internally coherent timeline. Furthermore, even after isolating across-trial variability, time field width increases linearly with delay. Finally, we find that the time cell population is distributed evenly on a logarithmic time scale. Together, these findings provide strong quantitative evidence that the internal neural temporal representation is logarithmically compressed and obeys a neural instantiation of the Weber-Fechner law.
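A minimal sketch of what such a logarithmically compressed population looks like, built from the two reported signatures (time-field peaks spaced evenly on a logarithmic time axis, field width growing linearly with delay); all constants are hypothetical, for illustration only.

```python
import numpy as np

# Peaks evenly spaced on a log time axis, widths linear in delay.
peaks = np.logspace(np.log10(0.1), np.log10(30.0), 50)   # seconds
widths = 0.1 + 0.3 * peaks                               # hypothetical slope

# Even spacing on the log axis is the Weber-Fechner signature: the
# spacing of log10(peaks) is constant across the population.
log_gaps = np.diff(np.log10(peaks))
print(np.allclose(log_gaps, log_gaps[0]))                # True
```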


2021 ◽  
Vol 10 (5) ◽  
pp. 2845-2856
Author(s):  
Abhishek Kumar ◽  
Vishal Dutt ◽  
Vicente García-Díaz ◽  
Sushil Kumar Narang

Sentiment analysis through textual data mining is an indispensable technique for extracting contextual social information from texts submitted by users. Nowadays, the World Wide Web is a major source of textual content shared in different communities by people expressing their sentiments through websites or blogs. Sentiment analysis has become a vital field of study since, based on the extracted expressions, individuals or businesses can assess or update their reviews and take significant decisions. Sentiment mining is typically used to classify such reviews according to their assessment as neutral, positive or negative. In our study, we combined a boosted feature selection technique with strong feature normalization to classify sentiments as negative, positive or neutral. A support vector machine (SVM) classifier with a radial basis function kernel and adjusted hyperplane parameters was then employed to categorize the reviews. Grid search with cross-validation over a logarithmic scale was employed to find optimal hyperparameter values. The classification results of the proposed system are optimal when compared with other state-of-the-art classification methods.
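A minimal sketch of the tuning step described above, assuming scikit-learn: grid search with cross-validation over logarithmically spaced values of the RBF-SVM hyperparameters C and gamma, with feature standardization standing in for the paper's feature normalization. The pipeline is illustrative, not the authors' exact system.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

# Hypothetical inputs: X is a numeric feature matrix extracted from
# review texts, y holds labels in {"negative", "neutral", "positive"}.
pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
param_grid = {
    "svc__C": np.logspace(-2, 3, 6),        # logarithmic scale for C
    "svc__gamma": np.logspace(-4, 1, 6),    # and for gamma
}
search = GridSearchCV(pipe, param_grid, cv=5)
# search.fit(X, y); print(search.best_params_)
```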

