scaling factor
Recently Published Documents


TOTAL DOCUMENTS: 567 (168 in the last five years)
H-INDEX: 28 (5 in the last five years)

2022 · Author(s): Zhaoxi Sun, Mao Wang, Qiaole He, Zhirong Liu

Molecular simulations are becoming a common tool for investigating the dynamic and thermodynamic properties of novel solvents such as ionic liquids and the more recent deep eutectic solvents. Because electrostatics derived from ab initio calculations often fail to reproduce the experimental behavior of these functionalized solvents, a common treatment is to scale the atomic charges to improve the agreement between experimental and computational results for selected properties, e.g., the liquid density. Although there are many computational benchmarks on the structural properties of bulk ionic liquids, the choice of the best scaling parameter remains an open question. As these liquids are designed to solvate solutes, whether solvation thermodynamics can be described correctly is of utmost importance in practical applications. In the current work, we therefore provide a thermodynamic perspective on this charge-scaling issue directly from solute-solvent interactions. We present a comprehensive large-scale calculation of solvation free energies via nonequilibrium fast-switching simulations for a spectrum of molecules in ionic liquids, scaling the ab-initio-derived atomic charges to find the scaling factor that maximizes the prediction-experiment correlation. The density-derived choice of the scaling parameter, i.e., the estimate from bulk properties, is compared with the solvation-free-energy-derived one. We observe that as the scaling factor decreases from 1.0 to 0.5, the mass density decreases monotonically, a consequence of the weaker intermolecular interactions produced by the scaled atomic charges. The solvation free energies of external agents, however, do not show such a consistent monotonic behavior; the underlying physics is the competition between the electrostatic and vdW responses to the scaling-parameter variation. More intriguingly, although the recommended value for charge scaling from bulk properties falls in the neighborhood of 0.6-0.7, solvation free energies calculated at this value are not in good agreement with the experimental reference. By modestly increasing the scaling parameter (e.g., by 0.1) to avoid over-scaling the atomic charges, the solute-solvent interaction free energy approaches the reference value and the quality of the calculated solvation thermodynamics approaches the hydration case. Based on this observation, we propose a feasible way to obtain the best scaling parameter that produces balanced solute-solvent and solvent-solvent interactions: first scan the density-scaling-factor profile and then add ~0.1 to that solution. We further calculate the partition coefficient, or transfer free energy, of solutes from water to ionic liquids to provide another thermodynamic perspective on the charge-scaling benchmark. Another central result of the current work concerns the widely used force fields that describe the bonded and vdW terms of ionic-liquid derivatives. These pre-fitted transferable parameters are evaluated and refitted in a system-specific manner to provide a detailed assessment of their reliability and accuracy. Component-specific refitting reveals that the bond-stretching term is the most problematic part of the GAFF derivatives and that the angle-bending term is in some cases also not accurate enough. Remarkably, the torsional potential defined in these pre-fitted force fields performs extremely well.
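As a rough illustration of the proposed recipe (scan the density vs. scaling-factor profile, then add ~0.1), the Python sketch below picks the charge-scaling factor whose simulated density best matches experiment and offsets it. The function name, the nearest-match selection, and all numerical values are illustrative assumptions, not data from the paper.

```python
import numpy as np

def recommend_charge_scaling(scaling_factors, simulated_densities,
                             experimental_density, offset=0.1):
    """Pick a charge-scaling factor following the rule sketched in the abstract:
    take the factor whose simulated bulk density best matches experiment, then
    add ~0.1 to avoid over-scaling. Illustrative only; the density data would
    come from short NpT simulations run at each candidate scaling factor.
    """
    scaling_factors = np.asarray(scaling_factors, dtype=float)
    simulated_densities = np.asarray(simulated_densities, dtype=float)
    best = scaling_factors[np.argmin(np.abs(simulated_densities - experimental_density))]
    return min(best + offset, 1.0)   # never scale charges above the full ab initio values

# Illustrative numbers only (not from the paper): density decreases monotonically
# as the scaling factor drops from 1.0 to 0.5, as reported in the abstract.
factors   = [0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
densities = [1.02, 1.05, 1.08, 1.11, 1.14, 1.17]   # g/cm^3, made up for the example
print(recommend_charge_scaling(factors, densities, experimental_density=1.06))  # ~0.7
```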



2022 · Vol 17 (01) · pp. P01014
Author(s): E. Mirrezaei, S. Setayeshi, F. Zakeri, S. Baradaran

Abstract Ionizing radiation is extensively utilized in various applications; however, it can cause significant harm to living systems. The radiation absorbed dose is usually evaluated by biological dosimetry and physical reconstruction of exposure scenarios, but this is costly, time-consuming, and may be impractical for a biodosimetry laboratory. This study aimed to assess the applicability and reliability of the Geant4-DNA toolkit as a simulation approach for constructing a reliable dose-response curve for biodosimetry purposes, as a substitute for experimental measurements. To this end, the total number of double-strand breaks (DSBs) induced on DNA molecules by different doses of low-LET radiation qualities was calculated and converted to dicentric-chromosome yields using a mechanistic model of cellular response. The number of dicentric chromosomes induced by 200 kVp X-rays was then adjusted with a semi-empirical scaling factor to compensate for the simulation's inability to capture everything that can happen in a real cell. Next, the dicentric yields for 137Cs and 60Co were calculated and modified by the same scaling factor. Finally, the dose-response curves for these gamma sources were compared with several published experiments. The suggested calibration curves for 137Cs and 60Co follow a linear-quadratic equation: Y_dic = 0.0054 (± 0.0133) - 0.0089 (± 0.0212) × D + 0.0568 (± 0.0051) × D² and Y_dic = 0.0052 (± 0.0128) - 0.00568 (± 0.0203) × D + 0.0525 (± 0.0049) × D², respectively. They show satisfactory agreement with the experimental data reported by others. The Geant4 program developed in this work could provide an appropriate tool for predicting the dose-response (calibration) curve for biodosimetry purposes.
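The fitted linear-quadratic coefficients quoted above translate directly into a dicentric-yield calculator. The short Python sketch below evaluates Y_dic(D) for both gamma sources; the uncertainties are omitted and the dose values in the usage example are arbitrary.

```python
def dicentric_yield(dose_gy, c, alpha, beta):
    """Dicentric yield per cell from a linear-quadratic calibration curve
    Y_dic = c + alpha*D + beta*D^2, with coefficients as quoted in the abstract.
    """
    return c + alpha * dose_gy + beta * dose_gy ** 2

# Fitted coefficients reported in the abstract for the two gamma sources
cs137 = dict(c=0.0054, alpha=-0.0089, beta=0.0568)
co60  = dict(c=0.0052, alpha=-0.00568, beta=0.0525)

for dose in (0.5, 1.0, 2.0, 4.0):          # example doses in Gy
    print(dose, dicentric_yield(dose, **cs137), dicentric_yield(dose, **co60))
```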


2022 · Vol 13 (1) · pp. 1-15
Author(s): Katyayani Kashyap, Sunil Pathak, Narendra Singh Yadav

Differential evolution (DE), an important evolutionary technique, relies on parameters such as population initialization, mutation, and crossover to solve realistic optimization problems. This work presents a modified differential evolution algorithm that uses an exponential scale factor and a logistic map to address the slow convergence rate and to maintain a good balance between exploration and exploitation. The modification is made in two places: (i) population initialization and (ii) the scaling factor (a sketch of both follows). The proposed algorithm is validated on 13 benchmark functions taken from the literature, and the outcomes are compared with 7 popular state-of-the-art algorithms. Further, the performance of the modified algorithm is evaluated on 3 realistic engineering problems and compared with 8 recent optimizers. The number of function evaluations makes it clear that the proposed algorithm converges more quickly than the other existing algorithms.
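A minimal sketch of the two modifications, assuming a standard DE/rand/1/bin backbone: the logistic-map parameters (r = 4.0 and the seed) and the exponential decay law for the scale factor are illustrative choices, since the abstract does not give the exact schedule used by the authors.

```python
import numpy as np

def logistic_map_population(pop_size, dim, lower, upper, r=4.0, seed=0.7):
    """Chaotic population initialization with the logistic map x_{k+1} = r*x_k*(1-x_k).
    r=4.0 and the seed are illustrative choices."""
    pop = np.empty((pop_size, dim))
    val = seed
    for i in range(pop_size):
        for j in range(dim):
            val = r * val * (1.0 - val)
            pop[i, j] = lower + val * (upper - lower)
    return pop

def exponential_scale_factor(gen, max_gen, f_max=0.9, f_min=0.4):
    """Scale factor decaying exponentially from f_max to f_min over the run
    (an assumed decay law; the paper's exact schedule is not given in the abstract)."""
    return f_min + (f_max - f_min) * np.exp(-5.0 * gen / max_gen)

def de_step(pop, fitness, gen, max_gen, cr=0.9, rng=np.random.default_rng(1)):
    """One DE/rand/1/bin generation using the generation-dependent scale factor."""
    n, d = pop.shape
    f = exponential_scale_factor(gen, max_gen)
    new_pop = pop.copy()
    for i in range(n):
        a, b, c = rng.choice([k for k in range(n) if k != i], 3, replace=False)
        mutant = pop[a] + f * (pop[b] - pop[c])          # differential mutation
        cross = rng.random(d) < cr
        cross[rng.integers(d)] = True                     # force at least one gene
        trial = np.where(cross, mutant, pop[i])
        if fitness(trial) < fitness(pop[i]):              # greedy selection
            new_pop[i] = trial
    return new_pop
```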


Author(s): Magdalena Szubielska, Marta Szewczyk, Wenke Möhring

Abstract The present study examined differences in adults’ spatial-scaling abilities across three perceptual conditions: (1) visual, (2) haptic, and (3) visual and haptic. Participants were instructed to encode the position of a convex target presented in a simple map without a time limit. Immediately after encoding the map, participants were presented with a referent space and asked to place a disc at the same location from memory. All spaces were designed as tactile graphics. Positions of targets varied along the horizontal dimension. The referent space was constant in size, while the sizes of the maps were systematically varied, resulting in three scaling-factor conditions: 1:4, 1:2, and 1:1. Sixty adults participated in the study (M = 21.18; SD = 1.05). One-third of them were blindfolded throughout the entire experiment (haptic condition). The second group of participants was allowed to see the graphics (visual condition), and the third group was instructed to see and touch the graphics (bimodal condition). An analysis of participants’ absolute errors showed that errors were larger in the haptic condition than in the visual and bimodal conditions. There was also a significant interaction between scaling factor and perceptual condition. In the visual and bimodal conditions, errors increased linearly with higher scaling factors (which may suggest that adults adopted mental-transformation strategies during spatial scaling), whereas in the haptic condition this relation was quadratic. The findings imply that adults’ spatial-scaling performance decreases when visual information is not available.


2021 · Vol 2021 · pp. 1-14
Author(s): Wenyun Gao, Xiaoyun Li, Sheng Dai, Xinghui Yin, Stanley Ebhohimhen Abhadiomhen

The low-rank representation (LRR) method has recently gained enormous popularity due to its robust approach to the subspace-segmentation problem, particularly for corrupted data. In this paper, the recursive sample-scaling low-rank representation (RSS-LRR) method is proposed. The advantage of RSS-LRR over traditional LRR is that a cosine scaling factor is further introduced, which imposes a penalty on each sample to better minimize the influence of noise and outliers. Specifically, the cosine scaling factor is a similarity measure learned to capture each sample’s relationship with the principal components of the low-rank representation in the feature space. In other words, the smaller the angle between an individual data sample and the principal components of the low-rank representation, the more likely it is that the data sample is clean. The proposed method can thus effectively obtain a good low-rank representation influenced mainly by clean data. Several experiments are performed with varying levels of corruption on ORL, CMU PIE, COIL20, COIL100, and LFW to evaluate RSS-LRR’s effectiveness against state-of-the-art low-rank methods. The experimental results show that RSS-LRR consistently outperforms the compared methods in image clustering and classification tasks.
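A hedged sketch of how such a cosine scaling factor could be computed: each sample is compared with its projection onto the principal components of the current low-rank representation, and the cosine of the angle between them serves as a per-sample cleanliness weight. The function below is illustrative only and does not reproduce the recursive optimization of RSS-LRR.

```python
import numpy as np

def cosine_scaling_factors(X, Z, rank):
    """Per-sample cosine scaling factors: the similarity between each data sample
    (columns of X) and the principal components of the current low-rank
    representation X @ Z. Illustrative; the exact recursion and weighting scheme
    of RSS-LRR are not reproduced here.
    """
    U, _, _ = np.linalg.svd(X @ Z, full_matrices=False)   # principal directions
    U = U[:, :rank]
    proj = U @ (U.T @ X)                                   # projection of each sample
    num = np.sum(X * proj, axis=0)
    den = np.linalg.norm(X, axis=0) * np.linalg.norm(proj, axis=0) + 1e-12
    return num / den                                       # near 1 -> sample likely clean

# Illustrative usage with random data; in RSS-LRR these weights would penalize
# each sample inside the recursive low-rank optimization.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 200))
Z = rng.standard_normal((200, 200))
w = cosine_scaling_factors(X, Z, rank=5)
```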


2021 · pp. 1-24
Author(s): L. Massaro, J. Adam, E. Jonade, Y. Yamada

Abstract In this study, we present a new granular rock-analogue material (GRAM) with dynamic scaling suitable for simulating fault and fracture processes in analogue experiments. Dynamically scaled experiments allow the direct comparison of geometrical, kinematic and mechanical processes between model and nature. The geometrical scaling factor defines the model resolution, which depends on the density and cohesive-strength ratios of the model material and natural rocks. Granular materials such as quartz sands are ideal for simulating upper-crustal deformation processes because of the similar nonlinear deformation behaviour of granular flow and brittle rock deformation. We compared the geometrical scaling factors of common analogue materials applied in tectonic models and identified a gap in model resolution corresponding to the outcrop and structural scale (1–100 m). The proposed GRAM is composed of quartz sand and hemihydrate powder and forms cohesive aggregates capable of deforming by tensile and shear failure under variable stress conditions. Based on dynamic shear tests, GRAM is characterized by a stress–strain curve similar to that of dry quartz sand, a cohesive strength of 7.88 kPa, and an average density of 1.36 g cm⁻³. The derived geometrical scaling factor is 1 cm in the model = 10.65 m in nature. For a large-scale test, GRAM was applied in strike-slip analogue experiments. Early results demonstrate the potential of GRAM to simulate fault and fracture processes, and their interaction in fault zones and damage zones, during different stages of fault evolution in dynamically scaled analogue experiments.
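The quoted resolution follows from the standard analogue-modelling relation σ* = ρ* g* L* with g* = 1 for experiments run in the natural gravity field, so L* = C*/ρ*. The sketch below combines the GRAM properties quoted in the abstract with illustrative natural-rock values (cohesion ≈ 16 MPa, density ≈ 2600 kg m⁻³; these are assumptions, not values from the paper) to show how a resolution of roughly 1 cm ≈ 10 m arises.

```python
def geometric_scaling_factor(c_model_pa, rho_model, c_nature_pa, rho_nature):
    """Length scaling ratio L* = C* / (rho* g*) with g* = 1 (natural gravity field),
    following the standard relation sigma* = rho* g* L*. GRAM properties are taken
    from the abstract; the natural-rock values passed below are illustrative only.
    """
    cohesion_ratio = c_model_pa / c_nature_pa
    density_ratio = rho_model / rho_nature
    return cohesion_ratio / density_ratio      # model length / natural length

l_star = geometric_scaling_factor(c_model_pa=7.88e3, rho_model=1360,
                                  c_nature_pa=16e6, rho_nature=2600)
print(0.01 / l_star)   # metres of nature represented by 1 cm of model (~10.6 m)
```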


Axioms · 2021 · Vol 11 (1) · pp. 5
Author(s): Amir Sabbagh Molahosseini

Scaling is one of the complex operations in the Residue Number System (RNS). It is necessary in RNS-based implementations of deep neural networks (DNNs) to prevent overflow. However, state-of-the-art RNS scalers for special moduli sets use the 2^k modulus as the scaling factor, which yields a high-precision output at the cost of high area and delay. Low-precision scaling based on multi-moduli scaling factors should therefore be used to improve performance. However, low-precision scaling of numbers smaller than the scale factor results in a zero output, which makes the result of the subsequent operation faulty. This paper first presents the formulation and hardware architecture of low-precision RNS scaling for four-moduli sets using the New Chinese Remainder Theorem 2 (New CRT-II) with a two-moduli scaling factor. Next, the low-precision scaler circuits are reused to achieve a high-precision scaler with minimum overhead. The proposed scaler can therefore detect a zero output after low-precision scaling and then transform the low-precision scaled residues to high precision, preventing a zero output when the input number is not zero.
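To see why scaling a number smaller than the scaling factor yields zero, the Python reference model below encodes an integer into residues, scales it by the product of two moduli via full CRT reconstruction, and re-encodes the quotient. This is a software illustration of the zero-output problem only, not the paper's New CRT-II hardware architecture, and the four-moduli set is an arbitrary example.

```python
from math import prod

def to_rns(x, moduli):
    """Encode an integer into its residues for the given moduli set."""
    return [x % m for m in moduli]

def from_rns(residues, moduli):
    """Reconstruct the integer from its residues via the classic CRT."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(..., -1, m) gives the modular inverse
    return x % M

def scale_rns(residues, moduli, k_moduli):
    """Scale (integer-divide) an RNS number by the product of k_moduli.
    Software reference model only, not the paper's hardware scaler."""
    K = prod(k_moduli)
    x = from_rns(residues, moduli)
    return to_rns(x // K, moduli), x // K

moduli = [7, 9, 11, 16]               # an illustrative pairwise-coprime four-moduli set
x = 5                                 # input smaller than the scaling factor 7*9 = 63
scaled, value = scale_rns(to_rns(x, moduli), moduli, [7, 9])
print(value)                          # 0 -> the zero-output problem the paper addresses
```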


Electronics · 2021 · Vol 11 (1) · pp. 43
Author(s): Song-Pei Ye, Yi-Hua Liu, Chun-Yu Liu, Kun-Che Ho, Yi-Feng Luo

In conventional adaptive variable-step-size (VSS) maximum power point tracking (MPPT) algorithms, a scaling factor is used to determine the required perturbation step, and the performance of the adaptive VSS MPPT algorithm is essentially decided by the choice of this scaling factor. In this paper, a neural-network-assisted variable-step-size (VSS) incremental-conductance (IncCond) MPPT method is proposed. The proposed method uses a neural network to obtain the optimal scaling factor for the VSS IncCond method at the current irradiance level. Only two operating points on the characteristic curve are needed to acquire the optimal scaling factor, so expensive irradiance and temperature sensors are not required. By adopting a proper scaling factor, the performance of the conventional VSS IncCond method can be improved, especially under rapidly varying irradiance conditions. To validate the studied algorithm, a 400 W prototype circuit is built and experiments are carried out accordingly. Compared with the perturb-and-observe (P&O), α-P&O, golden-section, and conventional VSS IncCond MPPT methods, the proposed method reduces the tracking loss by 95.58%, 42.51%, 93.66%, and 66.14%, respectively, under the EN50530 testing conditions.
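For context, the sketch below implements the conventional VSS IncCond update that the paper builds on, in which the perturbation step is the scaling factor N times |dP/dV|; in the proposed method, N would be supplied by the neural network for the present irradiance level. The values of N and the step limit, and the use of a voltage reference rather than a duty cycle, are illustrative assumptions.

```python
def vss_inccond_step(v, i, v_prev, i_prev, v_ref, N=0.05, step_max=2.0):
    """One iteration of variable-step-size incremental-conductance MPPT.

    Step size = N * |dP/dV| (the conventional VSS rule); N is the scaling
    factor that the proposed method would obtain from a neural network.
    Returns the updated PV voltage reference. Values are illustrative.
    """
    dv, di = v - v_prev, i - i_prev
    dp = v * i - v_prev * i_prev
    if dv != 0:
        step = min(N * abs(dp / dv), step_max)   # adaptive step, clamped
    else:
        step = 0.1 * step_max                    # fall back to a small fixed step

    if dv == 0:                      # voltage unchanged: decide from dI alone
        if di > 0:
            v_ref += step
        elif di < 0:
            v_ref -= step
    elif di / dv > -i / v:           # dI/dV > -I/V: operating left of the MPP
        v_ref += step
    elif di / dv < -i / v:           # dI/dV < -I/V: operating right of the MPP
        v_ref -= step
    return v_ref                     # unchanged when already at the MPP
```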

