Linear Interpolation
Recently Published Documents


TOTAL DOCUMENTS: 1125 (FIVE YEARS: 290)
H-INDEX: 38 (FIVE YEARS: 3)

Electronics (2022), Vol. 11 (2), p. 196
Author(s): Zhenshan Zhu, Zhimin Weng, Hailin Zheng

A microgrid with hydrogen storage is an effective way to integrate renewable energy and reduce carbon emissions. This paper proposes an optimal operation method for a microgrid with hydrogen storage. The electrolyzer efficiency characteristic model is established using the linear interpolation method and incorporated into the optimal operation model of the microgrid. The sequential decision-making problem of optimal microgrid operation is solved with a deep deterministic policy gradient algorithm. Simulation results show that the proposed method reduces the operation cost of the microgrid by about 5% compared with traditional algorithms and has a certain generalization capability.
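The role linear interpolation plays here is easy to illustrate: the electrolyzer's efficiency is characterized at a handful of operating points and interpolated between them. A minimal Python sketch with hypothetical sample points (the paper's actual efficiency curve is not reproduced):

```python
import numpy as np

# Hypothetical electrolyzer operating points: input power (kW) vs. efficiency.
power_pts = np.array([10.0, 25.0, 50.0, 75.0, 100.0])   # kW
eff_pts   = np.array([0.55, 0.68, 0.72, 0.70, 0.65])    # dimensionless

def electrolyzer_efficiency(p_kw: float) -> float:
    """Piecewise-linear interpolation of efficiency at power p_kw."""
    return float(np.interp(p_kw, power_pts, eff_pts))

# The hydrogen production rate then follows from the interpolated efficiency.
p = 60.0                                   # kW drawn by the electrolyzer
h2_power = p * electrolyzer_efficiency(p)  # kW of equivalent hydrogen energy
print(f"efficiency at {p} kW: {electrolyzer_efficiency(p):.3f}")
```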


Electronics (2022), Vol. 11 (2), p. 191
Author(s): Daniel R. Prado, Jesús A. López-Fernández, Manuel Arrebola

In this work, a simple, efficient and accurate database in the form of a lookup table for reflectarray design and direct layout optimization is presented. The database uses N-linear interpolation internally to estimate the reflection coefficients at coordinates that are not stored within it. The speed and accuracy of this approach were measured against the full-wave technique based on local periodicity that is used to populate the database. In addition, it was compared with a machine learning technique, namely support vector machines applied to regression, under the same conditions, to elucidate the advantages and disadvantages of each technique. The results obtained from the layout design, analysis and crosspolar optimization of a very large reflectarray for space applications show that, despite using a simple N-linear interpolation, the database offers sufficient accuracy while considerably accelerating the overall design process, provided that it is suitably populated.
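For intuition, the two-dimensional case of such N-linear interpolation over a lookup table can be sketched as follows. The axes (element size, frequency), grid resolution and placeholder table values are all hypothetical, standing in for the full-wave results that would populate the real database:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical lookup-table axes: element size (mm) and frequency (GHz).
sizes = np.linspace(4.0, 10.0, 25)
freqs = np.linspace(28.0, 32.0, 9)

# Placeholder complex reflection coefficients; in the real database these
# come from the full-wave local-periodicity analysis.
rng = np.random.default_rng(0)
table = (rng.standard_normal((sizes.size, freqs.size))
         + 1j * rng.standard_normal((sizes.size, freqs.size)))

# N-linear (here bilinear) interpolation acts componentwise, so the real
# and imaginary parts are interpolated separately.
interp_re = RegularGridInterpolator((sizes, freqs), table.real)
interp_im = RegularGridInterpolator((sizes, freqs), table.imag)

def reflection(size_mm: float, f_ghz: float) -> complex:
    """Estimate the reflection coefficient at an unstored coordinate."""
    pt = (size_mm, f_ghz)
    return complex(interp_re(pt).item(), interp_im(pt).item())

print(reflection(6.3, 29.7))
```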


Author(s): Mostafa Dastorani, Vahid Safarianzengir, Bromand Salahi

Introduction: The present study investigated one type of disease (skin cancer) and its relationship with climatic parameters. The aim of this study was to investigate the relationship between climate change and skin cancer in Ardabil province. Materials and Methods: This descriptive correlational study was conducted to investigate the effect of six climatic parameters (frost, sunny hours, minimum mean humidity, maximum absolute temperature, minimum absolute temperature, and mean temperature) on skin cancer in Ardabil province over a 3-year statistical period (2012-2014). The data were analyzed using Spearman correlation in SPSS version 24; Minitab version 16 was used for linear interpolation. Results: According to the findings, the highest correlation (more than 95%) between skin cancer and a climatic parameter in the three cities of Parsabad, Khalkhal, and Ardabil was with the minimum absolute temperature. At the Khalkhal station, however, sunny hours had the highest correlation over the three years of study, and the lowest correlation in all four cities was with the frost parameter. It can be said that sunny hours and maximum temperature affect the incidence of skin cancer, while the minimum absolute temperature exacerbates this type of disease. Conclusion: Based on the statistical correlations and the effects of climatic parameters on skin cancer, it can be concluded that climatic parameters are among the factors influencing skin cancer.
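As an illustration of the statistical method only (the study's clinical data are not public), a Spearman rank correlation between a climatic series and case counts can be computed as follows, using invented monthly values:

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical monthly series for one city: minimum absolute temperature (°C)
# and skin-cancer case counts. These numbers are invented for illustration.
min_abs_temp = np.array([-12, -9, -3, 4, 9, 14, 18, 17, 11, 5, -2, -8])
cases        = np.array([  3,  4,  5, 6, 8, 11, 13, 12,  9, 7,  5,  4])

rho, p_value = spearmanr(min_abs_temp, cases)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```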


Author(s): S. K. Foo, R. P. Cubbidge, R. Heitmar

Abstract Purpose The aim of this paper was to examine focal and diffuse visual field loss in terms of threshold agreement between the widely used SITA Standard Humphrey Field Analyser (HFA) threshold algorithm and the SPARK Precision algorithm (Oculus Twinfield 2). Methods A total of 39 treated glaucoma patients (34 primary open angle and 5 primary angle closure glaucoma) and 31 cataract patients without glaucoma were tested in succession with the Oculus Twinfield 2 (Oculus Optikgeräte GmbH, Wetzlar, Germany) using the SPARK Precision algorithm and with the HFA 3 (Carl Zeiss Meditec, Dublin, CA) using the 30–2 SITA Standard algorithm. Results SPARK Precision required around half the testing time of SITA Standard. There was a good correlation between the mean sensitivity (MS) of the two threshold algorithms, but the mean deviation (MD) and pattern standard deviation (PSD) were significantly less severe with SPARK Precision in both the glaucoma (focal field loss) and cataract (diffuse field loss) groups (p < 0.001). There was poor agreement for all global indices (MS, MD and PSD) between the two algorithms, and there was a significant proportional bias of MD in the glaucoma group and of PSD in both the glaucoma and cataract groups. The pointwise sensitivity analysis yielded higher threshold estimates with SPARK Precision than with SITA Standard in the nasal field. Classification of glaucoma severity using AGIS was significantly lower with SPARK Precision than with SITA Standard (p < 0.001). Conclusion SITA renders deeper defects than SPARK. Compared to the SITA Standard threshold algorithm, SPARK Precision cannot quantify early glaucomatous field loss. This may be due to the mathematical linear interpolation of threshold sensitivity, or to the plateau effect caused by the reduced dynamic range of the Twinfield 2 perimeter, which limits the measurable depth of scotomas. Although not of clinical significance in early glaucoma, the plateau effect may hinder the long-term follow-up of patients during disease progression.
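For readers unfamiliar with how such agreement and proportional bias are typically quantified, a minimal sketch follows. The paired MD values are invented, and the regression of differences on means is a standard Bland-Altman-style approach, not necessarily the paper's exact analysis:

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical paired mean deviation (MD, dB) values from the two algorithms;
# the study's per-patient data are not reproduced here.
md_sita  = np.array([-12.1, -8.4, -6.0, -4.2, -3.1, -2.0, -1.2, -0.5])
md_spark = np.array([ -9.0, -6.5, -4.8, -3.9, -2.8, -2.1, -1.5, -0.9])

means = (md_sita + md_spark) / 2
diffs = md_sita - md_spark

bias = diffs.mean()
loa  = 1.96 * diffs.std(ddof=1)   # half-width of the 95% limits of agreement
slope, intercept, r, p, se = linregress(means, diffs)  # proportional bias test

print(f"mean bias = {bias:.2f} dB, limits of agreement = ±{loa:.2f} dB")
print(f"proportional bias slope = {slope:.2f} (p = {p:.3f})")
```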


2021
Author(s): Eva Prakash, Avanti Shrikumar, Anshul Kundaje

Deep neural networks and support vector machines have been shown to accurately predict genome-wide signals of regulatory activity from raw DNA sequences. These models are appealing in part because they can learn predictive DNA sequence features without prior assumptions. Several methods, such as in-silico mutagenesis, GradCAM, DeepLIFT, Integrated Gradients and GkmExplain, have been developed to reveal these learned features. However, the behavior of these methods on regulatory genomic data remains an area of active research. Although prior work has benchmarked these methods on simulated datasets with known ground-truth motifs, those simulations employed highly simplified regulatory logic that is not representative of the genome. In this work, we propose a novel pipeline for designing simulated data that comes closer to modeling the complexity of regulatory genomic DNA. We apply the pipeline to build simulated datasets based on publicly available chromatin accessibility experiments and use these datasets to benchmark different interpretation methods on their ability to identify ground-truth motifs. We find that a GradCAM-based method, which was reported to perform well on a more simplified dataset, does not do well on this dataset (particularly when using an architecture with shorter convolutional kernels in the first layer), and we show theoretically that this is expected given the nature of regulatory genomic data. We also show that Integrated Gradients sometimes performs worse than gradient-times-input, likely owing to its linear interpolation path. We additionally explore the impact of user-defined settings on the interpretation methods, such as the choice of "reference"/"baseline", and identify recommended settings for genomics. Our analysis suggests several promising directions for future research on these model interpretation methods. Code and links to data are available at https://github.com/kundajelab/interpret-benchmark.
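The linear interpolation path referred to here is the defining ingredient of Integrated Gradients: gradients are averaged along the straight line from a baseline to the input, then scaled by the input-minus-baseline difference. A self-contained sketch on a toy differentiable function (not one of the genomic models studied):

```python
import numpy as np

# Toy differentiable "model": f(x) = tanh(w . x), with an analytic gradient
# so the sketch stays self-contained.
w = np.array([0.5, -1.0, 2.0])
def f(x):    return float(np.tanh(w @ x))
def grad(x): return (1 - np.tanh(w @ x) ** 2) * w   # df/dx

def integrated_gradients(x, baseline, steps=50):
    """Average gradients along the straight-line (linear interpolation)
    path from baseline to x, then scale by (x - baseline)."""
    alphas = np.linspace(0.0, 1.0, steps)
    grads = np.array([grad(baseline + a * (x - baseline)) for a in alphas])
    return (x - baseline) * grads.mean(axis=0)

x0 = np.zeros(3)                 # the "reference"/baseline choice matters
x  = np.array([1.0, 0.5, -0.3])
print("IG:          ", integrated_gradients(x, x0))
print("grad * input:", grad(x) * x)
```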


2021, Vol. 2021 (49), pp. 37-44
Author(s): I. B. Ivasiv

It has been proposed to utilize the median algorithm for determining the positions of extrema of the diffuse light reflectance intensity distribution from the discrete signal of a photodiode linear array. The algorithm's formula has been derived on the basis of piecewise-linear interpolation, representing the signal by its cumulative function. It is shown that this formula is much simpler to implement than the known centroid algorithm and the noise-immune Blais and Rioux detector algorithm. The systematic methodical errors at zero noise, as well as the random errors under fully common-mode additive noise and under uncorrelated noise, have also been estimated and compared for the mentioned algorithms. In these terms, the proposed median algorithm is comparable to the Blais and Rioux algorithm and considerably better than the centroid algorithm.
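Under the stated construction, the median estimate is the abscissa at which the piecewise-linearly interpolated cumulative function reaches half of the total signal. A minimal sketch comparing it with the centroid estimate on an invented photodiode-array signal:

```python
import numpy as np

# Hypothetical photodiode-array signal: a reflectance peak plus small noise.
rng = np.random.default_rng(1)
pix = np.arange(64)
signal = np.exp(-0.5 * ((pix - 30.4) / 3.0) ** 2) \
       + 0.02 * rng.standard_normal(64)
signal = np.clip(signal, 0, None)

# Centroid estimate of the peak position.
centroid = (pix * signal).sum() / signal.sum()

# Median estimate: position where the piecewise-linearly interpolated
# cumulative function reaches half of the total sum.
cum = np.cumsum(signal)
median_pos = np.interp(cum[-1] / 2, cum, pix)

print(f"centroid: {centroid:.3f}, median: {median_pos:.3f}")
```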


Author(s): Kelachi P. Enwere, Uchenna P. Ogoke

Aims: The study seeks to determine the relationships that exist among continuous probability distributions and the use of interpolation techniques to estimate unavailable but desired values of a given probability distribution. Study Design: Statistical probability tables for the Normal, Student t, Chi-squared, F and Gamma distributions were used to compare interpolated values with the tabulated values. Charts and tables were used to represent the relationships among the five probability distributions. Methodology: The linear interpolation technique was employed to interpolate unavailable but desired values so as to obtain approximate values from the statistical tables. The data were analyzed by interpolating unavailable but desired values at the 95% α-level for the five continuous probability distributions. Results: The interpolated values were close to the exact values, and the difference between the exact value and the interpolated value was not pronounced. The table and chart established that relationships do exist among the Normal, Student t, Chi-squared, F and Gamma distributions. Conclusion: Interpolation techniques can be applied to obtain unavailable but desired information in a data set. Thus, uncertainty found in a data set can be discovered, analyzed and interpreted to produce the desired results. Moreover, understanding how these probability distributions are related to each other can inform how best they can be used interchangeably by statisticians and other researchers who apply statistical methods in practical applications.
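The technique itself is elementary: a critical value for a degree of freedom that the table omits can be estimated from the two bracketing entries. In the sketch below, the tabulated values are standard two-tailed 5% t critical values for 30 and 40 degrees of freedom:

```python
# A minimal sketch of linearly interpolating between two tabulated entries.
def linear_interp(x, x0, y0, x1, y1):
    """Estimate y at x from the straight line through (x0, y0) and (x1, y1)."""
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

t_30, t_40 = 2.042, 2.021   # tabulated t(0.975, df) for df = 30 and df = 40
t_34 = linear_interp(34, 30, t_30, 40, t_40)
print(f"interpolated t critical value for df = 34: {t_34:.4f}")  # ~2.0336
```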


Author(s): JongHyup Lee, Sungjin Kang, Wooyoung Noh, Jimyung Oh

In this paper, DFT-based channel estimation with channel response mirroring is proposed and analyzed. In general, pilot symbols for channel estimation in MIMO (multiple-input multiple-output) OFDM (orthogonal frequency-division multiplexing) systems are arranged in a diamond pattern in the time-frequency plane, so an interpolation technique is needed to estimate the channel response of the sub-carriers between reference symbols. Various interpolation techniques, such as linear interpolation, low-pass filtering interpolation, cubic interpolation and DFT interpolation, are employed to estimate the channel at the non-pilot sub-carriers. In this paper, we investigate conventional DFT-based channel estimation for noise reduction and channel response interpolation. The conventional method suffers performance degradation from a distortion known as the "edge effect" or "border effect". To mitigate this distortion, we propose an improved DFT-based channel estimation with channel response mirroring. This technique efficiently mitigates the distortion caused by the discontinuity of the channel response at the edges of the DFT window. Simulation results show that the proposed method outperforms conventional DFT-based channel estimation in terms of MSE (mean squared error).
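One plausible reading of the mirroring idea (the paper's exact construction may differ) is to extend the pilot response with its mirror image, so the sequence is continuous at its edges, before the usual IDFT/zero-pad/DFT interpolation. A sketch with an invented smooth channel:

```python
import numpy as np

def dft_interpolate(h_pilot: np.ndarray, n_out: int) -> np.ndarray:
    """Conventional DFT interpolation: IDFT, zero-pad in the middle, DFT."""
    n_p = h_pilot.size
    t = np.fft.ifft(h_pilot)
    t_padded = np.concatenate([t[:n_p // 2],
                               np.zeros(n_out - n_p, dtype=complex),
                               t[n_p // 2:]])
    return np.fft.fft(t_padded) * (n_out / n_p)

def dft_interpolate_mirrored(h_pilot: np.ndarray, n_out: int) -> np.ndarray:
    """Mirror the pilot response to remove the edge discontinuity, then
    interpolate and keep the first half (a sketch of the mirroring idea)."""
    mirrored = np.concatenate([h_pilot, h_pilot[::-1]])
    return dft_interpolate(mirrored, 2 * n_out)[:n_out]

# Hypothetical smooth channel sampled at 16 pilot sub-carriers, target 64.
k = np.arange(16)
h_pilot = np.exp(-1j * 2 * np.pi * 0.03 * k) * (1 + 0.1 * k / 16)
h64_plain  = dft_interpolate(h_pilot, 64)
h64_mirror = dft_interpolate_mirrored(h_pilot, 64)
```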


2021
Author(s): Mithilesh Rajendrakumar, Manu Vyas, Prashant Deshpande, Bommaian Balasubramanian, Kevin Shepherd

Abstract When a gas-turbine engine is in operation, inlet-generated total-pressure distortion can have a detrimental effect on the engine's stability and performance. During the product development life cycle, on-ground wind tunnel tests and in-flight tests are performed to estimate the inlet distortion characteristics, and extensive measures are taken in their preparation and execution. The spatial inlet distortion data are recorded using an array of high-response total-pressure probes, usually arranged in rake and ring arrays as per AIR1419. Propulsion system designers use the data from these probes to address the effects of inlet distortion on stability and performance, particularly the engine's sensitivity to inlet distortion. In some instances, the probes can produce inaccurate measurements, or no measurements at all, for a variety of reasons, which may force a time-consuming and costly repetition of the test. To avoid this, the inaccurate or invalid measurements can be substituted using a variety of statistical techniques during test data post-processing. This paper discusses the results of different interpolation techniques for substituting invalid steady-state total-pressure measurements, evaluated in the context of the classical distortion profile data available in AIR1419. The techniques include 1D linear interpolation using only probe data from adjacent rings, 1D linear interpolation using only probe data from adjacent rakes, and bilinear interpolation using probe data from adjacent rings and rakes. Furthermore, the paper evaluates a bilinear interpolation technique with optimal weights obtained from linear regression, which enhances the estimation of invalid pressure values.
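A minimal sketch of the neighbor-based substitution, with an invented rake-by-ring pressure array and equal neighbor weights (the paper additionally derives optimal weights via linear regression):

```python
import numpy as np

# Hypothetical 8-rake x 5-ring array of steady-state total pressures (psia);
# rows are rakes (circumferential), columns are rings (radial).
rng = np.random.default_rng(2)
p = 14.0 + 0.5 * rng.standard_normal((8, 5))

def substitute_bilinear(p: np.ndarray, rake: int, ring: int) -> float:
    """Replace an invalid interior-probe reading with the average of its
    adjacent-ring and adjacent-rake neighbors (equal weights)."""
    ring_pair = [p[rake, ring - 1], p[rake, ring + 1]]   # adjacent rings
    rake_pair = [p[(rake - 1) % p.shape[0], ring],       # adjacent rakes,
                 p[(rake + 1) % p.shape[0], ring]]       # wrapping around 360°
    return float(np.mean(ring_pair + rake_pair))

p[3, 2] = substitute_bilinear(p, 3, 2)  # e.g., probe at rake 3, ring 2 failed
```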

