parametric function
Recently Published Documents


TOTAL DOCUMENTS: 92 (FIVE YEARS: 31)
H-INDEX: 11 (FIVE YEARS: 1)

2022 · Vol 15 (2) · pp. 1-29
Author(s): Paolo D'Alberto, Victor Wu, Aaron Ng, Rahul Nimaiyar, Elliott Delaye, ...

We present xDNN, an end-to-end system for deep-learning inference built on a family of specialized hardware processors synthesized on Field-Programmable Gate Arrays (FPGAs) for Convolutional Neural Networks (CNNs). The design is optimized for low latency, high throughput, and high compute efficiency with no batching. It is scalable and a parametric function of the number of multiply-accumulate units, the on-chip memory hierarchy, and the numerical precision: it can be scaled down to produce a processor for embedded devices, replicated to provide more cores on larger devices, or resized to optimize efficiency. On a Xilinx Virtex UltraScale+ VU13P FPGA, we achieve 800 MHz, close to the maximum Digital Signal Processing frequency, with above 80% efficiency of on-chip compute resources. On top of our processor family, we present a runtime system enabling the execution of different networks for different input sizes (i.e., from 224×224 to 2048×1024). We present a compiler that reads CNNs from native frameworks (i.e., MXNet, Caffe, Keras, and TensorFlow), optimizes them, generates code, and provides performance estimates. The compiler combines quantization information from the native environment with optimizations to feed the runtime code as efficient as any hardware expert could write. We present tools that partition a CNN into subgraphs to divide the work between CPU cores and FPGAs. Notice that the software will not change when or if the FPGA design becomes an ASIC, making our work vertical rather than just a proof-of-concept FPGA project. We show experimental results for accuracy, latency, and power for several networks. In summary, we achieve up to 4 times higher throughput and 3 times better power efficiency than GPUs, and up to 20 times higher throughput than the latest CPUs. To our knowledge, our solutions are faster than any previous FPGA-based solution and comparable to other off-the-shelf solutions.
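To make the idea of a design that is "a parametric function" of its resources concrete, here is a minimal Python sketch; the class, fields, and numbers are hypothetical illustrations, not xDNN's actual configuration interface:

```python
from dataclasses import dataclass

@dataclass
class ProcessorConfig:
    """One point in a parametric design space: resources in, capability out."""
    mac_units: int        # number of multiply-accumulate units
    on_chip_mem_mib: int  # on-chip memory budget
    precision_bits: int   # numerical precision (e.g., 8 or 16)
    clock_mhz: float      # target clock frequency

    def peak_gops(self) -> float:
        # Each MAC performs 2 operations (multiply + add) per cycle.
        return 2 * self.mac_units * self.clock_mhz / 1e3

# A scaled-down embedded variant next to a full-size datacenter variant.
embedded = ProcessorConfig(mac_units=1024, on_chip_mem_mib=4,
                           precision_bits=8, clock_mhz=400)
datacenter = ProcessorConfig(mac_units=16384, on_chip_mem_mib=64,
                             precision_bits=8, clock_mhz=800)
print(embedded.peak_gops(), datacenter.peak_gops())
```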


2022 · pp. 1-12
Author(s): Mohammed Hamdi

With the evolution of the software industry, a huge number of software applications are being designed, developed, and uploaded to multiple online repositories. To determine the category and resource utilization of an application, researchers have had to rely on manual work. To reduce this effort, a solution has been proposed that works in two phases. In the first phase, a semantic-analysis-based process identifies keywords and variables. Based on these semantics, a dataset is designed with two classes: one represents the application type and the other corresponds to the application keywords. In the second phase, the preprocessed dataset is input to several machine learning techniques (Decision Table, Random Forest, OneR, Randomizable Filtered Classifier, Logistic Model Tree), and their performance is computed in terms of TP rate, FP rate, precision, recall, F1-score, MCC, ROC area, PRC area, and accuracy (%). For evaluation purposes, I have used an R library for latent semantic analysis to create the semantics, and the Weka tool to measure the performance of the algorithms. Results show that Random Forest achieves the highest accuracy, 99.3%, due to its parametric function evaluation and low misclassification error.
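As a rough illustration of the two-phase pipeline, here is a sketch in Python with scikit-learn; the paper itself uses an R latent semantic analysis library and Weka, so the corpus, labels, and parameters below are invented stand-ins:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Hypothetical corpus: keywords/identifiers extracted from applications.
docs = [
    "open file read write buffer stream",
    "render sprite texture shader frame",
    "parse json http request response",
    "player score collision physics level",
    "encrypt hash key certificate socket",
    "enemy spawn health damage animation",
]
labels = ["utility", "game", "utility", "game", "utility", "game"]

pipeline = make_pipeline(
    TfidfVectorizer(),
    TruncatedSVD(n_components=2),  # phase 1: LSA-style semantic features
    RandomForestClassifier(n_estimators=100, random_state=0),  # phase 2
)
print(cross_val_score(pipeline, docs, labels, cv=3, scoring="accuracy").mean())
```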


2021 · Vol 25 (2) · pp. 239-257
Author(s): Stephen Haslett, Jarkko Isotalo, Simo Puntanen

In this article we consider the partitioned fixed linear model F: y = X1β1 + X2β2 + ε and the corresponding mixed model M: y = X1β1 + X2u + ε, where ε is a random error vector and u is a random effect vector. In 2006, Isotalo, Möls, and Puntanen found conditions under which an arbitrary representation of the best linear unbiased estimator (BLUE) of an estimable parametric function of β1 in the fixed model F remains BLUE in the mixed model M. In this paper we extend the results concerning further equalities arising from models F and M.
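For reference, here are the two models and the textbook notion of a BLUE that the abstract relies on (standard definitions, not the paper's new equality conditions):

```latex
% Textbook definitions behind the abstract (not the paper's new results).
\[
  \mathcal{F}:\; y = X_1\beta_1 + X_2\beta_2 + \varepsilon, \qquad
  \mathcal{M}:\; y = X_1\beta_1 + X_2 u + \varepsilon .
\]
% K\beta_1 is estimable under F when (K : 0) = L(X_1 : X_2) for some L,
% and a linear estimator Gy is unbiased for K\beta_1 exactly when
\[
  G(X_1 : X_2) = (K : 0).
\]
% Among such estimators, Gy is the BLUE when its covariance matrix is
% minimal in the Loewner ordering:
\[
  \operatorname{cov}(Gy) \preceq \operatorname{cov}(Hy)
  \quad \text{for every } H \text{ with } H(X_1 : X_2) = (K : 0).
\]
```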


Author(s): Toni Monleón-Getino

Survival analysis concerns the analysis of time-to-event data and is essential in fields such as oncology. The survival function, S(t), is usually calculated, but in the presence of competing risks (competing events) it is necessary to introduce other statistical concepts and methods, such as the cumulative incidence function CI(t). This is defined as the proportion of subjects with an event time less than or equal to t. The present study describes a methodology that makes it possible to obtain numerically the shape of CI(t) curves and to estimate the benefit time points (BTP), defined as the times (t) at which 90, 95 or 99% of the maximum value of CI(t) is reached. Once the numerical function of CI(t) is obtained, it can be projected to infinite time, with all the limitations that this entails. For this task the R function Weibull.cumulative.incidence() is proposed. In a first step, this function transforms the survival function S(t), obtained using the Kaplan–Meier method, into CI(t). In a second step, the best-fit function of CI(t) is calculated in order to estimate the BTP, using one of two procedures: 1) a parametric function, which estimates a four-parameter Weibull growth curve by means of a non-linear regression (nls) procedure, or 2) a non-parametric method, using Local Polynomial Regression (LPR, or LOESS) fitting. Two examples are presented and developed using the Weibull.cumulative.incidence() function in order to illustrate the method. The methodology presented will be useful for better tracking of the evolution of diseases (especially in the presence of competing risks) and for projecting time to infinity, and it may help identify the causes of current trends in diseases like cancer. We think the BTP can be important in major diseases such as cardiac illness or cancer, in order to seek the inflection point of the disease, assess the associated treatment, or speculate on the course of the disease and change treatments at those points. These points can also be important for making medical decisions.
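A minimal sketch of the parametric branch of this procedure, written in Python with SciPy rather than the proposed R function; the Kaplan–Meier values, the four-parameter Weibull growth form a − b·exp(−c·t^d), and the starting values are illustrative assumptions:

```python
# Fit a 4-parameter Weibull growth curve to CI(t) = 1 - S(t) (the
# competing-risk-free simplification) and read off the benefit time point
# (BTP) where 95% of the maximum CI is reached.
import numpy as np
from scipy.optimize import curve_fit

def weibull_growth(t, a, b, c, d):
    # Classic 4-parameter Weibull growth curve, plateauing at a.
    return a - b * np.exp(-c * t**d)

# Hypothetical Kaplan-Meier output: event times and survival S(t).
t = np.array([1, 2, 4, 6, 9, 12, 18, 24], dtype=float)
S = np.array([0.97, 0.90, 0.78, 0.68, 0.57, 0.50, 0.42, 0.38])
CI = 1.0 - S

params, _ = curve_fit(weibull_growth, t, CI,
                      p0=[0.7, 0.7, 0.1, 1.0], maxfev=10000)
a, b, c, d = params
# BTP: solve a - b*exp(-c*t^d) = 0.95*a for t.
btp95 = (np.log(b / (0.05 * a)) / c) ** (1.0 / d)
print(f"fitted asymptote a={a:.3f}, BTP(95%) = {btp95:.1f}")
```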


2021 · Vol ahead-of-print (ahead-of-print)
Author(s): Alireza Dibaji, Seyed Amin Bagherzadeh, Arash Karimipour

Purpose
This paper aims to simulate nanofluid forced convection in a microchannel. According to the results, at high Reynolds numbers and higher nanofluid volume fractions, an increase in the rib height and slip coefficient further improves the heat transfer rate. The ribs also affect the flow physics depending on the Reynolds number, such that the slip velocity decreases with increasing nanofluid volume fraction and rib height.

Design/methodology/approach
Forced heat transfer of the water–copper nanofluid is numerically studied in a two-dimensional microchannel. The effects of the slip coefficient, Reynolds number, nanofluid volume fraction and rib height on the average Nusselt number, the slip velocity at the microchannel wall and the performance evaluation criterion are investigated.

Findings
In contrast, the slip velocity increases with increasing Reynolds number and slip coefficient. Afterwards, a non-parametric function estimation is performed relying on an artificial neural network.

Originality/value
Finally, a genetic algorithm was used to establish a set of optimal decision parameters for the problem.
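A compact sketch of the surrogate-plus-optimizer workflow that the Findings and Originality sections describe: fit a neural network to sampled simulation results (non-parametric function estimation), then search the design space with a small genetic algorithm. All data, bounds, and the toy response are invented placeholders, not the authors' CFD results:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical samples: [Reynolds, volume fraction, rib height, slip coef] -> Nusselt
lo = np.array([100.0, 0.0, 0.0, 0.0])
hi = np.array([2000.0, 0.04, 0.2, 0.1])
X = rng.uniform(lo, hi, size=(200, 4))
y = 5 + 0.004 * X[:, 0] + 40 * X[:, 1] + 8 * X[:, 2] + 12 * X[:, 3]  # toy response

surrogate = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0),
).fit(X, y)

pop = rng.uniform(lo, hi, size=(40, 4))
for _ in range(50):                                    # generations
    fitness = surrogate.predict(pop)
    parents = pop[np.argsort(fitness)[-20:]]           # selection: keep best half
    ia, ib = rng.integers(0, 20, (2, 20))
    mask = rng.random((20, 4)) < 0.5
    children = np.where(mask, parents[ia], parents[ib])        # uniform crossover
    children += rng.normal(0, 0.02, children.shape) * (hi - lo)  # mutation
    pop = np.vstack([parents, np.clip(children, lo, hi)])
best = pop[np.argmax(surrogate.predict(pop))]
print("surrogate-optimal [Re, phi, rib, slip]:", best)
```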


2021 · Vol 20 · pp. 288-299
Author(s): Refah Mohammed Alotaibi, Yogesh Mani Tripathi, Sanku Dey, Hoda Ragab Rezk

In this paper, inference on stress-strength reliability is considered for unit-Weibull distributions with a common parameter, under the assumption that data are observed using progressive type-II censoring. We obtain different estimators of system reliability using classical and Bayesian procedures. An asymptotic interval is constructed based on the Fisher information matrix; boot-p and boot-t intervals are also obtained. We evaluate Bayes estimates using Lindley's technique and the Metropolis-Hastings (MH) algorithm, and the Bayes credible interval is evaluated using the MH method. An unbiased estimator of this parametric function is also obtained for the known common parameter case. Numerical simulations are performed to compare the estimation methods. Finally, a data set is studied for illustration purposes.
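For orientation, the stress-strength parameter in question admits a closed form under the usual unit-Weibull parameterization; this is an assumption about notation, and the paper's conventions may differ:

```latex
% Assume the unit-Weibull CDF F(x; \alpha, \beta) = \exp\{-\alpha(-\ln x)^{\beta}\},
% 0 < x < 1. If strength X \sim \mathrm{UW}(\alpha_1, \beta) and stress
% Y \sim \mathrm{UW}(\alpha_2, \beta) share the common parameter \beta, then
% T_X = (-\ln X)^{\beta} \sim \mathrm{Exp}(\alpha_1), and similarly for Y, so
\[
  R \;=\; P(X > Y) \;=\; P\bigl(T_X < T_Y\bigr)
    \;=\; \frac{\alpha_1}{\alpha_1 + \alpha_2},
\]
% a smooth parametric function of (\alpha_1, \alpha_2), which is what the
% classical, bootstrap, and Bayesian procedures above estimate.
```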


2021 · Vol 36 (2) · pp. 190-198
Author(s): Samuel Kingston, Hunter Ellis, Mashad Saleh, Evan Benoit, Ayobami Edun, ...

In this paper, we present a method for estimating complex impedances using reflectometry and a modified steepest descent inversion algorithm. We simulate spread spectrum time domain reflectometry (SSTDR), which can measure complex impedances on energized systems, for an experimental setup with resistive and capacitive loads. A parametric function, which combines a misfit function and a stabilizer function, is created. The misfit function is a least-squares estimate of how closely the model data match the observed data. The stabilizer function prevents the steepest descent algorithm from becoming unstable and diverging. Steepest descent iteratively identifies the model parameters that minimize the parametric function. We validate the algorithm by correctly identifying the model parameters (capacitance and resistance) associated with simulated SSTDR data with added 3 dB white Gaussian noise. With the stabilizer function, the steepest descent estimates of the model parameters are bounded within a specified range. The errors for capacitance (220 pF to 820 pF) and resistance (50 Ω to 270 Ω) are < 10%, corresponding to complex impedance magnitudes |R + 1/(jωC)| of 53 Ω to 510 Ω.
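A minimal sketch of such a regularized steepest descent loop; the forward model, bounds handling, and step size are illustrative assumptions, not the authors' SSTDR simulator or their modified algorithm:

```python
import numpy as np

# Bounds of the experimental range reported above: [R (ohm), C (F)].
lo = np.array([50.0, 220e-12])
hi = np.array([270.0, 820e-12])

def forward(m):
    # Hypothetical forward model: reflection magnitude of an RC load on a
    # 50-ohm line, standing in for the simulated SSTDR response.
    R, C = m
    w = np.linspace(1e6, 1e8, 50)            # angular frequencies
    z = R + 1.0 / (1j * w * C)               # complex impedance R + 1/(jwC)
    return np.abs((z - 50.0) / (z + 50.0))

d_obs = forward(np.array([150.0, 470e-12]))  # "observed" data from a true model

def parametric_function(u, lam=1e-2):
    m = lo + u * (hi - lo)                   # normalized -> physical units
    misfit = np.sum((forward(m) - d_obs) ** 2)           # least-squares misfit
    stabilizer = np.sum(np.clip(-u, 0, None) ** 2 +
                        np.clip(u - 1.0, 0, None) ** 2)  # penalize leaving bounds
    return misfit + lam * stabilizer

u, h = np.array([0.1, 0.1]), 1e-5
for _ in range(1000):
    # Finite-difference gradient, then a fixed-step steepest descent update.
    grad = np.array([(parametric_function(u + h * e) -
                      parametric_function(u - h * e)) / (2 * h)
                     for e in np.eye(2)])
    u = u - 0.05 * grad
print("estimated [R, C]:", lo + u * (hi - lo))
```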


2021 · Vol 2021 · pp. 1-10
Author(s): Zheng Wang, Yuexin Wu, Yang Bao, Jing Yu, Xiaohui Wang

Network embedding that learns representations of network nodes plays a critical role in network analysis, since it enables many downstream learning tasks. Although various network embedding methods have been proposed, they are mainly designed for a single network scenario. This paper considers a “multiple network” scenario by studying the problem of fusing the node embeddings and incomplete attributes from two different networks. To address this problem, we propose to complement the incomplete attributes, so as to conduct data fusion via concatenation. Specifically, we first propose a simple inductive method, in which attributes are defined as a parametric function of the given node embedding vectors. We then propose its transductive variant by adaptively learning an adjacency graph to approximate the original network structure. Additionally, we also provide a light version of this transductive variant. Experimental results on four datasets demonstrate the superiority of our methods.
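A small sketch of the inductive variant as described: attributes are modeled as a parametric function of the given embeddings, fitted on nodes with observed attributes, and the completed attributes are fused by concatenation. Dimensions, data, and the regressor choice are invented for illustration:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n, d_emb, d_attr = 500, 64, 16
Z = rng.normal(size=(n, d_emb))                     # node embeddings (given)
A = np.tanh(Z @ rng.normal(size=(d_emb, d_attr)))   # toy ground-truth attributes
observed = rng.random(n) < 0.6                      # 60% of nodes have attributes

f = MLPRegressor(hidden_layer_sizes=(128,), max_iter=2000, random_state=0)
f.fit(Z[observed], A[observed])                     # attributes = f(embedding)

A_hat = A.copy()
A_hat[~observed] = f.predict(Z[~observed])          # complete missing attributes
fused = np.concatenate([Z, A_hat], axis=1)          # data fusion via concatenation
print(fused.shape)
```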


Metrika · 2021
Author(s): Frank P. A. Coolen, Abdullah A. H. Ahmadini, Tahani Coolen-Maturi

This paper presents an imprecise predictive inference method for accelerated life testing. The method is largely nonparametric, with a basic parametric function to link different stress levels. The log-rank test is used to provide imprecision for the link function parameter, which in turn provides robustness in the resulting lower and upper survival functions for a future observation at the normal stress level. An application using data from the literature is presented, and simulations show the performance and robustness of the method. In case of model misspecification, robustness may be achieved at the price of large imprecision, which would emphasize the need for more data or further model assumptions.
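A sketch of the imprecision mechanism under an assumed power-law link between stress levels: keep every link parameter for which the log-rank test cannot distinguish the transformed accelerated data from the normal-level data. The data, link form, and 5% threshold are illustrative, not the paper's exact model:

```python
import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
s0, s1 = 1.0, 2.0                          # normal and accelerated stress levels
t_normal = rng.weibull(1.5, 40) * 100      # failure times at normal stress
t_accel = rng.weibull(1.5, 40) * 100 / (s1 / s0) ** 2.0   # true gamma = 2

accepted = []
for gamma in np.linspace(0.0, 4.0, 81):
    t_back = t_accel * (s1 / s0) ** gamma  # map accelerated times to normal level
    result = logrank_test(t_normal, t_back)
    if result.p_value > 0.05:              # link parameter not rejected
        accepted.append(gamma)
print("non-rejected link parameters:", min(accepted), "to", max(accepted))
```

The retained range of link parameters then yields the lower and upper survival functions: the wider the range, the larger the imprecision.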

