computation method
Recently Published Documents


TOTAL DOCUMENTS

936
(FIVE YEARS 191)

H-INDEX

24
(FIVE YEARS 5)

2022 ◽  
pp. 0272989X2110730
Author(s):  
Anna Heath

Background The expected value of sample information (EVSI) calculates the value of collecting additional information through a research study with a given design. However, standard EVSI analyses do not account for the slow and often incomplete implementation of the treatment recommendations that follow research. Thus, standard EVSI analyses do not correctly capture the value of the study. Previous research has developed measures that calculate the research value while adjusting for implementation challenges, but estimating these measures is itself a challenge. Methods Based on an approach that assumes the implementation level is related to the strength of evidence in favor of the treatment, 2 implementation-adjusted EVSI calculation methods are developed. These novel methods circumvent the need for analytical calculations, which were restricted to settings in which normality could be assumed. The first method uses computationally demanding nested simulations, based on the definition of the implementation-adjusted EVSI. The second method adapts the moment matching method, a recently developed efficient EVSI computation method, to adjust for imperfect implementation. The implementation-adjusted EVSI is then calculated with the 2 methods across 3 examples. Results The difference between the 2 methods is at most 6% in all examples. The efficient computation method is between 6 and 60 times faster than the nested simulation method across these examples and could be used in practice. Conclusions This article permits the calculation of an implementation-adjusted EVSI using realistic assumptions. The efficient estimation method is accurate and can estimate the implementation-adjusted EVSI in practice. By adapting standard EVSI estimation methods, adjustments for imperfect implementation can be made at the same computational cost as a standard EVSI analysis. 
Highlights Standard expected value of sample information (EVSI) analyses do not account for the fact that treatment implementation following research is often slow and incomplete, meaning they incorrectly capture the value of the study. Two methods, based on nested Monte Carlo sampling and the moment matching EVSI calculation method, are developed to adjust EVSI calculations for imperfect implementation when the speed and level of the implementation of a new treatment depends on the strength of evidence in favor of the treatment. The 2 methods we develop provide similar estimates for the implementation-adjusted EVSI. Our methods extend current EVSI calculation algorithms and thus require limited additional computational complexity.
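The nested Monte Carlo estimator and an evidence-dependent implementation adjustment can be sketched in a toy conjugate-normal decision model. All numbers, the conjugate model, and the logistic uptake link below are our illustrative assumptions, not the paper's models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative conjugate-normal setup (our assumption, not the paper's models):
mu0, sd0 = 500.0, 2000.0   # prior mean / sd of incremental net benefit (INB)
sigma = 4000.0             # per-patient sd of the outcome in the proposed trial
n = 100                    # proposed trial size

def evsi_nested(n_outer=20000):
    """Nested Monte Carlo EVSI: the outer loop simulates trial outcomes from
    the prior predictive; here the inner (posterior) expectation is available
    in closed form because the model is conjugate."""
    theta = rng.normal(mu0, sd0, n_outer)          # true INB drawn from the prior
    xbar = rng.normal(theta, sigma / np.sqrt(n))   # simulated trial mean
    post_var = 1.0 / (1.0 / sd0**2 + n / sigma**2)
    post_mean = post_var * (mu0 / sd0**2 + xbar * n / sigma**2)
    value_with_data = np.maximum(post_mean, 0.0).mean()  # adopt only if posterior INB > 0
    value_now = max(mu0, 0.0)                            # decision under current information
    return value_with_data - value_now

def evsi_implementation_adjusted(n_outer=20000, k=2.0):
    """Same outer loop, but uptake of the new treatment follows a logistic
    function of the posterior z-score: a hypothetical link between the
    implementation level and the strength of evidence."""
    theta = rng.normal(mu0, sd0, n_outer)
    xbar = rng.normal(theta, sigma / np.sqrt(n))
    post_var = 1.0 / (1.0 / sd0**2 + n / sigma**2)
    post_mean = post_var * (mu0 / sd0**2 + xbar * n / sigma**2)
    uptake = 1.0 / (1.0 + np.exp(-k * post_mean / np.sqrt(post_var)))
    uptake0 = 1.0 / (1.0 + np.exp(-k * mu0 / sd0))
    return (uptake * post_mean).mean() - uptake0 * mu0
```

Because uptake is partial, the implementation-adjusted value differs from the standard EVSI; the moment matching alternative in the paper avoids the inner posterior computation entirely, which is what makes it faster in non-conjugate models.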


2021 ◽  
Author(s):  
Wei Liu ◽  
Xu Liao ◽  
Xiang Zhou ◽  
Xingjie Shi ◽  
Jin Liu

Dimension reduction and (spatial) clustering are two key steps in the analysis of both single-cell RNA-sequencing (scRNA-seq) and spatial transcriptomics data collected from different platforms. Most existing methods perform dimension reduction and (spatial) clustering sequentially, treating them as two consecutive stages in tandem analysis. However, the low-dimensional embeddings estimated in the dimension reduction step may not necessarily be relevant to the class labels inferred in the clustering step and thus may impair the performance of the clustering and other downstream analyses. Here, we develop a computation method, DR-SC, that performs dimension reduction and (spatial) clustering jointly in a unified framework. Joint analysis in DR-SC ensures accurate (spatial) clustering results and effective extraction of biologically informative low-dimensional features. Importantly, DR-SC is applicable not only for cell type clustering in scRNA-seq studies but also for spatial clustering in spatial transcriptomics, which characterizes the spatial organization of a tissue by segregating it into multiple tissue structures. For spatial transcriptomics analysis, DR-SC relies on an underlying latent hidden Markov random field model to encourage spatial smoothness of the detected spatial cluster boundaries. We also develop an efficient expectation-maximization algorithm based on iterated conditional modes. DR-SC is not only scalable to large sample sizes but is also capable of optimizing the spatial smoothness parameter in a data-driven manner. Comprehensive simulations show that DR-SC outperforms existing clustering methods such as Seurat and spatial clustering methods such as BayesSpace and SpaGCN, and extracts more biologically relevant features than conventional dimension reduction methods such as PCA and scVI. 
Using 16 benchmark scRNA-seq datasets, we demonstrate that the low-dimensional embeddings and class labels estimated by DR-SC lead to improved trajectory inference. In addition, analyzing three published scRNA-seq and spatial transcriptomics datasets from three platforms, we show that DR-SC can improve both spatial and non-spatial clustering performance, resolve a low-dimensional representation with improved visualization, and facilitate downstream analyses such as trajectory inference.
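The core idea, estimating the low-dimensional embedding and the cluster labels together rather than in tandem, can be illustrated with a toy alternating scheme. This is our simplification: DR-SC itself fits a probabilistic model (with a hidden Markov random field for spatial data) by an EM algorithm, whereas the sketch below merely alternates a label-aware linear projection with k-means:

```python
import numpy as np

rng = np.random.default_rng(1)

def kmeans(X, k, iters=50):
    """Plain Lloyd's k-means."""
    centers = X[rng.choice(len(X), k, replace=False)].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels

def joint_dr_cluster(X, k, q, outer=10):
    """Toy joint dimension reduction + clustering: alternate a projection
    aimed at separating the current clusters with clustering in the
    reduced space (an illustration of the joint principle, not DR-SC)."""
    Xc = X - X.mean(0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # start from PCA
    W = Vt[:q].T
    labels = kmeans(Xc @ W, k)
    for _ in range(outer):
        # refit the projection toward directions separating current clusters
        means = np.stack([Xc[labels == j].mean(0) if (labels == j).any()
                          else Xc.mean(0) for j in range(k)])
        _, _, Vt = np.linalg.svd(means - means.mean(0), full_matrices=False)
        W = Vt[:q].T
        labels = kmeans(Xc @ W, k)
    return labels, W
```

In this toy version the projection is refit to the between-cluster directions at every pass, so the embedding stays relevant to the labels, which is the failure mode of the sequential pipeline that joint estimation is meant to avoid.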


2021 ◽  
Author(s):  
Prasad Sankaravel ◽  
M. Meenakshi ◽  
P. Hanumantha Rao

This paper presents a new synthesis and computation method for generating user-specified multiple beams and shaped beams from planar phased arrays in arbitrary 3D space. The method can generate independently controllable simultaneous multiple beams with arbitrary peak powers, and it is extended to produce arbitrarily shaped beams by combining optimally placed beams at appropriate locations with specific power ratios.
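A naive version of multi-beam synthesis on a planar array superposes steering vectors weighted by the square roots of the desired relative peak powers. This sketch is our illustration of the principle, not the authors' synthesis method, and the 8-by-8 geometry and half-wavelength spacing are assumptions:

```python
import numpy as np

def steering(nx, ny, d, theta, phi):
    """Steering vector of an nx-by-ny planar array with element spacing d
    (in wavelengths), toward direction (theta, phi) in radians."""
    ix, iy = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
    u = np.sin(theta) * np.cos(phi)
    v = np.sin(theta) * np.sin(phi)
    return np.exp(2j * np.pi * d * (ix * u + iy * v)).ravel()

def multibeam_weights(nx, ny, d, beams):
    """Superpose steering vectors scaled by sqrt of the desired relative
    peak powers; `beams` is a list of (theta, phi, power) triples."""
    w = np.zeros(nx * ny, dtype=complex)
    for theta, phi, power in beams:
        w += np.sqrt(power) * steering(nx, ny, d, theta, phi)
    return w

def array_factor(w, nx, ny, d, theta, phi):
    """|w^H a(theta, phi)|, the un-normalized array-factor magnitude."""
    return abs(np.vdot(w, steering(nx, ny, d, theta, phi)))
```

With two beams at relative powers 1 and 0.25, the array-factor magnitudes at the two beam peaks come out roughly in the ratio 2:1 (power ratio 4:1), up to the inter-beam cross terms that a proper synthesis procedure would control.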


2021 ◽  
Vol 8 (12) ◽  
Author(s):  
Talal Tayel Al Mashqabah

The present study aimed to identify the effect of mental computation on improving university undergraduates' ability in mathematical problem solving. To achieve this aim, the quasi-experimental approach was used. Participants were 80 preparatory-year students at Najran University in Saudi Arabia, distributed into two groups by simple random sampling. The experimental group (N = 40) was taught using the mental computation method; the control group (N = 40) was taught using traditional methods. A mathematical problem-solving ability pre-test and post-test were administered to both groups. Findings showed statistically significant differences between the two groups on the mathematical problem-solving post-test in favor of the experimental group.
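The group comparison reported here is the standard two-independent-samples setting; Welch's t statistic is the usual test when equal variances are not assumed. The implementation below is generic (the study's actual scores are not available, so any numbers fed to it are hypothetical):

```python
import numpy as np

def welch_t(a, b):
    """Welch's two-sample t statistic and its degrees of freedom, for
    comparing two independent group means without assuming equal variances."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    va = a.var(ddof=1) / len(a)
    vb = b.var(ddof=1) / len(b)
    t = (a.mean() - b.mean()) / np.sqrt(va + vb)
    df = (va + vb) ** 2 / (va**2 / (len(a) - 1) + vb**2 / (len(b) - 1))
    return t, df
```

With two N = 40 groups' post-test scores as `a` and `b`, a |t| beyond roughly 2 at df near 78 indicates a difference significant at the 5% level.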


Water ◽  
2021 ◽  
Vol 13 (21) ◽  
pp. 3122
Author(s):  
Leonardo Primavera ◽  
Emilia Florio

The possibility to create a flood wave in a river network depends on the geometric properties of the river basin. Among the models that try to forecast the Instantaneous Unit Hydrograph (IUH) of rainfall precipitation, the so-called Multifractal Instantaneous Unit Hydrograph (MIUH) by De Bartolo et al. (2003) rather successfully connects the multifractal properties of the river basin to the observed IUH. Such properties can be assessed through different types of analysis (fixed-size algorithm, correlation integral, fixed-mass algorithm, sandbox algorithm, and so on). The fixed-mass algorithm is the one that produces the most precise estimate of the properties of the multifractal spectrum that are relevant for the MIUH model. However, a disadvantage of this method is that it requires very long computational times to produce the best possible results. In a previous work, we proposed a parallel version of the fixed-mass algorithm, which drastically reduced the computational times almost proportionally to the number of Central Processing Unit (CPU) cores available on the computational machine by using the Message Passing Interface (MPI), a standard for distributed-memory clusters. In the present work, we further improved the code to include the Open Multi-Processing (OpenMP) paradigm, facilitating execution and improving the computational speed-up on single-processor, multi-core workstations, which are much more common than multi-node clusters. Moreover, the assessment of the multifractal spectrum has also been improved through a direct computation method. Currently, to the best of our knowledge, this code represents the state-of-the-art for a fast evaluation of the multifractal properties of a river basin, and it opens up a new scenario for an effective flood forecast in reasonable computational times.
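The per-center kernel that such codes parallelize can be illustrated with the related sandbox estimate of a generalized dimension D_q: count the mass within growing radii around random centers and fit the log-log slope of the moment average. This serial sketch is our simplification, not the paper's fixed-mass algorithm or its MPI/OpenMP implementation:

```python
import numpy as np

rng = np.random.default_rng(2)

def sandbox_dq(points, q, radii, n_centers=200):
    """Sandbox estimate of the generalized dimension D_q (q != 1) of a
    point set: average the mass fraction M(R)**(q-1) over random centers
    and fit the slope of log<M**(q-1)> versus log R. The loop over
    centers is embarrassingly parallel, which is what the MPI/OpenMP
    versions exploit."""
    centers = points[rng.choice(len(points), n_centers, replace=False)]
    d2 = ((centers[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    logs = []
    for R in radii:
        mass = (d2 <= R * R).sum(axis=1) / len(points)  # fraction within R
        logs.append(np.log((mass ** (q - 1)).mean()))
    slope = np.polyfit(np.log(radii), logs, 1)[0]
    return slope / (q - 1)
```

For a uniform point cloud in the unit square the estimate comes out close to 2, slightly depressed by boundary effects; for a river network's multifractal support, the spectrum of D_q over a range of q is what feeds the MIUH model.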


2021 ◽  
Vol 9 ◽  
Author(s):  
Yuandong Li ◽  
Bing Hao ◽  
Xiaojun Li ◽  
Liguo Jin ◽  
Qing Dong ◽  
...  

The determination of the overflow boundary is a prerequisite for the accurate solution of the seepage field by the finite element method. In this paper, a method for determining the overflow boundary from the maximum of the horizontal energy loss rate is proposed, based on an analysis of the physical meaning of the functional and of the water head distribution of the seepage field under different overflow boundaries. This method takes the overflow boundary that maximizes the horizontal energy loss rate as the real overflow boundary. Compared with previous iterative computation methods for the overflow point and free surface, the method based on the maximum horizontal energy loss rate needs no iteration, so the problem of non-convergence does not arise. The relative errors of the overflow points are only 1.54% and 0.98% when calculating the two-dimensional model of the glycerol test and the three-dimensional model of the electric stimulation test, respectively. Compared with the overflow boundaries calculated by the node virtual flow method, the improved cut-off negative pressure method, the initial flow method, and the improved discarding element method, this method achieves higher accuracy.
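The selection rule itself is a direct scan rather than a fixed-point iteration, which is why convergence cannot fail. A schematic sketch is below; the loss-rate curve is a stand-in with a single interior maximum, whereas in the paper each evaluation comes from an FEM solve of the seepage field:

```python
import numpy as np

def pick_overflow(candidates, loss_rate):
    """Non-iterative overflow-boundary selection: evaluate the horizontal
    energy-loss rate at every candidate boundary and return the maximizer.
    A direct argmax scan has no convergence failure mode, unlike the
    classical free-surface iteration."""
    rates = np.array([loss_rate(c) for c in candidates])
    i = int(np.argmax(rates))
    return candidates[i], rates[i]

# Stand-in loss-rate curve peaking at a hypothetical overflow elevation of
# 3.2 m (purely illustrative; a real curve comes from repeated seepage solves).
def mock_loss_rate(h, h_true=3.2):
    return 1.0 - (h - h_true) ** 2
```

The accuracy of the recovered boundary is then limited only by the candidate grid spacing and the FEM discretization, not by an iteration tolerance.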


Electronics ◽  
2021 ◽  
Vol 10 (21) ◽  
pp. 2704
Author(s):  
Mengyu An ◽  
Yuanyong Luo ◽  
Muhan Zheng ◽  
Yuxuan Wang ◽  
Hongxi Dong ◽  
...  

This paper proposes a novel Piecewise Parabolic Approximate Computation method for hardware function evaluation, which mainly incorporates an error-flattened segmenter and an implementation quantizer. Under a required software maximum absolute error (MAE), the segmenter adaptively selects a minimum number of parabolas to approximate the objective function. By completely imitating the circuit's behavior before actual implementation, the quantizer calculates the minimum quantization bit width to ensure a non-redundant fixed-point hardware architecture with an MAE of 1 unit of least precision (ulp), eliminating the iterative design time for the circuits. The method drives the number of segments to the theoretical limit and has great advantages in the number of segments and the size of the look-up table (LUT). To prove the superiority of the proposed method, six common functions were implemented with it under TSMC-90 nm technology. Compared to state-of-the-art piecewise quadratic approximation methods, the proposed method has advantages in area at roughly the same delay. Furthermore, a unified function-evaluation unit was also implemented under TSMC-90 nm technology.
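The error-flattened segmentation idea, growing each segment until the best-fit parabola just exceeds the MAE target so that every segment works near the error budget, can be sketched in floating point. This greedy software model is our illustration of the principle, not the paper's segmenter or its fixed-point quantizer:

```python
import numpy as np

def segment_parabolas(f, lo, hi, mae, n=2048, step=16):
    """Greedy error-flattened segmenter: grow each segment while the
    least-squares parabola stays within the target maximum absolute
    error (MAE), then start the next segment at the breakpoint."""
    x = np.linspace(lo, hi, n)
    y = f(x)
    segs, start = [], 0
    while n - start > 2:
        end = min(start + step, n)
        best_end = end
        # the minimal window is accepted even if it misses the MAE target
        best_c = np.polyfit(x[start:end], y[start:end], 2)
        while end < n:
            trial = min(end + step, n)
            c = np.polyfit(x[start:trial], y[start:trial], 2)
            if np.abs(np.polyval(c, x[start:trial]) - y[start:trial]).max() > mae:
                break
            best_end, best_c, end = trial, c, trial
        segs.append((x[start], x[best_end - 1], best_c))
        start = best_end - 1   # adjacent segments share their breakpoints
    return segs

def evaluate(segs, t):
    """Evaluate the piecewise-parabolic approximation at a scalar t."""
    for x0, x1, c in segs:
        if t <= x1:
            return np.polyval(c, t)
    return np.polyval(segs[-1][2], t)   # extrapolate past the last knot
```

For sin on [0, pi/2] at an MAE of 1e-3 this yields only a handful of segments; the hardware version additionally quantizes the three coefficients per segment into a LUT and sizes the datapath so the end-to-end error stays within 1 ulp.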

