A PARALLEL RAYCAST ALGORITHM OF CSG MODELS ON CM2

1993 ◽  
Vol 04 (01) ◽  
pp. 29-40
Author(s):  
PIERO PILI

One of the main problems for CSG model manipulators is the fast visualization of results. The high computational cost on single-processor architectures makes the CSG scheme impractical for interactive model creation. This article presents a parallel algorithm, based on a general-purpose SIMD architecture such as the Connection Machine 2, for visualizing high-quality shaded images of CSG models in nearly real time. The technique is based on a per-pixel parallelization of the ray casting algorithm. The frame buffer is divided into several regions, and for each of them the reduced CSG model projected onto it is determined. A processor is then assigned to each pixel in the sub-area; it determines the equation of the ray through the pixel and, using the ray casting technique, finds the intersection point nearest the pixel, at which it evaluates the illumination model.
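
As a rough illustration of the per-pixel scheme, here is a minimal serial Python sketch (not the CM-2 implementation): each primitive yields the ray-parameter intervals where the ray is inside it, a boolean node combines interval lists, and the nearest entry point is where the illumination model would be evaluated. The two-sphere scene, the interval sweep, and the ASCII output are illustrative assumptions.

```python
import math

def sphere_intervals(origin, direction, center, radius):
    """Ray/sphere: return [(t_in, t_out)] where the ray is inside the sphere."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c          # direction is assumed to be unit length
    if disc <= 0.0:
        return []
    s = math.sqrt(disc)
    return [((-b - s) / 2.0, (-b + s) / 2.0)]

def combine(op, a, b):
    """Apply a CSG boolean to two interval lists along the ray parameter t."""
    events = sorted({t for iv in a for t in iv} | {t for iv in b for t in iv})
    inside = lambda ivals, t: any(lo < t < hi for lo, hi in ivals)
    out, start = [], None
    for lo, hi in zip(events, events[1:]):
        mid = 0.5 * (lo + hi)
        keep = {"union": inside(a, mid) or inside(b, mid),
                "intersect": inside(a, mid) and inside(b, mid),
                "subtract": inside(a, mid) and not inside(b, mid)}[op]
        if keep and start is None:
            start = lo
        elif not keep and start is not None:
            out.append((start, lo))
            start = None
    if start is not None:
        out.append((start, events[-1]))
    return out

def render(width=48, height=24):
    eye = (0.0, 0.0, -3.0)
    for y in range(height):
        row = ""
        for x in range(width):
            # One "processor" per pixel: build the ray through the pixel...
            px = (x + 0.5) / width * 2.0 - 1.0
            py = 1.0 - (y + 0.5) / height * 2.0
            d = (px - eye[0], py - eye[1], -eye[2])
            n = math.sqrt(sum(c * c for c in d))
            d = tuple(c / n for c in d)
            # ...then intersect it with the CSG tree (sphere minus sphere).
            a = sphere_intervals(eye, d, (0.0, 0.0, 1.0), 1.0)
            b = sphere_intervals(eye, d, (0.6, 0.6, 0.2), 0.7)
            hits = [iv for iv in combine("subtract", a, b) if iv[1] > 1e-6]
            # hits[0][0] would be the nearest point at which the illumination
            # model is evaluated; here only the silhouette is printed.
            row += "#" if hits else "."
        print(row)

render()
```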

2012 ◽  
Vol 2 (1) ◽  
pp. 7-9 ◽  
Author(s):  
Satinderjit Singh

Median filtering is a commonly used technique in image processing. The main problem of the median filter is its high computational cost: sorting N pixels takes O(N log N) time even with the most efficient sorting algorithms. When the median filter must run in real time, a software implementation on general-purpose processors does not usually give good results. This paper presents an efficient algorithm for median filtering with a 3x3 filter kernel that needs only about nine comparisons per pixel, exploiting spatial coherence between neighboring filter computations. The basic algorithm calculates two medians in one step and reuses sorted slices of three vertically neighboring pixels. An extension of this algorithm to 2D spatial coherence, which calculates four medians per step, is also examined.
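
A minimal Python sketch of the sorted-column trick behind this kind of filter (the classical variant, not necessarily the paper's exact comparison schedule): each column of three pixels is sorted once and shared by the three windows that contain it, and the window median is the median of the largest column minimum, the median of column medians, and the smallest column maximum.

```python
def sort3(a, b, c):
    """Sort three values with at most three comparisons."""
    if a > b: a, b = b, a
    if b > c: b, c = c, b
    if a > b: a, b = b, a
    return a, b, c

def median3x3(img):
    """img: list of equal-length rows; returns the image with the interior
    median-filtered and the border copied unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        # Each sorted column is computed once and reused as the 3x3 window
        # slides one pixel to the right (the spatial-coherence saving).
        cols = [sort3(img[y-1][x], img[y][x], img[y+1][x]) for x in range(w)]
        for x in range(1, w - 1):
            a, b, c = cols[x-1], cols[x], cols[x+1]
            lo = max(a[0], b[0], c[0])       # largest of the column minima
            mi = sort3(a[1], b[1], c[1])[1]  # median of the column medians
            hi = min(a[2], b[2], c[2])       # smallest of the column maxima
            out[y][x] = sort3(lo, mi, hi)[1]
    return out

# A single impulse (9) surrounded by low values is removed by the filter.
noisy = [[0, 0, 0, 0],
         [0, 9, 1, 0],
         [0, 1, 1, 0],
         [0, 0, 0, 0]]
print(median3x3(noisy)[1][1])  # -> 0
```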


1995 ◽  
Vol 32 (2) ◽  
pp. 95-103
Author(s):  
José A. Revilla ◽  
Kalin N. Koev ◽  
Rafael Díaz ◽  
César Álvarez ◽  
Antonio Roldán

One factor in determining the transport capacity of coastal interceptors in Combined Sewer Systems (CSS) is the reduction of Dissolved Oxygen (DO) in coastal waters caused by the overflows. The study of the evolution of DO in coastal zones is complex, and the high computational cost of mathematical models makes the required probabilistic analysis impractical to undertake directly. Alternative methods are therefore needed that build on such mathematical modelling while employing it in only a limited number of cases. In this paper, two alternative methods for studying the oxygen deficit resulting from CSS overflows are presented. The first applies statistical analysis to the causes of the deficit (the volume discharged); the second concentrates on the effects (the concentrations of oxygen in the sea). Both methods have been applied in a study of the coastal interceptor at Pasajes Estuary (Guipúzcoa, Spain), with similar results.
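
A hypothetical sketch of the "causes" approach in Python: a distribution of overflow volumes stands in for the statistical analysis, a handful of (volume, deficit) pairs stands in for the few expensive model runs, and exceedance probability is estimated by interpolation. All numbers, the lognormal volumes, and the piecewise-linear deficit curve are illustrative assumptions, not the paper's method.

```python
import bisect, random

random.seed(1)
# Synthetic overflow volumes (m^3) playing the role of observed discharges.
overflow_volumes = [random.lognormvariate(8.0, 1.0) for _ in range(1000)]

# A few (volume, DO deficit) pairs as if obtained from costly model runs.
model_runs = [(500.0, 0.2), (2000.0, 0.9), (8000.0, 2.4), (30000.0, 4.8)]

def deficit(volume):
    """Piecewise-linear interpolation of DO deficit (mg/L) over the runs."""
    vols = [v for v, _ in model_runs]
    i = bisect.bisect_left(vols, volume)
    if i == 0:
        return model_runs[0][1]
    if i == len(vols):
        return model_runs[-1][1]
    (v0, d0), (v1, d1) = model_runs[i - 1], model_runs[i]
    return d0 + (d1 - d0) * (volume - v0) / (v1 - v0)

threshold = 2.0  # critical deficit in mg/L (illustrative)
p = sum(deficit(v) > threshold for v in overflow_volumes) / len(overflow_volumes)
print(f"P(DO deficit > {threshold} mg/L) ~= {p:.3f}")
```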


Mathematics ◽  
2021 ◽  
Vol 9 (8) ◽  
pp. 891
Author(s):  
Aurea Grané ◽  
Alpha A. Sow-Barry

This work provides a procedure for constructing and visualizing profiles, i.e., groups of individuals with similar characteristics, for weighted and mixed data by combining two classical multivariate techniques: multidimensional scaling (MDS) and the k-prototypes clustering algorithm. The well-known drawback of classical MDS on large datasets is circumvented by selecting a small random sample of the dataset, whose individuals are clustered by means of an adapted version of the k-prototypes algorithm and mapped via classical MDS. Gower's interpolation formula is then used to project the remaining individuals onto the previous configuration. Throughout the process, Gower's distance is used to measure the proximity between individuals. The methodology is illustrated on a real dataset obtained from the Survey of Health, Ageing and Retirement in Europe (SHARE), which was carried out in 19 countries and represents over 124 million aged individuals in Europe. The performance of the method was evaluated through a simulation study, whose results indicate that the new proposal overcomes the high computational cost of classical MDS with low error.
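
A minimal NumPy sketch of that pipeline, under simplifying assumptions: unweighted Gower distances on mixed data, classical (Torgerson) MDS on a random sample, and Gower's add-a-point interpolation for the rest. The weighting scheme and the adapted k-prototypes step of the paper are omitted, and the toy data are random.

```python
import numpy as np

def gower_matrix(num, cat):
    """Unweighted Gower dissimilarity: num is (n, p) numeric, cat is (n, q)
    categorical; range-scaled absolute differences and 0/1 mismatches."""
    rng = num.max(axis=0) - num.min(axis=0)
    rng[rng == 0] = 1.0
    d_num = np.abs(num[:, None, :] - num[None, :, :]) / rng
    d_cat = (cat[:, None, :] != cat[None, :, :]).astype(float)
    return np.concatenate([d_num, d_cat], axis=2).mean(axis=2)

def classical_mds(D, k=2):
    """Classical MDS: double-center the squared distances, eigendecompose."""
    n = len(D)
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:k]
    lam = vals[idx]
    X = vecs[:, idx] * np.sqrt(np.maximum(lam, 0.0))
    return X, lam

def gower_interpolate(X, lam, d2_new):
    """Gower's add-a-point formula: place a new individual from its squared
    distances d2_new to the sample configuration X (columns centered)."""
    b = (X ** 2).sum(axis=1)          # squared norms of the sample rows
    return 0.5 * (X.T @ (b - d2_new)) / lam

rng = np.random.default_rng(0)
num = rng.normal(size=(100, 3))
cat = rng.integers(0, 3, size=(100, 2))
D = gower_matrix(num, cat)
sample = rng.choice(100, size=20, replace=False)
X, lam = classical_mds(D[np.ix_(sample, sample)])
x_new = gower_interpolate(X, lam, D[sample, 7] ** 2)  # project individual 7
print(x_new)
```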


Author(s):  
Seyede Vahide Hashemi ◽  
Mahmoud Miri ◽  
Mohsen Rashki ◽  
Sadegh Etedali

This paper carries out sensitivity analyses to study the effect of each design variable on the performance of a self-centering buckling restrained brace (SC-BRB) and the corresponding buckling restrained brace (BRB) without shape memory alloy (SMA) rods. Furthermore, reliability analyses of the BRB and SC-BRB are performed. Given the high computational cost of simulation methods, three meta-models, Kriging, radial basis function (RBF), and polynomial response surface method (PRSM), are utilized to construct the surrogate models. To this end, nonlinear dynamic analyses are conducted on both the BRB and SC-BRB using OpenSees software. The results showed that the SMA area, SMA length ratio, and BRB core area have the greatest effect on the failure probability of the SC-BRB. It is concluded that Kriging-based Monte Carlo simulation (MCS) performs best at estimating the limit state function (LSF) of the BRB and SC-BRB in the reliability analysis procedures. Considering the effect of varying the maximum cyclic loading on the failure probability computation, and comparing the failure probability for different LSFs, it is also found that the reliability indices of the SC-BRB were always higher than the corresponding indices determined for the BRB, confirming the performance superiority of the SC-BRB over the BRB.
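
A minimal sketch of Kriging-based MCS for failure probability, in the spirit of the surrogate approach described above: a Gaussian process (Kriging) model is fitted to a small design of experiments and then queried cheaply inside a Monte Carlo loop. The one-line limit state g(x) is a toy stand-in; in the study the training responses come from nonlinear dynamic analyses in OpenSees.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def g_true(x):
    """Expensive limit state function (failure when g <= 0); toy stand-in."""
    return 3.0 - x[:, 0] ** 2 - 0.5 * np.sin(3.0 * x[:, 1])

# Small design of experiments: the only calls to the costly simulator.
X_train = rng.uniform(-3.0, 3.0, size=(60, 2))
y_train = g_true(X_train)

krig = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
krig.fit(X_train, y_train)

# Monte Carlo on the cheap surrogate: sample the random variables and count
# predicted failures to estimate pf and the reliability index beta.
X_mc = rng.normal(0.0, 1.0, size=(100_000, 2))
pf = float(np.mean(krig.predict(X_mc) <= 0.0))
beta = -norm.ppf(pf)
print(f"pf ~= {pf:.4f}, beta ~= {beta:.2f}")
```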


Author(s):  
Yuki Takashima ◽  
Toru Nakashika ◽  
Tetsuya Takiguchi ◽  
Yasuo Ariki

Voice conversion (VC) is a technique for converting the speaker-specific information in source speech while preserving the associated phonemic information. Non-negative matrix factorization (NMF)-based VC has been widely researched because it achieves a more natural-sounding voice than conventional Gaussian mixture model-based VC. In conventional NMF-VC, models are trained on parallel data, so the speech data require elaborate pre-processing to generate that parallel data. NMF-VC also tends to be a large model, since the dictionary matrix holds several parallel exemplars, leading to a high computational cost. In this study, an innovative parallel dictionary-learning method using non-negative Tucker decomposition (NTD) is proposed. The method applies tensor decomposition, decomposing an input observation into a set of mode matrices and one core tensor, and estimates the dictionary matrix for NMF-VC without using parallel data. The experimental results show that the proposed method outperforms other methods in both parallel and non-parallel settings.
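
For orientation, here is a minimal NumPy sketch of the conventional exemplar-based NMF-VC baseline that the paper improves on: activations H are estimated against a source dictionary Ws by multiplicative updates, then the target dictionary Wt, built from parallel exemplars, is applied to the same activations. The dictionaries and spectrogram below are random placeholders, and the NTD dictionary learning itself is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
F, T, K = 64, 100, 200       # frequency bins, frames, exemplars

Ws = rng.random((F, K))      # source exemplar dictionary (parallel to Wt)
Wt = rng.random((F, K))      # target exemplar dictionary
V = rng.random((F, T))       # source spectrogram to convert

# Multiplicative updates for H with Ws held fixed (KL-divergence NMF).
H = rng.random((K, T))
ones = np.ones((F, T))
for _ in range(100):
    H *= (Ws.T @ (V / (Ws @ H + 1e-9))) / (Ws.T @ ones + 1e-9)

# Conversion: swap in the target dictionary, keep the activations.
V_converted = Wt @ H
print(V_converted.shape)     # (64, 100)
```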


2006 ◽  
Vol 04 (03) ◽  
pp. 639-647 ◽  
Author(s):  
ELEAZAR ESKIN ◽  
RODED SHARAN ◽  
ERAN HALPERIN

The common approaches to haplotype inference from genotype data are targeted toward phasing short genomic regions; longer regions are often tackled heuristically because of the high computational cost. Here, we describe a novel approach for phasing genotypes over long regions that combines information from local predictions on short, overlapping regions. The phasing is done in a way that maximizes a natural maximum likelihood criterion, which takes into account, among other things, the physical distance between neighboring single nucleotide polymorphisms. The approach is very efficient; it has been applied to several large-scale datasets and shown to be successful in two recent benchmarking studies (Zaitlen et al., in press; Marchini et al., in preparation). Our method is publicly available via a webserver at .
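
To illustrate the stitching idea (not the paper's likelihood model, which also weighs physical distances between SNPs), here is a minimal Python sketch: each window contributes a locally phased haplotype over its SNP slice, and each subsequent window is optionally flipped (swapping the two chromosome copies) so that it agrees best with the already-stitched prefix on the overlap.

```python
def stitch(windows):
    """windows: sorted, overlapping (start, haplotype) pairs, where each
    haplotype is a list of 0/1 alleles for one chromosome copy."""
    start0, hap0 = windows[0]
    phased = list(hap0)                    # growing haplotype for copy A
    for start, hap in windows[1:]:
        lo = start - start0                # where the overlap begins
        overlap = phased[lo:]
        same = sum(a == b for a, b in zip(overlap, hap))
        if same < len(overlap) - same:     # flipping agrees better?
            hap = [1 - a for a in hap]
        phased.extend(hap[len(overlap):])  # append the non-overlapping tail
    return phased

# Three overlapping 4-SNP windows over 8 heterozygous sites; the middle and
# last windows came out with the opposite orientation and are flipped.
w = [(0, [0, 1, 1, 0]), (2, [0, 1, 1, 0]), (4, [1, 0, 0, 1])]
print(stitch(w))  # -> [0, 1, 1, 0, 0, 1, 1, 0]
```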


Symmetry ◽  
2021 ◽  
Vol 13 (3) ◽  
pp. 511
Author(s):  
Syed Mohammad Minhaz Hossain ◽  
Kaushik Deb ◽  
Pranab Kumar Dhar ◽  
Takeshi Koshiba

Proper plant leaf disease (PLD) detection is challenging in complex backgrounds and under varied capture conditions. For this reason, modified adaptive centroid-based segmentation (ACS) is first used to trace the proper region of interest (ROI). Automatic initialization of the number of clusters (K) using modified ACS before recognition increases the scalability of ROI tracing, even for symmetrical features across various plants. Convolutional neural network (CNN)-based PLD recognition models achieve adequate accuracy to some extent; however, the memory requirements (large-scale parameter counts) and high computational cost of CNN-based PLD models are pressing concerns for memory-restricted mobile and IoT devices. Therefore, after tracing ROIs, three proposed depth-wise separable convolutional PLD (DSCPLD) models, segmented modified DSCPLD (S-modified MobileNet), segmented reduced DSCPLD (S-reduced MobileNet), and segmented extended DSCPLD (S-extended MobileNet), are utilized to achieve a constructive trade-off among accuracy, model size, and computational latency. Moreover, we compare our proposed DSCPLD recognition models with state-of-the-art models such as MobileNet, VGG16, VGG19, and AlexNet. Among the segmented DSCPLD models, S-modified MobileNet achieves the best accuracy of 99.55% and F1-score of 97.07%. In addition, we simulate our DSCPLD models on both full plant leaf images and segmented plant leaf images and conclude that, after applying modified ACS, all models improve in accuracy and F1-score. Furthermore, a new plant leaf dataset containing 6580 images of eight plants was used in the experiments with several depth-wise separable convolution models.
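
A minimal PyTorch sketch of the depth-wise separable convolution block that MobileNet-style models (like the DSCPLD variants above) are built from: a per-channel 3x3 spatial convolution followed by a 1x1 pointwise convolution, which sharply cuts parameters and multiply-adds versus a standard convolution. The block structure follows the common MobileNet pattern, not necessarily the paper's exact variants.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        # groups=in_ch makes the 3x3 convolution act on each channel alone.
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn1 = nn.BatchNorm2d(in_ch)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.act(self.bn1(self.depthwise(x)))
        return self.act(self.bn2(self.pointwise(x)))

# Conv weights for 64 -> 128 channels: a standard 3x3 conv needs
# 128*64*9 = 73,728 weights; the separable pair needs 64*9 + 128*64 = 8,768.
block = DepthwiseSeparableConv(64, 128)
y = block(torch.randn(1, 64, 56, 56))
print(y.shape)  # torch.Size([1, 128, 56, 56])
```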


Author(s):  
Cengiz Yeker ◽  
Ibrahim Zeid

A fully automatic three-dimensional mesh generation method is developed by modifying the well-known ray casting technique. The method is capable of meshing objects modeled with the CSG representation scheme. The input to the method consists of solid geometry information and mesh attributes such as element size. The method starts by casting rays in 3D space to classify the empty and full parts of the solid. This information is then used to create a cell structure that closely models the solid object. The cell structure is then further processed to make it more succinct, so that the cells close to the boundary of the solid object model the topology with enough fidelity. Moreover, neighborhood relations between cells in the structure are developed and implemented; these relations help produce better-conforming meshes. Each cell in the cell structure is identified with respect to a set of pre-defined cell types. After the identification process, a normalization process is applied to the cell structure to ensure that the finite elements generated from each cell conform to each other and to elements produced from neighboring cells. The last step is to mesh each cell in the structure with valid finite elements.
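
A minimal 2D Python sketch of the initial ray-casting classification step (the method itself works in 3D): one ray is cast along each row of cell centers, the intervals where it lies inside a toy CSG solid (a large disc minus a small disc) are found analytically, and each cell is marked full or empty from those intervals. Boundary-cell refinement, neighborhood relations, and meshing are omitted, and the annulus scene is an illustrative assumption.

```python
import math

def disc_interval(y, cx, cy, r):
    """x-interval where the horizontal ray at height y is inside the disc,
    or None if the ray misses it."""
    h2 = r * r - (y - cy) ** 2
    if h2 <= 0.0:
        return None
    h = math.sqrt(h2)
    return (cx - h, cx + h)

def classify(nx=40, ny=20, size=2.0):
    """Classify cell centers of an nx-by-ny grid over [-size, size]^2
    against the CSG solid outer_disc - inner_disc."""
    for j in range(ny):
        y = (j + 0.5) / ny * 2 * size - size
        outer = disc_interval(y, 0.0, 0.0, 1.8)   # solid disc
        inner = disc_interval(y, 0.0, 0.0, 0.9)   # subtracted hole
        row = ""
        for i in range(nx):
            x = (i + 0.5) / nx * 2 * size - size
            inside = (outer is not None and outer[0] <= x <= outer[1]) and not (
                inner is not None and inner[0] < x < inner[1])
            row += "#" if inside else "."        # full vs. empty cell
        print(row)

classify()
```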

