Compression of hyper-spectral images using an accelerated nonnegative tensor decomposition

Open Physics ◽  
2017 ◽  
Vol 15 (1) ◽  
pp. 992-996 ◽  
Author(s):  
Jin Li ◽  
Zilong Liu

Abstract Nonnegative tensor Tucker decomposition (NTD) in a transform domain (e.g., the 2D-DWT) has been used in the compression of hyper-spectral images because it removes redundancies between spectral bands and also exploits the spatial correlations within each band. However, the NTD has a very high computational cost. In this paper, we propose a low-complexity NTD-based compression method for hyper-spectral images. The method is based on a pair-wise multilevel grouping approach for the NTD that overcomes its high computational cost. The proposed method has low complexity at the cost of only a slight decrease in coding performance compared with the conventional NTD. We confirm the method experimentally: it requires less processing time and retains better coding performance than the case in which the NTD is not used. The proposed approach has potential applications in the lossy compression of hyper-spectral or multi-spectral images.
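The pair-wise multilevel grouping scheme is the paper's own contribution; the sketch below only illustrates the baseline it accelerates, i.e., a plain nonnegative Tucker decomposition of a hyper-spectral cube. It assumes the tensorly library, and the cube, ranks, and iteration count are illustrative placeholders.

```python
# Minimal sketch (not the authors' grouping scheme): compress a hyper-spectral
# cube by keeping only the NTD core tensor and factor matrices.
import numpy as np
import tensorly as tl
from tensorly.decomposition import non_negative_tucker

cube = np.abs(np.random.rand(64, 64, 32))      # height x width x spectral bands
ranks = (16, 16, 8)                            # reduced mode ranks (hypothetical)

core, factors = non_negative_tucker(tl.tensor(cube), rank=ranks, n_iter_max=100)

# "Compressed" representation = core tensor + three factor matrices.
stored = core.size + sum(f.size for f in factors)
print("compression ratio ~", cube.size / stored)

# Lossy reconstruction: multiply the core back with the factor matrices.
recon = tl.tucker_to_tensor((core, factors))
print("relative error:", np.linalg.norm(cube - recon) / np.linalg.norm(cube))
```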

Author(s):  
Yuki Takashima ◽  
Toru Nakashika ◽  
Tetsuya Takiguchi ◽  
Yasuo Ariki

Abstract Voice conversion (VC) is a technique for converting only the speaker-specific information in the source speech while preserving the associated phonemic information. Non-negative matrix factorization (NMF)-based VC has been widely researched because of the natural-sounding voice it achieves compared with conventional Gaussian mixture model-based VC. In conventional NMF-VC, models are trained on parallel data, so the speech data require elaborate pre-processing to generate that parallel data. NMF-VC also tends to produce a large model, since the dictionary matrix holds many parallel exemplars, which leads to a high computational cost. In this study, an innovative parallel dictionary-learning method using non-negative Tucker decomposition (NTD) is proposed. The proposed method uses tensor decomposition, decomposing an input observation into a set of mode matrices and one core tensor. The proposed NTD-based dictionary-learning method estimates the dictionary matrix for NMF-VC without using parallel data. The experimental results show that the proposed method outperforms other methods in both parallel and non-parallel settings.
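For readers unfamiliar with the baseline, here is a schematic numpy sketch of exemplar-based NMF voice conversion, the parallel-dictionary approach the paper improves on, not the proposed NTD dictionary learning itself. The dictionaries, sizes, and spectrogram are random placeholders; in practice the source and target dictionaries would hold time-aligned parallel exemplars.

```python
# Schematic exemplar-based NMF voice conversion: estimate activations against the
# source dictionary, then reconstruct with the (parallel) target dictionary.
import numpy as np

def nmf_activations(V, W, n_iter=200, eps=1e-10):
    """Estimate activations H >= 0 with the dictionary W held fixed (KL-style updates)."""
    H = np.abs(np.random.rand(W.shape[1], V.shape[1]))
    for _ in range(n_iter):
        H *= (W.T @ (V / (W @ H + eps))) / (W.T.sum(axis=1, keepdims=True) + eps)
    return H

F, T, K = 257, 100, 500                      # freq bins, frames, exemplars (illustrative)
W_src = np.abs(np.random.rand(F, K))         # source-speaker exemplars
W_tgt = np.abs(np.random.rand(F, K))         # parallel target-speaker exemplars
V_src = np.abs(np.random.rand(F, T))         # source magnitude spectrogram

H = nmf_activations(V_src, W_src)            # which exemplars are active, and when
V_converted = W_tgt @ H                      # swap in the target dictionary
```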


2021 ◽  
Author(s):  
Adyn Miles ◽  
Mahdi S. Hosseini ◽  
Sheyang Tang ◽  
Zhou Wang ◽  
Savvas Damaskinos ◽  
...  

Abstract Out-of-focus sections of whole slide images are a significant source of false positives and other systematic errors in clinical diagnoses. As a result, focus quality assessment (FQA) methods must be able to quickly and accurately differentiate between focus levels in a scan. Recently, deep learning methods using convolutional neural networks (CNNs) have been adopted for FQA. However, the biggest obstacles impeding their wide usage in clinical workflows are their generalizability across different test conditions and their potentially high computational cost. In this study, we focus on the transferability and scalability of CNN-based FQA approaches. We carry out an investigation on ten architecturally diverse networks using five datasets with stain and tissue diversity. We evaluate the computational complexity of each network and scale this to realistic applications involving hundreds of whole slide images. We assess how well each full model transfers to a separate, unseen dataset without fine-tuning. We show that shallower networks transfer well when used on small input patch sizes, while deeper networks work more effectively on larger inputs. Furthermore, we introduce neural architecture search (NAS) to the field and learn an automatically designed low-complexity CNN architecture using differentiable architecture search, which achieves competitive performance relative to established CNNs.
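As a concrete picture of the kind of model under discussion, the PyTorch sketch below shows a shallow patch classifier that maps a small image patch to a discrete focus-level class. The architecture, patch size, and number of focus levels are hypothetical and do not correspond to any of the ten benchmarked networks or the NAS-derived architecture.

```python
# Hypothetical shallow CNN for focus quality assessment on small patches.
import torch
import torch.nn as nn

class ShallowFQA(nn.Module):
    def __init__(self, n_focus_levels: int = 11):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_focus_levels)

    def forward(self, x):                      # x: (batch, 3, patch, patch)
        return self.classifier(self.features(x).flatten(1))

model = ShallowFQA()
patch = torch.randn(8, 3, 64, 64)              # small patches suit shallow networks
print(model(patch).shape)                      # torch.Size([8, 11])
```

Scaling such a model to hundreds of whole slide images is essentially patches per slide times cost per patch, which is why per-network complexity is evaluated explicitly in the study.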


2012 ◽  
Vol 2012 ◽  
pp. 1-10 ◽  
Author(s):  
Jaime Lien ◽  
Ulric J. Ferner ◽  
Warakorn Srichavengsup ◽  
Henk Wymeersch ◽  
Moe Z. Win

Location awareness is a key enabling feature and fundamental challenge in present and future wireless networks. Most existing localization methods rely on existing infrastructure and thus lack the flexibility and robustness necessary for large ad hoc networks. In this paper, we build upon SPAWN (sum-product algorithm over a wireless network), which determines node locations through iterative message passing, but does so at a high computational cost. We compare different message representations for SPAWN in terms of performance and complexity and investigate several types of cooperation based on censoring. Our results, based on experimental data with ultra-wideband (UWB) nodes, indicate that parametric message representation combined with simple censoring can give excellent performance at relatively low complexity.
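The sketch below illustrates, in isolation, the two ingredients compared in the abstract: a parametric (Gaussian) message representation and simple variance-based censoring. It only shows how a node might fuse incoming position messages; it is not the full SPAWN sum-product loop, and the threshold and messages are made up.

```python
# Parametric messages as (mean, variance); censor uninformative ones, then fuse
# the rest with precision (inverse-variance) weighting.
import numpy as np

def fuse_messages(messages, censor_var=4.0):
    """messages: list of (mean_xy, variance). Censored messages are simply dropped."""
    kept = [(m, v) for m, v in messages if v <= censor_var]
    if not kept:
        return None, np.inf
    precisions = np.array([1.0 / v for _, v in kept])
    means = np.array([m for m, _ in kept])
    fused_var = 1.0 / precisions.sum()
    fused_mean = fused_var * (precisions[:, None] * means).sum(axis=0)
    return fused_mean, fused_var

msgs = [(np.array([1.0, 2.0]), 0.5),   # confident neighbor
        (np.array([1.2, 1.8]), 1.0),
        (np.array([9.0, 9.0]), 50.0)]  # uninformative -> censored (never transmitted)
print(fuse_messages(msgs))
```

In the cooperative setting, censoring also saves bandwidth, since nodes with diffuse beliefs need not broadcast at all.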


Mathematics ◽  
2021 ◽  
Vol 9 (17) ◽  
pp. 2019
Author(s):  
Talal Alharbi ◽  
Marcos Tostado-Véliz ◽  
Omar Alrumayh ◽  
Francisco Jurado

Recently, high-order Newton-like methods have gained popularity for solving power flow problems due to their simplicity, versatility and, in some cases, efficiency. In this context, recent research studied the applicability of the 4th order Jarratt's method to power flow calculation (PFC). Despite its 4th order of convergence, this technique is not competitive with conventional solvers because of its very high computational cost. This paper addresses this issue by proposing two efficient modifications of the 4th order Jarratt's method, which achieve fourth and sixth order of convergence, respectively. In addition, continuous versions of the new proposals and of the 4th order Jarratt's method extend their applicability to ill-conditioned cases. Extensive results on multiple realistic power networks serve to show the performance of the developed solvers. Results obtained in both well- and ill-conditioned cases are promising.
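For reference, the sketch below implements one common scalar form of Jarratt's fourth-order two-step iteration; the paper works with a multivariate analogue (and modified variants) applied to the power-flow equations, so this is only an illustration of the basic scheme, with a made-up test function.

```python
# Scalar Jarratt iteration: y = x - (2/3) f/f'; then a weighted correction using
# f'(y) and f'(x) gives fourth-order convergence.
def jarratt(f, df, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        y = x - (2.0 / 3.0) * fx / dfx                     # intermediate point
        dfy = df(y)
        x_new = x - 0.5 * (3.0 * dfy + dfx) / (3.0 * dfy - dfx) * fx / dfx
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: root of x^3 - 2, starting from x0 = 1.5 (converges in a few iterations).
print(jarratt(lambda x: x**3 - 2, lambda x: 3 * x**2, 1.5))
```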


2012 ◽  
Vol 2 (1) ◽  
pp. 7-9 ◽  
Author(s):  
Satinderjit Singh

Median filtering is a commonly used technique in image processing. The main problem of the median filter is its high computational cost: sorting N pixels has temporal complexity O(N·log N), even with the most efficient sorting algorithms. When the median filter must be carried out in real time, a software implementation on general-purpose processors does not usually give good results. This paper presents an efficient algorithm for median filtering with a 3x3 filter kernel that needs only about 9 comparisons per pixel, using spatial coherence between neighboring filter computations. The basic algorithm calculates two medians in one step and reuses sorted slices of three vertical neighboring pixels. An extension of this algorithm to 2D spatial coherence, which calculates four medians per step, is also examined.
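The plain-Python sketch below illustrates the reuse idea: each vertical 3-pixel slice is sorted once per row band and shared by the neighbouring windows that contain it. It does not reproduce the paper's ~9-comparison network or the two-medians-per-step trick; it is only meant to make the spatial-coherence idea concrete.

```python
# Exact 3x3 median filter that sorts each vertical triple once per row band and
# reuses it across horizontally neighbouring windows.
import numpy as np

def median3x3(img):
    h, w = img.shape
    out = img.copy()
    for y in range(1, h - 1):
        # Sort every vertical triple in this row band once; each sorted slice is
        # then shared by the (up to) three windows that contain that column.
        cols = np.sort(img[y - 1:y + 2, :], axis=0)
        for x in range(1, w - 1):
            window = cols[:, x - 1:x + 2].ravel()
            out[y, x] = np.sort(window)[4]        # exact median of 9 values
    return out

noisy = np.random.randint(0, 256, (64, 64)).astype(np.uint8)
print(median3x3(noisy).shape)
```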


1995 ◽  
Vol 32 (2) ◽  
pp. 95-103
Author(s):  
José A. Revilla ◽  
Kalin N. Koev ◽  
Rafael Díaz ◽  
César Álvarez ◽  
Antonio Roldán

One factor in determining the transport capacity of coastal interceptors in Combined Sewer Systems (CSS) is the reduction of Dissolved Oxygen (DO) in coastal waters caused by the overflows. The study of the evolution of DO in coastal zones is complex, and the high computational cost of mathematical models makes the required probabilistic analysis impractical. Alternative methods are therefore needed that rely on such mathematical modelling applied to only a limited number of cases. In this paper two alternative methods are presented for studying the oxygen deficit resulting from overflows of CSS. The first applies statistical analyses to the causes of the deficit (the volume discharged); the second concentrates on the effects (the concentrations of oxygen in the sea). Both methods have been applied in a study of the coastal interceptor at Pasajes Estuary (Guipúzcoa, Spain) with similar results.


Author(s):  
Teresa V.V ◽  
Anand. B

Objective: This research work presents an efficient Carry Select Adder (CSLA) design and its performance estimation. The CSLA is used in many systems to mitigate carry-propagation delay: pairs of ripple carry adders (RCA) compute the partial sums and carries for both possible carry inputs in advance, and multiplexers then select the correct sum and carry once the actual carry input is known. The conventional CSLA, however, is not time-efficient. Methodology: The fundamental idea of this work is to achieve maximum speed and minimum power consumption by using a Binary to Excess-1 Converter (BEC) instead of the second RCA in the regular CSLA. To further reduce power consumption, a CSLA variant with D latches is also implemented in this research work. The resulting Updated Efficient Area Carry Select Adder (UEA-CSLA) is designed and evaluated with the Xilinx ISE design suite 14.5 tools. This VLSI design is used in an image-processing application, namely brain tumor detection. Conclusion: In medical image analysis, region investigation on multi-spectral images is not sufficiently effective; to overcome this drawback, a hyper-spectral imaging technique with a filtering procedure is implemented in VLSI, and localization of the brain tumor is performed with the Updated Efficient Area Carry Select Adder. Simulation results are obtained with MATLAB (Matrix Laboratory) version R2018b.
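To make the BEC-based carry-select idea concrete, here is a behavioural Python sketch: each block computes its sum once with a ripple-carry adder for carry-in = 0, derives the carry-in = 1 result by passing that sum through a Binary to Excess-1 Converter (add one), and a multiplexer picks between the two when the real carry arrives. The block size and word width are illustrative; this is a functional model, not the UEA-CSLA hardware.

```python
# Behavioural model of a BEC-based carry-select adder.
def rca(a_bits, b_bits, cin):
    """Ripple-carry add two equal-length bit lists (LSB first)."""
    s, c = [], cin
    for a, b in zip(a_bits, b_bits):
        s.append(a ^ b ^ c)
        c = (a & b) | (c & (a ^ b))
    return s, c

def bec(bits, cout):
    """Binary to Excess-1 Converter: returns (bits + 1) and the updated carry-out."""
    out, c = [], 1
    for b in bits:
        out.append(b ^ c)
        c = b & c
    return out, cout | c

def csla_bec(a, b, width=16, block=4):
    a_bits = [(a >> i) & 1 for i in range(width)]
    b_bits = [(b >> i) & 1 for i in range(width)]
    result, carry = [], 0
    for i in range(0, width, block):
        s0, c0 = rca(a_bits[i:i + block], b_bits[i:i + block], 0)  # carry-in = 0 path
        s1, c1 = bec(s0, c0)                                       # carry-in = 1 path
        result += s1 if carry else s0                              # multiplexer select
        carry = c1 if carry else c0
    return sum(bit << i for i, bit in enumerate(result)) + (carry << width)

print(csla_bec(51234, 14321))   # 65555, i.e. 51234 + 14321
```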


Mathematics ◽  
2021 ◽  
Vol 9 (8) ◽  
pp. 891
Author(s):  
Aurea Grané ◽  
Alpha A. Sow-Barry

This work provides a procedure with which to construct and visualize profiles, i.e., groups of individuals with similar characteristics, for weighted and mixed data by combining two classical multivariate techniques, multidimensional scaling (MDS) and the k-prototypes clustering algorithm. The well-known drawback of classical MDS in large datasets is circumvented by selecting a small random sample of the dataset, whose individuals are clustered by means of an adapted version of the k-prototypes algorithm and mapped via classical MDS. Gower's interpolation formula is used to project the remaining individuals onto the previous configuration. Throughout the process, Gower's distance is used to measure the proximity between individuals. The methodology is illustrated on a real dataset, obtained from the Survey of Health, Ageing and Retirement in Europe (SHARE), which was carried out in 19 countries and represents over 124 million aged individuals in Europe. The performance of the method was evaluated through a simulation study, whose results point out that the new proposal solves the high computational cost of the classical MDS with low error.
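The compact numpy sketch below runs the core of this pipeline on toy data: Gower's distance for mixed (numeric plus categorical) variables, classical MDS on a small random sample, and an out-of-sample projection of the remaining individuals using one common formulation of Gower's interpolation formula. Variable weighting and the k-prototypes clustering step are omitted, and the data, sample size, and dimensionality are illustrative.

```python
# Gower distance + classical MDS on a sample + Gower interpolation for the rest.
import numpy as np

def gower(A, B, num_idx, cat_idx, ranges):
    """Pairwise Gower distances between rows of A and rows of B."""
    d = np.zeros((len(A), len(B)))
    for j in num_idx:                              # numeric: range-normalised L1
        d += np.abs(A[:, None, j] - B[None, :, j]) / ranges[j]
    for j in cat_idx:                              # categorical: simple mismatch
        d += (A[:, None, j] != B[None, :, j]).astype(float)
    return d / (len(num_idx) + len(cat_idx))

rng = np.random.default_rng(0)
X = np.column_stack([rng.normal(size=1000), rng.normal(size=1000),
                     rng.integers(0, 3, 1000)])    # two numeric, one categorical
num_idx, cat_idx = [0, 1], [2]
ranges = X.max(axis=0) - X.min(axis=0)

sample = X[rng.choice(len(X), 100, replace=False)]           # small random sample
D2 = gower(sample, sample, num_idx, cat_idx, ranges) ** 2
J = np.eye(100) - np.ones((100, 100)) / 100
B = -0.5 * J @ D2 @ J                                        # double centring
vals, vecs = np.linalg.eigh(B)
k = 2
Y = vecs[:, -k:] * np.sqrt(vals[-k:])                        # sample MDS coordinates

# Out-of-sample projection, y_new = 0.5 * (Y'Y)^(-1) Y' (diag(B) - d2_new),
# applied to all individuals at once.
d2_new = gower(X, sample, num_idx, cat_idx, ranges) ** 2
Y_all = 0.5 * (np.diag(B) - d2_new) @ Y @ np.linalg.inv(Y.T @ Y)
print(Y_all.shape)                                           # (1000, 2)
```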


Author(s):  
Seyede Vahide Hashemi ◽  
Mahmoud Miri ◽  
Mohsen Rashki ◽  
Sadegh Etedali

This paper carries out sensitivity analyses to study the effect of each design variable on the performance of the self-centering buckling restrained brace (SC-BRB) and of the corresponding buckling restrained brace (BRB) without shape memory alloy (SMA) rods. Furthermore, reliability analyses of the BRB and SC-BRB are performed in this study. Considering the high computational cost of the simulation methods, three meta-models, namely Kriging, radial basis function (RBF), and polynomial response surface method (PRSM), are utilized to construct the surrogate models. For this aim, nonlinear dynamic analyses are conducted on both the BRB and the SC-BRB using OpenSees software. The results show that the SMA area, SMA length ratio, and BRB core area have the largest effect on the failure probability of the SC-BRB. It is concluded that Kriging-based Monte Carlo Simulation (MCS) gives the best performance for estimating the limit state function (LSF) of the BRB and SC-BRB in the reliability analysis procedures. Considering the effect of changing the maximum cyclic loading on the failure probability and comparing the failure probabilities for different LSFs, it is also found that the reliability indices of the SC-BRB are always higher than the corresponding indices determined for the BRB, which confirms the performance superiority of the SC-BRB over the BRB.
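As a generic illustration of surrogate-based reliability analysis of this kind, the sketch below fits a Kriging (Gaussian process) surrogate to a handful of limit-state evaluations and then estimates the failure probability by Monte Carlo on the cheap surrogate. The limit-state function is a made-up analytical placeholder standing in for the OpenSees nonlinear dynamic analyses, which are far too costly to call inside the sampling loop.

```python
# Kriging-based Monte Carlo Simulation for failure-probability estimation.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def limit_state(x):
    """Hypothetical LSF g(x): failure when g < 0 (placeholder for the FE model)."""
    return 3.0 - x[:, 0] ** 2 - 0.5 * x[:, 1]

rng = np.random.default_rng(1)
X_train = rng.normal(size=(60, 2))           # small design of experiments
g_train = limit_state(X_train)               # the few expensive model evaluations

surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
surrogate.fit(X_train, g_train)

X_mc = rng.normal(size=(100_000, 2))         # cheap Monte Carlo samples
g_hat = surrogate.predict(X_mc)
pf = np.mean(g_hat < 0.0)                    # estimated failure probability
print("Pf ~", pf, " reliability index beta ~", -norm.ppf(pf))
```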


2006 ◽  
Vol 04 (03) ◽  
pp. 639-647 ◽  
Author(s):  
ELEAZAR ESKIN ◽  
RODED SHARAN ◽  
ERAN HALPERIN

The common approaches for haplotype inference from genotype data are targeted toward phasing short genomic regions; longer regions are often tackled in a heuristic manner due to the high computational cost. Here, we describe a novel approach for phasing genotypes over long regions, which is based on combining information from local predictions on short, overlapping regions. The phasing is done in a way that maximizes a natural maximum-likelihood criterion which, among other things, takes into account the physical length between neighboring single nucleotide polymorphisms. The approach is very efficient; it has been applied to several large-scale datasets and shown to be successful in two recent benchmarking studies (Zaitlen et al., in press; Marchini et al., in preparation). Our method is publicly available via a webserver at .
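The toy sketch below conveys only the "combine local predictions" idea: each new window's haplotype pair is kept or flipped depending on which orientation agrees better with the already-phased overlap. It is a schematic illustration, not the paper's maximum-likelihood criterion, which also weighs the physical distance between neighboring SNPs.

```python
# Stitch local phasings from short, overlapping windows into one long haplotype pair.
def stitch(windows, overlap):
    """windows: list of (hap0, hap1) allele lists predicted on overlapping regions."""
    hap0, hap1 = list(windows[0][0]), list(windows[0][1])
    for w0, w1 in windows[1:]:
        tail0, tail1 = hap0[-overlap:], hap1[-overlap:]
        keep = sum(a == b for a, b in zip(tail0, w0[:overlap])) + \
               sum(a == b for a, b in zip(tail1, w1[:overlap]))
        flip = sum(a == b for a, b in zip(tail0, w1[:overlap])) + \
               sum(a == b for a, b in zip(tail1, w0[:overlap]))
        if flip > keep:                      # flip the window's phase assignment
            w0, w1 = w1, w0
        hap0 += w0[overlap:]
        hap1 += w1[overlap:]
    return hap0, hap1

# Two 4-SNP windows sharing 2 SNPs; the second window arrives with its phase flipped.
print(stitch([([0, 1, 0, 1], [1, 0, 1, 0]),
              ([1, 0, 0, 0], [0, 1, 1, 1])], overlap=2))
```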

