decomposition structure
Recently Published Documents

TOTAL DOCUMENTS: 42 (FIVE YEARS: 13)
H-INDEX: 8 (FIVE YEARS: 2)

Author(s):  
Huiping Guo ◽  
Hongru Li

Abstract Decomposition hybrid algorithms with a recursive framework, which recursively decompose the structure learning task into subtasks to reduce computational complexity, are employed to learn Bayesian network (BN) structure. Merging rules are commonly adopted as the combination method in the combination step. The direction determination rule of merging rules, which orients edges in the whole structure by keeping v-structures unchanged before and after combination, is problematic: it breaks down when spurious v-structures appear and is hard to apply in practice. We therefore adopt a novel approach to direction determination and propose a two-stage combination method. In the first stage, nodes and edge links are determined by merging rules, and the directions of contradictory edges are resolved by enumerating permutations and combinations. In the second stage, edges between nodes that do not satisfy the decomposition property and their parent nodes are restricted by determining the target domain according to the decomposition property. Simulation experiments on four networks show that the proposed algorithm obtains BN structures with higher accuracy than competing algorithms. Finally, the proposed algorithm is applied to the thickening process of gold hydrometallurgy to solve a practical problem.
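To make the combination step concrete, below is a minimal, hypothetical sketch (plain Python, not the authors' implementation) of merging two subgraphs learned on overlapping variable sets: agreed edges are kept, and edges whose orientations disagree between subgraphs are flagged as contradictory so their directions can be resolved separately, e.g., by the permutation-and-combination enumeration described in the abstract.

```python
# Hypothetical example: each learned subgraph is a set of directed edges
# (parent, child); node names and edges are made up for illustration.
sub1 = {("A", "B"), ("C", "B"), ("B", "D")}   # contains v-structure A->B<-C
sub2 = {("D", "B"), ("C", "B"), ("C", "E")}   # orients the B-D edge the other way

def merge(sub_a, sub_b):
    """Union the subgraphs; separate agreed edges from contradictory ones."""
    union = sub_a | sub_b
    agreed, contradictory = set(), set()
    for (u, v) in union:
        if (v, u) in union:                   # both orientations appear
            contradictory.add(frozenset((u, v)))
        else:
            agreed.add((u, v))
    return agreed, contradictory

agreed, contradictory = merge(sub1, sub2)
print("agreed edges:       ", sorted(agreed))
print("contradictory edges:", [tuple(sorted(e)) for e in contradictory])
# The contradictory edge {B, D} is the one whose direction a later step
# (e.g., enumeration over candidate orientations) must decide.
```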


2021 ◽  
Author(s):  
Niloofar Borhani ◽  
Jafar Ghaisari ◽  
Maryam Abedi ◽  
Marzieh Kamali ◽  
Yousof Gheisari

Abstract Despite enormous achievements in the production of high-throughput datasets, constructing comprehensive maps of interactions remains a major challenge. The lack of sufficient experimental evidence on interactions is even more pronounced for heterogeneous molecular types. Hence, developing strategies to predict inter-omics connections is essential for constructing holistic maps of disease. Here, Data Integration with Deep Learning (DIDL), a novel nonlinear deep learning method, is proposed to predict inter-omics interactions. It consists of an encoder that automatically extracts features for biomolecules from existing interactions and a decoder that predicts novel interactions. The applicability of DIDL is assessed on different networks, namely drug-target protein, transcription factor-DNA element, and miRNA-mRNA, and the validity of novel predictions is assessed by literature surveys. DIDL outperformed state-of-the-art methods: the area under the curve and the area under the precision-recall curve exceeded 0.85 and 0.83, respectively, for all three networks. DIDL has several advantages, including automatic feature extraction from raw data, end-to-end training, and robustness to sparsity. In addition, its tensor decomposition structure, predictions based solely on existing interactions, and independence from biochemical data make DIDL applicable to a variety of biological networks. DIDL paves the way to understanding the underlying mechanisms of complex disorders through the construction of integrative networks.
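As a rough illustration of the encoder/decoder idea for predicting links from existing interactions only, here is a minimal PyTorch sketch; the layer sizes, loss, synthetic data, and inner-product decoder are illustrative assumptions and do not reproduce the DIDL architecture.

```python
# Hypothetical sketch: learn embeddings for two molecule types from a known
# 0/1 interaction matrix (encoder) and score unobserved pairs (decoder).
import torch
import torch.nn as nn

n_drugs, n_targets, dim = 100, 80, 16
known = (torch.rand(n_drugs, n_targets) < 0.05).float()   # synthetic interactions

class LinkPredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.drug_emb = nn.Embedding(n_drugs, dim)      # "encoder" features
        self.target_emb = nn.Embedding(n_targets, dim)

    def forward(self):
        # "Decoder": inner-product logit for every drug/target pair.
        return self.drug_emb.weight @ self.target_emb.weight.T

model = LinkPredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()

for _ in range(200):                      # end-to-end training on known links
    opt.zero_grad()
    loss = loss_fn(model(), known)
    loss.backward()
    opt.step()

# Novel candidates: high-scoring pairs that are not yet in `known`.
scores = torch.sigmoid(model()).detach()
top_new = (scores * (1 - known)).flatten().topk(5)
print("top-5 predicted new interaction scores:", top_new.values)
```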


Author(s):  
Sergei Osadchy ◽  
Nataliia Demska ◽  
Yuriy Oleksandrov ◽  
Viktoriia Nevliudova

The development of cyber-physical production systems is a complex scientific and technical task; the developer must determine the requirements and tasks for the system under development and choose an architectural model for its implementation. In turn, the choice of an architectural model involves balancing the requirements of the stakeholders interested in its development. In a typical case, the development of a specific cyber-physical industrial system has to be adapted to the means of implementation and to the realities of its future use, maintenance, and evolution. The subject matter of this study is architectural models for building complex cyber-physical production systems. The goal of this article is a study of the DIKW and 5C architectural models, whose decomposition will later make it possible to give a mathematical description of the elementary problems at each level and to carry out their physical or simulation modeling. To achieve this goal, the following tasks must be solved: analyze the DIKW model; analyze the 5C architectural model; and compare the DIKW model with the 5C architectural model using their structural decomposition into levels, information and command channels with feedback within each structure. The research is based on the methods of decomposition and formalized representation of systems. Conclusions: Based on the results of the decomposition at each structural level of the DIKW and 5C models, a decomposition structure was developed that shows the main differences and general similarities of the models. It was found that the 5C model, as a common software shell combining integrated sensors and actuators, is better suited to developing a new cyber-physical production system, whereas the DIKW interpretation model is better suited to modifying existing systems at enterprises; the choice of model for developing a cyber-physical production system depends on the customer's requirements, the existing equipment, its level of automation, and the level of project financing.


2021 ◽  
Vol 6 (1) ◽  
pp. 1-5
Author(s):  
Yuhao Chen ◽  
Alexander Wong ◽  
Yuan Fang ◽  
Yifan Wu ◽  
Linlin Xu

Multi-scale image decomposition (MID) is a fundamental task in computer vision and image processing that transforms an image into a hierarchical representation comprising different levels of visual granularity, from coarse structures to fine details. A well-engineered MID disentangles the image signal into meaningful components that can be used in a variety of applications such as image denoising, image compression, and object classification. Traditional MID approaches such as wavelet transforms tackle the problem through carefully designed basis functions under rigid decomposition structure assumptions. However, because the information distribution varies from one type of image content to another, rigid decomposition assumptions lead to inefficient representations, i.e., some scales may contain little to no information. To address this issue, we present the Deep Residual Transform (DRT), a data-driven MID strategy in which the input signal is transformed into a hierarchy of nonlinear representations at different scales, each learned independently as the representational residual of the previous scales at a user-controlled detail level. The proposed DRT thus progressively disentangles scale information from the original signal by sequentially learning residual representations. The decomposition flexibility of this approach allows representations highly tailored to specific types of image content and results in greater representational efficiency and compactness. In this study, we realize the proposed transform with a hierarchy of sequentially trained autoencoders. To explore the efficacy of the proposed DRT, we use two datasets comprising very different types of image content: 1) CelebFaces and 2) Cityscapes. Experimental results show that the proposed DRT achieves highly efficient information decomposition on both datasets despite their very different visual granularity characteristics.
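The residual idea can be sketched as follows; this is a minimal, hypothetical PyTorch illustration, not the DRT implementation: the convolutional layer sizes, the number of stages, the stride schedule standing in for the "user-controlled detail level", and the training loop are all assumptions.

```python
# Hypothetical sketch: each stage is an autoencoder trained on whatever the
# previous stages could not represent, so coarse structure is captured first
# and finer detail is pushed into later stages.
import torch
import torch.nn as nn

def small_autoencoder(stride):
    # Larger stride -> stronger downsampling -> coarser representation.
    return nn.Sequential(
        nn.Conv2d(1, 8, 3, stride=stride, padding=1), nn.ReLU(),
        nn.ConvTranspose2d(8, 1, 3, stride=stride, padding=1,
                           output_padding=stride - 1),
    )

def train(ae, target, epochs=200):
    opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(ae(target), target)
        loss.backward()
        opt.step()
    return ae

image = torch.rand(1, 1, 64, 64)          # stand-in for a training image
strides = [4, 2, 1]                       # coarse -> fine detail levels

residual, stage_outputs = image, []
for s in strides:                         # stages are trained sequentially
    ae = train(small_autoencoder(s), residual)
    approx = ae(residual).detach()
    stage_outputs.append(approx)          # representation at this scale
    residual = residual - approx          # what is left for later stages

recovered = sum(stage_outputs)            # summing stages approximates the image
print("final residual energy:", residual.pow(2).mean().item())
```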


2021 ◽  
Author(s):  
Madeleine Murphy ◽  
Scott D. Taylor ◽  
Aaron S. Meyer

Abstract Systems serology measurements provide a comprehensive view of humoral immunity by profiling both the antigen-binding and Fc properties of antibodies. Identifying patterns in these measurements will help to guide vaccine and therapeutic antibody development, and improve our understanding of disorders. Furthermore, consistent patterns across diseases may reflect conserved regulatory mechanisms; recognizing these may help to combine modalities such as vaccines, antibody-based interventions, and other immunotherapies to maximize protection. A common feature of systems serology studies is structured biophysical profiling across disease-relevant antigen targets, properties of antibodies' interaction with the immune system, and serological samples. These are typically produced alongside additional measurements that are not antigen-specific. Here, we report a new form of tensor factorization, total tensor-matrix factorization (TMTF), which can greatly reduce these data into consistently observed patterns by recognizing the structure of the data. We use a previous study of HIV-infected subjects as an example. TMTF outperforms standard methods like principal components analysis in the extent of reduction possible. Data reduction, in turn, improves the prediction of immune functional responses, classification of subjects based on their HIV control status, and interpretation of the resulting models. Interpretability is improved specifically through further data reduction, separation of the constant region from antigen-binding effects, and recognition of consistent patterns across individual measurements. Therefore, we propose that TMTF will be an effective general strategy for exploring and using systems serology.

Summary points:
- Structured decomposition provides substantial data reduction without loss of information.
- Predictions based on decomposed factors are accurate and robust to missing measurements.
- The decomposition structure improves the interpretability of modeling results.
- Decomposed factors represent meaningful patterns in the HIV humoral response.
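As a rough, hypothetical illustration of the kind of structured reduction involved, the sketch below applies plain CP tensor factorization (via tensorly) to a synthetic subjects x antigens x receptors tensor; it is not the authors' TMTF, which additionally couples the tensor with a matrix of antigen-generic measurements, and the rank and data shapes are assumptions.

```python
# Hypothetical sketch: reduce a 3-way serology-like tensor into a few
# per-mode factor matrices ("patterns") and check reconstruction error.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(0)
# Build a synthetic low-rank tensor (rank 3) plus noise.
A, B, C = (rng.standard_normal((n, 3)) for n in (50, 8, 6))
low_rank = np.einsum("ir,jr,kr->ijk", A, B, C)
data = tl.tensor(low_rank + 0.1 * rng.standard_normal(low_rank.shape))

cp = parafac(data, rank=3, n_iter_max=200, tol=1e-8)
subject_factors, antigen_factors, receptor_factors = cp.factors

recon = tl.cp_to_tensor(cp)
rel_err = tl.norm(data - recon) / tl.norm(data)
print(f"relative reconstruction error: {rel_err:.3f}")

# The low-dimensional subject factors (50 x 3) could then feed a classifier,
# e.g., for the HIV control status prediction mentioned in the abstract.
```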


2021 ◽  
Vol 31 (1) ◽  
Author(s):  
Manickam Vadivukarasi ◽  
Kaliappan Kalidass

In this paper, we consider an M/M/1 queue in which beneficiaries arrive singly. As soon as the number of beneficiaries in the system drops to zero, the server immediately takes a vacation. If the server finds no beneficiaries in the system on returning from a vacation, it is allowed to take another vacation, and this process continues until the server has taken all J vacations. The closed-form transient solution of the considered model and some important time-dependent performance measures are obtained. Further, the steady-state system-size distribution is derived from the time-dependent solution. A stochastic decomposition structure of the waiting time distribution and an expression for the additional waiting time due to server vacations are studied. Numerical assessments are presented.
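For context, the classical stochastic decomposition for the M/M/1 queue with multiple exhaustive vacations (a Fuhrmann-Cooper/Takagi-type result) is recalled below; the at-most-J-vacations model studied here refines this, and the paper derives its own exact expressions.

```latex
% Classical result, stated only for orientation: with i.i.d. vacation
% lengths V, the stationary waiting time W splits into the waiting time W_0
% of the vacation-free M/M/1 queue plus an independent residual vacation V_R.
W \;\overset{d}{=}\; W_0 + V_R,
\qquad
\mathbb{E}[W] \;=\; \frac{\lambda}{\mu(\mu-\lambda)}
  \;+\; \frac{\mathbb{E}[V^2]}{2\,\mathbb{E}[V]},
\qquad \rho = \lambda/\mu < 1 .
```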


2021 ◽  
Vol 0 (0) ◽  
pp. 0
Author(s):  
Veena Goswami ◽  
Gopinath Panda

We consider a discrete-time infinite-buffer renewal-input queue with multiple vacations and synchronized abandonment. Waiting customers become impatient during the server's vacation and decide, simultaneously at vacation completion instants, whether to take service or abandon. Using the supplementary variable technique and the difference operator method, we obtain explicit expressions for the steady-state system-length distributions at pre-arrival, random, and outside observer's observation epochs. We provide the stochastic decomposition structure for the number of customers and discuss various performance measures. Numerical experiments show that the method formulated in this work is analytically elegant and computationally tractable. The results apply to light-tailed inter-arrival distributions and can also be leveraged for heavy-tailed inter-arrival distributions.


Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-14
Author(s):  
Xue-Bo Jin ◽  
Hong-Xing Wang ◽  
Xiao-Yi Wang ◽  
Yu-Ting Bai ◽  
Ting-Li Su ◽  
...  

Power load prediction is significant in a sustainable power system and is key to the economic operation of the energy system. An accurate prediction of the power load can provide a reliable basis for power system planning decisions. However, it is challenging to predict the power load with a single model, especially for multistep prediction, because the time-series load data contain multiple periodic components. This paper presents a deep hybrid model with a serial two-level decomposition structure. First, the power load data are decomposed into components; then, a gated recurrent unit (GRU) network, with parameters tuned by Bayesian optimization, is used as the sub-predictor for each component. Last, the predictions of the different components are fused to obtain the final prediction. Power load data from American Electric Power (AEP) were used to verify the proposed predictor. The results show that the proposed prediction method can effectively improve the accuracy of power load prediction.
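The "decompose, predict each component, fuse" pipeline can be sketched as follows; this is a minimal, hypothetical illustration using a simple trend/seasonal/residual split (statsmodels) and a small PyTorch GRU per component, not the paper's two-level decomposition or its Bayesian-optimized hyperparameters, and the synthetic hourly series, window length, and training settings are assumptions.

```python
# Hypothetical sketch: decompose a load-like series, train one GRU
# sub-predictor per component, and fuse (sum) their one-step forecasts.
import numpy as np
import torch
import torch.nn as nn
from statsmodels.tsa.seasonal import seasonal_decompose

def make_windows(series, lookback=48):
    X, y = [], []
    for i in range(len(series) - lookback):
        X.append(series[i:i + lookback])
        y.append(series[i + lookback])
    X = torch.tensor(np.array(X), dtype=torch.float32).unsqueeze(-1)
    y = torch.tensor(np.array(y), dtype=torch.float32).unsqueeze(-1)
    return X, y

class GRUPredictor(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        out, _ = self.gru(x)
        return self.head(out[:, -1, :])           # one-step-ahead prediction

def fit(model, X, y, epochs=30):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()
    return model

# Synthetic hourly "load": trend + daily cycle + noise (30 days).
t = np.arange(24 * 30)
load = 0.01 * t + 10 * np.sin(2 * np.pi * t / 24) + np.random.randn(len(t))

# Step 1: decompose the series into components.
parts = seasonal_decompose(load, model="additive", period=24)
components = {"trend": parts.trend, "seasonal": parts.seasonal,
              "residual": parts.resid}

# Step 2: train a sub-predictor per component, then fuse the forecasts.
fused = 0.0
for name, comp in components.items():
    comp = comp[~np.isnan(comp)]                  # trend/resid have NaN edges
    X, y = make_windows(comp)
    model = fit(GRUPredictor(), X, y)
    fused += model(X[-1:]).item()                 # one-step forecast per part

print("fused one-step-ahead prediction:", fused)
```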


2020 ◽  
Vol 12 (4) ◽  
pp. 1433 ◽  
Author(s):  
Xue-Bo Jin ◽  
Xing-Hong Yu ◽  
Xiao-Yi Wang ◽  
Yu-Ting Bai ◽  
Ting-Li Su ◽  
...  

Based on the weather data collected by the agricultural Internet of Things (IoT) system, changes in the weather can be anticipated in advance, which is an effective way to plan and control sustainable agricultural production. However, it is not easy to predict future trends accurately because the data contain complex nonlinear relationships with multiple components. To increase the prediction performance on the weather data in the precision agriculture IoT system, this study used a deep learning predictor with a sequential two-level decomposition structure, in which the weather data were decomposed into four components serially and gated recurrent unit (GRU) networks were trained as sub-predictors for each component. Finally, the results from the GRUs were combined to obtain the medium- and long-term prediction. The proposed model was verified on weather data from an IoT system in Ningxia, China, used for wolfberry planting; the results show that the proposed predictor obtains accurate predictions of temperature and humidity and meets the needs of precision agricultural production.


Energies ◽  
2019 ◽  
Vol 12 (22) ◽  
pp. 4389 ◽  
Author(s):  
Stefano Lodetti ◽  
Jorge Bruna ◽  
Julio J. Melero ◽  
José F. Sanz

This paper presents the validation and characterization of a wavelet-based decomposition method for the assessment of harmonic distortion in power systems under stationary and non-stationary conditions. It uses Wavelet Packet Decomposition with Butterworth Infinite Impulse Response filters and a decomposition structure that allows the measurement of both odd and even harmonics up to the 63rd order, fully compliant with the requirements of the IEC 61000-4-7 standard. The method is shown to fulfil the IEC accuracy requirements for stationary harmonics and achieves the same accuracy even under fluctuating conditions. It is then validated using simulated signals with real harmonic content. The proposed method is proven to be fully equivalent to Fourier analysis under stationary conditions, and is often more accurate. Under non-stationary conditions, it provides significantly higher accuracy, whereas the IEC strategy produces large errors. Lastly, the method is tested with real current and voltage signals measured under conditions of high harmonic distortion. The proposed strategy thus offers superior performance for fluctuating harmonics while remaining IEC-compliant under stationary conditions.
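A wavelet packet band split of this kind can be sketched with PyWavelets; note that this minimal example uses the library's standard FIR wavelet packets rather than the Butterworth IIR filter bank of the paper, and the sampling rate, wavelet, and depth below are illustrative assumptions only.

```python
# Hypothetical sketch: decompose a distorted 50 Hz waveform into uniform
# frequency bands and report the RMS content of the lowest bands, where the
# fundamental and low-order harmonics fall.
import numpy as np
import pywt

fs = 12800                        # samples per second (assumed)
t = np.arange(0, 0.2, 1 / fs)     # ten cycles of a 50 Hz system
x = (np.sin(2 * np.pi * 50 * t)
     + 0.10 * np.sin(2 * np.pi * 150 * t)    # 3rd harmonic
     + 0.05 * np.sin(2 * np.pi * 250 * t)    # 5th harmonic
     + 0.02 * np.sin(2 * np.pi * 350 * t))   # 7th harmonic

level = 6                         # 2**6 = 64 terminal nodes
wp = pywt.WaveletPacket(data=x, wavelet="db20", mode="symmetric",
                        maxlevel=level)
nodes = wp.get_level(level, order="freq")    # nodes ordered by frequency
band_width = (fs / 2) / 2**level             # 100 Hz per band here

for i, node in enumerate(nodes[:8]):
    f_lo, f_hi = i * band_width, (i + 1) * band_width
    rms = np.sqrt(np.mean(np.asarray(node.data) ** 2))
    print(f"{f_lo:6.1f}-{f_hi:6.1f} Hz  RMS ~ {rms:.4f}")
```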

