weight space
Recently Published Documents

Total documents: 139 (five years: 19)
H-index: 13 (five years: 1)

2021 · Author(s): Sriram Srinivasan, Charles Dickens, Eriq Augustine, Golnoosh Farnadi, Lise Getoor

Abstract: Statistical relational learning (SRL) frameworks are effective at defining probabilistic models over complex relational data. They often use weighted first-order logical rules, where the rule weights govern probabilistic interactions and are usually learned from data. Existing weight learning approaches typically attempt to learn a set of weights that maximizes some function of data likelihood; however, this does not always translate to optimal performance on a desired domain metric, such as accuracy or F1 score. In this paper, we introduce a taxonomy of search-based weight learning approaches for SRL frameworks that directly optimize weights on a chosen domain performance metric. To apply these search-based approaches effectively, we introduce a novel projection, referred to as scaled space (SS), that is an accurate representation of the true weight space. We show that SS removes redundancies in the weight space and captures the semantic distance between possible weight configurations. To improve search efficiency, we also introduce an approximation of SS that simplifies the process of sampling weight configurations. We demonstrate these approaches on two state-of-the-art SRL frameworks: Markov logic networks and probabilistic soft logic. We perform an empirical evaluation on five real-world datasets, each evaluated on two different metrics, and compare against four other weight learning approaches. Our experimental results show that the proposed search-based approaches outperform likelihood-based approaches, yielding up to a 10% improvement across a variety of performance metrics. Further, an extensive evaluation of robustness to different initializations and hyperparameters indicates that our approach is both accurate and robust.
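To make the search-based idea concrete, here is a minimal sketch, assuming the scaled space can be approximated by projecting weight vectors onto the probability simplex (uniformly rescaling all rule weights is one of the redundancies the abstract mentions, so only relative magnitudes are sampled); `evaluate_metric` is a hypothetical callback that runs inference with a candidate weight configuration and returns the chosen domain metric:

```python
import numpy as np

def sample_scaled_space(num_rules, rng):
    # Approximate the scaled space: since uniformly rescaling all rule
    # weights is redundant, only relative magnitudes matter, so sample
    # configurations from the probability simplex via a Dirichlet draw.
    return rng.dirichlet(np.ones(num_rules))

def search_weights(evaluate_metric, num_rules, budget=100, seed=0):
    # Random search over scaled-space configurations, keeping the one
    # that scores best on the chosen domain metric (accuracy, F1, ...)
    # rather than on data likelihood.
    rng = np.random.default_rng(seed)
    best_w, best_score = None, -np.inf
    for _ in range(budget):
        w = sample_scaled_space(num_rules, rng)
        score = evaluate_metric(w)  # hypothetical: inference + scoring
        if score > best_score:
            best_w, best_score = w, score
    return best_w, best_score
```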


PLoS ONE · 2021 · Vol 16 (5) · pp. e0251329 · Author(s): Ninnart Fuengfusin, Hakaru Tamukoh

In this study, we introduce a mixed-precision weights network (MPWN), a quantization neural network that jointly utilizes three different weight spaces: binary {−1,1}, ternary {−1,0,1}, and 32-bit floating-point. We develop the MPWN from both software and hardware aspects. On the software side, we evaluate the MPWN on the Fashion-MNIST and CIFAR10 datasets and define the accuracy-sparsity-bit score, a linear combination of accuracy, sparsity, and number of bits. This score allows Bayesian optimization to search efficiently over MPWN weight-space combinations. On the hardware side, we propose XOR signed-bits to handle the floating-point and binary weight spaces in the MPWN: an efficient implementation equivalent to multiplying floating-point values by binary weights. Using the same concept, we also provide a ternary bitwise operation that efficiently implements multiplication of floating-point values by ternary weights. To demonstrate the MPWN's compatibility with hardware, we synthesized and implemented it on a field-programmable gate array using high-level synthesis. Our MPWN implementation used 1.68 to 4.89 times fewer hardware resources than a conventional 32-bit floating-point model, depending on the resource type, and reduced latency by up to a factor of 31.55 compared to the unoptimized 32-bit floating-point model.
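The XOR signed-bits trick can be illustrated in a few lines. A minimal NumPy sketch based only on what the abstract states: multiplying a float32 value by a binary weight in {−1, 1} reduces to XORing its IEEE-754 sign bit, so no multiplier is needed (the FPGA implementation presumably operates on raw bit vectors, but the sign-bit manipulation is the same):

```python
import numpy as np

def xor_signed_bits(x, b):
    # Multiply float32 activations x by binary weights b in {-1, +1}
    # without a multiplier: XOR the IEEE-754 sign bit of x with the
    # weight's sign. x * (+1) leaves the sign bit alone; x * (-1)
    # flips it.
    x_bits = x.astype(np.float32).view(np.uint32)
    sign = np.where(b < 0, np.uint32(0x80000000), np.uint32(0))
    return (x_bits ^ sign).view(np.float32)

x = np.array([1.5, -2.25, 0.75], dtype=np.float32)
b = np.array([1, -1, -1], dtype=np.int8)
assert np.array_equal(xor_signed_bits(x, b), x * b)
```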


Complexity · 2021 · Vol 2021 · pp. 1-11 · Author(s): Yanjun Dai, Lin Su

In this article, an in-depth study and analysis of economic structure is carried out using a neural network fusion bionic algorithm. From the perspective of optimization theory, the method defines the weight space and structure space of neural networks, proposes bionic optimization algorithms over these spaces, and establishes neuroevolutionary methods with shallow and deep neural networks as the research objects. For shallow neuroevolution, an improved genetic algorithm (IGA) based on elitist operations and a migration strategy and an improved coyote optimization algorithm (ICOA) based on adaptive influence weights are proposed; both are applied to the weight space of backpropagation (BP) neural networks. For deep neuroevolution, a structure space for convolutional neural networks is proposed to address the search-space design of neural architecture search (NAS), and a GA-based deep neuroevolutionary method over this structure space is proposed to handle the explosive combinations of hyperparameters and network-structure parameters that arise when designing deep learning models. The fused bionic algorithm has application value for exploring the spatial structure and dynamics of socioeconomic systems, improving perception of the socioeconomic situation, and understanding the laws of social development; the approach can also be verified with present-day computing technology.
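As a rough illustration of the shallow weight-space search, here is a generic elitist-GA sketch over a flattened weight vector; the paper's IGA adds a migration strategy and other refinements not shown here, and `fitness` is a hypothetical placeholder that decodes the vector into BP-network weights and returns validation performance:

```python
import numpy as np

def evolve_weights(fitness, dim, pop_size=30, elite=2, generations=100,
                   sigma=0.1, seed=0):
    # Genetic algorithm over the weight space of a small network:
    # flatten all weights into one vector, carry the elite individuals
    # over unchanged each generation (elitist operation), and fill the
    # rest of the population with mutated copies of the elites.
    rng = np.random.default_rng(seed)
    pop = rng.normal(size=(pop_size, dim))
    for _ in range(generations):
        scores = np.array([fitness(w) for w in pop])
        order = np.argsort(scores)[::-1]          # best first
        elites = pop[order[:elite]]
        parents = elites[rng.integers(elite, size=pop_size - elite)]
        children = parents + sigma * rng.normal(size=parents.shape)
        pop = np.vstack([elites, children])
    return pop[0]                                 # best elite found
```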


2021 · Vol 2021 (5) · Author(s): Aleksander J. Cianciara, S. James Gates, Yangrui Hu, Renée Kirk

Abstract: A conjecture is made that the weight space for 4D, $\mathcal{N}$-extended supersymmetric representations is embedded within the permutahedra associated with the permutation groups $\mathbb{S}_d$. Adinkras and Coxeter groups associated with minimal representations of 4D, $\mathcal{N} = 1$ supersymmetry provide evidence supporting this conjecture. It is shown that the appearance of the mathematics of 4D, $\mathcal{N} = 1$ minimal off-shell supersymmetry representations is equivalent to solving a four-color problem on the truncated octahedron. This observation suggests an entirely new way to approach the off-shell SUSY auxiliary-field problem, based on IT algorithms probing the properties of $\mathbb{S}_d$.
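For readers unfamiliar with the geometry, a short sketch makes the object concrete: the permutahedron of $\mathbb{S}_d$ has one vertex per permutation of $(1, \dots, d)$, and for $d = 4$ it is exactly the truncated octahedron on which the four-color problem above is posed:

```python
from itertools import permutations

# The permutahedron of the symmetric group S_d has one vertex per
# permutation of (1, 2, ..., d). For d = 4 this polytope is the
# truncated octahedron: 24 vertices, with edges joining permutations
# that differ by one adjacent transposition.
def permutahedron_vertices(d):
    return list(permutations(range(1, d + 1)))

verts = permutahedron_vertices(4)
assert len(verts) == 24  # |S_4| = 4! vertices of the truncated octahedron
```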


2021 · Vol 118 (9) · pp. e2015617118 · Author(s): Yu Feng, Yuhai Tu

Despite the tremendous success of the stochastic gradient descent (SGD) algorithm in deep learning, little is known about how SGD finds generalizable solutions at flat minima of the loss function in high-dimensional weight space. Here, we investigate the connection between SGD learning dynamics and the loss function landscape. A principal component analysis (PCA) shows that SGD dynamics follow a low-dimensional drift–diffusion motion in the weight space. Around a solution found by SGD, the loss landscape can be characterized by its flatness in each PCA direction. Remarkably, our study reveals a robust inverse relation between the weight variance and the landscape flatness in all PCA directions, the opposite of the fluctuation–response relation (also known as the Einstein relation) in equilibrium statistical physics. To understand this inverse variance–flatness relation, we develop a phenomenological theory of SGD based on statistical properties of the ensemble of minibatch loss functions. We find that both the anisotropic SGD noise strength (temperature) and its correlation time depend inversely on the landscape flatness in each PCA direction. Our results suggest that SGD serves as a landscape-dependent annealing algorithm: the effective temperature decreases with landscape flatness, so the system seeks out (prefers) flat minima over sharp ones. Based on these insights, we develop an algorithm with landscape-dependent constraints that efficiently mitigates catastrophic forgetting when learning multiple tasks sequentially. In general, our work provides a theoretical framework for understanding learning dynamics, which may eventually lead to better algorithms for different learning tasks.
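A minimal sketch of the trajectory-PCA step described above, assuming `weight_trajectory` is a hypothetical array of flattened weight snapshots recorded during SGD training; the paper's inverse variance–flatness relation compares the per-direction variances returned here with the loss landscape's flatness measured along the same PCA directions:

```python
import numpy as np

def pca_directions_and_variances(weight_trajectory, k=10):
    # Collect weight vectors w_t recorded during SGD and run PCA on the
    # trajectory: the leading directions capture the low-dimensional
    # drift-diffusion motion in weight space.
    W = np.asarray(weight_trajectory)          # shape (T, num_weights)
    W_centered = W - W.mean(axis=0)
    U, S, Vt = np.linalg.svd(W_centered, full_matrices=False)
    variances = S**2 / (len(W) - 1)            # weight variance per direction
    return Vt[:k], variances[:k]               # top-k directions, variances
```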


2021 · Author(s): Phillip Swazinna, Steffen Udluft, Daniel Hein, Thomas Runkler

2020 · Vol 12 (6) · pp. 1-15 · Author(s): Jia-Ning Guo, Jian Zhang, Yan-Yu Zhang, Gang Xin, Lin Li

SAGE Open · 2020 · Vol 10 (4) · pp. 215824402097507 · Author(s): Yue Qi, Xiaolin Li

Sustainable investment is typically implemented by screening on environmental, social, and governance (ESG) criteria; screening strategies are practical and have accelerated the development of sustainable investment. However, these strategies typically build portfolios from a list of good stocks and ignore portfolio completeness. Moreover, there has been limited literature studying the portfolio weights of sustainable investment in the weight space. This article contributes to that literature as follows: we extend a conventional portfolio-selection model by imposing ESG constraints, solve the model analytically by computing the efficient frontier, and prove that the frontier's portfolio weights all lie on a ray (half line). Using this ray structure, we prove that portfolio selection for sustainable investment and conventional portfolio selection possess fundamentally different portfolio weights. Overall, our aim is to compare the portfolio weights of sustainable and conventional portfolio selection; this comparison result was previously unknown. The result is important for sustainable investment because portfolio weights are the foundation of portfolio selection and investment. We sample the component stocks of the Dow Jones Industrial Average Index from 2004 to 2013 and find that our efficient frontier and the conventional efficient frontier are quite similar. In plain financial language, investors can still obtain risk-return performance similar to conventional portfolio selection after imposing strong ESG requirements, although the portfolio weights can be totally different. The result is both an endorsement of and a reminder for sustainable investment.
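A minimal sketch of one way such an ESG constraint can be imposed on conventional mean-variance selection, assuming the ESG requirement enters as a linear equality on the portfolio's average ESG score (the paper's exact formulation may differ); because the KKT conditions are then a linear system, the optimal weights vary affinely with the targets, consistent with the ray structure described above:

```python
import numpy as np

def esg_constrained_weights(Sigma, mu, esg, target_return, target_esg):
    # Minimize portfolio variance w' Sigma w subject to the linear
    # equality constraints w'1 = 1, w'mu = target_return, and
    # w'esg = target_esg. With equality constraints only, the KKT
    # conditions form one linear system in (w, lambda).
    n = len(mu)
    A = np.vstack([np.ones(n), mu, esg])        # constraint matrix (3, n)
    b = np.array([1.0, target_return, target_esg])
    # KKT system: [2*Sigma  A'; A  0] [w; lambda] = [0; b]
    KKT = np.block([[2 * Sigma, A.T], [A, np.zeros((3, 3))]])
    rhs = np.concatenate([np.zeros(n), b])
    sol = np.linalg.solve(KKT, rhs)
    return sol[:n]                              # optimal portfolio weights
```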

