On compact representations of propositional circumscription

Author(s):  
Marco Cadoli ◽  
Francesco M. Donini ◽  
Marco Schaerf

Author(s):  
Johannes J. Brust ◽  
Zichao Di ◽  
Sven Leyffer ◽  
Cosmin G. Petra

1991 ◽  
Vol 3 (2) ◽  
pp. 213-225 ◽  
Author(s):  
John Platt

We have created a network that allocates a new computational unit whenever an unusual pattern is presented to it. This network forms compact representations, yet learns easily and rapidly. It can be used at any point in the learning process, and the training patterns do not have to be repeated. The units in this network respond only to a local region of the space of input values. The network learns by allocating new units and adjusting the parameters of existing units. If the network performs poorly on a presented pattern, a new unit is allocated that corrects the response to that pattern. If the network performs well on a presented pattern, the network parameters are updated by standard LMS gradient descent. We have obtained good results with our resource-allocating network (RAN). For predicting the Mackey-Glass chaotic time series, RAN learns much faster than networks trained with backpropagation and uses a comparable number of synapses.
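The allocate-or-adjust rule described in the abstract can be sketched as follows. This is a minimal illustration with Gaussian units, not the paper's exact procedure: the error and distance thresholds, the learning rate, and the fixed unit width are our illustrative choices (the original also shrinks the distance threshold over time and adapts unit centers).

```python
import numpy as np

class RAN:
    """Minimal sketch of a resource-allocating network with Gaussian units."""

    def __init__(self, dim, err_thresh=0.05, dist_thresh=0.5,
                 lr=0.05, width=0.5):
        self.centers = np.empty((0, dim))
        self.heights = np.empty(0)
        self.bias = 0.0
        self.err_thresh = err_thresh
        self.dist_thresh = dist_thresh
        self.lr = lr
        self.width = width

    def _phi(self, x):
        # Gaussian responses: each unit covers a local region of input space
        if self.centers.shape[0] == 0:
            return np.empty(0)
        d2 = ((self.centers - x) ** 2).sum(axis=1)
        return np.exp(-d2 / self.width ** 2)

    def predict(self, x):
        return self.bias + self.heights @ self._phi(x)

    def update(self, x, y):
        err = y - self.predict(x)
        novel = (self.centers.shape[0] == 0 or
                 np.min(np.linalg.norm(self.centers - x, axis=1))
                 > self.dist_thresh)
        if abs(err) > self.err_thresh and novel:
            # unusual pattern: allocate a unit that corrects the response
            self.centers = np.vstack([self.centers, x])
            self.heights = np.append(self.heights, err)
        else:
            # familiar pattern: standard LMS gradient step
            phi = self._phi(x)
            self.heights = self.heights + self.lr * err * phi
            self.bias += self.lr * err
```

Note how allocation makes the correction immediate: the new unit's height equals the residual error, so the network reproduces the presented target exactly at that input without repeated presentations.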


Author(s):  
Yue Li ◽  
Yan Yi ◽  
Dong Liu ◽  
Li Li ◽  
Zhu Li ◽  
...  

To reduce the redundancy among different color channels, e.g., YUV, previous methods usually adopt a linear model that tends to be too simple for complex image content. We propose a neural-network-based method for cross-channel prediction in intra-frame coding. The proposed network exploits twofold cues: the neighboring reconstructed samples, for which all channels are available, and the co-located reconstructed samples, for which only some channels are available. Specifically, for YUV video coding, the neighboring samples with YUV are processed by several fully connected layers; the co-located samples with Y are processed by convolutional layers; and the proposed network fuses the twofold cues. We observe that the integration of the twofold information is crucial to the performance of intra prediction for the chroma components. We designed the network architecture to achieve a good balance between compression performance and computational efficiency. Moreover, we propose a transform-domain loss for training the network. The transform-domain loss helps obtain more compact representations of residues in the transform domain, leading to higher compression efficiency. The proposed method is plugged into the HEVC and VVC test models to evaluate its effectiveness. Experimental results show that our method provides more accurate cross-channel intra prediction than previous methods. On top of HEVC, our method achieves on average 1.3%, 5.4%, and 3.8% BD-rate reductions for Y, Cb, and Cr on common test sequences, and on average 3.8%, 11.3%, and 9.0% BD-rate reductions for Y, Cb, and Cr on ultra-high-definition test sequences. On top of VVC, our method achieves on average 0.5%, 1.7%, and 1.3% BD-rate reductions for Y, Cb, and Cr on common test sequences.
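For context, the linear baseline that the abstract contrasts with (CCLM-style chroma-from-luma prediction) can be sketched as a least-squares affine fit over the neighboring reconstructed samples. The function and argument names below are illustrative, not codec API:

```python
import numpy as np

def linear_cross_channel_pred(neigh_luma, neigh_chroma, block_luma):
    """Fit chroma ~ alpha * luma + beta on the neighboring reconstructed
    samples, then predict the current block's chroma from its luma.

    This is the simple linear model; the proposed method instead fuses
    neighboring YUV samples and co-located Y samples with a network.
    """
    A = np.stack([neigh_luma, np.ones_like(neigh_luma)], axis=1)
    (alpha, beta), *_ = np.linalg.lstsq(A, neigh_chroma, rcond=None)
    return alpha * block_luma + beta
```

A single (alpha, beta) pair per block is exactly why the model breaks down on complex content: one affine map must serve every sample in the block.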


eLife ◽  
2017 ◽  
Vol 6 ◽  
Author(s):  
Francis Corson ◽  
Eric D Siggia

Models of cell function that assign a variable to each gene frequently lead to systems of equations with many parameters whose behavior is obscure. Geometric models reduce the dynamics to intuitive pictorial elements that provide compact representations for sparse in vivo data and transparent descriptions of developmental transitions. To illustrate, a geometric model fit to vulval development in Caenorhabditis elegans implies a phase diagram in which cell-fate choices are displayed in a plane defined by EGF and Notch signaling levels. This diagram defines allowable and forbidden cell-fate transitions as EGF or Notch levels change, and explains surprising observations previously attributed to context-dependent action of these signals. The diagram also reveals the existence of special points at which minor changes in signal levels lead to strong epistatic interactions between EGF and Notch. Our model correctly predicts experiments near these points and suggests specific timed perturbations in signals that can lead to additional unexpected outcomes.


10.37236/2473 ◽  
2013 ◽  
Vol 20 (1) ◽  
Author(s):  
Paweł Baturo ◽  
Marcin Piątkowski ◽  
Wojciech Rytter

We investigate some repetition problems for a very special class $\mathcal{S}$ of strings called the standard Sturmian words, which have very compact representations in terms of sequences of integers. The size of such a word is usually exponential in the size of its integer sequence, hence we are dealing with repetition problems in compressed strings. An explicit formula is given for the number $\rho(w)$ of runs in a standard word $w$. We show that $\rho(w)/|w|\le 4/5$ for each $w\in\mathcal{S}$, and that there is an infinite sequence of strictly growing words $w_k\in\mathcal{S}$ such that $\lim_{k\rightarrow \infty} \frac{\rho(w_k)}{|w_k|} = \frac{4}{5}$. Moreover, we show how to compute the number of runs in a standard Sturmian word in time linear in the size of its compressed representation.
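For intuition, runs (maximal periodicities) in an uncompressed word can be counted by brute force, and the $4/5$ bound checked on examples; the Fibonacci word below is the standard Sturmian word with directive sequence $(1,1,1,\dots)$. This quadratic-interval scan is only a checking tool, not the paper's linear-time algorithm on compressed representations:

```python
def fibonacci_word(n):
    # Iterate the morphism a -> ab, b -> a; the Fibonacci word is the
    # standard Sturmian word with directive sequence (1, 1, 1, ...).
    w = "a"
    for _ in range(n):
        w = "".join("ab" if ch == "a" else "a" for ch in w)
    return w

def smallest_period(s):
    for p in range(1, len(s)):
        if all(s[k] == s[k + p] for k in range(len(s) - p)):
            return p
    return len(s)

def count_runs(w):
    """Count maximal periodic intervals [i, j] whose smallest period p
    satisfies 2p <= j - i + 1 (brute force, O(n^4))."""
    n, runs = len(w), 0
    for i in range(n):
        for j in range(i + 1, n):
            p = smallest_period(w[i:j + 1])
            if 2 * p <= j - i + 1:
                # maximal: extending by one symbol breaks period p
                left_max = (i == 0 or w[i - 1] != w[i - 1 + p])
                right_max = (j == n - 1 or w[j + 1] != w[j + 1 - p])
                if left_max and right_max:
                    runs += 1
    return runs
```

For example, `"aabaab"` has three runs (the two squares `aa` and the square `(aab)^2`), and the ratio `count_runs(w) / len(w)` stays below $4/5$ on Fibonacci words, consistent with the bound.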


2019 ◽  
Vol 66 ◽  
pp. 197-223
Author(s):  
Michal Jozef Knapik ◽  
Etienne Andre ◽  
Laure Petrucci ◽  
Wojciech Jamroga ◽  
Wojciech Penczek

In this paper we investigate the Timed Alternating-Time Temporal Logic (TATL), a discrete-time extension of ATL. In particular, we propose, systematize, and further study semantic variants of TATL, based on different notions of a strategy. The notions are derived from different assumptions about the agents’ memory and observational capabilities, and range from timed perfect recall to untimed memoryless plans. We also introduce a new semantics based on counting the number of visits to locations during the play. We show that all the semantics, except for the untimed memoryless one, are equivalent when punctuality constraints are not allowed in the formulae. In fact, abilities in all those notions of a strategy collapse to the “counting” semantics with only two actions allowed per location. On the other hand, this simple pattern does not extend to the full TATL. As a consequence, we establish a hierarchy of TATL semantics, based on the expressivity of the underlying strategies, and we show when some of the semantics coincide. In particular, we prove that more compact representations are possible for a reasonable subset of TATL specifications, which should improve the efficiency of model checking and strategy synthesis.


1995 ◽  
Vol 7 (2) ◽  
pp. 270-279 ◽  
Author(s):  
Dimitri P. Bertsekas

Sutton's TD(λ) method aims to provide a representation of the cost function in an absorbing Markov chain with transition costs. A simple example is given where the representation obtained depends on λ. For λ = 1 the representation is optimal with respect to a least-squares error criterion, but as λ decreases toward 0 the representation becomes progressively worse and, in some cases, very poor. The example suggests a need to better understand the circumstances under which TD(0) and Q-learning obtain satisfactory neural-network-based compact representations of the cost function. A variation of TD(0) is also given, which performs better on the example.
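The λ-dependence can be reproduced on a toy chain in the same spirit (not the paper's exact example): a deterministic absorbing chain s1 → s2 → terminal with stage costs g1 = 0, g2 = 1, a single weight w, and features φ(s1) = 1, φ(s2) = 2, so the compact representation cannot match both true costs J(s1) = J(s2) = 1. All numbers here are our illustrative choices:

```python
def td_lambda_weight(lam, g1=0.0, g2=1.0, episodes=4000):
    """Online TD(lambda) with a one-weight linear approximation on the
    deterministic chain s1 -> s2 -> terminal (undiscounted costs)."""
    w = 0.0
    for k in range(episodes):
        alpha = 0.5 / (1.0 + k)            # decaying step size
        # visit s1: phi(s1) = 1, next-state value is w * phi(s2) = 2w
        z = 1.0                            # eligibility trace after s1
        delta = g1 + 2.0 * w - 1.0 * w
        w += alpha * delta * z
        # visit s2: phi(s2) = 2, terminal value is 0
        z = lam * z + 2.0
        delta = g2 + 0.0 - 2.0 * w
        w += alpha * delta * z
    return w

def td_fixed_point(lam, g1=0.0, g2=1.0):
    """Closed-form fixed point of the expected TD(lambda) update for this
    chain: w = (g1 + (lam + 2) * g2) / (3 + 2 * lam)."""
    return (g1 + (lam + 2.0) * g2) / (3.0 + 2.0 * lam)
```

For λ = 1 the fixed point is 3/5, which minimizes the least-squares error (w − 1)² + (2w − 1)²; for λ = 0 it drifts to 2/3, giving a worse fit (the s2 estimate becomes 4/3 against a true cost of 1), illustrating how the representation degrades as λ decreases.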

