compact representation: Recently Published Documents

Total documents: 501 (five years: 156)
H-index: 22 (five years: 4)

2022, Vol 41 (2), pp. 1-15
Author(s): Chuankun Zheng, Ruzhang Zheng, Rui Wang, Shuang Zhao, Hujun Bao

In this article, we introduce a compact representation for measured BRDFs by leveraging Neural Processes (NPs). Unlike prior methods that express those BRDFs as discrete high-dimensional matrices or tensors, our technique considers measured BRDFs as continuous functions and works in the corresponding function spaces. Specifically, provided the evaluations of a set of BRDFs, such as those in the MERL and EPFL datasets, our method learns a low-dimensional latent space as well as a few neural networks to encode and decode these measured BRDFs or new BRDFs into and from this space in a non-linear fashion. Leveraging this latent space and the flexibility offered by the NPs formulation, our encoded BRDFs are highly compact and offer better accuracy than prior methods. We demonstrate the practical usefulness of our approach via two important applications: BRDF compression and editing. Additionally, we design two alternative post-trained decoders to, respectively, achieve a better compression ratio for individual BRDFs and enable importance sampling of BRDFs.
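As a rough illustration of the encode/decode idea described above, the following PyTorch sketch pairs a permutation-invariant set encoder, which aggregates (direction, reflectance) samples of a measured BRDF into a single latent code, with a decoder that evaluates the BRDF at query directions conditioned on that code. The layer sizes, the 6-D direction parameterization, and the mean aggregation are illustrative assumptions, not the architecture of the paper.

```python
# Minimal NP-style BRDF codec sketch: a set encoder aggregates
# (direction, reflectance) samples into a latent code z, and a decoder
# evaluates the BRDF at query directions conditioned on z.
# Layer sizes and the 6-D direction parameterization are assumptions.
import torch
import torch.nn as nn

class BRDFEncoder(nn.Module):
    def __init__(self, dir_dim=6, latent_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dir_dim + 3, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )

    def forward(self, dirs, rgb):
        # dirs: (N, dir_dim) in/out directions, rgb: (N, 3) measured reflectance.
        h = self.net(torch.cat([dirs, rgb], dim=-1))
        return h.mean(dim=0)            # permutation-invariant aggregation -> latent code

class BRDFDecoder(nn.Module):
    def __init__(self, dir_dim=6, latent_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dir_dim + latent_dim, 128), nn.ReLU(),
            nn.Linear(128, 3),
        )

    def forward(self, z, query_dirs):
        z = z.expand(query_dirs.shape[0], -1)
        return self.net(torch.cat([query_dirs, z], dim=-1))   # predicted RGB reflectance

# Usage: encode a measured BRDF into a compact latent code, then decode anywhere.
enc, dec = BRDFEncoder(), BRDFDecoder()
dirs, rgb = torch.rand(1024, 6), torch.rand(1024, 3)   # stand-in for MERL-style samples
z = enc(dirs, rgb)                                     # the compact representation
pred = dec(z, torch.rand(16, 6))
```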


2022
Author(s): Wenshuo Guo, Fang-Wei Fu

Abstract: This paper presents a new technique for disturbing the algebraic structure of linear codes in code-based cryptography. Specifically, we introduce the so-called semilinear transformations in coding theory and then apply them to the construction of code-based cryptosystems. Note that $\mathbb{F}_{q^m}$ can be viewed as an $\mathbb{F}_q$-linear space of dimension $m$; a semilinear transformation $\varphi$ is therefore defined as an $\mathbb{F}_q$-linear automorphism of $\mathbb{F}_{q^m}$. We then apply this transformation to a linear code $C$ over $\mathbb{F}_{q^m}$. It is clear that $\varphi(C)$ forms an $\mathbb{F}_q$-linear space, but in general it no longer preserves $\mathbb{F}_{q^m}$-linearity. Inspired by this observation, we develop a new technique for masking the structure of linear codes. Meanwhile, we endow the underlying Gabidulin code with the so-called partial cyclic structure to reduce the public-key size. Compared to some other code-based cryptosystems, our proposal admits a much more compact representation of public keys. For instance, 2592 bytes are enough to achieve 256-bit security, almost 403 times smaller than the public key of Classic McEliece entering the third round of the NIST PQC project.
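As a toy illustration of the algebra behind this idea, the following Python sketch (using the third-party galois package, an assumption) checks that the Frobenius-based map φ(x) = c·x^q on $\mathbb{F}_{2^4}$ is $\mathbb{F}_2$-linear but not $\mathbb{F}_{16}$-linear. It is not the paper's cryptosystem, only the basic fact it exploits.

```python
# Toy check that phi(x) = c * x**q is F_q-linear (additive in characteristic q)
# but not F_{q^m}-linear. Uses the third-party `galois` package (an assumption);
# here q = 2 and m = 4, so the field is F_16.
import galois

q, m = 2, 4
GF = galois.GF(q**m)                 # the field F_{q^m} = F_16
c = GF(7)                            # a fixed nonzero constant
phi = lambda x: c * x**q             # x -> c * x^q

x, y, lam = GF(5), GF(9), GF(3)      # lam lies in F_16 but not in F_2

print(phi(x + y) == phi(x) + phi(y))     # True: additive, hence F_2-linear
print(phi(lam * x) == lam * phi(x))      # False in general: not F_16-linear
print(phi(lam * x) == lam**q * phi(x))   # True: semilinear w.r.t. the Frobenius
```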


Algorithmica, 2022
Author(s): Boris Klemz, Günter Rote

Abstract: A bipartite graph $$G=(U,V,E)$$ is convex if the vertices in V can be linearly ordered such that for each vertex $$u\in U$$, the neighbors of u are consecutive in the ordering of V. An induced matching H of G is a matching for which no edge of E connects endpoints of two different edges of H. We show that in a convex bipartite graph with n vertices and m weighted edges, an induced matching of maximum total weight can be computed in $$O(n+m)$$ time. An unweighted convex bipartite graph has a representation of size O(n) that records for each vertex $$u\in U$$ the first and last neighbor in the ordering of V. Given such a compact representation, we compute an induced matching of maximum cardinality in O(n) time. In convex bipartite graphs, maximum-cardinality induced matchings are dual to minimum chain covers. A chain cover is a covering of the edge set by chain subgraphs, that is, subgraphs that do not contain induced matchings of more than one edge. Given a compact representation, we compute a representation of a minimum chain cover in O(n) time. If no compact representation is given, the cover can be computed in $$O(n+m)$$ time. All of our algorithms achieve optimal linear running time for the respective problem and model, and they improve and generalize the previous results in several ways: the best algorithms for the unweighted problem versions had a running time of $$O(n^2)$$ (Brandstädt et al. in Theor. Comput. Sci. 381(1–3):260–265, 2007. 10.1016/j.tcs.2007.04.006), and the weighted case has not been considered before.
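To make the objects concrete, the following Python sketch stores a toy convex bipartite graph in the compact interval form described above (first and last neighbor of each u in U) and finds a maximum-cardinality induced matching by brute force. The instance is invented, and this is emphatically not the paper's linear-time algorithm.

```python
from itertools import combinations

# Toy convex bipartite graph in compact form: for each u in U, the interval
# [first[u], last[u]] of its consecutive neighbors in the ordering of V.
intervals = {
    "u1": (0, 2),   # u1 is adjacent to v0, v1, v2
    "u2": (1, 3),
    "u3": (3, 4),
}

def edges(intervals):
    return [(u, v) for u, (lo, hi) in intervals.items() for v in range(lo, hi + 1)]

def is_induced_matching(M, intervals):
    # Matching: no two edges share an endpoint.
    if len({u for u, _ in M}) < len(M) or len({v for _, v in M}) < len(M):
        return False
    # Induced: no edge of E joins endpoints of two different edges of M.
    for (u1, v1), (u2, v2) in combinations(M, 2):
        lo1, hi1 = intervals[u1]
        lo2, hi2 = intervals[u2]
        if lo1 <= v2 <= hi1 or lo2 <= v1 <= hi2:
            return False
    return True

def max_induced_matching(intervals):
    # Exponential brute force: fine for toy instances, unrelated to the
    # paper's O(n) algorithm on the compact representation.
    E = edges(intervals)
    for k in range(len(E), 0, -1):
        for M in combinations(E, k):
            if is_induced_matching(M, intervals):
                return list(M)
    return []

print(max_induced_matching(intervals))   # prints a maximum induced matching of size 2
```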


Games, 2021, Vol 13 (1), pp. 2
Author(s): Valeria Zahoransky, Julian Gutierrez, Paul Harrenstein, Michael Wooldridge

We introduce a non-cooperative game model in which players’ decision nodes are partially ordered by a dependence relation, which directly captures informational dependencies in the game. In saying that a decision node v is dependent on decision nodes v1,…,vk, we mean that the information available to a strategy making a choice at v is precisely the choices that were made at v1,…,vk. Although partial order games are no more expressive than extensive form games of imperfect information (we show that any partial order game can be reduced to a strategically equivalent extensive form game of imperfect information, though possibly at the cost of an exponential blowup in the size of the game), they provide a more natural and compact representation for many strategic settings of interest. After introducing the game model, we investigate the relationship to extensive form games of imperfect information, the problem of computing Nash equilibria, and conditions that enable backwards induction in this new model.
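As a concrete (and entirely hypothetical) illustration of the data a partial order game comprises, the following Python sketch represents decision nodes with their dependence relation and resolves a profile of strategies, where each strategy sees exactly the choices made at its node's dependencies.

```python
# Sketch of the data in a partial order game: decision nodes partially ordered
# by a dependence relation, and strategies mapping the choices observed at a
# node's dependencies to one of its actions. Node names and strategies are
# invented for illustration only.
deps = {"a": [], "b": [], "c": ["a", "b"]}        # c observes the choices made at a and b

# A strategy for node v: a function from the tuple of observed choices
# (one per dependency of v, in order) to an action of v.
sigma = {
    "a": lambda obs: "H",
    "b": lambda obs: "T",
    "c": lambda obs: obs[0],                      # copy whatever was chosen at a
}

def play(deps, sigma):
    """Resolve all choices in any order consistent with the dependence relation."""
    chosen = {}
    while len(chosen) < len(deps):
        for v, ds in deps.items():
            if v not in chosen and all(d in chosen for d in ds):
                chosen[v] = sigma[v](tuple(chosen[d] for d in ds))
    return chosen

print(play(deps, sigma))   # {'a': 'H', 'b': 'T', 'c': 'H'}
```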


2021
Author(s): Philippe Nivlet, Yunlai Yang, Arturo Magana-Mora, Mahmoud Abughaban, Ayodeji Abegunde

Abstract: Overpressure refers to abnormally high subsurface pressure that exceeds hydrostatic pressure at a given depth. Its characterization is an important part of subsurface characterization, as it allows drilling operations to be completed safely and optimally. In dolomitic formations, however, predicting such overpressure is especially challenging because of (1) the high degree of lateral variability of the formations, (2) the limited effect of overpressure on the elastic parameters of tight rocks, and (3) the complexity of the physical processes that form overpressure. In addition, the experimental models generally used to relate elastic parameters to pressure are often not well calibrated to carbonate rocks. The alternative to purely physical approaches is a data-driven model that leverages data from offset wells. We show that, due to the complexity of the characterization problem, an end-to-end machine learning approach is prone to fail. Instead of a fully automated approach, we present a semi-supervised workflow that integrates seismic data, geological data, and overpressure observations from previously drilled wells to map overpressure regions. Attribute maps are first extracted from a 3D seismic data set in an overpressured geological formation of interest. An auto-encoder is then used to learn a more compact representation of the data, reducing it to a small number of latent attributes. Next, a hand-tailored semi-supervised approach, combining a clustering method (here based on the DBSCAN algorithm) with Bayesian classification, is applied to determine the overpressure risk level (no risk, mild risk, or high risk). The approach described in this study is compared to direct end-to-end models and significantly outperforms them, with an error of around 25% on a blind-well prediction. The resulting overpressure probability maps can later be used to optimize drilling processes and reduce drilling hazards.
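The following scikit-learn sketch mimics the reduce-cluster-classify shape of this workflow on synthetic data, with PCA standing in for the auto-encoder; all hyperparameters, feature counts, and labels are assumptions rather than the authors' implementation.

```python
# Illustrative reduce -> cluster -> classify sketch in the spirit of the text.
# PCA stands in for the auto-encoder; the attribute matrix, labels and
# hyperparameters are synthetic assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import DBSCAN
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))            # 500 map locations x 12 seismic attributes
y = rng.integers(0, 3, size=500)          # 0 = no risk, 1 = mild, 2 = high (from offset wells)

# 1) Compress the attributes into a few latent features.
Z = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(X))

# 2) Group locations with similar latent signatures (noise points get label -1).
cluster_id = DBSCAN(eps=0.8, min_samples=10).fit_predict(Z)

# 3) Classify risk from the latent features plus the cluster assignment.
features = np.column_stack([Z, cluster_id])
clf = GaussianNB().fit(features, y)
print(clf.predict(features[:5]))
```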


Electronics, 2021, Vol 10 (24), pp. 3106
Author(s): Tingting Han, Yuankai Qi, Suguo Zhu

Video compact representation aims to obtain a representation that reflects the kernel modes of video content and concisely describes the video. As most information in complex videos is either noisy or redundant, some researchers have instead focused on long-term video semantics. However, recent video compact representation methods rely heavily on the segmentation accuracy of video semantics. In this paper, we propose a novel framework to address these challenges. Specifically, we design a novel continuous video semantic embedding model to learn the actual distribution of video words. First, an embedding model based on the continuous bag-of-words method is proposed to learn the video embeddings, integrated with a well-designed discriminative negative sampling approach, which emphasizes the convincing clips in the embedding while weakening the influence of the confusing ones. Second, an aggregated distribution pooling method is proposed to capture the semantic distribution of kernel modes in videos. Finally, our trained model can generate compact video representations by direct inference, which gives it better generalization ability than previous methods. We performed extensive experiments on event detection and the mining of representative event parts. Experiments on the TRECVID MED11 and CCV datasets demonstrate the effectiveness of our method: it captures the semantic distribution of kernel modes in videos and shows strong potential to discover and better describe complex video patterns.
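For intuition, here is a minimal PyTorch sketch of a continuous bag-of-words embedding with negative sampling over a toy sequence of discrete "video words"; the uniform negative sampler is only a placeholder for the discriminative sampling described above, and all sizes are invented.

```python
# CBOW-style embedding with negative sampling over a toy sequence of discrete
# "video words" (clip indices). Vocabulary size, window and the uniform
# negative sampler are assumptions standing in for the paper's discriminative
# negative sampling.
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab, dim, window, n_neg = 1000, 64, 2, 5
emb_in = nn.Embedding(vocab, dim)    # context ("input") embeddings
emb_out = nn.Embedding(vocab, dim)   # target ("output") embeddings
opt = torch.optim.Adam(list(emb_in.parameters()) + list(emb_out.parameters()), lr=1e-3)

seq = torch.randint(0, vocab, (200,))          # a toy sequence of video words

for t in range(window, len(seq) - window):
    ctx = torch.cat([seq[t - window:t], seq[t + 1:t + window + 1]])
    target = seq[t]
    neg = torch.randint(0, vocab, (n_neg,))    # uniform negatives (placeholder)

    h = emb_in(ctx).mean(dim=0)                # CBOW: average the context embeddings
    pos_score = emb_out(target) @ h
    neg_score = emb_out(neg) @ h
    loss = -F.logsigmoid(pos_score) - F.logsigmoid(-neg_score).sum()

    opt.zero_grad()
    loss.backward()
    opt.step()
```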


Author(s): Guillermo M. Álamo, Luis A. Padrón, Juan J. Aznárez, Orlando Maeso

Abstract: This paper presents a three-dimensional linear numerical model for the dynamic and seismic analysis of pile-supported structures that simultaneously represents the structures, pile foundations, soil profile and incident seismic waves and that, therefore, directly takes structure–pile–soil interaction into account. The use of advanced Green’s functions to model the dynamic behaviour of layered soils not only leads to a very compact representation of the problem and simplifies the preparation of the data files (no meshes are needed for the soil), but also allows arbitrarily complex soil profiles and problems with large numbers of elements to be handled. The seismic excitation is implemented as incident planar body waves (P or S) propagating through the layered soil from an infinitely distant source and impinging on the site with any generic angle of incidence. The response of the system is evaluated in the frequency domain, and seismic results in the time domain are then obtained with the frequency-domain method through the use of the Fast Fourier Transform. An application example using a pile-supported structure is presented to illustrate the capabilities of the model. Piles and columns are modelled with Timoshenko beam elements, and slabs, pile caps and shear walls are modelled with shell finite elements, so that the real flexibility of all elements is rigorously taken into account. This example is also used to explore the influence of the soil profile and the angle of incidence on different variables of interest in earthquake engineering.
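The time-domain step mentioned above can be illustrated with a short NumPy sketch: multiply the FFT of the input motion by a frequency response function and invert the transform. The single-degree-of-freedom transfer function below is only a stand-in for the frequency response that the coupled structure–pile–soil model would produce.

```python
# Frequency-domain method sketch: compute the response as H(f) * U(f) in the
# frequency domain and return to the time domain with the inverse FFT.
# The SDOF transfer function H is a stand-in for the model's actual response.
import numpy as np

dt, n = 0.01, 2048
t = np.arange(n) * dt
u = np.exp(-((t - 2.0) ** 2) / 0.1) * np.sin(2 * np.pi * 3.0 * t)   # input motion (toy pulse)

freqs = np.fft.rfftfreq(n, dt)
omega = 2 * np.pi * freqs

# Stand-in frequency response: SDOF oscillator, f0 = 2 Hz, 5% damping.
w0, zeta = 2 * np.pi * 2.0, 0.05
H = w0**2 / (w0**2 - omega**2 + 2j * zeta * w0 * omega)

U = np.fft.rfft(u)                  # input motion in the frequency domain
x = np.fft.irfft(H * U, n=n)        # time-domain response via the inverse FFT
print(x.max())
```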


Author(s): Wei Gao, Linjie Zhou, Lvfang Tao

View synthesis (VS) for light field images is a very time-consuming task due to the great number of pixels involved and the intensive computations required, which may prevent its use in practical three-dimensional real-time systems. In this article, we propose an acceleration approach for deep learning-based light field view synthesis, which significantly reduces computation by using a compact-resolution (CR) representation and super-resolution (SR) techniques, as well as light-weight neural networks. The proposed architecture has three cascaded neural networks: a CR network that generates the compact representation of the original input views, a VS network that synthesizes new views from the down-scaled compact views, and an SR network that reconstructs high-quality views at full resolution. All three networks are jointly trained with the integrated losses of the CR, VS, and SR networks. Moreover, exploiting the redundancy of deep neural networks, we apply an efficient light-weight strategy that prunes filters for simplification and inference acceleration. Experimental results demonstrate that the proposed method greatly reduces processing time and is much more computationally efficient while delivering competitive image quality.
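As a generic illustration of filter pruning for light-weighting (the abstract does not spell out the exact criterion), the following PyTorch sketch keeps the convolution filters with the largest L1 norms and rebuilds a thinner layer.

```python
# L1-norm filter pruning sketch: keep the filters with the largest L1 norms and
# rebuild a thinner conv layer. A generic illustration only; downstream layers
# would also need their input channels adjusted to match.
import torch
import torch.nn as nn

def prune_conv_filters(conv: nn.Conv2d, keep_ratio: float = 0.5) -> nn.Conv2d:
    w = conv.weight.data                                   # (out_ch, in_ch, kH, kW)
    n_keep = max(1, int(conv.out_channels * keep_ratio))
    scores = w.abs().sum(dim=(1, 2, 3))                    # L1 norm per output filter
    keep = torch.argsort(scores, descending=True)[:n_keep]

    pruned = nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                       stride=conv.stride, padding=conv.padding,
                       bias=conv.bias is not None)
    pruned.weight.data = w[keep].clone()
    if conv.bias is not None:
        pruned.bias.data = conv.bias.data[keep].clone()
    return pruned

conv = nn.Conv2d(16, 32, kernel_size=3, padding=1)
thin = prune_conv_filters(conv, keep_ratio=0.25)           # 32 -> 8 filters
print(thin.weight.shape)                                   # torch.Size([8, 16, 3, 3])
```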


Author(s): Cosimo Aliani, Eva Rossi, Piergiorgio Francia, Leonardo Bocchi

Abstract: Objective: Vascular ageing is associated with several alterations, including arterial stiffness and endothelial dysfunction. Such alterations represent an independent factor in the development of cardiovascular disease. In our previous works we demonstrated that the alterations occurring in the vascular system are reflected in the shape of the peripheral pulse waveform; thus, a model that describes the waveform as a sum of Gaussian curves provides a set of parameters that successfully discriminates between under (<= 35 years old) and over (> 35 years old) subjects. In the present work, we explore the feasibility of a new decomposition model, based on a sum of exponential pulses, applied to the same problem. Approach: The first processing step extracts each pulsation from the input signal and removes the long-term trend using a cubic spline with nodes between consecutive pulsations. A least-squares fitting algorithm then determines the set of model parameters that best approximates each single pulse. The vector of model parameters gives a compact representation of the pulse waveform and constitutes the basis for the classification step. Each subject is associated with his/her "representative" pulse waveform, obtained by averaging the parameter vectors of all pulses. Finally, a Bayesian classifier is designed to discriminate the waveforms of under and over subjects, using the leave-one-subject-out validation method. Main results: The fitting procedure reaches a rate of 96% in under subjects and 95% in over subjects, and the Bayesian classifier correctly classifies 91% of the subjects with a specificity of 94% and a sensitivity of 84%. Significance: This study shows good vascular age estimation accuracy with a multi-exponential model, which may help predict cardiovascular disease.
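A minimal SciPy sketch of this processing chain, under invented signal and model assumptions: detrend a toy pulse train with a cubic spline through samples at the pulse onsets, then fit a sum of two exponential pulses to one beat by least squares. The pulse shape and every parameter value are illustrative, not the paper's model.

```python
# Sketch: spline detrending between pulse onsets, then least-squares fitting of
# a sum-of-exponential-pulses model to one beat. The two-pulse model and all
# parameter values are illustrative assumptions.
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import curve_fit

def exp_pulse(t, a, t0, tau_r, tau_d):
    """One asymmetric pulse: rises with tau_r and decays with tau_d after onset t0."""
    s = np.clip(t - t0, 0, None)
    return a * (1 - np.exp(-s / tau_r)) * np.exp(-s / tau_d)

def model(t, a1, t1, r1, d1, a2, t2, r2, d2):
    return exp_pulse(t, a1, t1, r1, d1) + exp_pulse(t, a2, t2, r2, d2)

fs = 100.0
t = np.arange(0, 3.0, 1 / fs)                        # three 1-s beats at 100 Hz
clean = model(t % 1.0, 1.0, 0.05, 0.03, 0.25, 0.4, 0.35, 0.05, 0.30)
raw = clean + 0.2 * t + 0.01 * np.random.randn(t.size)   # slow drift + noise

# Long-term trend: cubic spline through samples taken at the pulse boundaries.
onsets = np.array([0, 100, 200, 299])                # indices of consecutive pulse onsets
trend = CubicSpline(t[onsets], raw[onsets])(t)
detrended = raw - trend

# Fit the model to the first detrended beat; the parameter vector is the
# compact representation of that pulse.
tb, yb = t[:100], detrended[:100]
p0 = [1.0, 0.05, 0.05, 0.2, 0.5, 0.3, 0.05, 0.2]     # initial guess
params, _ = curve_fit(model, tb, yb, p0=p0, maxfev=20000)
print(np.round(params, 3))
```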

