A comparison of vector symbolic architectures

Author(s):  
Kenny Schlegel ◽  
Peer Neubert ◽  
Peter Protzel

Abstract: Vector Symbolic Architectures (VSAs) combine a high-dimensional vector space with a set of carefully designed operators in order to perform symbolic computations with large numerical vectors. Major goals are to exploit their representational power and their ability to deal with fuzziness and ambiguity. Over the past years, several VSA implementations have been proposed. The available implementations differ in the underlying vector space and in the particular implementations of the VSA operators. This paper provides an overview of eleven available VSA implementations and discusses their commonalities and differences in the underlying vector space and operators. We create a taxonomy of available binding operations and show an important ramification for non-self-inverse binding operations using an example from analogical reasoning. A main contribution is the experimental comparison of the available implementations in order to evaluate (1) the capacity of bundles, (2) the approximation quality of non-exact unbinding operations, (3) the influence of combining binding and bundling operations on the query-answering performance, and (4) the performance on two example applications: visual place recognition and language recognition. We expect this comparison and systematization to be relevant for the development of VSAs, and to support the selection of an appropriate VSA for a particular task. The implementations are available.
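
To make the binding/bundling terminology concrete, here is a minimal sketch of one VSA family (the Multiply-Add-Permute style with bipolar vectors), assuming NumPy; the dimensionality and the self-inverse multiplicative binding are illustrative choices, not a summary of the eleven implementations compared in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # high-dimensional vector space

def random_hv():
    """Random bipolar hypervector: the atomic symbol of this VSA."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    """Element-wise multiplication: a self-inverse binding operator."""
    return a * b

def bundle(*vs):
    """Superposition by majority vote; stores several vectors in one."""
    return np.sign(np.sum(vs, axis=0))

def similarity(a, b):
    """Normalized dot product: ~0 for unrelated vectors, 1 for equal ones."""
    return a @ b / D

role, filler = random_hv(), random_hv()
pair = bind(role, filler)
recovered = bind(pair, role)          # self-inverse: unbinding is just rebinding
print(similarity(recovered, filler))  # -> 1.0, exact recovery

a, b, c = random_hv(), random_hv(), random_hv()
memory = bundle(a, b, c)
print(similarity(memory, a))          # ~0.5: each bundled vector stays detectable
```

With a non-self-inverse binding (e.g. circular convolution), unbinding instead requires an approximate inverse; that distinction is exactly what the paper's taxonomy and its experiment (2) address.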

2021 ◽  
Vol 36 ◽  
pp. 01014
Author(s):  
Fung Yuen Chin ◽  
Yong Kheng Goh

Feature selection is the process of selecting a group of relevant features, by removing unnecessary ones, for use in constructing a predictive model. However, high-dimensional data increases the difficulty of feature selection due to the curse of dimensionality. In past research, the performance of a predictive model has always been compared against existing results. When modelling a new dataset, the current practice is to benchmark against the dataset with all features included, redundant features and noise alike. Here we propose a new optimal baseline for the dataset by means of features ranked by their mutual information score. The quality of a dataset depends on the information it contains: the more information the dataset contains, the better the performance of the predictive model. The number of features needed to achieve this new optimal baseline is obtained at the same time, and serves as a guideline for the number of features needed in a feature selection method. We also show experimental results demonstrating that the proposed method provides a better baseline with fewer features than the existing benchmark using all the features.
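
A hedged sketch of the ranking step this abstract describes, using scikit-learn's mutual information estimator; the dataset and the choice of top-k are illustrative assumptions, not the authors' experimental setup:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import mutual_info_classif

X, y = load_breast_cancer(return_X_y=True)

mi = mutual_info_classif(X, y, random_state=0)  # MI score per feature
order = np.argsort(mi)[::-1]                    # rank features, most informative first

k = 10                                          # candidate baseline size (assumed)
X_baseline = X[:, order[:k]]                    # top-k ranked features
print([int(i) for i in order[:k]])
```

In the proposed method, k would be chosen as the smallest number of ranked features whose model performance matches or exceeds the all-features benchmark.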


Author(s):  
D. Zawieska ◽  
J. Markiewicz ◽  
M. Łuba

Abstract: In a community, historical objects bear witness to the past. This creates an obligation to preserve and reconstruct them for future generations. Photogrammetric methods have been applied for these purposes for many years. In the development of inventory documentation, the key aspects relate to the selection of appropriate measuring methods for particular objects and the creation of appropriate working conditions. At present, digital measuring techniques allow the development of 3D photogrammetric documentation, which is particularly valuable both for conservators of historical objects and for creating virtual museums. Particular attention should be paid to the use of macro photography for that purpose, which allows small fragments of historical details to be recreated. The objective of this paper is to present the possible use of macro photography for the inventory of historical patterns engraved in the brick walls of one of the cellars of the Royal Castle in Warsaw (Poland); they are called engravings or house marks. The cellar walls were made of bricks (20 × 10 cm) on stone foundations, where a prison was located in the 17th century. Prisoners left drawings of signs and crests. The bricks are damaged and some are moss-grown, so many engravings are hardly visible, and their depths vary between 3 and 5 mm. A Canon 5D Mark II camera with a 50 mm macro lens was used to inventory the engravings, together with a shadow-free flash mounted on the lens and a special frame with bolts serving as the photogrammetric control network. To ensure the high quality of the 3D model, a network of photographs was acquired from two different distances; they were processed with SfM/MVS algorithms implemented in Agisoft PhotoScan software. The paper discusses the impact of the selection of control points on the accuracy of the orientation process, the impact of point cloud density on correct projection of the digital surface, the influence of DSM resolution on the detail of projected shapes, and the selection of orthorectification and mosaicking parameters on the accuracy of orthoimage generation.
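
For readers unfamiliar with the SfM core that tools such as Agisoft PhotoScan automate, here is a minimal two-view sketch of the feature-matching and pose-recovery step using OpenCV; the file names and camera intrinsics are illustrative assumptions, not the authors' calibration or pipeline:

```python
import cv2
import numpy as np

# Hypothetical macro photographs of the same wall fragment from two distances
img1 = cv2.imread("wall_near.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("wall_far.jpg", cv2.IMREAD_GRAYSCALE)

# Detect and describe local features in both images
sift = cv2.SIFT_create()
k1, d1 = sift.detectAndCompute(img1, None)
k2, d2 = sift.detectAndCompute(img2, None)

# Cross-checked brute-force matching of descriptors
matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(d1, d2)
pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
pts2 = np.float32([k2[m.trainIdx].pt for m in matches])

# Assumed camera intrinsics (focal length and principal point in pixels)
K = np.array([[3000, 0, 2816], [0, 3000, 1880], [0, 0, 1]], dtype=float)

# Robustly estimate the essential matrix and recover the relative camera pose,
# which is the geometric basis for triangulating the dense point cloud
E, inlier_mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
```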


2005 ◽  
Vol 20 (3) ◽  
pp. 215-240 ◽  
Author(s):  
RAMON LOPEZ DE MANTARAS ◽  
DAVID MCSHERRY ◽  
DEREK BRIDGE ◽  
DAVID LEAKE ◽  
BARRY SMYTH ◽  
...  

Case-based reasoning (CBR) is an approach to problem solving that emphasizes the role of prior experience during future problem solving (i.e., new problems are solved by reusing and if necessary adapting the solutions to similar problems that were solved in the past). It has enjoyed considerable success in a wide variety of problem solving tasks and domains. Following a brief overview of the traditional problem-solving cycle in CBR, we examine the cognitive science foundations of CBR and its relationship to analogical reasoning. We then review a representative selection of CBR research in the past few decades on aspects of retrieval, reuse, revision and retention.
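
As a deliberately minimal illustration of the retrieve step of that cycle, a nearest-neighbour sketch; the case representation and similarity measure are illustrative assumptions, not the formalism of any system covered in the review:

```python
from dataclasses import dataclass
import math

@dataclass
class Case:
    problem: tuple[float, ...]  # feature vector describing the problem
    solution: str               # the solution that worked in the past

def retrieve(case_base: list[Case], query: tuple[float, ...]) -> Case:
    """Return the stored case whose problem description is closest to the query."""
    return min(case_base, key=lambda c: math.dist(c.problem, query))

case_base = [Case((1.0, 0.0), "solution A"), Case((0.0, 1.0), "solution B")]
best = retrieve(case_base, (0.9, 0.2))
print(best.solution)  # reuse (and, if necessary, adapt) the retrieved solution
```

The remaining steps of the cycle (reuse, revision, retention) would then adapt the retrieved solution, verify it against the new problem, and store the resulting case back into the case base.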


2014 ◽  
Vol 24 (1) ◽  
pp. 123-131
Author(s):  
Simon Gangl ◽  
Domen Mongus ◽  
Borut Žalik

Abstract: Systems based on principal component analysis have developed from exploratory data analysis in the past to current data processing applications which encode and decode vectors of data using a changing projection space (eigenspace). The linear systems that need to be solved to obtain a constantly updated eigenspace have grown significantly in dimension during this evolution. The basic scheme used for updating the eigenspace, however, has remained essentially the same: (re)computing the eigenspace whenever the error exceeds a predefined threshold. In this paper we propose a computationally efficient eigenspace updating scheme which specifically supports high-dimensional systems from any domain. The key principle is a prior selection of the vectors used to update the eigenspace, in combination with an optimized eigenspace computation. The presented theoretical analysis proves the superior reconstruction capability of the introduced scheme, and further provides an estimate of the achievable compression ratios.
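
A hedged sketch of the general encode/decode-with-updated-eigenspace idea using scikit-learn's IncrementalPCA; this shows the standard incremental scheme, not the prior-selection scheme the paper proposes, and the data stream is synthetic:

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA

rng = np.random.default_rng(0)
ipca = IncrementalPCA(n_components=8)

for _ in range(10):                    # stream of high-dimensional batches
    batch = rng.normal(size=(64, 1024))
    ipca.partial_fit(batch)            # update the eigenspace with new vectors

codes = ipca.transform(batch)          # encode: project onto the eigenspace
recon = ipca.inverse_transform(codes)  # decode: reconstruct from the projection
print(np.mean((batch - recon) ** 2))   # reconstruction error
```

The paper's contribution can be read against this baseline: instead of folding every incoming vector into the update, it selects beforehand which vectors are worth updating the eigenspace with.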


2005 ◽  
Vol 51 (9) ◽  
pp. 81-90 ◽  
Author(s):  
P. Cooper

The paper reviews the development of vertical flow (VF) reed beds/constructed wetlands over the past 20 years. The performance of VF systems (and their use within hybrid systems) is analysed by reference to a number of brief case studies. The oxygen transfer rate (OTR) achieved is absolutely critical to the sizing of the systems. The author reviews the reported OTRs and comments on the existing design equations proposed for calculating the area of beds. The 1st generation of VF systems used a set of parallel beds that were dosed one at a time in rotation and then rested for a period of days, because there was considerable concern (based on early experience) that they would become clogged. In the past 10 years a number of new designs of 2nd generation VF beds have been built which make use of a single bed and hence operate without any resting periods. The hydraulic loading rate and the selection of the bed media, which are critical to the design and hence the successful operation of these 2nd generation compact VF beds, are described. It is now possible to produce very high-quality effluent from VF beds alone, sized at 2 m²/pe, when treating domestic sewage.
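
As a worked instance of the sizing figure quoted above, a minimal sketch; the 2 m²/pe specific area is taken from the abstract, while the population-equivalent value is an assumed example:

```python
SPECIFIC_AREA_M2_PER_PE = 2.0  # compact 2nd generation VF beds, from the review

def vf_bed_area(population_equivalents: float) -> float:
    """Required plan area in m² for a compact VF bed treating domestic sewage."""
    return population_equivalents * SPECIFIC_AREA_M2_PER_PE

# e.g. a hypothetical village of 500 pe needs about 1000 m² of bed area
print(vf_bed_area(500))  # -> 1000.0
```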


1928 ◽  
Author(s):  
Cleora Eleanor Johnson

The laundering of clothing and household articles is an ever-present necessity for the homemaker. In the past, practically all of the laundry work was done in the home, and generally by the housewife herself. The time came when this task was gradually transferred to a laundress or to a commercial laundry. Paralleling the trend toward commercial laundries, the invention of labor-saving devices, such as power washers, irons, and mangles, has again made laundering an important home task.


2018 ◽  
Vol 64 (1) ◽  
pp. 24-32
Author(s):  
Igor Štefančík ◽  
Michal Bošeľa ◽  
Rudolf Petráš

Abstract: Value production is one of the most important pieces of information for comparing different management strategies in forestry. Although the value production of forest stands is affected by various factors (stem and assortment quality, stem dimension, stem injury, price of assortments), thinning can be considered one of the most important. This paper aims at the evaluation of qualitative and value production in homogeneous beech stands which were managed by two different thinning types for a period of 45 to 55 years: (i) heavy thinning from below (C grade according to the German forest research institutes, released in 1902) and (ii) Štefančík's free-crown thinning. The third variant was a control: (iii) a subplot with no interventions. Silvicultural quality characteristics of the lower half of the stem were assessed using a 4-class scale (A – the best quality, D – the worst quality). Assortment structure (commercial quality) was estimated for each stem by an assortment model developed in the past. Nearly 3,000 individual trees aged from 83 to 105 years, from 23 subplots established across the territory of Slovakia, were assessed. The highest volume of stems of the best silvicultural quality (class A) was reached in forests where Štefančík's free-crown thinning was applied (57–85%), while the lowest (22–56%) was found on subplots with no management. The proportion of the two best commercial quality assortments (I + II) was highest in forests managed by heavy thinning from below (21–29%) and lowest when no treatment was applied (7–19%). The highest value production (expressed in € ha−1) was reached in the forests treated by free-crown thinning. The results suggest an overall positive impact of thinning on the increase of value production in beech forests. In particular, free-crown thinning focusing on the selection of the best-quality trees should be preferred, as it leads, besides sufficient value production, to a higher vertical differentiation of the beech forests.


Author(s):  
Vladimir Sergeevich Gorban

This article explores the problem of developing a methodological framework for source criticism in the area of the history of political and legal doctrines (history of philosophy of law). The deficit of related developments in both national and foreign legal literature has a highly negative effect on the quality of selection of the source research material and on the formulation of valid scientific conclusions that allow historical-philosophical and problematic-theoretical reconstructions of legal and political ideas of the past and the present to be conducted properly. The scientific novelty of this work consists in the substantiation of the scientific importance and possibilities of practical application of such a relevant vector of legal methodology as the methodology of source criticism in the area of philosophy of law (history of political and legal doctrines), which is interpreted not only as a set of instrumental cognitive acts, but also as a combination of principles and techniques for ensuring the veracity of the content, concept, and purpose of legal and political ideas of the past and the present.


2009 ◽  
Vol 25 (02) ◽  
pp. 124-133 ◽  
Author(s):  
Jonathan Plumb ◽  
Bruce Campbell ◽  
Georgios Lyratzopoulos

Objectives: Technology assessment systems for interventional procedures (including surgical operations, minimally invasive procedures, and others) have lagged behind those for pharmaceutical treatments. Such systems have been introduced in some countries during the past decade amid debate about how they should be organized, but there is no collated information about where they exist or how they work. This study was designed to provide hitherto unavailable information about the existence, organization, methods, and outputs of systems aimed at influencing the use of interventional procedures in different countries. Methods: Data were gathered from a questionnaire survey of key informers associated with healthcare technology assessment (HTA) organizations in different countries. Results: Responses were received from key informers working for twenty-eight HTA organizations in twenty-five countries (response rate 83 percent). Information about a national system for assessing interventional procedures was obtained for fifteen countries. There was substantial variability in the type and funding of these organizations, the systems used for the selection of procedures, the types and sources of evidence used, the personnel involved in the appraisal of the evidence, the arrangements for consultation on the draft assessment, the format of assessment recommendations, the status of the guidance, and the use of guidance from other countries. Conclusion: Guidance on interventional procedures is produced variably in different countries, and not at all in some. Greater international collaboration in the assessment of new interventional procedures could help to optimize the efficiency of existing systems as well as the quality of the assessments, by capitalizing on the outputs from scarce (international) resources and expertise.


2014 ◽  
Vol 626 ◽  
pp. 58-64 ◽  
Author(s):  
G.R. Rajesh ◽  
A. Shajin Nargunam

This paper presents an algorithm for hiding information in raw video streams using steganographic techniques based on the discrete wavelet transform. While steganography was mostly applied to still images in the past, it has recently become very popular for video streams. When steganographic methods are applied to digital video streams, the selection of the target pixels used to store the secret data is especially crucial for an effective and successful embedding process; if the pixels are not selected carefully, undesired spatial and temporal perception problems occur in the stego-video. Typically, an irrecoverable steganography algorithm is one that makes it hard for malicious third parties to discover how it works and how to recover the secret data from the carrier file. In this paper, a new embedding algorithm is proposed to hide secret data in moving videos. The 2D-DCT of the video is taken and the secret message is embedded. Performance measures evaluating the quality of the video after data hiding are reported and show good results.
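
A hedged sketch of one block-DCT embedding step on a single frame, using OpenCV; the coefficient position and the quantization rule are illustrative assumptions, not the algorithm proposed in the paper:

```python
import cv2
import numpy as np

frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical video frame
block = np.float32(frame[:8, :8])                      # one 8x8 pixel block

def embed_bit(block: np.ndarray, bit: int, q: float = 16.0) -> np.ndarray:
    """Hide one bit in a mid-frequency DCT coefficient via quantization parity."""
    coeffs = cv2.dct(block)
    c = coeffs[4, 3]  # mid-frequency position (assumed choice)
    # Snap the coefficient to a quantization cell whose parity encodes the bit
    coeffs[4, 3] = (2 * np.floor(c / (2 * q)) + bit) * q + q / 2
    return cv2.idct(coeffs)

def extract_bit(block: np.ndarray, q: float = 16.0) -> int:
    """Recover the hidden bit from the parity of the quantized coefficient."""
    coeffs = cv2.dct(np.float32(block))
    return int(np.floor(coeffs[4, 3] / q)) % 2

stego = embed_bit(block, 1)
print(extract_bit(stego))  # -> 1
```

Modifying a mid-frequency coefficient keeps the perceptual change small, which speaks to the same spatial and temporal quality concerns the abstract raises about careless target selection.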

