AVBH: Asymmetric Learning to Hash with Variable Bit Encoding

2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Yanduo Ren ◽  
Jiangbo Qian ◽  
Yihong Dong ◽  
Yu Xin ◽  
Huahui Chen

Nearest neighbour search (NNS) is at the core of large-scale data retrieval. Learning to hash is an effective way to address this problem by representing high-dimensional data as compact binary codes. However, existing learning-to-hash methods need long bit encodings to ensure query accuracy, and long encodings bring a large storage cost, which severely restricts their use in big-data applications. An asymmetric learning to hash with variable bit encoding algorithm (AVBH) is proposed to solve this problem. AVBH uses two types of hash mapping functions to encode the dataset and the query set into bit strings of different lengths. For the dataset, the frequencies of the hash codes produced by random Fourier feature encoding are analysed statistically: hash codes with high frequency are compressed into longer code representations, and hash codes with low frequency are compressed into shorter ones. Each query point is quantized to a long-bit hash code and compared with the concatenated data-point codes of the same length. Experiments on public datasets show that the proposed algorithm effectively reduces storage cost and improves query accuracy.
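To make the frequency-driven idea concrete, here is a minimal Python sketch of variable-bit-length assignment driven by hash-code frequency. It is an illustration only, not the AVBH algorithm: the random-Fourier-feature hash, the crude two-level length scheme, and all sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rff_hash(X, W, b):
    """Binary codes from random Fourier features: sign of cos(XW + b)."""
    return (np.cos(X @ W + b) > 0).astype(np.uint8)

# Toy dataset: n points in d dimensions, hashed to k bits.
n, d, k = 1000, 32, 8
X = rng.standard_normal((n, d))
W = rng.standard_normal((d, k))
b = rng.uniform(0, 2 * np.pi, k)

codes = rff_hash(X, W, b)

# Count how often each k-bit code occurs in the dataset.
keys = codes @ (1 << np.arange(k))          # pack bit rows into integers
unique, counts = np.unique(keys, return_counts=True)
freq = dict(zip(unique.tolist(), counts.tolist()))

# Variable-length assignment: following the abstract, frequent codes get
# a LONGER representation and rare codes a SHORTER one. This two-level
# split is a stand-in for the paper's compression scheme.
median = np.median(counts)
code_len = {key: (k if freq[key] >= median else k // 2) for key in freq}
```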

Author(s):  
Agung Riyadi

One way to connect an Android application to a database is to use Volley and a REST API. With a REST API, the Android application does not connect directly to the database; instead, an API acts as an intermediary. In Android development, Volley has a known weakness when requesting large amounts of data, so an evaluation is needed to test its capabilities. This research tested android-volley retrieving data through a REST API, in the form of an application that retrieves medicinal-plant data. The test results show that Volley produces an error when the back button is pressed, that is, when another process is started while a previous Volley request has not yet completed. This error occurred on several Android versions, such as Lollipop and Marshmallow, and on several device brands. Therefore, when using android-volley, developers need to check the request queue for pending user requests: if a Volley data retrieval is still in progress, the download should be cancelled so that no Application Not Responding (ANR) error occurs.

Keywords: Android, Volley, WP REST API, ANR Error
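The fix recommended in the last sentence — cancel any in-flight request before leaving the screen — is the pattern Volley supports through RequestQueue.cancelAll(tag). Below is a hedged, language-agnostic sketch of the same idea, modelled with Python asyncio tasks tagged by screen; all names and timings are illustrative, and a real Android app would use Volley's own cancellation API instead.

```python
import asyncio

# Tag each request with the screen that issued it, and cancel every
# pending request for that tag in the back-button handler.
pending: dict[str, set[asyncio.Task]] = {}

async def fetch(url: str) -> None:
    await asyncio.sleep(1.0)          # stands in for the network call
    print(f"loaded {url}")

def enqueue(tag: str, url: str) -> None:
    task = asyncio.ensure_future(fetch(url))
    pending.setdefault(tag, set()).add(task)
    task.add_done_callback(lambda t: pending[tag].discard(t))

def on_back_pressed(tag: str) -> None:
    """Cancel every request still pending for the screen being left."""
    for task in list(pending.get(tag, ())):
        task.cancel()

async def main() -> None:
    enqueue("plants", "https://example.org/wp-json/wp/v2/plants")
    await asyncio.sleep(0.1)
    on_back_pressed("plants")         # user leaves before the load ends

asyncio.run(main())
```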


Information ◽  
2021 ◽  
Vol 12 (7) ◽  
pp. 285
Author(s):  
Wenjing Yang ◽  
Liejun Wang ◽  
Shuli Cheng ◽  
Yongming Li ◽  
Anyu Du

Recently, deep learning to hash has been applied extensively to image retrieval, owing to its low storage cost and fast query speed. However, the semantic features that existing hashing methods extract with a convolutional neural network (CNN) are insufficient and imbalanced: the extracted features include no contextual information and lack relevance among themselves. Furthermore, relaxing the hash code during training leads to an inevitable quantization error. To solve these problems, this paper proposes deep hashing with improved dual attention for image retrieval (DHIDA), whose main contributions are the following: (1) an improved dual attention mechanism (IDA), built on a pre-trained ResNet18 module and consisting of a position attention module and a channel attention module, is introduced to extract the feature information of the image; (2) when computing the spatial and channel attention matrices, the average and maximum values of the columns of the feature-map matrix are integrated in order to promote the feature representation ability and fully exploit the features of each position; and (3) to reduce quantization error, a new piecewise function is designed to directly guide the discrete binary code. Experiments on CIFAR-10, NUS-WIDE and ImageNet-100 show that the DHIDA algorithm achieves better performance.
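The abstract does not give the piecewise function of item (3) itself. The following is a hedged Python sketch of one plausible piecewise surrogate for the sign function; the linear region and the saturation threshold t are assumptions, not DHIDA's exact design.

```python
import numpy as np

# Illustrative piecewise surrogate for sign(z): linear near zero so the
# network still receives useful gradients, saturated to +/-1 elsewhere
# so the relaxed output stays close to a discrete binary code.
def piecewise_binarize(z: np.ndarray, t: float = 1.0) -> np.ndarray:
    return np.where(z > t, 1.0, np.where(z < -t, -1.0, z))

z = np.array([-2.3, -0.4, 0.1, 0.9, 1.7])
print(piecewise_binarize(z))   # [-1.  -0.4  0.1  0.9  1. ]
```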


2007 ◽  
Vol 38 (7) ◽  
pp. 11-17
Author(s):  
Ronald M. Aarts

Conventionally, the ultimate goal in loudspeaker design has been to obtain a flat frequency response over a specified frequency range. This can be achieved by carefully selecting the main loudspeaker parameters, such as the enclosure volume, the cone diameter, the moving mass and the crucial "force factor". For loudspeakers in small cabinets, the results of this design procedure turn out to be quite inefficient, especially at low frequencies. This paper describes a new solution to this problem: the combination of a highly non-linear preprocessing of the audio signal with a so-called low-force-factor loudspeaker. This combination yields a strongly increased efficiency, at least over a limited frequency range, at the cost of a somewhat altered sound quality. An analytically tractable optimality criterion has been defined and verified by the design of an experimental loudspeaker, which has a much higher efficiency and a higher sensitivity than current low-frequency loudspeakers while requiring a much smaller cabinet.


2021 ◽  
Vol 04 (1) ◽  
pp. 54-54
Author(s):  
V. R. Nigmatullin ◽  
I. R. Nigmatullin ◽  
R. G. Nigmatullin ◽  
A.M. Migranov ◽  
...  

Currently, to increase the efficiency of industrial production, high-performance and expensive technological equipment is used increasingly often, and its weakest link, from the point of view of efficiency and reliability, is the components and parts of heavily loaded tribo-couplings, which operate both at significantly different temperatures (even under comparatively mild conditions the temperature difference can reach 100-120 degrees) and under varied climatic conditions (high humidity and the presence of abrasives and other chemical agents in the atmosphere). Analysis of the frequency of failures of friction units shows that the cost of their restoration reaches 9-20 percent of the cost of all equipment, not counting the significant loss of income (profit) that the enterprise incurs from downtime. The solution of this problem is based on studying the wear rate of friction units through the wear products that accumulate in working oils, cooling lubricants, and greases. A digital equipment monitoring system (DSMT) has been developed and implemented, which includes dynamic recording of the quantity of wear products and of the oil temperature by original modern recording devices, followed by the processing and use of these records. The system also includes methods for finding, in large data sets, information that is useful and necessary in theoretical and practical terms for similar equipment controlled by a digital monitoring system. The advantages of the DSMT are the ability to predict the reliability of the equipment, to reduce production risks, and to significantly reduce inefficient costs.


Author(s):  
Pradeep Lall ◽  
Tony Thomas

Electronics in automotive underhood environments are used for a number of safety-critical functions. Reliable continued operation of electronic safety systems without catastrophic failure is important for safe operation of the vehicle. There is a need for prognostication methods that can be integrated with on-board sensors to assess accrued damage and impending failure. In this paper, leadfree electronic assemblies consisting of daisy-chained parts have been subjected to high-temperature vibration at 5g and 155°C. Spectrograms have been used to identify the emergence of new low-frequency components as damage progresses in the assemblies. Principal component analysis (PCA) has been used to reduce the dimensionality of the large data sets and identify patterns without losing the features that signify damage progression and impending failure. The variance of the principal components of the instantaneous frequency has been shown to increase during the initial damage progression, attain a maximum, and then decrease prior to failure. This characteristic behaviour of the instantaneous frequency over the period of vibration can be used as a health-monitoring feature for identifying impending failures in automotive electronics. Further, damage progression has been studied using the empirical mode decomposition (EMD) technique, which decomposes the signals into intrinsic mode functions (IMFs). The IMFs were screened by their kurtosis values, and a reconstructed strain signal was formed from all IMFs with a kurtosis greater than three. PCA of the reconstructed strain signal gave clearer patterns that can be used for prognostication of the remaining life of the components.
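The IMF screening and reconstruction step lends itself to a short sketch. The Python fragment below is an illustration under stated assumptions — the synthetic signal, the PyEMD library, and the windowing used for PCA are stand-ins, not the paper's setup.

```python
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import PCA
from PyEMD import EMD   # pip install EMD-signal

rng = np.random.default_rng(0)

# Synthetic stand-in for a strain signal: a tone, noise, and a few
# impulsive bursts of the kind that raise kurtosis as damage grows.
t = np.linspace(0, 1, 2000)
strain = np.sin(2 * np.pi * 40 * t) + 0.3 * rng.standard_normal(t.size)
strain[::250] += 3.0

imfs = EMD().emd(strain)            # one intrinsic mode function per row

# Keep IMFs with Pearson kurtosis > 3 (scipy returns excess kurtosis by
# default, so fisher=False restores the Pearson definition).
keep = [imf for imf in imfs if kurtosis(imf, fisher=False) > 3]
reconstructed = np.sum(keep, axis=0) if keep else strain

# PCA over fixed-length windows of the reconstructed signal; the paper's
# exact feature matrix is not specified in this abstract.
windows = reconstructed[: reconstructed.size // 100 * 100].reshape(-1, 100)
scores = PCA(n_components=2).fit_transform(windows)
print(scores.shape)                 # (20, 2)
```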


2015 ◽  
Vol 2015 ◽  
pp. 1-13 ◽  
Author(s):  
Jiuwen Cao ◽  
Zhiping Lin

The extreme learning machine (ELM) was developed for single-hidden-layer feedforward neural networks (SLFNs). In the ELM algorithm, the connections between the input layer and the hidden neurons are randomly assigned and remain unchanged during the learning process. The output connections are then tuned by minimizing the cost function through a linear system. The computational burden of ELM is thus significantly reduced, as the only cost is solving a linear system. This low computational complexity has attracted a great deal of attention from the research community, especially for high-dimensional and large data applications. This paper provides an up-to-date survey of the recent developments of ELM and its applications to high-dimensional and large data. Comprehensive reviews of image processing, video processing, medical signal processing, and other popular large data applications with ELM are presented in the paper.
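Because training reduces to a random projection followed by one linear solve, ELM fits in a few lines of NumPy. The sketch below is a minimal regression example; the sizes, the tanh activation, and the toy data are arbitrary choices, not any specific variant from the survey.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=50):
    W = rng.standard_normal((X.shape[1], n_hidden))   # random, never tuned
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                            # hidden activations
    beta = np.linalg.pinv(H) @ y                      # solve the linear system
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression problem: learn a noisy sine.
X = rng.uniform(-3, 3, (200, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(200)
W, b, beta = elm_fit(X, y)
print(np.mean((elm_predict(X, W, b, beta) - y) ** 2))  # training MSE
```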


Author(s):  
Valentin Cristea ◽  
Ciprian Dobre ◽  
Corina Stratan ◽  
Florin Pop

The latest advances in network and distributed system technologies now allow integration of a vast variety of services with almost unlimited processing power, using large amounts of data. Sharing of resources is often viewed as the key goal for distributed systems, and in this context the sharing of stored data appears as the most important aspect of distributed resource sharing. Scientific applications are the first to take advantage of such environments, as the requirements of current and future high performance computing experiments are pressing in terms of ever higher volumes of data to be stored and managed. While these new environments offer huge opportunities for large-scale distributed data storage and management, they also raise important technical challenges that need to be addressed. The ability to support persistent storage of data on behalf of users, the consistent distribution of up-to-date data, the reliable replication of fast-changing datasets and the efficient management of large data transfers are just some of these new challenges. In this chapter we discuss how far the existing distributed computing infrastructure is adequate for supporting the required data storage and management functionalities. We highlight the issues raised by storing data over large distributed environments and discuss the recent research efforts dealing with the challenges of data retrieval, replication and fast data transfers. The interaction of data management with other data-sensitive emerging technologies, such as workflow management, is also addressed.


Author(s):  
Mafruz Ashrafi ◽  
David Taniar ◽  
Kate Smith

With the advancement of storage, retrieval, and network technologies, the amount of information available to each organization is literally exploding. Although data are widely recognized as an organizational asset, they often become a liability, because the cost of acquiring and managing data can far exceed the value derived from them. Thus, the success of modern organizations relies not only on their capability to acquire and manage their data but also on their efficiency in deriving useful, actionable knowledge from it. To explore and analyze large data repositories and discover useful actionable knowledge from them, modern organizations use a technique known as data mining, which analyzes voluminous digital data and discovers hidden but useful patterns in it. However, the discovery of hidden patterns has statistical meaning and may often disclose sensitive information. As a result, privacy has become one of the prime concerns of the data-mining research community. Since distributed data mining discovers rules by combining local models from various distributed sites, breaches of data privacy happen more often there than in centralized environments.


Author(s):  
Ido Millet

Relational databases and the current SQL standard are poorly suited to retrieval of hierarchical data. After demonstrating the problem, this chapter describes how two approaches to data denormalization can facilitate hierarchical data retrieval. Both approaches solve the problem of data retrieval, but as expected, come at the cost of difficult and potentially inconsistent data updates. This chapter then describes how we can address these update-related shortcomings via back-end (triggers) logic. Using a proper combination of denormalized data structure and back-end logic, we can have the best of both worlds: easy data retrieval and simple, consistent data updates.
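As an illustration of the trade-off described above, here is a hedged Python/SQLite sketch of one denormalization of this kind — a materialized path column kept consistent by a trigger, so that subtree retrieval becomes a single non-recursive query. The schema, trigger, and back end are assumptions for demonstration, not the chapter's exact design.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE node (
    id        INTEGER PRIMARY KEY,
    parent_id INTEGER REFERENCES node(id),
    name      TEXT,
    path      TEXT                        -- denormalized: e.g. '/1/2/3/'
);
-- Back-end logic: keep the redundant path consistent on every insert.
CREATE TRIGGER node_path AFTER INSERT ON node
BEGIN
    UPDATE node
    SET path = COALESCE((SELECT path FROM node WHERE id = NEW.parent_id),
                        '/') || NEW.id || '/'
    WHERE id = NEW.id;
END;
""")

db.execute("INSERT INTO node (id, parent_id, name) VALUES (1, NULL, 'root')")
db.execute("INSERT INTO node (id, parent_id, name) VALUES (2, 1, 'child')")
db.execute("INSERT INTO node (id, parent_id, name) VALUES (3, 2, 'leaf')")

# Retrieving a whole subtree is now one indexable LIKE, no recursion.
rows = db.execute(
    "SELECT id, name, path FROM node WHERE path LIKE '/1/%'").fetchall()
print(rows)  # [(1, 'root', '/1/'), (2, 'child', '/1/2/'), (3, 'leaf', '/1/2/3/')]
```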


Symmetry ◽  
2020 ◽  
Vol 12 (7) ◽  
pp. 1193
Author(s):  
Shaochen Jiang ◽  
Liejun Wang ◽  
Shuli Cheng ◽  
Anyu Du ◽  
Yongming Li

Existing learning-based unsupervised hashing methods usually use a pre-trained network to extract features, and then use the extracted feature vectors to construct a similarity matrix that guides the generation of hash codes through gradient descent. Existing research shows that algorithms based on gradient descent cause the hash codes of paired images to be updated toward each other's positions during training. In unsupervised training, this situation causes large fluctuations in the hash codes and limits the learning efficiency of the hash codes. In this paper, we propose a method named Deep Unsupervised Hashing with Gradient Attention (UHGA) to solve this problem. UHGA mainly comprises the following: (1) pre-trained network models are used to extract image features; (2) the cosine distance between the features of each pair of images is computed, and a similarity matrix is constructed from these cosine distances to guide the generation of hash codes; (3) a gradient attention mechanism is added during the training of the hash codes to attend to the gradients. Experiments on two public datasets show that our proposed method obtains more discriminating hash codes.
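Step (2) is easy to make concrete. The fragment below is a hedged sketch of building a cosine-similarity target from pre-extracted features; the feature matrix is a random stand-in, and using S as the target for code inner products is one common formulation, not necessarily UHGA's exact loss.

```python
import numpy as np

rng = np.random.default_rng(0)
F = rng.standard_normal((8, 512))       # stand-in for pre-trained CNN features

# Row-normalise so that inner products are cosine similarities.
F = F / np.linalg.norm(F, axis=1, keepdims=True)
S = F @ F.T                             # similarity matrix, entries in [-1, 1]

# One common way such a matrix supervises k-bit codes B in {-1, +1}:
# minimise || (1/k) * B @ B.T - S ||_F^2 over a relaxed, continuous B.
k = 32
B = np.tanh(rng.standard_normal((8, k)))  # relaxed codes (illustrative)
loss = np.sum(((B @ B.T) / k - S) ** 2)
print(round(float(loss), 3))
```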

