A Quantitative Quality Measurement for Codebook in Feature Encoding Strategies

Author(s):
Yuki Shinomiya, Yukinobu Hoshino

Feature encoding is now a common strategy for representing a document, an image, or an audio clip as a feature vector. In image recognition problems, this approach treats an image as a set of local feature descriptors, which is then converted into a feature vector using a set of basis vectors called a codebook. This paper focuses on the prior probability, one of the codebook parameters, and analyzes how feature encoding depends on it. We conducted two experiments: an analysis of prior probabilities in state-of-the-art encodings and a controlled manipulation of prior probabilities. The first experiment investigates the distribution of prior probabilities and compares the recognition performance of recent techniques; the results suggest that recognition performance depends on this distribution. The second experiment pursues further statistical analysis by controlling the distribution of prior probabilities; the results show a strong negative linear relationship between the standard deviation of prior probabilities and recognition accuracy. These experiments indicate that the quality of a codebook used for feature encoding can be measured quantitatively and that recognition performance can be improved by optimizing the codebook. Moreover, because the codebook is created in an offline step, optimizing it adds no extra computational cost at run time in practical applications.
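
A minimal sketch of the quality measure suggested above: build a codebook, estimate each codeword's prior probability from descriptor assignments, and score the codebook by the standard deviation of those priors (lower is better, per the reported negative correlation). The k-means clustering and hard-assignment counting used here are assumptions for illustration; the abstract does not fix how the priors are estimated.

```python
# Hedged sketch: estimate codeword prior probabilities and score a codebook
# by the standard deviation of those priors. Function and variable names are
# illustrative, not taken from the paper.
import numpy as np
from sklearn.cluster import KMeans

def codebook_prior_std(descriptors: np.ndarray, n_codewords: int = 256) -> float:
    """Lower values suggest a more uniform (and, per the paper's finding,
    better-performing) codebook."""
    kmeans = KMeans(n_clusters=n_codewords, n_init=10, random_state=0)
    assignments = kmeans.fit_predict(descriptors)      # nearest codeword per descriptor
    counts = np.bincount(assignments, minlength=n_codewords)
    priors = counts / counts.sum()                      # empirical prior of each codeword
    return float(priors.std())

# Usage: descriptors could be densely sampled SIFT vectors, one row each (placeholder data here).
local_descriptors = np.random.rand(10000, 128)
print(codebook_prior_std(local_descriptors, n_codewords=64))
```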

1992
Vol 10 (1)
pp. 73-81
Author(s):
Mariko Mikumo

In this experiment, strategies of pitch encoding in the processing of melodies were investigated. Twenty-six students who were highly trained in music and twenty-six who were less well trained were instructed to make recognition judgments concerning melodies after a 12-sec retention interval. During each retention interval, subjects were exposed to one of four conditions (pause, listening to an interfering melody, shadowing nonsense syllables, and shadowing note names). Both the standard and the comparison melodies were six-tone series that had either a high-tonality structure ("tonal melody") or a low-tonality structure ("atonal melody"). The results (obtained by the Newman-Keuls method) showed that recognition performance for the musically highly trained group was severely disrupted by the note names for the tonal melodies, while it was disrupted by the interfering melody for the atonal melodies. On the other hand, for the musically less well trained group, whose recognition performance was significantly worse than that of the highly trained group even in the Pause condition, there were no significant differences in disruptive effects between the different types of interfering materials. These findings suggest that the highly trained group could use a verbal (note name) encoding strategy for the pitches in the tonal melodies, and also rehearsal strategies (such as humming and whistling) for the atonal melodies, but that subjects in the less well trained group were unable to use any effective strategies to encode the melodies.


1990
Vol 18 (1_part_1)
pp. 41-50
Author(s):
F. Barbara Orlans

Pain scales classify the severity of pain inflicted on laboratory animals from little or none up to severe. A pain scale as part of public policy serves beneficial purposes that promote animal welfare. It can be used to educate people about the two alternatives of refinement and replacement, and the need to reduce animal pain. Furthermore, a pain scale has practical applications: 1) in review procedures for animal welfare concerns; 2) in developing policies on the use of animals in education; and 3) as a basis for collecting national data on animal experimentation, so that meaningful trends in the reduction and control of animal pain can be tracked. So far, only a few countries (including Sweden, the Netherlands, Canada and New Zealand) have adopted pain scales as part of their public policy. Most countries, including the United States, have not yet done so. The history of the development and adoption of pain scales by various countries is described, and the case is presented for wider adoption of a pain scale in countries not currently using one.


Author(s):
Jimmy Ming-Tai Wu, Qian Teng, Shahab Tayeb, Jerry Chun-Wei Lin

Abstract High average-utility itemset mining (HAUIM) was established to provide a fairer measure than generic high-utility itemset mining (HUIM) for revealing interesting patterns. In practical applications, the database changes dynamically as insertion and deletion operations are performed. Several works have been designed to handle the insertion process, but fewer studies have focused on the deletion process for knowledge maintenance. In this paper, we develop a PRE-HAUI-DEL algorithm that applies the pre-large concept to HAUIM to handle transaction deletion in dynamic databases. The pre-large concept serves as a buffer on HAUIM that reduces the number of database scans when the database is updated, particularly under transaction deletion. Two upper-bound values are also established to prune unpromising candidates early, which speeds up the computation. Experimental results show that the designed PRE-HAUI-DEL algorithm performs well compared with the Apriori-like model in terms of runtime, memory, and scalability on dynamic databases.
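
For context, a minimal sketch of the average-utility measure that HAUIM is built on (the pre-large buffering and the upper bounds of PRE-HAUI-DEL are not reproduced here); the transaction layout and names are illustrative assumptions.

```python
# Hedged sketch: average utility of an itemset in a transaction database.
# HAUIM divides an itemset's utility by its length, which is the "fairer
# measure" the abstract refers to. Data layout and names are illustrative only.
from typing import Dict, List, Tuple

Transaction = Dict[str, Tuple[int, float]]  # item -> (quantity, unit profit)

def average_utility(itemset: List[str], db: List[Transaction]) -> float:
    total = 0.0
    for tx in db:
        if all(item in tx for item in itemset):
            utility = sum(qty * profit for qty, profit in (tx[i] for i in itemset))
            total += utility / len(itemset)   # divide by itemset length -> average utility
    return total

db = [
    {"a": (2, 3.0), "b": (1, 5.0), "c": (4, 1.0)},
    {"a": (1, 3.0), "c": (2, 1.0)},
]
print(average_utility(["a", "c"], db))  # (6+4)/2 + (3+2)/2 = 7.5
```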


2016
Vol 27 (07)
pp. 1650082
Author(s):
Xiao Jia, Jin-Song Hong, Ya-Chun Gao, Hong-Chun Yang, Chun Yang, ...

We investigate the percolation phase transitions in both static and growing networks where the nodes are sampled according to a weighted function with a tunable parameter [Formula: see text]. For the static network, i.e. when the number of nodes is constant during the percolation process, the percolation phase transition can evolve from continuous to discontinuous as the value of [Formula: see text] is tuned. Based on the properties of the weighted function, three typical values of [Formula: see text] are analyzed. The model becomes the classical Erdös–Rényi (ER) network model at [Formula: see text]. When [Formula: see text], the percolation process generates a weakly discontinuous phase transition in which the order parameter exhibits an extremely abrupt transition with a significant jump in large but finite systems. For [Formula: see text], the cluster size distribution at the lower pseudo-transition point does not obey the power-law behavior, indicating a strongly discontinuous phase transition. In the case of the growing network, in which the set of nodes keeps increasing, a smoother continuous phase transition emerges at [Formula: see text], in contrast to the weakly discontinuous phase transition of the static network. At [Formula: see text], on the other hand, a probability modulation effect shows that the strongly discontinuous phase transition remains the same as in the static network despite node arrival, even in the thermodynamic limit. These percolation properties of growing networks could provide a useful reference for network intervention and control in practical applications, given the increasing size of most real networks.
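
A toy sketch of percolation with weighted node sampling on a static network. The abstract's weight function and parameter are elided ("[Formula: see text]"), so a cluster-size-based weight s**alpha with a hypothetical parameter alpha is assumed purely for illustration; alpha = 0 recovers uniform sampling, i.e. an ER-like process.

```python
# Hedged sketch: bond percolation where endpoints are sampled with probability
# proportional to (cluster size)**alpha. The weight function is an assumption,
# not the one used in the paper.
import random

def weighted_percolation(n: int = 1000, alpha: float = 0.0, n_edges: int = 1500) -> int:
    parent = list(range(n))
    size = [1] * n

    def find(x: int) -> int:              # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for _ in range(n_edges):
        # Sample two endpoints with probability proportional to (cluster size)**alpha.
        weights = [size[find(v)] ** alpha for v in range(n)]
        u, v = random.choices(range(n), weights=weights, k=2)
        ru, rv = find(u), find(v)
        if ru != rv:                       # merge the two clusters
            if size[ru] < size[rv]:
                ru, rv = rv, ru
            parent[rv] = ru
            size[ru] += size[rv]

    return max(size[find(v)] for v in range(n))   # largest-cluster size (order parameter)

print(weighted_percolation(n=500, alpha=-1.0))
```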


2021
Vol 0 (0)
Author(s):
Ahmet Mert, Hasan Huseyin Celik

Abstract The feasibility of using time–frequency (TF) ridge estimation on multi-channel electroencephalogram (EEG) signals is investigated for emotion recognition. The goal is to extract informative components at low computational cost using multivariate ridge estimation without decreasing the valence/arousal recognition accuracy. The advanced TF representation technique called the multivariate synchrosqueezing transform (MSST) is used to obtain well-localized components of multi-channel EEG signals. Maximum-energy components in the 2D TF distribution are determined using TF-ridge estimation to extract the instantaneous frequency and instantaneous amplitude. Statistical values of the estimated ridges are used as the feature vector fed to machine learning algorithms. Thus, component information in multi-channel EEG signals can be captured and compressed into a low-dimensional space for emotion recognition. Mean and variance values of the five maximum-energy ridges in the MSST-based TF distribution are adopted as the feature vector. Properties of the five TF ridges in the frequency and energy planes (mean frequency, frequency deviation, mean energy, and energy deviation over time) are computed to obtain a 20-dimensional feature space. The proposed method is evaluated on the DEAP emotional EEG recordings for benchmarking and yields recognition rates of up to 71.55% and 70.02% for high/low arousal and high/low valence, respectively.
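
A minimal sketch of turning TF ridges into the 20-dimensional feature vector described above: given a precomputed TF energy matrix (the MSST itself is not reproduced here), trace each ridge as the per-frame energy maximum, suppress a band around it, and compute four statistics per ridge. The greedy ridge tracing and band-exclusion width are assumptions for illustration.

```python
# Hedged sketch: extract the k maximum-energy ridges from a precomputed
# time-frequency energy matrix and build the 4*k statistical feature vector
# (mean frequency, frequency deviation, mean energy, energy deviation per ridge).
import numpy as np

def ridge_features(tf_energy: np.ndarray, freqs: np.ndarray,
                   n_ridges: int = 5, exclusion: int = 2) -> np.ndarray:
    E = tf_energy.copy()                        # shape: (n_freqs, n_frames)
    features = []
    for _ in range(n_ridges):
        idx = E.argmax(axis=0)                  # ridge: max-energy bin per time frame
        ridge_freq = freqs[idx]
        ridge_energy = E[idx, np.arange(E.shape[1])]
        features += [ridge_freq.mean(), ridge_freq.std(),
                     ridge_energy.mean(), ridge_energy.std()]
        for t, f in enumerate(idx):             # suppress a band around the extracted ridge
            lo, hi = max(0, f - exclusion), min(E.shape[0], f + exclusion + 1)
            E[lo:hi, t] = 0.0
    return np.asarray(features)                 # length = 4 * n_ridges = 20

tf = np.abs(np.random.randn(64, 256))           # placeholder TF energy matrix
print(ridge_features(tf, np.linspace(0, 32, 64)).shape)   # (20,)
```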


2000
Author(s):
H. S. Tzou, J. H. Ding, W. K. Chai

Abstract Piezoelectric laminated distributed systems have broad applications in many new smart structures and structronic systems. As shape control becomes an essential issue in practical applications, large nonlinear deformations have to be considered, and thus the geometric nonlinearity has to be incorporated. Two electromechanical partial differential equations, one in the axial direction and the other in the transverse direction, are derived for the nonlinear PZT laminated beam model. The conventional approach neglects the axial oscillation and evaluates distributed sensing and control of the laminated beam without it. In this paper, the influence of the axial displacement on the dynamics and the distributed control effect is evaluated. Analysis results reveal that the axial displacement indeed has a significant influence on the dynamic and distributed control responses of the nonlinear distributed PZT laminated beam structronic system.
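
For orientation only, the coupled axial/transverse structure the abstract refers to can be written in a generic von Kármán-type form with a piezoelectric control moment; this is a standard textbook form shown as an assumption, not necessarily the authors' exact equations.

```latex
% Generic von Karman-type beam equations with a piezoelectric control term,
% illustrating the coupled axial/transverse structure (not the paper's exact model).
\begin{align}
\rho A\,\frac{\partial^{2} u}{\partial t^{2}}
 - \frac{\partial}{\partial x}\!\left[EA\!\left(\frac{\partial u}{\partial x}
 + \frac{1}{2}\Big(\frac{\partial w}{\partial x}\Big)^{2}\right)\right] &= f_{u}, \\
\rho A\,\frac{\partial^{2} w}{\partial t^{2}}
 + EI\,\frac{\partial^{4} w}{\partial x^{4}}
 - \frac{\partial}{\partial x}\!\left[EA\!\left(\frac{\partial u}{\partial x}
 + \frac{1}{2}\Big(\frac{\partial w}{\partial x}\Big)^{2}\right)\frac{\partial w}{\partial x}\right]
 &= f_{w} + \frac{\partial^{2} M_{c}}{\partial x^{2}},
\end{align}
```

where u and w denote the axial and transverse displacements, EA and EI the axial and bending stiffnesses, and M_c the control moment induced by the PZT layer.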


2019
Vol 15 (1)
pp. 19-36
Author(s):
Wiliam Acar, Rami al-Gharaibeh

Practical applications of knowledge management are hindered by a lack of linkage between the accepted data-information-knowledge hierarchy and pragmatic approaches. Specifically, the authors seek to clarify the use of the tacit-explicit dichotomy through a deductive synthesis of complementary concepts. The authors review relevant segments of the KM/OL literature with an emphasis on the SECI model of Nonaka and Takeuchi. Looking beyond equating the sharing of knowledge with mere socialization, the authors deduce from more recent developments a knowledge creation, nurturing, and control framework. Based on a cyclic and upward-spiraling data-information-knowledge structure, the authors' proposed model affords top managers and their consultants opportunities for capturing, debating, and storing richer information, as well as for monitoring their progress and controlling their learning process.


Author(s):
Yan Xiaoxuan, Han Jinglong, Zhang Bing, Yuan Haiwei

Accurate modeling of aerothermodynamics at low computational cost plays a crucial role in the optimization and control of hypersonic vehicles. This study examines three reduced-order models (ROMs) to provide a reliable and efficient alternative for obtaining the aerothermodynamics of a hypersonic control surface. Coupled computational fluid dynamics (CFD) and computational thermostructural dynamics (CTSD) approaches are used to generate the snapshots for the ROMs, considering the interactions among aerothermodynamics, structural dynamics, and heat transfer. One ROM adopts a surrogate approach named Kriging. The second ROM is constructed by combining Proper Orthogonal Decomposition (POD) with Kriging, namely POD-Kriging. The accuracy of the Kriging-based ROM is higher than that of the POD-Kriging-based ROM, but its efficiency is lower. Therefore, to address the shortcomings of these two approaches, a new ROM is developed that combines POD with modified Chebyshev polynomials, namely POD-Chebyshev. The ROM based on POD-Chebyshev has the best precision and efficiency among the three ROMs, with generally less than 2% average maximum error for the studied problem.
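
A minimal sketch of the POD-plus-surrogate idea behind the ROMs described above: compute POD modes from snapshots via an SVD, then fit a surrogate to the modal coefficients. Here an ordinary Chebyshev fit over a single scalar parameter is used as an assumption; the paper's modified Chebyshev basis, parameter space, and CFD/CTSD snapshots are not reproduced.

```python
# Hedged sketch of a POD + Chebyshev-fit reduced-order model for a field that
# depends on one scalar parameter. Snapshot data, mode count, and the
# one-parameter setting are illustrative assumptions.
import numpy as np
from numpy.polynomial import chebyshev as C

params = np.linspace(0.0, 1.0, 20)                   # training parameter values
snapshots = np.array([np.sin(np.linspace(0, 3, 200) * (1 + p)) for p in params]).T

# POD modes via thin SVD of the snapshot matrix (columns = snapshots).
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
r = 5                                                # number of retained modes
modes = U[:, :r]
coeffs = modes.T @ snapshots                         # modal coefficients per snapshot

# Fit a Chebyshev polynomial to each modal coefficient as a function of the parameter.
fits = [C.chebfit(params, coeffs[i], deg=6) for i in range(r)]

def rom_predict(p: float) -> np.ndarray:
    """Reconstruct the field at a new parameter value from the fitted ROM."""
    a = np.array([C.chebval(p, fit) for fit in fits])
    return modes @ a

err = np.linalg.norm(rom_predict(0.37) - np.sin(np.linspace(0, 3, 200) * 1.37))
print(f"reconstruction error at p=0.37: {err:.3e}")
```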


PLoS ONE
2021
Vol 16 (1)
pp. e0245344
Author(s):
Jianye Zhou, Yuewen Jiang, Biqing Huang

Background: Outbreaks of infectious diseases cause great losses to human society. Source identification in networks has drawn considerable interest as a way to understand and control infectious disease propagation. Unsatisfactory accuracy and high time complexity are major obstacles to the practical application of existing source identification algorithms under various real-world conditions. Methods: This study measures the possibility of a node being the infection source through label ranking. A unified label ranking framework for source identification with complete observation and with a snapshot is proposed. First, a basic label ranking algorithm with complete observation of the network, considering both infected and uninfected nodes, is designed. The inferred infection source, the node with the highest label ranking, tends to have more infected nodes surrounding it, which makes it likely to lie in the center of the infection subgraph and far from the uninfected frontier. A two-stage algorithm for source identification via semi-supervised learning and label ranking is further proposed to address the source identification problem with a snapshot. Results: Extensive experiments are conducted on both synthetic and real-world network datasets. The proposed label ranking algorithms identify the propagation source fairly accurately under different conditions, with acceptable computational complexity and without knowing the underlying model of infection propagation. Conclusions: Their effectiveness and efficiency make the proposed label ranking algorithms practically valuable for infection source identification.
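
A toy illustration of the ranking intuition only: score infected nodes so that those deep inside the infected subgraph (many infected neighbours, far from the uninfected frontier) rank highest. This is not the paper's label ranking algorithm; the graph, observation, and scoring rule are assumptions.

```python
# Hedged sketch: rank candidate sources by the fraction of infected neighbours.
# A score of 1.0 means no uninfected frontier in the node's immediate vicinity.
import networkx as nx

def rank_candidate_sources(graph: nx.Graph, infected: set) -> list:
    scores = {}
    for node in infected:
        neighbours = list(graph.neighbors(node))
        if not neighbours:
            continue
        scores[node] = sum(n in infected for n in neighbours) / len(neighbours)
    return sorted(scores, key=scores.get, reverse=True)

g = nx.karate_club_graph()
infected_nodes = {0, 1, 2, 3, 7, 13}                  # placeholder observation
print(rank_candidate_sources(g, infected_nodes)[:3])  # top-3 candidate sources
```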


Recent applications of conventional iterative coordinate descent (ICD) algorithms to multislice helical CT reconstructions have shown that conventional ICD can greatly improve image quality by increasing resolution as well as reducing noise and some artifacts. However, high computational cost and long reconstruction times remain a barrier to the use of the conventional algorithm in practical applications. Among the various iterative methods that have been studied, conventional ICD has been found to have relatively low overall computational requirements due to its fast convergence. This paper presents a fast model-based iterative reconstruction algorithm using spatially nonhomogeneous ICD (NH-ICD) optimization. NH-ICD speeds up convergence by focusing computation where it is most needed, using a mechanism that adaptively selects voxels for update. First, a voxel selection criterion (VSC) determines the voxels in greatest need of an update. Then a voxel selection algorithm (VSA) orders the successive voxel updates to allow repeated updates of some locations while retaining the characteristics needed for global convergence. To speed up each voxel update, we also propose a fast 3-D optimization algorithm that uses a quadratic substitute function to upper bound the local 3-D objective function, so that a closed-form solution can be obtained rather than resorting to a computationally expensive line search. The experimental results show that the proposed method accelerates the reconstructions by roughly a factor of three on average for typical 3-D multislice geometries.
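
A minimal sketch of the two ideas named above, applied to a toy least-squares objective rather than the CT forward model: a selection criterion that revisits the coordinates with the largest recent change, and a closed-form coordinate update. The toy objective, names, and the fraction of coordinates updated per pass are assumptions; the paper's prior term and its quadratic surrogate are not reproduced.

```python
# Hedged sketch of nonhomogeneous coordinate descent on 0.5*||A x - y||^2.
# Coordinates that changed most recently are revisited more often, mimicking the
# "focus computation where it is most needed" idea of NH-ICD.
import numpy as np

def nh_coordinate_descent(A: np.ndarray, y: np.ndarray,
                          n_passes: int = 10, top_frac: float = 0.25) -> np.ndarray:
    m, n = A.shape
    x = np.zeros(n)
    residual = -y.copy()                       # residual = A @ x - y
    col_sq = (A ** 2).sum(axis=0)              # precomputed A_j^T A_j
    last_change = np.full(n, np.inf)           # selection criterion: recent update size

    for _ in range(n_passes):
        k = max(1, int(top_frac * n))
        order = np.argsort(-last_change)[:k]   # coordinates most in need of an update
        for j in order:
            grad_j = A[:, j] @ residual
            step = -grad_j / col_sq[j]         # closed-form coordinate minimizer
            x[j] += step
            residual += step * A[:, j]
            last_change[j] = abs(step)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 40))
x_true = rng.standard_normal(40)
x_hat = nh_coordinate_descent(A, A @ x_true, n_passes=200)
print(np.linalg.norm(x_hat - x_true))
```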

