Memory for musical tones: The impact of tonality and the creation of false memories

2019
Author(s): Dominique T. Vuvan, Olivia P. Lewandowska, Mark A. Schmuckler

Although the relation between tonality and musical memory has been fairly well studied, less is known regarding the contribution of tonal-schematic expectancies to this relation. Three experiments investigated the influence of tonal expectancies on memory for single tones in a tonal melodic context. In the first experiment, listener responses indicated superior recognition of both expected and unexpected targets in a major tonal context relative to moderately expected targets. Importantly, and in support of previous work on false memories, listener responses also revealed a higher false alarm rate for expected than for unexpected targets. These results indicate roles for both tonal-schematic congruency and distinctiveness in memory for melodic tones. The second experiment utilized minor melodies, which weaken tonal expectancies because the minor tonality can be represented in three forms simultaneously. Finally, tonal expectancies were abolished entirely in the third experiment through the use of atonal melodies. Accordingly, the expectancy-based results observed in the first experiment were disrupted in the second experiment and disappeared in the third. These results are discussed in light of schema theory, musical expectancy, and classic memory work on the availability and distinctiveness heuristics.

2012, Vol 19 (4), pp. 753-761
Author(s): Yanlong Cao, Yuanfeng He, Huawen Zheng, Jiangxin Yang

In order to reduce the false alarm rate and missed detection rate of a Loose Parts Monitoring System (LPMS) for nuclear power plants, a new hybrid method is proposed that combines Linear Predictive Coding (LPC) and a Support Vector Machine (SVM) to discriminate loose part signals. The alarm process is divided into two stages. The first stage detects the weak burst signal in order to reduce the missed detection rate: the signal is whitened to improve the SNR, and the weak burst is then detected by checking the short-term Root Mean Square (RMS) of the whitened signal. The second stage identifies the detected burst signal in order to reduce the false alarm rate: taking the signal's LPC coefficients as its features, an SVM determines whether the signal was generated by the impact of a loose part. Experiments show that whitening the signal in the first stage can detect a loose part burst signal even at very low SNR and thus significantly reduces the missed detection rate. In the second stage, the loose part burst signal can be distinguished from pulse disturbances by the SVM. Even at an SNR of −15 dB, the system still achieves a 100% recognition rate.
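
A minimal sketch of the two-stage alarm logic described above, under assumed names, window sizes, and thresholds (not the authors' code): stage one whitens the signal with an LPC-based prediction-error filter and flags bursts from the short-term RMS of the whitened signal; stage two classifies the LPC coefficients of each flagged segment with an SVM to separate loose-part impacts from pulse-like disturbances.

```python
import numpy as np
from scipy.signal import lfilter
from sklearn.svm import SVC

def lpc_autocorr(x, order=12):
    """LPC coefficients [1, a1, ..., ap] via the Levinson-Durbin recursion."""
    n = len(x)
    r = np.array([np.dot(x[:n - k], x[k:]) for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err
        a[1:i + 1] = a[1:i + 1] + k * a[i - 1::-1]
        err *= (1.0 - k * k)
    return a

def detect_bursts(x, order=12, frame=256, threshold=4.0):
    """Stage 1: whiten x with its LPC prediction-error filter, then flag frames
    whose short-term RMS exceeds `threshold` times the median frame RMS."""
    a = lpc_autocorr(x, order)
    w = lfilter(a, [1.0], x)                       # whitening (inverse AR) filter
    frames = w[:len(w) // frame * frame].reshape(-1, frame)
    rms = np.sqrt(np.mean(frames ** 2, axis=1))
    return np.flatnonzero(rms > threshold * np.median(rms)) * frame  # burst start samples

def lpc_features(segments, order=12):
    """Stage 2 features: LPC coefficients (a1..ap) of each detected segment."""
    return np.array([lpc_autocorr(s, order)[1:] for s in segments])

# Training (hypothetical labelled data): impacts vs. pulse-like disturbances.
# clf = SVC(kernel="rbf").fit(lpc_features(impact_segs + disturbance_segs),
#                             [1] * len(impact_segs) + [0] * len(disturbance_segs))
# An alarm is raised only for detected bursts that clf classifies as loose-part impacts.
```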


2020, Vol 17 (1), pp. 9-23
Author(s): Eva D. Regnier

Emergency managers must make high-stakes decisions about preparing for tropical storms while there is still considerable uncertainty about a storm's impacts. Forecast quality improves as the lead time to the forecast event declines. Reducing the lead time required for preparation decisions can therefore substantially improve the quality of the forecasts available for decision making and thereby reduce the expected total cost of preparations plus storm damage. Measures of forecast quality are only indirectly linked to their value in preparation decisions and to changes in the parameters of those decisions, in particular lead time. This paper provides decision-relevant measures of the quality of recent National Hurricane Center forecasts from the 2014–2018 seasons, which can be used to evaluate reductions in decision lead time in terms of false alarm rate, missed detections, and expected annual costs. For decision makers in some regions with decision lead times of 48–72 hours, typical for evacuation decisions, every 6-hour reduction in required lead time can reduce the false alarm rate by more than 10%.
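
A minimal sketch, with assumed variable names and cost inputs, of the kind of lead-time evaluation the paper describes: for each candidate decision lead time, a record of whether preparations would have been triggered and whether the storm actually hit yields the false alarm rate, the missed detection rate, and an average annual cost.

```python
import numpy as np

def rates(warned, hit):
    """False alarm rate = false warnings / warnings issued;
    missed detection rate = unwarned hits / actual hits."""
    warned, hit = np.asarray(warned, bool), np.asarray(hit, bool)
    return np.mean(~hit[warned]), np.mean(~warned[hit])

def average_annual_cost(warned, hit, n_years, prep_cost, unprepared_damage):
    """Every warning incurs the preparation cost; every missed hit incurs the
    full damage of being unprepared (both cost figures are assumed inputs)."""
    warned, hit = np.asarray(warned, bool), np.asarray(hit, bool)
    return (warned.sum() * prep_cost
            + (hit & ~warned).sum() * unprepared_damage) / n_years

# Hypothetical use: warned_at[lead] holds, per storm, whether the forecast
# available `lead` hours ahead would have triggered preparations.
# for lead in (72, 66, 60, 54, 48):
#     fa, miss = rates(warned_at[lead], hit)
#     cost = average_annual_cost(warned_at[lead], hit, n_years, prep_cost, damage)
```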


2013, Vol 24 (10), pp. 897-908
Author(s): Robert G. Turner

Background: A test protocol is created when individual tests are combined. Protocol performance can be calculated before clinical use; however, the necessary information is seldom available, so protocols are frequently used with limited information about their performance. The next best strategy is to base protocol design on the available information combined with a thorough understanding of the factors that determine protocol performance. Unfortunately, there is limited information about these factors and how they interact.

Purpose: The objective of this article and the next article in this issue is to examine in detail the three factors that determine protocol performance: (1) protocol criterion, (2) test correlation, and (3) individual test performance. This article examines protocol criterion and test correlation. The next article examines the impact of individual test performance and summarizes the results of this series. The ultimate goal is to provide guidance on formulating a protocol using available information and an understanding of the impact of these three factors on performance.

Research Design: A mathematical model is used to calculate protocol performance for different protocol criteria and test correlations, assuming that all individual tests in the protocol have the same performance. The advantages and disadvantages of the different criteria are evaluated for different test correlations.

Results: A loose criterion produces the highest protocol hit and false alarm rates; however, the false alarm rate may be unacceptably high. A strict criterion produces the lowest protocol hit and false alarm rates; however, the hit rate may be unacceptably low. Adding tests to a protocol increases the probability that the protocol false alarm rate will be too high with a loose criterion and that the protocol hit rate will be too low with a strict criterion. The intermediate criterion, about which little has been known, provides advantages not available with the other two criteria. It is much more likely to produce acceptable protocol hit and false alarm rates, and it has the potential to simultaneously produce a protocol hit rate higher, and a false alarm rate lower, than those of the individual tests. Intermediate criteria produce better protocol performance than loose and strict criteria for protocols with the same number of tests. For all criteria, protocol performance is best when the tests are uncorrelated and decreases as test correlation increases. When there is some test correlation, adding tests to the protocol can decrease protocol performance for a loose or strict criterion. The ability of a protocol to manipulate hit and false alarm rates, or to improve performance relative to the individual tests, is reduced with increasing test correlation.

Conclusions: The loose, strict, and intermediate criteria each have definite advantages and disadvantages over a large range of test correlations. Some of the advantages and disadvantages of the loose and strict criteria are affected by test correlation, whereas the advantages of the intermediate criteria are relatively independent of it. When three or more tests are used in a protocol, consideration should be given to using an intermediate criterion, particularly if there is some test correlation. Greater test correlation diminishes the advantages of adding tests to a protocol, particularly with a loose or strict criterion; at higher test correlations, fewer tests in the protocol may be appropriate.
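
As a concrete illustration of how the three criteria behave, the following sketch computes protocol hit and false alarm rates under the idealized assumptions of the article's model: n independent (uncorrelated) tests with identical individual performance. The loose criterion is "any test positive", the strict criterion "all tests positive", and an intermediate criterion "at least k of n positive" (a binomial tail); the example numbers are illustrative.

```python
from math import comb

def at_least_k_of_n(p, n, k):
    """P(at least k positives among n independent tests, each positive with prob p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def protocol_rates(hit, fa, n, k):
    """Protocol hit and false alarm rates for an 'at least k of n' criterion.
    k = 1 is the loose criterion, k = n the strict criterion."""
    return at_least_k_of_n(hit, n, k), at_least_k_of_n(fa, n, k)

# Example: three uncorrelated tests, each with 80% hit rate and 20% false alarm rate.
# Loose (k=1):        hit ~ 0.99, false alarm ~ 0.49  (both rates rise)
# Strict (k=3):       hit ~ 0.51, false alarm ~ 0.008 (both rates fall)
# Intermediate (k=2): hit ~ 0.90, false alarm ~ 0.10  (hit up AND false alarm down)
for k in (1, 2, 3):
    print(k, protocol_rates(0.80, 0.20, 3, k))
```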


TAPPI Journal, 2014, Vol 13 (1), pp. 33-41
Author(s): Yvon Tharrault, Mouloud Amazouz

Recovery boilers play a key role in chemical pulp mills. Early detection of defects in a recovery boiler, such as water leaks, is critical to preventing explosions, which can occur when water reaches the molten smelt bed of the boiler. Early detection is difficult to achieve because of the complexity and the multitude of recovery boiler operating parameters. Multiple faults can occur simultaneously in multiple components of the boiler, so an efficient and robust fault isolation method is needed. In this paper, we present a new fault detection and isolation scheme for multiple faults. The proposed approach is based on principal component analysis (PCA), a popular fault detection technique. For fault detection, the Mahalanobis distance is used, combined with an exponentially weighted moving average filter that reduces the false alarm rate by adapting the trade-off between detection sensitivity and false alarms. For fault isolation, the reconstruction-based contribution is used, and an iterative approach avoids the combinatorial explosion of fault scenarios associated with multiple faults. The new method was validated using real data from a pulp and paper mill in Canada. The results demonstrate that the proposed method can effectively detect sensor faults and water leakage.
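
A minimal sketch of the detection stage described above, assuming standard PCA monitoring conventions rather than the authors' exact implementation: PCA is fitted on fault-free data, each new sample gets a Mahalanobis-type distance (Hotelling T² in the retained principal subspace), and an EWMA filter smooths that statistic before it is compared with a threshold, trading sensitivity against false alarms. The class name, filter constant, and chi-square threshold are assumptions; fault isolation by reconstruction-based contribution is omitted.

```python
import numpy as np
from scipy.stats import chi2

class PCADetector:
    def __init__(self, n_components, alpha=0.2, confidence=0.99):
        self.k = n_components      # retained principal components
        self.alpha = alpha         # EWMA factor (smaller = smoother, fewer false alarms)
        self.confidence = confidence

    def fit(self, X_normal):
        """Fit PCA (mean, scale, loadings) on fault-free training data."""
        self.mean_ = X_normal.mean(axis=0)
        self.std_ = X_normal.std(axis=0)
        Z = (X_normal - self.mean_) / self.std_
        U, S, Vt = np.linalg.svd(Z, full_matrices=False)
        self.P_ = Vt[:self.k].T                          # loadings (d x k)
        self.var_ = (S[:self.k] ** 2) / (len(Z) - 1)     # variance of each retained PC
        self.limit_ = chi2.ppf(self.confidence, df=self.k)  # assumed chi-square control limit
        self.ewma_ = 0.0                                 # filtered statistic, started at zero
        return self

    def step(self, x):
        """Return (filtered T^2 statistic, alarm flag) for one new sample x."""
        t = (x - self.mean_) / self.std_ @ self.P_       # scores in the principal subspace
        t2 = np.sum(t ** 2 / self.var_)                  # Mahalanobis distance in that subspace
        self.ewma_ = self.alpha * t2 + (1 - self.alpha) * self.ewma_
        return self.ewma_, self.ewma_ > self.limit_
```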


2016, Vol 13 (1), pp. 159-168
Author(s): Bayram Unal

This study aims to understand how perceptions about migrants have been created and transferred into daily life as stigmatization through public perception, the media, and the implementation of state law, and it briefly considers the consequences to which these perceptions and this stigmatization might lead. The first section examines the background of migration to Turkey and summarizes migration toward Turkey up to the 1990s. The second section briefly evaluates the preferential legal framework, which constitutes the basis for an official discourse that differentiates migrants, and the practices of the security forces, which can be described as discriminatory. The third section deals with the impact of perceptions on both the formation and the reproduction of inclusive and exclusive practices toward migrant women; it also addresses the role of public perception in classifying migrants and migratory processes.


Author(s): Sherif S. Ishak, Haitham M. Al-Deek

Pattern recognition techniques such as artificial neural networks continue to offer potential solutions to many of the existing problems associated with freeway incident-detection algorithms. This study focuses on the application of Fuzzy ART neural networks to incident detection on freeways. Unlike back-propagation models, Fuzzy ART is capable of fast, stable learning of recognition categories. It is an incremental approach that has the potential for on-line implementation. Fuzzy ART is trained with traffic patterns that are represented by 30-s loop-detector data of occupancy, speed, or a combination of both. Traffic patterns observed at the incident time and location are mapped to a group of categories. Each incident category maps incidents with similar traffic pattern characteristics, which are affected by the type and severity of the incident and the prevailing traffic conditions. Detection rate and false alarm rate are used to measure the performance of the Fuzzy ART algorithm. To reduce the false alarm rate that results from occasional misclassification of traffic patterns, a persistence time period of 3 min was arbitrarily selected. The algorithm performance improves when the temporal size of traffic patterns increases from one to two 30-s periods for all traffic parameters. An interesting finding is that the speed patterns produced better results than did the occupancy patterns. However, when combined, occupancy–speed patterns produced the best results. When compared with California algorithms 7 and 8, the Fuzzy ART model produced better performance.
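
A minimal Fuzzy ART sketch (illustrative, not the study's code) showing the fast, stable category learning described above: inputs are complement-coded, the best-matching category is chosen by the choice function, and the vigilance test decides between resonance (weight update) and the creation of a new category. Parameter values and the input encoding are assumptions.

```python
import numpy as np

class FuzzyART:
    def __init__(self, rho=0.75, alpha=0.001, beta=1.0):
        self.rho, self.alpha, self.beta = rho, alpha, beta   # vigilance, choice, learning rate
        self.w = []                                          # one weight vector per category

    @staticmethod
    def _complement_code(x):
        x = np.clip(np.asarray(x, float), 0.0, 1.0)          # inputs scaled to [0, 1]
        return np.concatenate([x, 1.0 - x])

    def train(self, x):
        """Present one pattern; return the index of the resonating (or new) category."""
        i = self._complement_code(x)
        # choice function for every existing category
        scores = [np.minimum(i, w).sum() / (self.alpha + w.sum()) for w in self.w]
        for j in np.argsort(scores)[::-1]:                   # best-matching categories first
            match = np.minimum(i, self.w[j]).sum() / i.sum()
            if match >= self.rho:                            # vigilance test passed: resonance
                self.w[j] = (self.beta * np.minimum(i, self.w[j])
                             + (1 - self.beta) * self.w[j])
                return int(j)
        self.w.append(i.copy())                              # no category matched: create one
        return len(self.w) - 1

# Each input could be, e.g., occupancy and speed from two consecutive 30-s periods,
# normalised to [0, 1]. In the study's spirit, an incident alarm would be declared
# only when an incident-associated category persists for a set period (3 min).
```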


2008
Author(s): Kenneth Ranney, Hiralal Khatri, Jerry Silvious, Kwok Tom, Romeo del Rosario

2010, Vol 95 (Supplement 1), pp. Fa25-Fa25
Author(s): N. Farah, M. Kennelly, V. Donnelly, B. Stuart, M. Turner
