The Feeling of Music Past: How Listeners Remember Musical Affect

2004 ◽  
Vol 22 (1) ◽  
pp. 15-39 ◽  
Author(s):  
Alexander Rozin ◽  
Paul Rozin ◽  
Emily Goldberg

This study was conducted to determine how listeners derive global evaluations of past musical experiences from moment-to-moment experience. Participants produced moment-to-moment affective intensity ratings by pressing a pressure-sensitive button while listening to various selections. They later reported the remembered affective intensity of each example. The data suggest that the assumption that remembered affect equals the sum of all momentary affects fundamentally misrepresents how listeners encode and label past affective experiences. The duration of particular, rather than uniform, episodes contributes minimally to remembered affect (duration neglect). Listeners rely on the peak of affective intensity during a selection, the final moment, and moments that are more emotionally intense than the immediately preceding moments to determine postperformance ratings. The peak proves to be the strongest predictor of remembered affect. We derive a formula that takes moment-to-moment experience as input and predicts how listeners will remember musical affect. The formula is a better predictor of postperformance affect than any other on-line characteristic considered. Finally, the utility of the formula is demonstrated through a brief examination of compositional decisions in a string quartet movement by Borodin and one typical format of four-movement symphonies from the classical period.
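
For concreteness, here is a minimal sketch in Python of a predictor in this spirit. The combination rule and weights are illustrative placeholders, not the formula actually derived in the paper:

```python
import numpy as np

def remembered_affect(intensity, w_peak=0.5, w_end=0.3, w_rise=0.2):
    """Predict remembered affect from a moment-to-moment intensity series.

    Combines the peak intensity, the final moment, and the mean of
    'rises' (moments more intense than the immediately preceding one).
    The weights are illustrative placeholders, not fitted values.
    """
    x = np.asarray(intensity, dtype=float)
    peak = x.max()
    end = x[-1]
    rises = x[1:][x[1:] > x[:-1]]   # moments exceeding their predecessor
    rise = rises.mean() if rises.size else 0.0
    return w_peak * peak + w_end * end + w_rise * rise

# Example: a selection that builds to a climax and then ends quietly
print(remembered_affect([1, 2, 5, 9, 4, 2]))
```

Note how duration neglect falls out of this form: repeating a uniform stretch of the series changes neither the peak, the end, nor the rises, so the prediction is unchanged.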

2016 ◽  
Vol 45 (4) ◽  
pp. 600-608 ◽  
Author(s):  
Carolina Labbé ◽  
Donald Glowinski ◽  
Didier Grandjean

Scherer and Zentner (2001) propose that affective experiences might be the product of a multiplicative function of structural, performance, listener, and contextual features. Yet research on the effects of structure, and particularly texture, has mostly focused on perceived emotions. We therefore sought to test the effects of structural features on subjective musical experiences in a listening study by manipulating the performance format (solo versus ensemble) of five segments of a piece for string quartet, while also exploring the impact of listener features such as musical training, listening habits, and stable dispositions such as empathy. We found that participants (N = 144, 78% female; mean age = 22.74 years, SD = 5.13) felt like moving more (motor entrainment, ME) and perceived their physiological rhythms change more (visceral entrainment, VE) during ensemble compared to solo conditions. Moreover, ME significantly predicted positive emotions, such as Wonder and Power, while VE significantly predicted both positive and negative emotions, such as Tension and Nostalgia. We also found direct main and interaction effects of both segment and performance factors on all four emotion models. We believe these results support Scherer and Zentner's model and show the importance of considering the interaction between compositional and instrumental texture when studying music-induced emotions.
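
As an illustration of the kind of predictive relationship reported here, a minimal sketch of regressing an emotion rating on the two entrainment ratings (ordinary least squares on synthetic stand-in data; the paper's actual analyses may differ, e.g. mixed-effects models):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Synthetic stand-in data: per-participant entrainment and emotion ratings
n = 144
me = rng.uniform(1, 7, n)                       # motor entrainment rating
ve = rng.uniform(1, 7, n)                       # visceral entrainment rating
wonder = 0.4 * me + 0.2 * ve + rng.normal(0, 1, n)  # fabricated for illustration

# Does ME (and VE) predict the Wonder rating?
X = sm.add_constant(np.column_stack([me, ve]))
fit = sm.OLS(wonder, X).fit()
print(fit.summary())
```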


Author(s):  
Susan E. George

This chapter is concerned with a novel pen-based interface for (handwritten) music notation. The chapter surveys the current scope of on-line (or dynamic) handwritten input of music notation, presenting the outstanding problems in recognition. A solution using multi-layer perceptron (MLP) artificial neural networks is presented, with experiments in music symbol recognition drawn from a study in which some 25 people wrote notation on a pressure-sensitive digitiser. Results suggest that a voting system among networks trained to recognize individual symbols produces the best recognition rate, on the order of 92% for correctly recognising a positive example of a symbol and 98% for correctly rejecting a negative example of the symbol. The chapter then discusses how this approach can be used in an interface for a pen-based music editor. The motivation for this chapter includes (i) the practical need for a pen-based interface capable of recognizing unconstrained handwritten music notation, (ii) the theoretical challenges that such a task presents for pattern recognition, and (iii) the continued neglect of this topic in both academic and commercial respects.
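
A minimal sketch of such a per-symbol voting scheme, using scikit-learn MLPs (the chapter's own network architecture, features, and symbol set are not specified here; the feature vectors and class names below are placeholders):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Placeholder data: each row is a feature vector extracted from one
# handwritten stroke sequence; labels name the intended symbol.
rng = np.random.default_rng(0)
X_train = rng.random((200, 32))
y_train = rng.choice(["treble_clef", "quarter_note", "sharp"], 200)

# One binary MLP per symbol, trained to accept its own symbol and
# reject everything else (one-vs-rest).
nets = {}
for symbol in np.unique(y_train):
    net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500)
    net.fit(X_train, (y_train == symbol).astype(int))
    nets[symbol] = net

def classify(x):
    """Vote among the per-symbol networks; highest acceptance score wins."""
    scores = {s: net.predict_proba(x.reshape(1, -1))[0, 1]
              for s, net in nets.items()}
    return max(scores, key=scores.get)

print(classify(rng.random(32)))
```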


2017 ◽  
Vol 4 (3) ◽  
pp. 233-256 ◽  
Author(s):  
Jinying Li

This essay interrogates the transmedial, transnational expansion of platforms by analyzing the mediation functions and affective experiences of a discursive interface, danmaku. Danmaku is a distinctive interface design, originally featured on the Japanese video-sharing platform Niconico, that renders user comments flying over videos on screen. The danmaku interface has been widely adopted in China by video-streaming websites, social media, and theatrical film exhibitions. Examining the fundamental incoherence that is structured by the interface – the incoherence between content and platform, between the temporal experiences of pseudo-liveness and spectral past – the essay underlines the notion of 'contact' as the central logic of platforms and argues that danmaku functions as a volatile contact zone among conflicting modes, logics, and structures of digital media. Such contested contacts generate an affective intensity of media regionalism, in which the transmedial/transnational processes managed by platforms in material/textual traffic are mapped by the flow of affect on the interface.


Author(s):  
William Krakow

In the past few years, on-line digital-television frame-store devices coupled to computers have been employed to measure the microscope parameters of defocus and astigmatism. The ultimate goal of such tasks is to fully adjust the operating parameters of the microscope and obtain an optimum image for viewing in terms of its information content. The initial approach to this problem, for high-resolution TEM imaging, was to obtain the power spectrum from the Fourier transform of an image, find the contrast transfer function oscillation maxima, and subsequently correct the image. This technique requires a fast computer, a direct-memory-access device, and even an array processor to accomplish these tasks on limited-size arrays in a few seconds per image. It is not clear that the power spectrum could be used for more than defocus correction, since the correction of astigmatism is a formidable pattern-recognition problem.
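
A minimal modern sketch of the first step described above: computing the radially averaged power spectrum of an image, whose interior local maxima trace the contrast-transfer-function oscillations (NumPy; the image and array sizes are illustrative):

```python
import numpy as np

def radial_power_spectrum(image):
    """Radially averaged power spectrum of a micrograph. The positions of
    its oscillation maxima (Thon rings) are what a CTF-fitting routine
    uses to estimate defocus."""
    f = np.fft.fftshift(np.fft.fft2(image))
    power = np.abs(f) ** 2
    h, w = image.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h / 2, x - w / 2).astype(int)   # radius of each pixel
    sums = np.bincount(r.ravel(), weights=power.ravel())
    counts = np.bincount(r.ravel())
    return sums / np.maximum(counts, 1)

profile = radial_power_spectrum(np.random.rand(256, 256))
# Interior local maxima of `profile` approximate the CTF oscillation maxima.
peaks = np.where((profile[1:-1] > profile[:-2]) &
                 (profile[1:-1] > profile[2:]))[0] + 1
print(peaks)
```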


Author(s):  
A.M.H. Schepman ◽  
J.A.P. van der Voort ◽  
J.E. Mellema

A Scanning Transmission Electron Microscope (STEM) was coupled to a small computer. The system (see Fig. 1) was built using a Philips EM400, equipped with a scanning attachment, and a DEC PDP11/34 computer with 34K memory. The gun (Fig. 2) consists of a continuously renewed tip, of radius 0.2 to 0.4 μm, of a tungsten wire heated just below its melting point by a focussed laser beam (1). On-line operation procedures were developed with the aim of reducing the radiation dose to the specimen area of interest while the various imaging parameters are selected and the information content is registered. Whereas the theoretical limiting spot size is 0.75 nm (2), routine resolution checks showed minimum distances on the order of 1.2 to 1.5 nm between corresponding intensity maxima in successive scans. This value is sufficient for structural studies of regular biological material and for testing the performance of STEM against high-resolution CTEM.


Author(s):  
Neil Rowlands ◽  
Jeff Price ◽  
Michael Kersker ◽  
Seichi Suzuki ◽  
Steve Young ◽  
...  

Three-dimensional (3D) microstructure visualization on the electron microscope requires that the sample be tilted to different positions to collect a series of projections. This tilting should be performed rapidly for on-line stereo viewing and precisely for off-line tomographic reconstruction. Usually a projection series is collected using mechanical stage tilt alone. The stereo pairs must then be viewed off-line, and the 60 to 120 tomographic projections must be aligned with fiducial markers or digital correlation methods. The delay in viewing stereo pairs and the alignment problems in tomographic reconstruction could be eliminated or reduced by tilting the beam, if such tilt could be accomplished without image translation.

A microscope capable of beam tilt with simultaneous image shift to eliminate tilt-induced translation has been investigated for 3D imaging of thick (1 μm) biological specimens. By tilting the beam above and through the specimen and bringing it back below the specimen, a brightfield image with a projection angle corresponding to the beam-tilt angle can be recorded (Fig. 1a).
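
The geometry behind "tilt-induced translation" can be made concrete with a small sketch: in a simple pivot-point model, a feature plane at height z above the pivot appears displaced by z·tan θ when the beam is tilted by θ, and that is the displacement the simultaneous image shift must cancel. This is an illustrative geometric model, not the authors' calibration procedure:

```python
import math

def compensating_shift(tilt_deg, z_um):
    """Image shift (in um) that cancels the translation induced by tilting
    the beam by `tilt_deg`, for a feature plane `z_um` above the tilt
    pivot point. Simple geometric model: shift = z * tan(theta)."""
    return z_um * math.tan(math.radians(tilt_deg))

# Example: a +/-5 degree stereo pair through a 1 um thick specimen
for angle_deg in (-5.0, 5.0):
    print(f"{angle_deg:+.0f} deg -> shift {compensating_shift(angle_deg, 1.0):+.4f} um")
```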


Author(s):  
G.Y. Fan ◽  
J.M. Cowley

In recent developments, the ASU HB5 has been modified so that the timing, positioning, and scanning of the finely focused electron probe can be entirely controlled by a host computer. This made asynchronous handshaking possible between the HB5 STEM and the image-processing system, which consists of the host computer (PDP 11/34), a DeAnza image processor (IP 5000) interfaced with a low-light-level TV camera, an array processor (AP 400), and various peripheral devices. This greatly facilitates the pattern-recognition technique initiated by Monosmith and Cowley. Software called NANHB5 is under development which, instead of employing a set of photodiodes to detect strong spots on a TV screen, uses various software techniques, including on-line fast Fourier transform (FFT), to recognize patterns of greater complexity, taking advantage of the sophistication of our image-processing system and the flexibility of computer software.
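
A minimal sketch of the software stand-in for the photodiode array described above: locating strong spots in a digitized frame as local maxima above a threshold (SciPy; the actual NANHB5 routines are not published here, so this is purely illustrative):

```python
import numpy as np
from scipy.ndimage import maximum_filter

def find_spots(frame, threshold, size=5):
    """Return (row, col) coordinates of strong local maxima in a frame,
    a software replacement for photodiodes watching a TV screen."""
    local_max = maximum_filter(frame, size=size) == frame
    return np.argwhere(local_max & (frame > threshold))

frame = np.random.rand(128, 128)
print(find_spots(frame, threshold=0.99))
```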


Author(s):  
John F. Mansfield ◽  
Douglas C. Crawford

A method has been developed that allows on-line measurement of the thickness of crystalline materials in the analytical electron microscope. Two-beam convergent-beam electron diffraction (CBED) patterns are digitized from a JEOL 2000FX electron microscope into an Apple Macintosh II microcomputer via a Gatan #673 CCD video camera and an Imaging Systems Technology Video 1000 frame-capture board. It is necessary to know the lattice parameters of the sample, since the spacings of the diffraction discs are measured in order to calibrate the pattern. The sample thickness is calculated from measurements of the spacings of the fringes seen in the diffraction discs. This technique was pioneered by Kelly et al., who used the two-beam dynamical theory of MacGillavry to relate the deviation parameter (s_i) of the ith fringe from the exact Bragg condition to the specimen thickness (t) with the equation

$$\frac{s_i^2}{n_i^2} + \frac{1}{n_i^2\,\xi_g^2} = \frac{1}{t^2}$$

where ξ_g is the extinction distance for that reflection and n_i is an integer.
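
The standard way to use this relation is to plot (s_i/n_i)² against 1/n_i²: the points fall on a straight line whose intercept is 1/t² and whose slope is -1/ξ_g². A minimal sketch of that fit follows; the deviation-parameter values below are synthetic, chosen to correspond to t ≈ 50 nm and ξ_g ≈ 100 nm:

```python
import numpy as np

def thickness_from_fringes(s, n):
    """Kelly et al. linear fit: (s_i/n_i)^2 = -(1/xi_g^2)(1/n_i^2) + 1/t^2.
    Returns (thickness t, extinction distance xi_g) in the units of 1/s."""
    s, n = np.asarray(s, float), np.asarray(n, float)
    x = 1.0 / n**2                 # abscissa
    y = (s / n) ** 2               # ordinate
    slope, intercept = np.polyfit(x, y, 1)
    t = 1.0 / np.sqrt(intercept)
    xi_g = 1.0 / np.sqrt(-slope)   # slope must be negative
    return t, xi_g

# Synthetic deviation parameters (nm^-1) for fringes i = 1..3
s_i = [0.017321, 0.038730, 0.059161]
n_i = [1, 2, 3]
print(thickness_from_fringes(s_i, n_i))   # approx. (50.0, 100.0) nm
```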

