A Task-oriented Approach to Art Therapy in Trauma Treatment

Art Therapy ◽  
2003 ◽  
Vol 20 (3) ◽  
pp. 138-147 ◽  
Author(s):  
Anita B. Rankin ◽  
Lindsey C. Taucher
PLoS ONE ◽  
2017 ◽  
Vol 12 (12) ◽  
pp. e0188642 ◽  
Author(s):  
Johanna Jonsdottir ◽  
Rune Thorsen ◽  
Irene Aprile ◽  
Silvia Galeri ◽  
Giovanna Spannocchi ◽  
...  

2013 ◽  
Vol 32 (6) ◽  
pp. 404-408 ◽  
Author(s):  
Catherine S. Shaker

Although studies have shown that cue-based feeding can lead to earlier achievement of full oral feeding, its successful implementation has been constrained by the volume-driven culture that has existed for many years in the NICU. This culture was built on the notion that a “better” nurse is one who can “get more in,” and that infants who are “poor feeders” are ones who “can’t take enough.” In this task-oriented approach, the infant who feeds faster is often viewed as more skilled, and the feeding relationship and the infant’s communication about the experience of feeding may not be nurtured. This article explains the central role of the preterm infant’s communication in successful cue-based feeding. When the infant is perceived as having meaningful behavior (i.e., communicative intent), the focus changes from a volume-driven to a co-regulated approach, through which the infant guides the caregiver. This is cue-based feeding.


Author(s):  
D. Ivanko ◽  
D. Ryumin

Abstract. Visual information plays a key role in automatic speech recognition (ASR) when the audio is corrupted by background noise or is entirely unavailable. Speech recognition using visual information is called lip-reading. The initial idea of visual speech recognition comes from human experience: we are able to recognize spoken words by observing a speaker's face with limited or no access to the audio. Based on the conducted experimental evaluations, as well as on an analysis of the research field, we propose a novel task-oriented approach to practical lip-reading system implementation. Its main purpose is to serve as a roadmap for researchers who need to build a reliable visual speech recognition system for their task. As a rough approximation, the task of lip-reading can be divided into two parts, depending on the complexity of the problem: first, recognizing isolated words, numbers, or short phrases (e.g., telephone numbers with a strict grammar, or keywords); and second, recognizing continuous speech (phrases or sentences). All of these stages are described in detail in this paper. Based on the proposed approach, we implemented from scratch automatic visual speech recognition systems of three different architectures: GMM-CHMM, DNN-HMM, and purely end-to-end. A description of the methodology, tools, step-by-step development, and all necessary parameters is provided in detail in the current paper. It is worth noting that such systems were created for Russian speech recognition for the first time.
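To make the "purely end-to-end" architecture family concrete, the sketch below shows a minimal lip-reading model in PyTorch: a 3D-CNN frontend over mouth-region crops, a bidirectional GRU over the time axis, and a CTC output layer. This is an illustrative assumption only; the layer sizes, vocabulary size, and training wiring are hypothetical and do not reproduce the authors' GMM-CHMM, DNN-HMM, or end-to-end systems.

```python
# Hypothetical minimal sketch of an end-to-end lip-reading model
# (3D-CNN frontend + bidirectional GRU + CTC). All sizes are illustrative.
import torch
import torch.nn as nn

class LipReader(nn.Module):
    def __init__(self, vocab_size: int, hidden: int = 256):
        super().__init__()
        # Spatio-temporal frontend over grayscale mouth crops:
        # input shape (batch, 1, time, height, width).
        self.frontend = nn.Sequential(
            nn.Conv3d(1, 32, kernel_size=(3, 5, 5), stride=(1, 2, 2), padding=(1, 2, 2)),
            nn.ReLU(),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),
        )
        # Sequence model over the preserved time axis.
        self.rnn = nn.GRU(input_size=32, hidden_size=hidden,
                          num_layers=2, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, vocab_size + 1)  # +1 for the CTC blank

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, 1, T, H, W)
        feats = self.frontend(frames)                 # (batch, 32, T, H', W')
        feats = feats.mean(dim=(3, 4))                # global spatial pooling -> (batch, 32, T)
        feats = feats.transpose(1, 2)                 # (batch, T, 32)
        out, _ = self.rnn(feats)                      # (batch, T, 2*hidden)
        return self.classifier(out).log_softmax(-1)   # per-frame log-probs for CTC

# Toy forward/backward pass with random tensors, just to show the training wiring.
model = LipReader(vocab_size=40)
video = torch.randn(2, 1, 75, 64, 64)                # 2 clips, 75 frames, 64x64 crops
log_probs = model(video).transpose(0, 1)              # CTC expects (T, batch, classes)
targets = torch.randint(1, 41, (2, 10))               # dummy label sequences (blank = 0)
input_lens = torch.full((2,), log_probs.size(0), dtype=torch.long)
target_lens = torch.full((2,), 10, dtype=torch.long)
loss = nn.CTCLoss(blank=0)(log_probs, targets, input_lens, target_lens)
loss.backward()
```

For the small-vocabulary case (isolated words, digits, or keyword phrases with a strict grammar), the same frontend could instead feed a simple per-clip classifier or an HMM decoder; the end-to-end CTC setup above is more typical of the continuous-speech case.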

