Video segment indexing through classification and interactive view-based query

Author(s):  
John Chung-Mong Lee ◽  
Wei Xiong ◽  
Ding-Gang Shen ◽  
Ruihua Ma
2010 ◽  
Vol 1 ◽  
Author(s):  
Michael McCarthy

Abstract: An important priority for the English Profile programme is to incorporate empirical evidence of the spoken language into the Common European Framework (CEFR). At present, the CEFR descriptors relating to the spoken language include references to fluency and its development as the learner moves from one level to another. This article offers a critique of the monologic bias of much of our current approach to spoken fluency. Fluency undoubtedly involves a degree of automaticity and the ability to quickly retrieve ready-made chunks of language. However, fluency also involves the ability to create flow and smoothness across turn-boundaries and can be seen as an interactive phenomenon in discourse. The article offers corpus evidence for the notion of confluence, that is, the joint production of flow by more than one speaker, focusing in particular on turn-openings and closings. It considers the implications of an interactive view of fluency for pedagogy, for assessment, and in the broader social context.


2002 ◽  
Vol 11 (5) ◽  
pp. 497-508 ◽  
Author(s):  
A.M. Ferman ◽  
A.M. Tekalp ◽  
R. Mehrotra

2006 ◽  
Vol 17 (1) ◽  
pp. 25-34 ◽  
Author(s):  
Janine Möbes ◽  
Jürgen Lambrecht ◽  
Wido Nager ◽  
Andreas Büchner ◽  
Anke Lesinski-Schiedat ◽  
...  

Abstract: Electrical stimulation of the auditory nerve via electrodes implanted in the cochlea (cochlear implant: CI) can restore the ability to perceive acoustic speech sounds in deafened individuals. Because of the reduced acoustic quality of the signals, CI users draw on additional visual information. Acoustic speech stimuli (two-syllable nouns) were presented simultaneously with a video segment showing the speaker's face, which articulated either information congruent with the acoustic word (e.g., audio: Hotel, video: Hotel) or incongruent information (e.g., audio: Hotel, video: Insel). Analysis of the behavioural data showed that CI patients benefit markedly from the additional presentation of the speaker's face when understanding speech sounds. Normal-hearing listeners also use visual information, especially when the acoustic signals are noisy and hard to understand. Audiovisual speech processing elicits different amplitude patterns in the event-related potential in CI users and in normal-hearing listeners. Differences emerge above all in the occipital region, which can be interpreted as reorganisation following auditory deprivation in CI patients.


Author(s):  
Yoshiya Ishida ◽  
Yuu Arimatsu ◽  
Lyu Kaixie ◽  
Go Takagi ◽  
Kunihiro Noda ◽  
...  

Author(s):  
Kevin J. Gucwa ◽  
Harry H. Cheng

This paper describes in detail the design of RoboSim, a virtual environment for modular robots that drives simulated robots with the same code written for the hardware robots, without modification, along with its applications in educational settings. RoboSim integrates with the Ch programming environment, a C/C++ interpreter, which lets users control robots remotely through interpreted C/C++ code and alternate between hardware and virtual robots without changing that code. The open-source projects Open Dynamics Engine, OpenSceneGraph, and Qt are employed to produce the virtual environment and the user interface, enabling RoboSim to run on all major software platforms. The software is organized into task-specific library modules, so the simulation library and the Graphical User Interface (GUI) link against only the libraries they need. The GUI links against the graphical library and the XML library to give an interactive view of the RoboSim Scene as users add robots and obstacles to both the GUI and the simulation. Executing Ch code generates a new RoboSim Scene window that runs the entire simulation, using the simulation, graphical, XML, and callback libraries, and mirrors the Scene shown in the GUI; this window lets the user view and interact with the progress of the simulation.


2005 ◽  
Vol 25 (1) ◽  
pp. 48-53 ◽  
Author(s):  
Y. Kakehi ◽  
M. Iida ◽  
T. Naemura ◽  
Y. Shirai ◽  
M. Matsushita ◽  
...  
