Gesture Understanding
Recently Published Documents


TOTAL DOCUMENTS: 8 (five years: 2)
H-INDEX: 4 (five years: 1)

2017 · Vol. 89 (3) · pp. e245–e260
Author(s): Elizabeth M. Wakefield, Miriam A. Novack, Susan Goldin-Meadow

2008 · Vol. 02 (01) · pp. 5–19
Author(s): Ronnie B. Wilbur, Evguenia Malaia

This paper considers neurological, formational, and functional similarities between gestures and signed verb predicates. From the analysis of verb sign movement, we offer suggestions for analyzing gestural movement (motion capture, kinematic analysis, trajectory-internal structure). From the analysis of verb sign distinctions, we offer suggestions for analyzing the functions of co-speech gesture.
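As a concrete illustration of the kinematic analysis the abstract points to, here is a minimal sketch, not code from the paper itself: it derives per-frame speed and acceleration from a motion-capture trajectory, the kind of trajectory-internal structure mentioned above. The sampling rate, the synthetic reach, and the function name are all assumptions made for the example.

```python
import numpy as np

def kinematic_profile(positions: np.ndarray, fps: float):
    """positions: (T, 3) array of 3D marker samples recorded at `fps` Hz.
    Returns per-frame speed (m/s) and acceleration magnitude (m/s^2)."""
    dt = 1.0 / fps
    velocity = np.gradient(positions, dt, axis=0)   # (T, 3) finite differences
    speed = np.linalg.norm(velocity, axis=1)        # (T,)
    accel = np.gradient(velocity, dt, axis=0)       # (T, 3)
    return speed, np.linalg.norm(accel, axis=1)

# Synthetic 1-second reach with a smooth, bell-shaped speed profile.
fps = 120.0
t = np.arange(120) / fps
x = 0.5 * (1.0 - np.cos(np.pi * t))                 # 0 m -> 1 m reach
traj = np.stack([x, np.zeros_like(x), np.zeros_like(x)], axis=1)
speed, accel_mag = kinematic_profile(traj, fps)
print(f"peak speed {speed.max():.2f} m/s at frame {speed.argmax()}")
```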


2005 · Vol. 02 (02) · pp. 181–201
Author(s): Donald Sofge, Magdalena Bugajska, J. Gregory Trafton, Dennis Perzanowski, Scott Thomas, ...

One of the great challenges of putting humanoid robots into space is developing cognitive capabilities for the robots, with an interface that lets human astronauts collaborate with them as naturally and efficiently as they would with other astronauts. In this joint effort with NASA and the entire Robonaut team, we are integrating natural language and gesture understanding, spatial reasoning that incorporates features such as human–robot perspective taking, and cognitive model-based understanding to achieve a high level of human–robot interaction. Building greater autonomy into the robot frees the human operators from focusing strictly on the demands of operating it and instead lets them collaborate actively with the robot on the task at hand. By using representations shared between human and robot, and by enabling the robot to assume the human's perspective, the humanoid robot may become a more effective collaborator with a human astronaut in achieving mission objectives in space.
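The abstract does not spell out how perspective taking is computed; the following is a hypothetical sketch, not the Robonaut implementation, showing one common way to ground a human-centered spatial reference ("half a meter to my left") in the robot's own frame via a rigid-body transform. All names, frames, and numbers are illustrative assumptions.

```python
import numpy as np

def yaw_rotation(theta: float) -> np.ndarray:
    """Rotation about the vertical (z) axis by `theta` radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def human_to_robot(point_in_human: np.ndarray,
                   human_pos: np.ndarray,
                   human_yaw: float) -> np.ndarray:
    """Map a point expressed in the human's body frame into the robot's
    frame, given the human's position and heading in that frame."""
    return yaw_rotation(human_yaw) @ point_in_human + human_pos

# The astronaut stands 2 m ahead of the robot, facing it (yaw = pi),
# and refers to an object 0.5 m to their left (+y in their own frame).
target = human_to_robot(np.array([0.0, 0.5, 0.0]),
                        human_pos=np.array([2.0, 0.0, 0.0]),
                        human_yaw=np.pi)
print(target)   # approx [2.0, -0.5, 0.0]: on the robot's right, not its left
```

The sign flip in the result is the whole point of perspective taking: "my left" and "your left" name different locations once the two agents face each other.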


1997 · Vol. 352 (1358) · pp. 1257–1265
Author(s): Aaron F. Bobick

This paper presents several approaches to the machine perception of motion and discusses the role and levels of knowledge in each. In particular, techniques of motion understanding are characterized as focusing on one of movement, activity, or action. Movements are the most atomic primitives, requiring no contextual or sequence knowledge to be recognized; movement is often addressed using either view-invariant or view-specific geometric techniques. Activity refers to sequences of movements or states, where the only real knowledge required is the statistics of the sequence; much of the recent work in gesture understanding falls within this category of motion perception. Finally, actions are larger-scale events, which typically include interaction with the environment and causal relationships; action understanding straddles the grey division between perception and cognition, between computer vision and artificial intelligence. These levels are illustrated with examples drawn mostly from the group's work on understanding motion in video imagery. It is argued that the utility of such a division is that it makes explicit the representational competencies and manipulations necessary for perception.
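The paper defines the activity level by "the statistics of the sequence" without prescribing a model; hidden Markov models were the usual choice in gesture recognition of that era, so here is a minimal, illustrative forward-algorithm sketch for scoring a quantized movement sequence under one such model. The states, symbols, and probabilities are invented for the example.

```python
import numpy as np

def forward_log_likelihood(obs, pi, A, B) -> float:
    """log P(obs | HMM) via the scaled forward algorithm.
    obs: list of discrete observation-symbol indices
    pi:  (N,) initial state distribution
    A:   (N, N) transitions, A[i, j] = P(state j | state i)
    B:   (N, M) emissions,   B[i, k] = P(symbol k | state i)"""
    alpha = pi * B[:, obs[0]]
    log_like = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # predict next state, weight by emission
        c = alpha.sum()                 # rescale to avoid numerical underflow
        log_like += np.log(c)
        alpha = alpha / c
    return log_like

# Toy two-state gesture model: state 0 = "hold", state 1 = "stroke";
# observations are quantized wrist speed: 0 = still, 1 = moving.
pi = np.array([0.6, 0.4])
A  = np.array([[0.7, 0.3],
               [0.4, 0.6]])
B  = np.array([[0.9, 0.1],
               [0.2, 0.8]])
obs = [0, 0, 1, 1, 1, 0]
print(f"log-likelihood: {forward_log_likelihood(obs, pi, A, B):.3f}")
```

In a recognizer, one such model would be trained per gesture class and an observed sequence assigned to the class whose model scores it highest.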

