Cooperative phenomena in apparent movement perception of random-dot cinematograms

1984 ◽  
Vol 24 (12) ◽  
pp. 1781-1788 ◽  
Author(s):  
Jih Jie Chang ◽  
Bela Julesz

2002 ◽  
Vol 718 ◽  
Author(s):  
Jian Yu ◽  
X. J. Meng ◽  
J.L. Sun ◽  
G.S. Wang ◽  
J.H. Chu

Abstract: In this paper, size-induced ferroelectricity weakening, phase transformation, and anomalous lattice expansion are observed in nanocrystalline BaTiO3 (nc-BaTiO3) derived by low-temperature hydrothermal methods, and they are well understood in terms of the long-range interaction and its cooperative phenomena altered by particle size in covalent ionic nanocrystals. In cubic nc-BaTiO3, five Raman-active modes centered at 186, 254, 308, 512 and 716 cm-1 are observed in the cubic nanophase, and they are attributed to a local rhombohedral distortion that breaks inversion symmetry. The 254 and 308 cm-1 modes are significantly affected not only by the concentration of hydroxyl defects but also by their particular configuration. The 806 cm-1 mode is found to be closely associated with OH- absorbed on grain boundaries.


Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3722
Author(s):  
Byeongkeun Kang ◽  
Yeejin Lee

Motion in videos refers to the pattern of apparent movement of objects, surfaces, and edges across image sequences caused by the relative movement between a camera and a scene. Motion, like scene appearance, is an essential feature for estimating a driver’s visual attention allocation in computer vision. However, while driver attention prediction models focusing on scene appearance have been well studied, the role of motion as a crucial factor in estimating a driver’s attention has not been thoroughly studied in the literature. Therefore, in this work, we investigate the usefulness of motion information in estimating a driver’s visual attention. To analyze its effectiveness, we develop a deep neural network framework that predicts attention locations and attention levels from optical flow maps, which represent the movement of content in videos. We validate the proposed motion-based prediction model by comparing its performance with that of current state-of-the-art prediction models that use RGB frames. Experimental results on a real-world dataset confirm our hypothesis that motion contributes to prediction accuracy, and that there is a further margin for accuracy improvement by using motion features.
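
As a rough illustration of this kind of pipeline (this is a minimal sketch, not the authors’ model; the Farneback flow step, the layer sizes, and every name below are assumptions made only for the example), a motion-based attention predictor can be prototyped by converting consecutive frames into a dense optical-flow map and passing that two-channel map through a small encoder-decoder network that outputs per-pixel attention levels:

# Minimal sketch (not the authors' architecture): dense optical flow from
# consecutive frames, fed to a toy encoder-decoder CNN that outputs a
# per-pixel attention map. All layer sizes and names are illustrative.
import cv2
import numpy as np
import torch
import torch.nn as nn

def flow_map(prev_bgr: np.ndarray, next_bgr: np.ndarray) -> np.ndarray:
    """Dense Farneback optical flow, returned as an (H, W, 2) float array."""
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2,
                                        flags=0)

class FlowAttentionNet(nn.Module):
    """Toy encoder-decoder: 2-channel flow in, 1-channel attention map out."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, flow: torch.Tensor) -> torch.Tensor:
        # Sigmoid bounds each pixel's predicted attention level to [0, 1].
        return torch.sigmoid(self.decoder(self.encoder(flow)))

if __name__ == "__main__":
    # Two random frames stand in for consecutive video frames.
    prev_f = np.random.randint(0, 255, (128, 128, 3), dtype=np.uint8)
    next_f = np.random.randint(0, 255, (128, 128, 3), dtype=np.uint8)
    flow = flow_map(prev_f, next_f)                             # (128, 128, 2)
    x = torch.from_numpy(flow).permute(2, 0, 1)[None].float()  # (1, 2, H, W)
    attention = FlowAttentionNet()(x)                           # (1, 1, H, W)
    print(attention.shape)

In practice the flow estimator and the network would be chosen and trained on the driving dataset in question; the point of the sketch is only that motion enters the predictor as an explicit two-channel input rather than being inferred from RGB appearance alone.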

