Behavioural Specification for Hierarchical Object Composition

Author(s):
Răzvan Diaconescu
2020
pp. 243-260

Author(s):
L. V. Ozolinya

For the first time, the paper analyses the object in the Oroc language as a syntactic unit combining the semantic and functional aspects of transitive and intransitive verbs. In the Manchu-Tungus languages, the object is found to be expressed in morphological case forms: the direct object in the accusative case and in the possessive forms of the designative case, the indirect object in the forms of oblique cases. Constructions with indirect objects, whose positions are filled by case forms of nouns, designate objects at which the action is aimed, objects from which the action proceeds or is averted, object-addressees, object-instruments, etc. Both transitive and intransitive verbs can occupy the position of the predicate. The necessary (direct object) and permissible (indirect object) composition of a verb's objects is determined by its valences: bivalent verbs open a subjective (subject) valence and an objective (direct object) valence; trivalent verbs open subjective, subjective-objective (part of the subject, or indirect subject) and objective (indirect object) valences.


2020
Vol 10 (23)
pp. 8679
Author(s):
Jaehyun Lee
Sungjae Ha
Philippe Gentet
Leehwan Hwang
Soonchul Kwon
...

As highly immersive virtual reality (VR) content, 360° video allows users to look in any desired direction from the position where the video was recorded. In 360° video content, virtual objects are inserted into recorded real scenes to provide a higher sense of immersion; this technique is called 3D composition. For realistic 3D composition in a 360° video, it is important to obtain the internal (focal length) and external (position and rotation) parameters of the 360° camera. Traditional methods estimate the trajectory of the camera by extracting feature points from the recorded video. However, incorrect results may occur owing to stitching errors, since a 360° camera combines footage from several high-resolution cameras in a stitching process, and a large amount of time is spent on feature tracking owing to the high resolution of the video. We propose a new method for pre-visualization and 3D composition that overcomes these limitations. The system achieves real-time position tracking of the attached camera using a ZED stereo-vision camera, and real-time stabilization using a Kalman filter. The proposed system shows high time efficiency and accurate 3D composition.
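
To illustrate the stabilization step mentioned in the abstract, the following is a minimal sketch, not the authors' implementation, of smoothing a noisy per-frame camera position track with a constant-velocity Kalman filter before compositing virtual objects. The class name, the 30 fps time step, and the noise parameters are illustrative assumptions.

```python
# Minimal sketch (assumption, not the paper's code): constant-velocity Kalman
# filter that smooths a noisy 3D camera position track reported per frame.
import numpy as np

class ConstantVelocityKalman:
    """State [x, y, z, vx, vy, vz]; only the position is measured."""

    def __init__(self, dt=1.0 / 30.0, process_var=1e-3, meas_var=1e-2):
        self.x = np.zeros(6)                     # state estimate
        self.P = np.eye(6)                       # state covariance
        self.F = np.eye(6)                       # constant-velocity transition
        self.F[:3, 3:] = dt * np.eye(3)
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])  # observe position only
        self.Q = process_var * np.eye(6)         # process noise
        self.R = meas_var * np.eye(3)            # measurement noise

    def step(self, z):
        # Predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with the measured position z (3-vector)
        y = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P
        return self.x[:3]                        # smoothed position

if __name__ == "__main__":
    # Synthetic demo: a straight-line motion corrupted by sensor jitter.
    rng = np.random.default_rng(0)
    truth = np.stack([np.linspace(0, 1, 90), np.zeros(90), np.zeros(90)], axis=1)
    noisy = truth + rng.normal(scale=0.05, size=truth.shape)
    kf = ConstantVelocityKalman()
    smoothed = np.array([kf.step(z) for z in noisy])
    print("raw jitter:     ", np.std(noisy - truth))
    print("smoothed jitter:", np.std(smoothed - truth))
```

A constant-velocity model is the simplest common choice for this kind of smoothing; in practice the measurement noise would be tuned to the jitter of the tracking sensor, and camera orientation would be filtered separately (e.g. on quaternions).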

