Multiple Views

Author(s):  
Chi Chung Ko ◽  
Chang Dong Cheng

Our discussions in previous chapters have centered on the creation and interaction of visual objects in a virtual 3D world. The objects and scenes constructed, however, will ultimately have to be shown on appropriate display devices, such as a single PC monitor, a stereoscopic head-mounted display (HMD), or a multi-screen projection system (Salisbury, Farr, & Moore, 1999). Quite often, certain applications also require different views of the created universe to be shown at the same time. Even on a single PC monitor, showing different views of the same objects in different windows can be instructive and informative, and may be essential in some cases. While we have used a single simple view in earlier chapters, Java 3D has inherent capabilities for providing multiple views of the created 3D world, supporting, say, head-tracking HMD systems that allow the user to carry out 3D navigation (Yabuki, Machinaka, & Li, 2006). In this chapter, we will discuss how multiple views can be readily generated, after outlining the view model and the various components that make up the simple universe view used previously.
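As a rough sketch of the idea, each additional view in Java 3D pairs its own View object and Canvas3D with a separate ViewPlatform placed in the scene graph. The helper class and method names below are illustrative, not from the chapter, and the transform passed in is an arbitrary placement for the extra viewpoint.

```java
import javax.media.j3d.*;
import com.sun.j3d.utils.universe.SimpleUniverse;

// Hypothetical helper: adds a second, independently positioned view
// of the same scene held by a SimpleUniverse.
public class SecondViewFactory {
    public static Canvas3D addView(SimpleUniverse universe, Transform3D pose) {
        // A canvas for the new window or screen region.
        Canvas3D canvas =
                new Canvas3D(SimpleUniverse.getPreferredConfiguration());

        // Each extra view needs its own ViewPlatform node in the scene graph.
        ViewPlatform platform = new ViewPlatform();
        TransformGroup tg = new TransformGroup(pose);
        tg.addChild(platform);
        BranchGroup bg = new BranchGroup();
        bg.addChild(tg);
        universe.getLocale().addBranchGraph(bg);

        // The View object renders the world as seen from that platform.
        View view = new View();
        view.setPhysicalBody(new PhysicalBody());
        view.setPhysicalEnvironment(new PhysicalEnvironment());
        view.addCanvas3D(canvas);
        view.attachViewPlatform(platform);
        return canvas;
    }
}
```

The returned Canvas3D can then be placed in any AWT or Swing container, so the same universe is rendered simultaneously from two viewpoints.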

2021 ◽  
pp. 129-147
Author(s):  
T. A. Mirvoda ◽  
M. V. Stroganov ◽  

(Discussion based on the materials of the defense of T. A. Mirvoda's PhD thesis "Poetics of a Modern Children's 'Scary' Narrative in Oral Tradition and on the Internet.") Within the framework of the conversation, the article discusses the psychological background of the emergence and existence of scary stories in the modern online space, which are accompanied by audio-visual objects and ritual practices and which, on a par with oral folk art of similar themes, form the network mythology of horrors (creepypasta). The article highlights the principles of genre stratification of children's "scary" narrative folklore and the creation of a corresponding index of characters and plots, as well as the differentiation between scary stories and evocations, as proposed in T. A. Mirvoda's dissertation research.


Author(s):  
Chi Chung Ko ◽  
Chang Dong Cheng

How the properties of virtual 3D objects can be specified and defined has been discussed in earlier chapters. However, how a certain virtual object appears to the user will in general also depend on human visual impression and perception, which in turn depend to a large extent on the lighting used for illumination. As an example, watching a movie in a dark theatre and under direct sunlight gives rise to different feelings of immersion even though the scenes are the same. Thus, in addition to defining the skeleton of a virtual object by using geometry objects in Java 3D in Chapter III, setting the appearance attributes in Chapter IV, and applying texture in Chapter V to give a realistic skin to the virtual object, appropriate environmental elements such as light, background, and even fog are often necessary to make the virtual object appear as realistic to the user as possible. In this chapter, we will discuss topics related to these environmental issues. The use of proper lighting is crucial to ensure that the 3D universe created feels realistic and makes a strong emotional impression in any application. For this purpose, Java 3D has a variety of light sources that can be selected and tailored to different scenarios. Technically, light rays are not rendered; their effects become visible only once they hit an object and reflect toward the viewer. Of course, as with any object in the real world, the reflection depends on the material attributes of the objects. In this chapter, we will discuss the use of different types of light sources and their effects after describing the lighting properties, or materials, of visual objects. We will then outline the use of fogging techniques to turn a hard and straight computer image into a more realistic and smoother scene, before discussing methods for immersing active visual objects in a background.
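A minimal sketch of the two sides of this interplay, light sources and material attributes, might look as follows. The bounds radius, colors, and class name are arbitrary choices for illustration, not values from the chapter.

```java
import javax.media.j3d.*;
import javax.vecmath.*;

// Illustrative sketch: a directional light plus an ambient fill light,
// and a Material so that shapes respond to them.
public class LightingSketch {
    public static BranchGroup createLitScene(Node shape) {
        BranchGroup root = new BranchGroup();
        BoundingSphere bounds = new BoundingSphere(new Point3d(), 100.0);

        // A white directional light shining down the -z axis.
        DirectionalLight sun = new DirectionalLight(
                new Color3f(1.0f, 1.0f, 1.0f),
                new Vector3f(0.0f, 0.0f, -1.0f));
        sun.setInfluencingBounds(bounds);  // affects only objects in bounds
        root.addChild(sun);

        // A soft ambient light so unlit faces are not completely black.
        AmbientLight ambient = new AmbientLight(new Color3f(0.2f, 0.2f, 0.2f));
        ambient.setInfluencingBounds(bounds);
        root.addChild(ambient);

        root.addChild(shape);
        return root;
    }

    // A Material is what makes a shape reflect the scene's light sources.
    public static Material shinyRed() {
        return new Material(
                new Color3f(0.2f, 0.0f, 0.0f),  // ambient color
                new Color3f(0.0f, 0.0f, 0.0f),  // emissive color
                new Color3f(0.8f, 0.1f, 0.1f),  // diffuse color
                new Color3f(1.0f, 1.0f, 1.0f),  // specular color
                64.0f);                          // shininess
    }
}
```

Note that without a Material set in a shape's Appearance, the lights in the scene have no visible effect on that shape.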


Author(s):  
Chi Chung Ko ◽  
Chang Dong Cheng

Of all the human perceptions, two of the most important are perhaps vision and sound, for which we have developed highly specialized sensors over millions of years of evolution. The creation of a realistic virtual world therefore calls for the development of realistic 3D virtual objects and sceneries supplemented by associated sounds and audio signals. The development of 3D visual objects is of course the main domain of Java 3D. However, as in watching a movie, it is also essential to have realistic sound and audio in some applications. In this chapter, we will discuss how sound and audio can be added and supported by Java 3D. The Java 3D API provides functionalities to add and control sound in a 3D spatialized manner. It also allows the rendering of aural characteristics for the modeling of real-world, synthetic, or special acoustical effects (Warren, 2006). From a programming point of view, the inclusion of sound is similar to the addition of light: both are the results of adding nodes to the scene graph for the virtual world. The addition of a sound node can be accomplished through the abstract Sound class, under which there are three subclasses: BackgroundSound, PointSound, and ConeSound (Osawa, Asai, Takase, & Saito, 2001). Multiple sound sources, each with a reference sound file and associated methods for control and activation, can be included in the scene graph. The relevant sound will become audible whenever the scheduling bound associated with the sound node intersects the activation volume of the listener. By creating an AuralAttributes object and attaching it to a Soundscape leaf node for a certain sound in the scene graph, we can also specify the use of certain acoustical effects in the rendering of the sound. This is done by using the various methods for changing important acoustic parameters in the AuralAttributes object.
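A minimal sketch of one of these subclasses, PointSound, is given below. The audio file name, gain, and bounds radius are placeholder assumptions for illustration.

```java
import javax.media.j3d.*;
import javax.vecmath.*;

// Illustrative sketch: a spatialized point sound node for the scene graph.
public class SoundSketch {
    public static PointSound createPointSound() {
        // The sound data is loaded from an audio file via a MediaContainer.
        MediaContainer clip = new MediaContainer("file:ambient.wav");

        PointSound sound = new PointSound();
        sound.setSoundData(clip);
        sound.setInitialGain(1.0f);
        sound.setLoop(Sound.INFINITE_LOOPS);  // repeat indefinitely
        sound.setPosition(new Point3f(0.0f, 0.0f, -5.0f));

        // The sound is audible only while this bound intersects the
        // listener's activation volume.
        sound.setSchedulingBounds(new BoundingSphere(new Point3d(), 50.0));
        sound.setEnable(true);
        return sound;
    }
}
```

A BackgroundSound is configured the same way but has no position, so it is heard at constant level throughout its scheduling bounds.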


Author(s):  
Chi Chung Ko ◽  
Chang Dong Cheng

In the last chapter, the creation of the skeletons or shapes of 3D objects was discussed through the use of geometry objects in Java 3D. In order for these objects to appear as realistic as possible to the user, it is often necessary for them to be covered with appropriate "skins" under good lighting conditions. In Java 3D, details of the skins can be given by using color, texture, and material, which can be specified through the associated appearance objects. In this chapter, all the important attributes of an appearance object, including the ways of rendering points, lines, and polygons as well as color and material, will be discussed. The use of texturing will be covered in the next chapter. As mentioned earlier, the creation of a virtual 3D object in a virtual world can be carried out using a Shape3D object in the associated scene graph. This object can reference a geometry object in Java 3D to create the skeleton of the virtual object. In addition, it can also reference an appearance object specifying the skin of the virtual object. On its own, an appearance object does not contain information on how the object will look. However, it can reference other objects, such as "attribute objects," "texture-related objects," and "material objects," to obtain appearance information that complements the object geometry. Since the use of an appearance object to enhance the geometry in the creation of a virtual universe is a basic requirement in Java 3D, we will now discuss some important aspects of the appearance object in this chapter.
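The referencing structure described above can be sketched as follows; the specific attribute values chosen here (wireframe mode, a blue coloring attribute) are arbitrary examples, not prescriptions from the chapter.

```java
import javax.media.j3d.*;
import javax.vecmath.*;

// Illustrative sketch: an Appearance object that a Shape3D references
// alongside its geometry, itself referencing attribute and material objects.
public class AppearanceSketch {
    public static Appearance createAppearance() {
        Appearance app = new Appearance();

        // Attribute objects control how primitives are rasterized.
        PolygonAttributes pa = new PolygonAttributes();
        pa.setPolygonMode(PolygonAttributes.POLYGON_LINE);  // wireframe
        pa.setCullFace(PolygonAttributes.CULL_NONE);        // draw both faces
        app.setPolygonAttributes(pa);

        // A coloring attribute gives the object a flat, unlit color.
        app.setColoringAttributes(
                new ColoringAttributes(new Color3f(0.0f, 0.6f, 1.0f),
                                       ColoringAttributes.SHADE_GOURAUD));

        // A material object makes the object respond to scene lights instead.
        app.setMaterial(new Material());
        return app;
    }
}
```

A shape then combines skeleton and skin as, for example, `new Shape3D(geometry, AppearanceSketch.createAppearance())`.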


Author(s):  
Chi Chung Ko ◽  
Chang Dong Cheng

To create 3D graphics, we have to build graphics or visual objects and position them appropriately in a virtual scene. In general, there are three possible approaches for doing this (Java 3D geometry, 2006). One approach is to make use of geometry utility classes to create basic geometric shapes or primitives; the basic shapes are boxes, spheres, cones, and cylinders. Another approach is to employ commercial modeling tools, such as 3D Studio Max, and have the results loaded into Java 3D. Lastly, custom geometrical shapes or objects can also be created by defining their vertices. While using utility classes or commercial modeling tools may be simpler and less time consuming, creating objects by specifying vertices is the most general method; from a certain angle, it can in fact be regarded as the foundation on which the other approaches are based. The main thrust of this chapter will thus be on how objects can be built from their vertices, with some brief discussion on using utility classes presented toward the end of the chapter.
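As a minimal sketch of the vertex-based approach, the following builds a single triangle by listing its three vertices explicitly; the coordinates and per-vertex colors are arbitrary illustrative values.

```java
import javax.media.j3d.*;
import javax.vecmath.*;

// Illustrative sketch: custom geometry from vertices via TriangleArray.
public class TriangleSketch {
    public static Shape3D createTriangle() {
        // Three vertices, each with a position and a color.
        TriangleArray geom = new TriangleArray(
                3, GeometryArray.COORDINATES | GeometryArray.COLOR_3);
        geom.setCoordinate(0, new Point3f(-0.5f, -0.5f, 0.0f));
        geom.setCoordinate(1, new Point3f( 0.5f, -0.5f, 0.0f));
        geom.setCoordinate(2, new Point3f( 0.0f,  0.5f, 0.0f));
        geom.setColor(0, new Color3f(1.0f, 0.0f, 0.0f));
        geom.setColor(1, new Color3f(0.0f, 1.0f, 0.0f));
        geom.setColor(2, new Color3f(0.0f, 0.0f, 1.0f));
        return new Shape3D(geom);
    }
}
```

By contrast, the utility-class route is a one-liner such as `new com.sun.j3d.utils.geometry.Sphere(0.5f)`, which illustrates the simplicity-versus-generality trade-off described above.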


1998 ◽  
Vol 7 (6) ◽  
pp. 638-649 ◽  
Author(s):  
Koichi Hirota ◽  
Michitaka Hirose

The sensations of touch and force have come to be recognized as essential factors in virtual reality, and many efforts have been made to develop display devices that reproduce these sensations. Such devices are divided into two categories: wearing and nonwearing. In this paper, a method is proposed for representing virtual objects of arbitrary shapes using a nonwearing device, and a device based on this method was fabricated to demonstrate our approach. Our prototype device was designed to approximately represent part of the surface of a virtual object as a tangential surface (i.e., partial surface) to the user's fingertip. The device was implemented as a mechanism with five degrees of freedom, which are commonly used to measure the fingertip position and to present the partial surface to the fingertip. The mechanism was controlled through two calculation loops: a model loop that derives a tangential surface from the fingertip position and the shape of objects, and a servo loop that drives the mechanism to represent the given tangential surface by the partial surface. Also, a stereoscopic, head-tracking visual system was implemented to realize the combined presentation of visual information and the partial surface. As an example application of the environment, a task of writing characters was simulated. Observation of the performance of the task showed that presentation of the partial surface reduced blur and dragging in the written characters.


Author(s):  
Chi Chung Ko ◽  
Chang Dong Cheng

One of the most useful and important advantages of 3D graphics rendering and applications is the possibility for the user to navigate through the 3D virtual world in a seamless fashion. Complicated visual objects can be better appreciated from different angles, and manipulation of these objects can be carried out in the most natural manner. To support this important function of navigation, the user will often need to use a variety of input devices, such as the keyboard, mouse, and joystick, in a fashion that befits a 3D scenario. Also, collision handling is important, as it would be unnatural if the user could, say, walk through solid walls in the virtual world. The functionality of navigation therefore has a close relationship with input devices and collision detection, all of which can be handled in Java 3D through a variety of utility classes that are straightforward but not very flexible, as well as user-defined methods that are more complicated but at the same time more flexible. The main requirement of navigation is of course to handle or refresh changes in the rendered 3D view as the user moves around in the virtual universe (Wang, 2006). As illustrated in Figure 1, this requires a modification of the platform transform as the user changes his or her position in the universe. Essentially, as will be illustrated in the next section, we first need to retrieve the ViewPlatformTransform object from the SimpleUniverse object.
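The utility-class route can be sketched as follows: the view platform transform is retrieved from the SimpleUniverse and handed to a KeyNavigatorBehavior, which updates it in response to key presses. The bounds radius is an arbitrary illustrative value.

```java
import javax.media.j3d.*;
import javax.vecmath.*;
import com.sun.j3d.utils.universe.SimpleUniverse;
import com.sun.j3d.utils.behaviors.keyboard.KeyNavigatorBehavior;

// Illustrative sketch: keyboard-driven navigation using utility classes.
public class NavigationSketch {
    public static BranchGroup createNavigation(SimpleUniverse universe) {
        // The transform that positions the viewer in the virtual universe.
        TransformGroup viewTransform =
                universe.getViewingPlatform().getViewPlatformTransform();

        // The behavior modifies this transform as arrow keys are pressed.
        KeyNavigatorBehavior keyNav = new KeyNavigatorBehavior(viewTransform);
        keyNav.setSchedulingBounds(new BoundingSphere(new Point3d(), 100.0));

        BranchGroup bg = new BranchGroup();
        bg.addChild(keyNav);
        return bg;  // attach via universe.addBranchGraph(bg)
    }
}
```

A user-defined alternative would instead subclass Behavior and write the platform transform directly, trading this simplicity for full control over movement and collision response.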


2020 ◽  
Vol 43 ◽  
Author(s):  
Stefen Beeler-Duden ◽  
Meltem Yucel ◽  
Amrisha Vaish

Abstract Tomasello offers a compelling account of the emergence of humans’ sense of obligation. We suggest that more needs to be said about the role of affect in the creation of obligations. We also argue that positive emotions such as gratitude evolved to encourage individuals to fulfill cooperative obligations without the negative quality that Tomasello proposes is inherent in obligations.


2012 ◽  
Vol 21 (1) ◽  
pp. 11-16 ◽  
Author(s):  
Susan Fager ◽  
Tom Jakobs ◽  
David Beukelman ◽  
Tricia Ternus ◽  
Haylee Schley

Abstract This article summarizes the design and evaluation of a new augmentative and alternative communication (AAC) interface strategy for people with complex communication needs and severe physical limitations. This strategy combines typing, gesture recognition, and word prediction to input text into AAC software using touchscreen or head-movement-tracking access methods. Eight individuals with movement limitations due to spinal cord injury, amyotrophic lateral sclerosis, polio, and Guillain-Barré syndrome participated in the evaluation of the prototype technology using a head-tracking device. Fourteen typical individuals participated in the evaluation of the prototype using a touchscreen.

