Audio

Author(s):  
Chi Chung Ko ◽  
Chang Dong Cheng

Of all the human perceptions, two of the most important are perhaps vision and hearing, for which we have developed highly specialized sensory organs over millions of years of evolution. The creation of a realistic virtual world therefore calls for the development of realistic 3D virtual objects and sceneries supplemented by associated sounds and audio signals. The development of 3D visual objects is of course the main domain of Java 3D. However, as in watching a movie, realistic sound and audio are also essential in some applications. In this chapter, we discuss how sound and audio can be added and supported by Java 3D. The Java 3D API provides functionality for adding and controlling sound in a 3D spatialized manner. It also allows the rendering of aural characteristics for modeling real-world, synthetic, or special acoustical effects (Warren, 2006). From a programming point of view, the inclusion of sound is similar to the addition of light: both result from adding nodes to the scene graph of the virtual world. A sound node is added through the abstract Sound class, which has three subclasses: BackgroundSound, PointSound, and ConeSound (Osawa, Asai, Takase, & Saito, 2001). Multiple sound sources, each with a reference sound file and associated methods for control and activation, can be included in the scene graph. The relevant sound becomes audible whenever the scheduling bound associated with the sound node intersects the activation volume of the listener. By creating an AuralAttributes object and attaching it to a Soundscape leaf node for a certain sound in the scene graph, we can also specify certain acoustical effects in the rendering of that sound. This is done by using the various methods that change important acoustic parameters in the AuralAttributes object.
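The audibility rule above — a sound plays only while its scheduling bound intersects the listener's activation volume — can be sketched with plain bounding-sphere geometry. This is a stand-alone illustration in ordinary Java, not the Java 3D classes themselves; the names `isAudible` and `spheresIntersect` are ours.

```java
// Sketch of the scheduling-bound / activation-volume test, modeled
// with plain bounding spheres (illustrative names, not the Java 3D API).
public class SoundActivation {
    // Sphere-sphere intersection: centers c1, c2 as (x, y, z), radii r1, r2.
    public static boolean spheresIntersect(double[] c1, double r1,
                                           double[] c2, double r2) {
        double dx = c1[0] - c2[0], dy = c1[1] - c2[1], dz = c1[2] - c2[2];
        double dist = Math.sqrt(dx * dx + dy * dy + dz * dz);
        return dist <= r1 + r2;   // touching counts as intersecting
    }

    // A sound is scheduled (audible) when its scheduling bound
    // intersects the listener's activation volume.
    public static boolean isAudible(double[] soundCenter, double soundRadius,
                                    double[] listener, double activationRadius) {
        return spheresIntersect(soundCenter, soundRadius, listener, activationRadius);
    }

    public static void main(String[] args) {
        double[] sound = {0, 0, 0};
        double[] listenerNear = {5, 0, 0};
        double[] listenerFar = {30, 0, 0};
        System.out.println(isAudible(sound, 10, listenerNear, 2)); // true
        System.out.println(isAudible(sound, 10, listenerFar, 2));  // false
    }
}
```

In the real scene graph this test is performed by the renderer; the programmer only sets the bound, for instance via `setSchedulingBounds` on the sound node.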

Author(s):  
Bartolomiej Skowron

From an ontological point of view, virtuality is generally considered a simulation: i.e. not a case of true being, and never more than an illusory copy, referring in each instance to its real original. It is treated as something imagined — and, phenomenologically speaking, as an intentional object. It is also often characterized as fictive. On the other hand, the virtual world itself is extremely rich, and thanks to new technologies is growing with unbelievable speed, so that it now influences the real world in quite unexpected ways. Thus, it is also sometimes considered real. In this paper, against those who would regard virtuality as fictional or as real, I claim that the virtual world straddles the boundary between these two ways of existence: that it becomes real. I appeal to Roman Ingarden’s existential ontology to show that virtual objects become existentially autonomous, and so can be attributed a form of actuality and causal efficaciousness. I conclude that the existential autonomy and actuality of virtual objects makes them count as real objects, but also means that they undergo a change in their mode of existence.


Author(s):  
Chi Chung Ko ◽  
Chang Dong Cheng

In Chapter VII, we discussed how animation can be applied in Java 3D to increase the visual impact of a virtual 3D world and illustrate the dynamics of the various 3D objects to the user (Tate, Moreland, & Bourne, 2001). In this chapter, we continue this process to make the virtual 3D universe even more interesting and appealing by adding the ability for the user to interact with the 3D objects being rendered. In Java 3D, both animation and interaction can be accomplished through the use of the Behavior class. Having discussed how this class helps to carry out animation in the last chapter, we now concentrate on the mechanism of using the Behavior class to achieve interaction. Technically, the Behavior class is an abstract class providing mechanisms for the scene graph to be changed. Being an extension of the Leaf class, it can also be part of a normal scene. In particular, it may be a leaf node in the scene graph and can be placed in the same way as geometry is placed. For instance, in an application where a rotating cube must be rendered and controlled, the rotation behavior for the animation and interaction can be placed under the same transform group as the geometry object that renders the cube. The main objective of adding a behavior object to a scene graph is of course to change the scene graph in response to a stimulus in the form of, say, a key press, a mouse movement, colliding objects, or a combination of these and other events. The change in the virtual 3D world may consist of translating or rotating objects, removing some objects, changing the attributes of others, or any other outcome desirable in the specific application.
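The stimulus-to-scene-change cycle that a Behavior implements can be sketched in plain Java: a wake-up event (here, a key press) triggers a `processStimulus`-style method that mutates the scene state (here, a rotation angle). The class and method names mirror Java 3D's terminology but are stand-ins, not the actual API.

```java
// Plain-Java sketch of a rotation behavior: a key stimulus changes the
// rotation angle held by the "transform group" (illustrative names).
public class RotationBehavior {
    private double angleDeg = 0.0;      // scene state the behavior mutates
    private final double stepDeg;       // rotation per stimulus

    public RotationBehavior(double stepDeg) { this.stepDeg = stepDeg; }

    // Invoked when the wake-up condition (a key press) is met;
    // it changes the scene in response, as a real Behavior would.
    public void processStimulus(char key) {
        if (key == 'r') angleDeg = (angleDeg + stepDeg) % 360.0;
        if (key == 'l') angleDeg = (angleDeg - stepDeg + 360.0) % 360.0;
    }

    public double getAngle() { return angleDeg; }

    public static void main(String[] args) {
        RotationBehavior b = new RotationBehavior(15.0);
        for (char k : new char[]{'r', 'r', 'l', 'r'}) b.processStimulus(k);
        System.out.println(b.getAngle());   // 30.0
    }
}
```

In real Java 3D code, the equivalent of `processStimulus` would also re-register the wake-up condition and write the new angle into the shared TransformGroup.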


Author(s):  
Chi Chung Ko ◽  
Chang Dong Cheng

In the last chapter, the creation of the skeletons or shapes of 3D objects was discussed through the use of geometry objects in Java 3D. For these objects to appear as realistic as possible to the user, it is often necessary to cover them with appropriate “skins” under good lighting conditions. In Java 3D, details of the skins can be specified using color, texture, and material, all of which are set through the associated Appearance objects. In this chapter, all the important attributes of an Appearance object, including the ways of rendering points, lines, and polygons as well as color and material, will be discussed. The use of texturing will be covered in the next chapter. As mentioned earlier, a 3D object in a virtual world can be created using a Shape3D object in the associated scene graph. This object can reference a geometry object in Java 3D to create the skeleton of the virtual object. In addition, it can also reference an Appearance object specifying the skin of the virtual object. On its own, an Appearance object does not contain information on how the object will look. However, it can reference other objects, such as attribute objects, texture-related objects, and material objects, to obtain appearance information that complements the object geometry. Since the use of an Appearance object to enhance the geometry in the creation of a virtual universe is a basic requirement in Java 3D, we now discuss some important aspects of the Appearance object in this chapter.
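The reference structure described above — a shape holding a geometry (the skeleton) and an appearance, with the appearance delegating its "look" to attribute and material objects it references — can be sketched with plain-Java stand-ins. The nested classes below imitate the Java 3D class names but are simplified placeholders, not the real API.

```java
// Plain-Java sketch of the Shape3D -> (Geometry, Appearance) and
// Appearance -> (Material, ColoringAttributes) reference structure.
public class AppearanceSketch {
    static class Material { final String diffuse; Material(String d) { diffuse = d; } }
    static class ColoringAttributes { final String color; ColoringAttributes(String c) { color = c; } }

    static class Appearance {
        Material material;              // may be null: fall back to defaults
        ColoringAttributes coloring;
        String describe() {
            // The appearance itself holds no look; it delegates to the
            // objects it references, if any.
            String m = (material == null) ? "default" : material.diffuse;
            String c = (coloring == null) ? "default" : coloring.color;
            return "material=" + m + ", color=" + c;
        }
    }

    static class Shape3D {
        final String geometry;          // stand-in for a geometry object
        final Appearance appearance;
        Shape3D(String g, Appearance a) { geometry = g; appearance = a; }
    }

    public static void main(String[] args) {
        Appearance app = new Appearance();
        app.material = new Material("red");
        app.coloring = new ColoringAttributes("flat-red");
        Shape3D cube = new Shape3D("box", app);
        System.out.println(cube.appearance.describe());
    }
}
```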


Author(s):  
Chi Chung Ko ◽  
Chang Dong Cheng

One of the most useful and important advantages of 3D graphics rendering and applications is the possibility for the user to navigate through the 3D virtual world in a seamless fashion. Complicated visual objects can be better appreciated from different angles, and manipulation of these objects can be carried out in the most natural manner. To support this important function of navigation, the user will often need a variety of input devices, such as the keyboard, mouse, and joystick, used in a fashion that befits a 3D scenario. Collision handling is also important, as it would be unnatural if the user could, say, walk through solid walls in the virtual world. The functionality of navigation therefore has a close relationship with input devices and collision detection, all of which can be handled in Java 3D through a variety of straightforward but not so flexible utility classes, as well as through more complicated but at the same time more flexible user-defined methods. The main requirement of navigation is of course to handle or refresh changes in the rendered 3D view as the user moves around in the virtual universe (Wang, 2006). As illustrated in Figure 1, this requires a modification of the platform transform as the user changes his or her position in the universe. Essentially, as will be illustrated in the next section, we first need to retrieve the ViewPlatformTransform object from the SimpleUniverse object.
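The navigation step described above — each input event updates the view platform's position — can be reduced to updating a position vector in plain Java. In real code the resulting position would be written into the Transform3D held by the ViewPlatformTransform; the key bindings and step size below are illustrative assumptions.

```java
// Plain-Java sketch of keyboard navigation: each key press translates
// the view platform's position (the real code would write this into a
// Transform3D and hand it back to the ViewPlatformTransform).
public class ViewNavigation {
    private final double[] pos = {0, 0, 0};  // view platform position
    private final double step;               // distance per key press

    public ViewNavigation(double step) { this.step = step; }

    // In Java 3D's default view convention, looking down -z, moving
    // forward decreases z.
    public void key(char k) {
        switch (k) {
            case 'w': pos[2] -= step; break;  // forward
            case 's': pos[2] += step; break;  // backward
            case 'a': pos[0] -= step; break;  // strafe left
            case 'd': pos[0] += step; break;  // strafe right
        }
    }

    public double[] position() { return pos.clone(); }

    public static void main(String[] args) {
        ViewNavigation nav = new ViewNavigation(0.5);
        for (char k : "wwd".toCharArray()) nav.key(k);
        System.out.println(java.util.Arrays.toString(nav.position())); // [0.5, 0.0, -1.0]
    }
}
```

A collision check, as discussed above, would sit between `key` and the final position update, rejecting moves that penetrate solid geometry.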


2019 ◽  
pp. 37-47
Author(s):  
Yao Yueqin ◽  
Oleksiy Kozlov ◽  
Oleksandr Gerasin ◽  
Galyna Kondratenko

The monitoring and automatic control tasks of a mobile robot (MR) moving and executing various types of technological operations on inclined and vertical ferromagnetic surfaces are analyzed and formalized. The generalized structure of a mobile robotic complex is shown, with its main subsystems considered. A critical analysis is given of the current state of the problem of developing universal MR structures for executing various types of technological operations and of elaborating computerized systems for monitoring and controlling MR movement. In particular, wheeled, walking, and crawler-type MRs with pneumatic, vacuum-propeller, magnetic, and magnetically operated clamping devices for gripping vertical and ceiling surfaces are reviewed. The constructive features of a crawler MR with magnetic clamping devices capable of moving along sloping ferromagnetic surfaces are considered. The basic technical parameters of the MR are given for the further synthesis of computerized monitoring and automatic control systems. The tasks of monitoring and controlling the MR's positioning during the processing of large-area ferromagnetic surfaces are formalized from the point of view of control theory.


Author(s):  
B. A. Katsnelson ◽  
M. P. Sutunkova ◽  
N. A. Tsepilov ◽  
V. G. Panov ◽  
A. N. Varaksin ◽  
...  

Sodium fluoride solution was injected i.p. into three groups of rats at a dose equivalent to 0.1 LD50 three times a week, up to 18 injections. Two of these groups, along with two out of three groups that were sham-injected with normal saline, were exposed to the whole-body impact of a 25 mT static magnetic field (SMF) for 2 or 4 hr a day, 5 times a week. Following the exposure, various functional and biochemical indices were evaluated, along with histological examination and morphometric measurements of the femur, in the differently exposed and control rats. The mathematical analysis of the combined effects of the SMF and fluoride, based on a response surface model, demonstrated that, in full correspondence with what we had previously found for the combined toxicity of different chemicals, the combined adverse action of a chemical plus a physical agent was characterized by a typological diversity depending not only on the particular effects for which these types were assessed but on the dose and effect levels as well. From this point of view, the indices for which at least one statistically significant effect was observed could be classified as identifying (I) mainly single-factor action; (II) additive unidirectional action; (III) synergism (superadditive unidirectional action); or (IV) antagonism, including both subadditive unidirectional action and all variants of contradirectional action.


Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 397
Author(s):  
Qimeng Zhang ◽  
Ji-Su Ban ◽  
Mingyu Kim ◽  
Hae Won Byun ◽  
Chang-Hun Kim

We propose a low-asymmetry interface to improve the presence of non-head-mounted-display (non-HMD) users in shared virtual reality (VR) experiences with HMD users. The low-asymmetry interface ensures that the HMD and non-HMD users' perceptions of the VR environment are almost identical; that is, the point-of-view (PoV) asymmetry and behavior asymmetry between HMD and non-HMD users are reduced. Our system comprises a portable mobile device as a visual display, providing a changing PoV for the non-HMD user, and a walking simulator as an in-place walking detection sensor, enabling the same level of realistic and unrestricted physical-walking-based locomotion for all users. Because this allows non-HMD users to experience the same level of visualization and free movement as HMD users, both can engage as the main actors in movement scenarios. Our user study revealed that the low-asymmetry interface enables non-HMD users to feel a presence similar to that of HMD users when performing equivalent locomotion tasks in a virtual environment. Furthermore, our system can enable one HMD user and multiple non-HMD users to participate together in a virtual world; moreover, our experiments show that non-HMD user satisfaction increases with the number of non-HMD participants, owing to increased presence and enjoyment.


2001 ◽  
Vol 10 (3) ◽  
pp. 312-330 ◽  
Author(s):  
Bernard Harper ◽  
Richard Latto

Stereo scene capture and generation is an important facet of presence research in that stereoscopic images have been linked to naturalness as a component of reported presence. Three-dimensional images can be captured and presented in many ways, but it is rare that the most simple and “natural” method is used: full orthostereoscopic image capture and projection. This technique mimics as closely as possible the geometry of the human visual system and uses convergent axis stereography with the cameras separated by the human interocular distance. It simulates human viewing angles, magnification, and convergences so that the point of zero disparity in the captured scene is reproduced without disparity in the display. In a series of experiments, we have used this technique to investigate body image distortion in photographic images. Three psychophysical experiments compared size, weight, or shape estimations (perceived waist-hip ratio) in 2-D and 3-D images for the human form and real or virtual abstract shapes. In all cases, there was a relative slimming effect of binocular disparity. A well-known photographic distortion is the perspective flattening effect of telephoto lenses. A fourth psychophysical experiment using photographic portraits taken at different distances found a fattening effect with telephoto lenses and a slimming effect with wide-angle lenses. We conclude that, where possible, photographic inputs to the visual system should allow it to generate the cyclopean point of view by which we normally see the world. This is best achieved by viewing images made with full orthostereoscopic capture and display geometry. The technique can result in more-accurate estimations of object shape or size and control of ocular suppression. These are assets that have particular utility in the generation of realistic virtual environments.
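The zero-disparity property described above can be made concrete with a small geometric calculation: for cameras (or eyes) separated by interocular distance i and converged at distance D, the relative angular disparity of a point at distance d is eta = 2·atan(i/(2D)) − 2·atan(i/(2d)), which vanishes exactly at d = D. The sketch below is our own worked illustration of this geometry, not code from the paper; the numeric values are typical assumptions.

```java
// Worked sketch of convergent-axis stereo geometry: relative angular
// disparity of a point versus the convergence (zero-disparity) distance.
public class Orthostereo {
    public static double disparityRad(double interocular,
                                      double convergeDist,
                                      double pointDist) {
        // Vergence angle subtended by the eyes at each distance.
        double vergenceAtScreen = 2 * Math.atan(interocular / (2 * convergeDist));
        double vergenceAtPoint  = 2 * Math.atan(interocular / (2 * pointDist));
        // Negative: crossed disparity (nearer than convergence);
        // positive: uncrossed (farther than convergence).
        return vergenceAtScreen - vergenceAtPoint;
    }

    public static void main(String[] args) {
        double i = 0.065;   // typical human interocular distance, metres
        double D = 2.0;     // convergence (display) distance, metres
        System.out.println(disparityRad(i, D, 2.0));      // 0.0 at the convergence point
        System.out.println(disparityRad(i, D, 1.0) < 0);  // nearer: crossed disparity
        System.out.println(disparityRad(i, D, 4.0) > 0);  // farther: uncrossed
    }
}
```

Full orthostereoscopic display then reproduces the captured zero-disparity point without disparity on screen, as the abstract states.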


Author(s):  
Yulia Fatma ◽  
Armen Salim ◽  
Regiolina Hayami

Along with these developments, such applications can be used as a medium for learning. Augmented Reality is a technology that combines two-dimensional and three-dimensional virtual objects with a real three-dimensional environment, projecting the virtual objects in real time and simultaneously. In introducing the Solar System material, students are invited to get to know the planets, which directly encourages them to imagine conditions in the Solar System. Explanations in books of the planets' forms and of how the planets revolve and rotate are considered insufficient, because books only display objects in 2D. In addition, students cannot practice directly in arranging the layout of the planets in the Solar System. By applying Augmented Reality technology, the delivery of learning information can be clarified, because these applications combine the real world and the virtual world. Besides presenting the material, the application also displays images of the planets as animated 3D objects with audio.


2021 ◽  
Vol 4 ◽  
pp. 92-104
Author(s):  
Valentin Bahatskyi ◽  
Aleksey Bahatskyi

Currently, the measurement of electrical and non-electrical quantities is performed using analog-to-digital conversion channels, which consist of analog signal conditioning circuits and analog-to-digital converters (ADCs) that turn electrical quantities into a digital code. The paper considers the case when the defining errors of the measurement and control channel are the systematic errors of the ADC. The reliability of measurements is assessed by their errors, and the reliability of control by the likelihood of correct operation of the control device. In our opinion, evaluating the reliability of such similar processes as measurement and control using different criteria seems illogical. The aim of the work is to study the effect of the systematic errors of an analog-to-digital converter on the errors of parameter control, depending on the type of conformity function and the width of the control window, as well as to choose the resolution of the ADC for various control tasks. The paper analyzes the transfer functions of measurement and control and shows that they are formed using step functions. It is proposed to use, instead of a step function as the control transfer function, other functions of conformity to the norm, for example a linear function or functions of higher order. In this case, the control result is assessed not by the criterion of the probability of correct operation but by the control error. The linear, parabolic, and stepped parabolic conformity functions are analyzed with respect to the control errors they yield for various widths of the control window. Recommendations are given for selecting the conformity functions and the ADC resolution for various control tasks.
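The contrast between a step transfer function and a smoother conformity function can be shown with a small numeric example. With a step function, a tiny systematic ADC error near the window edge flips the verdict from "unfit" to "fit" outright; with a linear conformity function, the same error shifts the result by only a small control error. The window limits, values, and function shapes below are illustrative assumptions, not the paper's data.

```java
// Sketch of two conformity (norm) functions over a control window:
// a step function versus a linear function falling off toward the edges.
public class ConformityFunctions {
    // Step function: 1 inside the window [lo, hi], 0 outside.
    public static double step(double x, double lo, double hi) {
        return (x >= lo && x <= hi) ? 1.0 : 0.0;
    }

    // Linear function: 1 at the window centre, falling linearly to 0
    // at the window edges and staying 0 beyond them.
    public static double linear(double x, double lo, double hi) {
        double centre = (lo + hi) / 2, half = (hi - lo) / 2;
        return Math.max(0.0, 1.0 - Math.abs(x - centre) / half);
    }

    public static void main(String[] args) {
        double lo = 9.0, hi = 11.0;        // control window (illustrative)
        double truth = 11.01, adc = 10.99; // systematic ADC error of -0.02
        // Step: the small ADC error flips the verdict outright.
        System.out.println(step(truth, lo, hi) + " vs " + step(adc, lo, hi)); // 0.0 vs 1.0
        // Linear: the same error changes the result by only about 0.01.
        System.out.println(linear(adc, lo, hi) - linear(truth, lo, hi));
    }
}
```

This is the sense in which assessing control by a control error, rather than by the probability of correct operation, treats measurement and control with a common criterion.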

