Scene Consistency Verification Based on PatchNet

2014 ◽  
Vol 2014 ◽  
pp. 1-12
Author(s):  
Jinjiang Li ◽  
Xiaoqing Guo ◽  
Zhen Hua ◽  
Zhiyong An

In the real world, an object rarely exists in isolation; it typically appears in a particular scene, and often in a particular spatial location within it. In this paper, we propose an effective method for judging scene consistency, in which scene semantics and geometric relations play a key role. We use PatchNet to handle these high-level scene structures: we construct a database of consistent scenes and use the semantic information provided by PatchNet to determine whether a given scene is consistent. The effectiveness of the proposed algorithm is verified by extensive experiments.
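The abstract does not spell out how the database lookup works, but the idea of checking detected objects against the semantics of a known-consistent scene can be sketched as follows. The scene database, object labels, and overlap threshold here are invented for illustration and are not the authors' actual PatchNet structures.

```python
# Each known-consistent scene maps to the set of object labels that
# typically co-occur in it (toy stand-in for the consistent-scene database).
SCENE_DB = {
    "kitchen": {"stove", "sink", "refrigerator", "counter"},
    "office": {"desk", "chair", "monitor", "keyboard"},
}

def scene_consistency(scene_label, detected_objects, threshold=0.5):
    """Return True if enough detected objects fit the scene's semantics."""
    expected = SCENE_DB.get(scene_label)
    if not expected or not detected_objects:
        return False
    overlap = len(expected & detected_objects) / len(detected_objects)
    return overlap >= threshold

# A sofa in a kitchen is tolerated as long as most objects fit the scene.
print(scene_consistency("kitchen", {"stove", "sink", "sofa"}))
```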

2021 ◽  
Vol 7 ◽  
pp. e704
Author(s):  
Wei Ma ◽  
Shuai Zhang ◽  
Jincai Huang

Unlike traditional visualization methods, augmented reality (AR) inserts virtual objects and information directly into digital representations of the real world, making these objects and data easier to understand and interact with. The integration of AR and GIS is a promising way to display spatial information in context. However, most existing AR-GIS applications only provide local spatial information at a fixed location, and they suffer from limited legibility, information clutter, and incomplete spatial relationships. In addition, indoor space structures are complex and GPS is unavailable indoors, so indoor AR systems are further impeded by their limited capacity to detect and display location and semantic information. To address these problems, we track the camera position with a localization technique that fuses Bluetooth low energy (BLE) and pedestrian dead reckoning (PDR); the multi-sensor fusion algorithm employs a particle filter. Based on the direction and position of the phone, spatial information is automatically registered onto a live camera view. The proposed algorithm extracts a bounding box of the indoor map and matches it to the real-world scene. Finally, the indoor map and semantic information are rendered into the real world based on the real-time computed spatial relationship between the indoor map and the live camera view. Experimental results demonstrate that the average positioning error of our approach is 1.47 m, and 80% of the errors are within approximately 1.8 m. These positioning results show that the fused AR and indoor-map technique can effectively link rich indoor spatial information to real-world scenes. The method is not only suitable for traditional indoor navigation tasks, but is also promising for crowdsourced data collection and indoor map reconstruction.
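The particle-filter fusion of PDR and BLE described above can be sketched in a minimal form: PDR drives the motion (predict) step and BLE range measurements drive the weighting (update) step. The beacon layout, noise levels, and Gaussian range likelihood below are illustrative assumptions, not the paper's actual parameters.

```python
import math
import random

random.seed(0)
BEACONS = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]  # known BLE beacon positions

def pdr_predict(particles, step_len, heading, noise=0.3):
    """PDR motion model: advance each particle one step, plus Gaussian noise."""
    return [(x + step_len * math.cos(heading) + random.gauss(0, noise),
             y + step_len * math.sin(heading) + random.gauss(0, noise))
            for x, y in particles]

def ble_update(particles, ranges, sigma=1.0):
    """Reweight particles by BLE range likelihood, then resample by weight."""
    weights = []
    for x, y in particles:
        w = 1.0
        for (bx, by), r in zip(BEACONS, ranges):
            d = math.hypot(x - bx, y - by)
            w *= math.exp(-((d - r) ** 2) / (2 * sigma ** 2))
        weights.append(w)
    return random.choices(particles, weights=weights, k=len(particles))

# One predict/update cycle around a true phone position of (5, 5).
particles = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(500)]
particles = pdr_predict(particles, step_len=0.7, heading=math.pi / 4)
true_pos = (5.0, 5.0)
ranges = [math.hypot(true_pos[0] - bx, true_pos[1] - by) for bx, by in BEACONS]
particles = ble_update(particles, ranges)
est = (sum(x for x, _ in particles) / 500, sum(y for _, y in particles) / 500)
print(est)
```

The resampled particle mean serves as the camera-position estimate onto which the indoor map would then be registered.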


2016 ◽  
Vol 13 (6) ◽  
pp. 172988141666678
Author(s):  
Dingsheng Luo ◽  
Yaoxiang Ding ◽  
Xiaoqiang Han ◽  
Yang Ma ◽  
Yian Deng ◽  
...  

Nowadays, humanoids are increasingly expected to act in the real world and to complete high-level tasks in a human-like, intelligent way. This is difficult, because the real world is extremely complicated and full of miscellaneous variations. Consequently, for a robot acting in the real world, precisely perceiving environmental changes is an essential prerequisite. Unlike human beings, humanoid robots usually have far fewer sensors with which to gather sufficient information from the real world, which makes the environmental perception problem even more challenging. Although it can be tackled by establishing direct sensory mappings or by adopting probabilistic filtering methods, the nonlinearity and uncertainty caused by both the complexity of the environment and the high degrees of freedom of the robots lead to tough modeling difficulties. In our study, we propose and discuss an alternative learning approach to this modeling problem based on the Gaussian process regression framework. To reduce the influence of the limited sensing, we also fuse multiple sources of sensory information. To evaluate its effectiveness, the proposed approach is applied to a humanoid equipped with only a three-axis gyroscope and a three-axis accelerometer, on two representative environment-change tasks: suffering an unknown external push, and suddenly encountering sloped terrain. Experimental results reveal that the proposed Gaussian process regression-based approach copes effectively with the nonlinearity and uncertainty of the humanoid environmental perception problem. Further, we develop a humanoid balancing controller that takes the output of the Gaussian process regression-based environmental perception as the seed to activate the corresponding balancing strategy. Both simulated and hardware experiments consistently show that our approach is valuable and provides a good basis for a successful humanoid balancing controller.
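Gaussian process regression of the kind used above can be written out in a few lines: an RBF kernel over fused sensor features, and the standard posterior mean and variance. The sensor readings, targets, and kernel hyperparameters below are toy values invented for illustration, not the paper's data.

```python
import numpy as np

def rbf_kernel(A, B, length=1.0, var=1.0):
    """Squared-exponential kernel between two sets of feature vectors."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / length ** 2)

def gpr_predict(X_train, y_train, X_test, noise=1e-2):
    """GP posterior mean and (diagonal) variance at the test inputs."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_test, X_train)
    mean = K_s @ np.linalg.solve(K, y_train)
    v = np.linalg.solve(K, K_s.T)
    var = rbf_kernel(X_test, X_test).diagonal() - (K_s * v.T).sum(-1)
    return mean, var

# Toy fused [gyro, accel] features mapped to a perceived push magnitude.
X = np.array([[0.0, 0.1], [0.2, 0.4], [0.5, 0.9], [0.9, 1.2]])
y = np.array([0.0, 0.5, 1.2, 2.0])
mean, var = gpr_predict(X, y, np.array([[0.4, 0.8]]))
print(mean, var)
```

The predictive variance is what makes GPR attractive here: a balancing controller can treat a high-variance perception output as uncertain before committing to a strategy.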


Complexity ◽  
2018 ◽  
Vol 2018 ◽  
pp. 1-9
Author(s):  
Z. Cao ◽  
M. Zheng ◽  
Y. Vorobyeva ◽  
C. Song ◽  
N. F. Johnson

Society faces a fundamental global problem of understanding which individuals are currently developing strong support for some extremist entity such as ISIS (Islamic State), even if they never end up doing anything in the real world. The importance of online connectivity in developing intent has been confirmed by recent case studies of already convicted terrorists. Here we use ideas from Complexity to identify dynamical patterns in the online trajectories that individuals take toward developing a high level of extremist support, specifically, for ISIS. Strong memory effects emerge among individuals whose transition is fastest and hence may become “out of the blue” threats in the real world. A generalization of diagrammatic expansion theory helps quantify these characteristics, including the impact of changes in geographical location, and can facilitate prediction of future risks. By quantifying the trajectories that individuals follow on their journey toward expressing high levels of pro-ISIS support—irrespective of whether they then carry out a real-world attack or not—our findings can help move safety debates beyond reliance on static watch-list identifiers such as ethnic background or immigration status and/or postfact interviews with already convicted individuals. Given the broad commonality of social media platforms, our results likely apply quite generally; for example, even on Telegram where (like Twitter) there is no built-in group feature as in our study, individuals tend to collectively build and pass through the so-called super-group accounts.


2017 ◽  
Vol 61 (4) ◽  
pp. 103-123
Author(s):  
Andrzej Zybała

The author defines intellectual culture as a tendency to base decisions on objective analyses or the habit of investigating issues analytically. In the broader sense, intellectual culture may be considered to be the way the collective reacts to phenomena that appear in the real world. A high level of intellectual culture, in the author’s opinion, is shown by a modern form of thinking manifested in the ability to make use of abstracts and to take into account alternative systems of constructing opinions. On the basis of selected analyses of Polish scholars the author advances the hypothesis that Poland has failed to form proper institutional mechanisms favoring rational analysis in public life. The author demonstrates that this is the result of many factors, such as the long-lasting model of Sarmatian customs (including its providentialism), the strong and lasting influence of a radical form of romanticism, and also the nugatory influence of Enlightenment and positivist models. These factors have been accompanied by the unsuitability of educational and scholarly institutions, the delayed development of modern forms of economics, which force the use of rational calculations, and a structure of society that does not favor exchanges of ideas and deliberation.


2019 ◽  
pp. 123-143
Author(s):  
Lisa M. Oakes ◽  
David H. Rakison

Chapter 6 illustrates how the developmental cascade framework can be used to understand the development of looking behavior in infancy. Historically, researchers have focused on one cue, feature, or mechanism to explain infants’ looking behavior in a variety of contexts, including experimental paradigms designed to assess high-level conceptual understanding. In this chapter, the authors argue that a cascade approach can provide a deeper understanding of the development of looking behavior both in the laboratory setting and in the real world. Three examples are presented that illustrate how a single behavior—attending to one’s mother, to an event, or to a novel stimulus—reflects multiple processes, and developmental change in this behavior reflects mechanisms that occur at multiple levels.


2017 ◽  
Vol 28 (1) ◽  
pp. 18-30 ◽  
Author(s):  
Veda C. Storey

Domain ontologies and conceptual models similarly capture and represent concepts from the real world for inclusion in an information system. This paper examines challenges of conceptual modeling and domain ontology development when mapping to high-level ontologies. The intent is to reconcile apparent differences and position some of the inherent challenges in these closely-coupled areas of research, while providing insights into recognizing and resolving modeling difficulties.


Sensors ◽  
2019 ◽  
Vol 19 (20) ◽  
pp. 4456 ◽  
Author(s):  
Park ◽  
Wen ◽  
Sung ◽  
Cho

Nowadays, deep learning methods based on virtual environments are widely applied in research and technology development for autonomous vehicles' smart sensors and devices. Learning about various driving environments in advance is important for handling unexpected situations in the real world and for continuing to drive without accidents. To train an autonomous vehicle's smart sensors and devices well, a virtual simulator should create scenarios covering the variety of situations possible in the real world. To create reality-based scenarios, data on the real environment must be collected from a real driving vehicle, or a scenario analysis must be conducted by experts. However, both approaches increase the time and cost of scenario generation as more scenarios are created. This paper proposes a deep learning-based scenario generation method that creates scenarios automatically for training autonomous vehicle smart sensors and devices. To generate varied scenarios, the proposed method uses deep learning to extract multiple events from video taken on a real road, and then reproduces those events in a virtual simulator. First, a Faster region-based convolutional neural network (Faster R-CNN) extracts bounding boxes for each object in a driving video. Second, high-level event bounding boxes are calculated from the object boxes. Third, long-term recurrent convolutional networks (LRCN) classify the type of each extracted event. Finally, all event classification results are combined into one scenario. The generated scenarios can be used in an autonomous driving simulator to teach the multiple events that occur during real-world driving. To verify the performance of the proposed scenario generation method, experiments were conducted using real driving video data and a virtual simulator. The deep learning models achieved an accuracy of 95.6%; furthermore, multiple high-level events were extracted, and various scenarios were generated in the virtual simulator for an autonomous vehicle's smart sensors and devices.
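The four-step pipeline above can be sketched with the learned components stubbed out. The Faster R-CNN detector and LRCN classifier are replaced by trivial stand-ins, and the tracks, frame width, and "cut_in"/"follow" labels are invented for illustration.

```python
def event_bbox(track):
    """Step 2: merge a track's per-frame boxes into one high-level event box."""
    xs0, ys0, xs1, ys1 = zip(*track)
    return (min(xs0), min(ys0), max(xs1), max(ys1))

def classify_event(box, frame_w=100):
    """Step 3 stand-in for LRCN: label an event from its merged box."""
    x0, _, x1, _ = box
    return "cut_in" if (x1 - x0) > frame_w / 2 else "follow"

# Step 1 output (stubbed): per-frame (x0, y0, x1, y1) boxes per object,
# as a Faster R-CNN detector plus tracking would produce.
tracks = {
    "car_1": [(10, 40, 30, 60), (40, 40, 70, 60), (60, 40, 90, 60)],
    "car_2": [(80, 10, 95, 30), (78, 12, 93, 32)],
}

# Step 4: combine all classified events into one scenario.
scenario = {oid: classify_event(event_bbox(t)) for oid, t in tracks.items()}
print(scenario)
```

The resulting scenario dictionary is the kind of multi-event description that the virtual simulator would then replay.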


1993 ◽  
Vol 67 (4) ◽  
pp. 686-686
Author(s):  
Donald L. Wolberg

The responsibilities of Secretary of this Society bring with them an opportunity to interact with the membership and the public. It is a job that offers a unique perspective on the profession and a general perception of the profession. There is no doubt but that a high level of interest in paleontology is present “out there” in the real world and people are interested in fossils. I continue to receive many, many requests for our educational brochures and there has been a surprising number of requests for membership information in the Society by people for whom paleontology is an avocation.


2007 ◽  
Vol 16 (3) ◽  
pp. 318-332 ◽  
Author(s):  
George Drettakis ◽  
Maria Roussou ◽  
Alex Reche ◽  
Nicolas Tsingos

In this paper we present a user-centered design approach to the development of a Virtual Environment (VE), utilizing an iterative, user-informed process throughout the entire design and development cycle. A preliminary survey was first undertaken with end users, that is, architects, chief engineers, and decision makers of a real-world architectural and urban planning project, followed by a study of the traditional workflow employed. We then determined the elements required to make the VE useful in the real-world setting, choosing appropriate graphical and auditory techniques to develop audiovisual VEs with a high level of realism. Our user-centered design approach guided the development of an appropriate interface and an evaluation methodology to test the overall usability of the system. The VE was evaluated both in the laboratory and, most importantly, in the users' natural work environments. In this study we present the choices we made as part of the design and evaluation methodologies employed, which successfully combined research goals with those of a real-world project. Among other results, this evaluation suggests that involving users and designers from the beginning improves the effectiveness of the VE in the context of the real-world urban planning project. Furthermore, it demonstrates that appropriate levels of realism, in particular spatialized 3D sound, high-detail vegetation, and shadows, as well as the presence of rendered crowds, are significant for the design process and for communicating about designs; they enable better appreciation of the overall ambience of the VE, perception of space and physical objects, and the sense of scale. We believe this study is of interest to VE researchers, designers, and practitioners, as well as professionals interested in using VR in their workplace.


Author(s):  
Oksana Elkhova

This article provides a philosophical justification for the concept of a virtuality index (VR Index). The use of the index method is the novelty of this research and allows virtual reality to be considered from a new methodological perspective. In the study, the VR Index is schematized: in the author's opinion, it acts as a generalized relative indicator that characterizes changes in the phenomenon of virtual reality. The basic components of the VR Index are immersion, involvement, and interactivity, which can be represented in both quantitative and qualitative terms. The VR Index can be schematically presented as VR Index = Im · Inv · Int (where Im is immersion, Inv is involvement, and Int is interactivity). For each specific case, this takes the form VR Index = Im^m · Inv^n · Int^p (where the coefficients m, n, p > 0). Immersion characterizes the coverage of a person's senses by the artificially created environment. Involvement reflects the rational and emotional components of a person's mental sphere. Interactivity, in turn, determines the user's interaction with the virtual environment. Each of these components affects the value of the VR Index. The author distinguishes two extreme cases: virtual realities with a low and with a high VR Index. Virtual realities with a low VR Index involve the two main channels of human perception, vision and hearing, and are characterized by minimal user involvement and weak interactivity; the users remain well aware that they are interacting with a simulation of the real world. Virtual realities with a high VR Index cover a larger number of channels of human perception and have a high level of user involvement and interactivity; for the user, events of the real and virtual worlds become indistinguishable from each other.
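The VR Index formula above is a direct product of the three components raised to case-specific exponents. The component values and exponents used below are illustrative, not taken from the article.

```python
def vr_index(im, inv, intv, m=1.0, n=1.0, p=1.0):
    """VR Index = Im^m * Inv^n * Int^p, with coefficients m, n, p > 0."""
    assert m > 0 and n > 0 and p > 0
    return (im ** m) * (inv ** n) * (intv ** p)

# Low-index VR: only vision and hearing, weak involvement and interactivity.
low = vr_index(0.4, 0.2, 0.3)
# High-index VR: many perceptual channels, strong involvement and interactivity.
high = vr_index(0.9, 0.8, 0.9)
print(low, high)
```

Because the components multiply, a near-zero value for any one of them (say, interactivity) drags the whole index down, matching the intuition that all three are needed for a convincing virtual reality.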

