EFFICIENT AND ACCURATE FUSION OF MASSIVE VECTOR DATA ON 3D TERRAIN

Author(s):  
Z. Liu ◽  
C. Li ◽  
Z. Zhao ◽  
D. Zhang ◽  
F. Wang ◽  
...  

Abstract. This paper presents a viewpoint-dependent method for fusing massive vector data with 3D terrain, which superposes massive 2D vector data onto undulating multi-resolution 3D terrain precisely and efficiently. First, the method builds an adaptive hierarchical grid spatial index for the vector data and determines the geographic spatial relationship between the vector data and the terrain tiles in the visible area. Second, it uses the improved sub-pixel graphics engine AggExt to generate textures in real time for the vector data bound to terrain tiles. Finally, because massive vector data would otherwise accumulate large numbers of 2D textures in memory, the method releases "expired" vector textures. To balance real-time fusion with smooth interaction in the 3D scene, the method adopts a multi-threading strategy. Experimental results show that the method achieves real-time, seamless fusion of massive vector objects on 3D terrain with a high rendering frame rate; it also reduces the aliasing produced by traditional texture-based methods and improves the quality of vector data fusion.
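The abstract does not detail how "expired" textures are identified; a minimal sketch of a least-recently-used cache for per-tile vector textures (the class and method names here are hypothetical, not from the paper) could look like:

```python
from collections import OrderedDict


class VectorTextureCache:
    """Keep at most `capacity` rasterized vector textures in memory;
    evict the least-recently-used ("expired") one when over budget."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._cache = OrderedDict()  # tile_id -> texture handle/bytes

    def get(self, tile_id):
        if tile_id not in self._cache:
            return None
        self._cache.move_to_end(tile_id)  # mark as recently used
        return self._cache[tile_id]

    def put(self, tile_id, texture):
        self._cache[tile_id] = texture
        self._cache.move_to_end(tile_id)
        while len(self._cache) > self.capacity:
            self._cache.popitem(last=False)  # drop the oldest texture
```

In a multi-threaded setting like the one the paper describes, access to such a cache would additionally need locking around `get`/`put`.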

Author(s):  
Y. Chen ◽  
L. Yan ◽  
X. Lin

Abstract. To respond quickly to rapid changes of mobile platforms in complex situations such as abrupt direction changes or camera shake, visual odometry / visual simultaneous localization and mapping (VO/VSLAM) generally needs a high-frame-rate vision sensor. However, a high sensor frame rate degrades the real-time performance of the odometry, so a balance must be struck between frame rate and pose quality. In this paper, we propose an automatic key-frame method for mobile platforms based on an improved PWC-Net, which improves the pose tracking quality of the odometry, reduces the error caused by motion blur, and increases global robustness. First, a two-step decomposition is used to compute the inter-frame attitude change; then key-frames are added by the improved PWC-Net or selected automatically from the motion state of the vehicle, predicted from pose changes over a short time interval. To evaluate the method, we conduct extensive experiments on the KITTI dataset with monocular visual odometry. The results indicate that our method preserves pose tracking quality while ensuring real-time performance.
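The abstract's idea of triggering key-frames from the inter-frame pose change can be sketched with a simple threshold rule; the thresholds and function names below are illustrative assumptions, not the paper's actual criterion:

```python
import numpy as np


def rotation_angle_deg(R):
    """Rotation angle of a 3x3 rotation matrix, from its trace."""
    c = (np.trace(R) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))


def is_keyframe(R, t, rot_thresh_deg=5.0, trans_thresh_m=0.5):
    """Flag a frame as a key-frame when the inter-frame rotation or
    translation exceeds a threshold (threshold values are illustrative)."""
    return (rotation_angle_deg(R) > rot_thresh_deg
            or np.linalg.norm(t) > trans_thresh_m)
```

A learned predictor such as the improved PWC-Net would replace the fixed thresholds with a data-driven estimate of the motion state.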


2021 ◽  
Author(s):  
S. H. Al Gharbi ◽  
A. A. Al-Majed ◽  
A. Abdulraheem ◽  
S. Patil ◽  
S. M. Elkatatny

Abstract Due to high demand for energy, oil and gas companies started to drill wells in remote areas and unconventional environments. This raised the complexity of drilling operations, which were already challenging and complex. To adapt, drilling companies expanded their use of the real-time operation center (RTOC) concept, in which real-time drilling data are transmitted from remote sites to companies’ headquarters. In RTOC, groups of subject matter experts monitor the drilling live and provide real-time advice to improve operations. With the increase of drilling operations, processing the volume of generated data is beyond a human's capability, limiting the RTOC impact on certain components of drilling operations. To overcome this limitation, artificial intelligence and machine learning (AI/ML) technologies were introduced to monitor and analyze the real-time drilling data, discover hidden patterns, and provide fast decision-support responses. AI/ML technologies are data-driven technologies, and their quality relies on the quality of the input data: if the quality of the input data is good, the generated output will be good; if not, the generated output will be bad. Unfortunately, due to the harsh environments of drilling sites and the transmission setups, not all of the drilling data is good, which negatively affects the AI/ML results. The objective of this paper is to utilize AI/ML technologies to improve the quality of real-time drilling data. The paper fed a large real-time drilling dataset, consisting of over 150,000 raw data points, into Artificial Neural Network (ANN), Support Vector Machine (SVM) and Decision Tree (DT) models. The models were trained on the valid and not-valid datapoints. The confusion matrix was used to evaluate the different AI/ML models including different internal architectures. Despite the slowness of ANN, it achieved the best result with an accuracy of 78%, compared to 73% and 41% for DT and SVM, respectively. 
The paper concludes by presenting a process for using AI technology to improve real-time drilling data quality. To the author's knowledge based on literature in the public domain, this paper is one of the first to compare the use of multiple AI/ML techniques for quality improvement of real-time drilling data. The paper provides a guide for improving the quality of real-time drilling data.
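The accuracy figures above come from a confusion matrix over valid / not-valid labels; as a reminder of how that evaluation works (this is a generic sketch, not the paper's code), the matrix and accuracy for a binary classifier can be computed as:

```python
def confusion_matrix(y_true, y_pred):
    """Counts for a binary valid(1) / not-valid(0) classification:
    returns (true positives, true negatives, false positives, false negatives)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn


def accuracy(y_true, y_pred):
    """Fraction of correctly classified datapoints."""
    tp, tn, fp, fn = confusion_matrix(y_true, y_pred)
    return (tp + tn) / (tp + tn + fp + fn)
```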


In real-time design, conceptually solving any new task is impossible without the analytical reasoning of designers, who interact with natural experience and its models, among which models of precedents occupy an important place. Moreover, work on new tasks is itself a source of such useful models. The quality of the applied reasoning depends essentially on the constructive use of an appropriate language and its effective models. In the version of conceptual activity described in this book, the use of language means is realized as ontological support for design thinking aimed at solving a new task and creating a model of the corresponding precedent. The ontological support provides controlled use of the lexis, extraction of questions for managing the analysis, discovery of cause-and-effect regularities, and achievement of sufficient understanding. Designers perform all these actions in interaction with the project ontology, which can be developed manually or programmatically while working on the task.


2014 ◽  
Vol 496-500 ◽  
pp. 1289-1292
Author(s):  
De Huan Tang ◽  
De Yang Luo

This paper presents the design of a special welding machine for an aluminum cone-bottom workpiece. The machine comprises a high-accuracy positioner system, a laser tracking system, and robotic welding devices, and is used to weld the transverse and longitudinal seams of the workpiece. The interaction of the welding robot with the positioner, together with real-time seam correction, ensures high welding quality.


ARTMargins ◽  
2017 ◽  
Vol 6 (3) ◽  
pp. 28-49
Author(s):  
Benjamin Murphy

Recorded between 1976 and 1977, Juan Downey's video experiments with the Yanomami people have been widely celebrated as offering a critique of traditional anthropology through their use of feedback technology. This article argues, however, that close attention to the different feedback situations the artist constructs with the group reveals a more complex relationship between Downey and that discipline. In the enthusiasm he manifests for synchronous, closed-circuit video feedback in many of his statements about his Yanomami project, Downey in fact tacitly affirms some of the most problematic principles of traditional anthropology. In his emphasis on the real-time quality of this particular form of feedback, the artist puts forth a view of Yanomami society as itself synchronous, as a type of homeostatic, changeless system outside of historical time. As such, he participates in a synchronic bias that anthropologists of his own time had begun to seriously critique. By focusing on one individual video from the Yanomami project, The Laughing Alligator of 1979, this essay argues that Downey's critical contribution to anthropological debates of his time does not come in the form of synchronous feedback, but rather through a different procedure unique to video technology, based on temporal lag, delay, and spacing.


2014 ◽  
Vol 631-632 ◽  
pp. 516-520
Author(s):  
Chao Yang ◽  
Shui Yan Dai ◽  
Ling Da Wu ◽  
Rong Huan Yu

A method for view-dependent smooth rendering of large-scale vector data on a virtual globe, based on vector textures, is presented. The vector texture is rasterized from the vector data according to a view-dependent quadtree LOD and projected onto the terrain. Smooth transitions between multi-level textures are realized by dynamically adjusting texture transparency according to the view range in two processes, which avoids texture "popping": in the "IN" process the texture's alpha value increases as the view range grows, while in the "OUT" process it decreases. A vector-texture buffer updating method based on the least-recently-used algorithm is used to accelerate texture fetching. Finally, real-time rendering of large-scale vector data is implemented on the virtual globe. The results show that the method renders large-scale vector data smoothly in real time.
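The two-process alpha transition described above can be sketched as a linear ramp over the view range; the function name and the linear shape of the ramp are assumptions for illustration, as the paper does not state the exact blending curve:

```python
def texture_alpha(view_range, fade_start, fade_end, fading_in=True):
    """Transparency for a smooth LOD texture transition: alpha ramps
    linearly over [fade_start, fade_end] of the view range, rising in
    the "IN" process and falling in the "OUT" process."""
    t = (view_range - fade_start) / (fade_end - fade_start)
    t = max(0.0, min(1.0, t))          # clamp to [0, 1]
    return t if fading_in else 1.0 - t
```

Cross-fading the outgoing level with `fading_in=False` while the incoming level uses `fading_in=True` keeps the combined result free of abrupt "popping".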


2017 ◽  
Author(s):  
Lars Juhl Jensen

Abstract. Most BioCreative tasks to date have focused on assessing the quality of text-mining annotations in terms of precision and recall. Interoperability, speed, and stability are, however, other important factors to consider for practical applications of text mining. The new BioCreative/BeCalm TIPS task focuses purely on these. To participate in this task, I implemented a BeCalm API within the real-time tagging server also used by the Reflect and EXTRACT tools. In addition to retrieval of patent abstracts, PubMed abstracts, and PubMed Central open-access articles as required in the TIPS task, the BeCalm API implementation facilitates retrieval of documents from other sources specified as custom request parameters. As in earlier tests, the tagger proved to be both highly efficient and stable, consistently processing requests of 5000 abstracts in less than half a minute, including retrieval of the document text.


Author(s):  
Muhammad Ismu Haji ◽  
Sugeng Purwantoro E.S.G.S ◽  
Satria Perdana Arifin

IP addressing today still relies mainly on IPv4, while the pool of available IPv4 addresses is gradually diminishing: IPv4 has a limited address capacity. IPv6 was developed with a far larger address space than IPv4. To connect IPv4 and IPv6 without disturbing the existing infrastructure, methods such as tunneling are needed. Tunneling builds a path over which IPv4 and IPv6 can communicate; 6to4 tunneling enables IPv6 to communicate with IPv4 over IPv4 infrastructure. Internet users also need real-time communication to stay connected, and one form of real-time communication is VoIP. To determine the quality of tunneling implemented on a VoIP network, QoS metrics such as delay, packet loss, and jitter were analyzed. The measured delay is 20.01 ms for IPv4, 19.99 ms for IPv6, and 20.03 ms for 6to4. Packet loss is 0.01% for IPv4, 0.01% for IPv6, and 0.08% for 6to4. The measured jitter is 7.96 ms for IPv4, 7.39 ms for IPv6, and 8.48 ms for 6to4. The test results show that IPv6 achieves better QoS values than IPv4 and 6to4 tunneling. The 6to4 tunneling results are the highest (worst) of the three because the IPv6 packets sent are encapsulated in IPv4 form to traverse the IPv4 infrastructure.
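The delay, jitter, and packet-loss metrics reported above can be computed from per-packet send and receive timestamps; this is a generic sketch of those definitions (jitter here is the mean absolute difference between consecutive delays, which is one common convention, not necessarily the one used in the study):

```python
def qos_metrics(send_times, recv_times):
    """Compute (mean delay, mean jitter, packet loss ratio) from
    per-packet timestamps; a lost packet is marked None in recv_times."""
    delays = [r - s for s, r in zip(send_times, recv_times) if r is not None]
    lost = sum(1 for r in recv_times if r is None)
    packet_loss = lost / len(recv_times)
    mean_delay = sum(delays) / len(delays)
    # Jitter: mean absolute change between consecutive one-way delays.
    jitter = (sum(abs(b - a) for a, b in zip(delays, delays[1:]))
              / (len(delays) - 1)) if len(delays) > 1 else 0.0
    return mean_delay, jitter, packet_loss
```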

