3D Simplification Methods and Large Scale Terrain Tiling

2020 · Vol 12 (3) · pp. 437
Author(s): Ricard Campos, Josep Quintana, Rafael Garcia, Thierry Schmitt, George Spoelstra, ...

This paper tackles the problem of generating world-scale multi-resolution triangulated irregular networks optimized for web-based visualization. Starting with a large-scale high-resolution regularly gridded terrain, we create a pyramid of triangulated irregular networks representing distinct levels of detail, where each level of detail is composed of small tiles of a fixed size. The main contribution of this paper is to redefine three different state-of-the-art 3D simplification methods to work efficiently at the tile level, thus rendering the process highly parallelizable. These modifications center on the restriction that the vertices on the border edges of a tile must remain coincident with those of its neighbors at the same level of detail. We define these restrictions for the three different types of simplification algorithms (greedy insertion, edge-collapse simplification, and point set simplification), each of which imposes different assumptions on the input data. We implement at least one representative method of each type and compare them both qualitatively and quantitatively on a large-scale dataset covering the European area at a resolution of 1/16 of an arc minute in the context of the European Marine Observations Data network (EMODnet) Bathymetry project. The results show that, although the simplification method designed for elevation data attains the best results in terms of mean error with respect to the original terrain, the other, more generic state-of-the-art 3D simplification techniques produce comparable error while providing different complexities for the triangle meshes.
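To make the border restriction concrete, here is a minimal Python sketch (not the paper's implementation) of a point-set-style simplification of a single gridded tile in which all border vertices are kept, so that tiles sharing an edge at the same level of detail remain coincident. The tolerance criterion and tile size are illustrative assumptions.

```python
# Minimal sketch: point-set-style simplification of one regularly gridded tile,
# with the key restriction that all border vertices are preserved so that
# neighboring tiles at the same level of detail share identical borders.
import numpy as np

def simplify_tile(heights: np.ndarray, tolerance: float) -> np.ndarray:
    """Return a boolean mask of vertices to keep for one (H x W) gridded tile."""
    h, w = heights.shape
    keep = np.zeros((h, w), dtype=bool)

    # Restriction: border vertices are never removed, so tile edges stay
    # coincident with the borders of neighboring tiles at the same LOD.
    keep[0, :] = keep[-1, :] = keep[:, 0] = keep[:, -1] = True

    # Crude interior criterion: keep vertices that a coarse (corner-based)
    # bilinear surface fails to approximate within the vertical tolerance.
    ys = np.linspace(0.0, 1.0, h)[:, None]
    xs = np.linspace(0.0, 1.0, w)[None, :]
    c00, c01, c10, c11 = heights[0, 0], heights[0, -1], heights[-1, 0], heights[-1, -1]
    bilinear = (1 - ys) * ((1 - xs) * c00 + xs * c01) + ys * ((1 - xs) * c10 + xs * c11)
    keep |= np.abs(heights - bilinear) > tolerance
    return keep

# Toy usage: a 33 x 33 tile with a bump in the middle keeps its borders and the bump.
tile = np.zeros((33, 33))
tile[12:20, 12:20] = 5.0
mask = simplify_tile(tile, tolerance=1.0)
print(mask.sum(), "of", mask.size, "vertices kept")
```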

2021 · Vol 7 (3) · pp. 50
Author(s): Anselmo Ferreira, Ehsan Nowroozi, Mauro Barni

The possibility of carrying out a meaningful forensic analysis on printed and scanned images plays a major role in many applications. First of all, printed documents are often associated with criminal activities, such as terrorist plans, child pornography, and even fake packages. Additionally, printing and scanning can be used to hide the traces of image manipulation or the synthetic nature of images, since the artifacts commonly found in manipulated and synthetic images are gone after the images are printed and scanned. A problem hindering research in this area is the lack of large-scale reference datasets to be used for algorithm development and benchmarking. Motivated by this issue, we present a new dataset composed of a large number of synthetic and natural printed face images. To highlight the difficulties associated with the analysis of the images of the dataset, we carried out an extensive set of experiments comparing several printer attribution methods. We also verified that state-of-the-art methods for distinguishing natural and synthetic face images fail when applied to printed and scanned images. We envision that the availability of the new dataset and the preliminary experiments we carried out will motivate and facilitate further research in this area.


Author(s): Anil S. Baslamisli, Partha Das, Hoang-An Le, Sezer Karaoglu, Theo Gevers

In general, intrinsic image decomposition algorithms interpret shading as one unified component including all photometric effects. As shading transitions are generally smoother than reflectance (albedo) changes, these methods may fail to distinguish strong photometric effects from reflectance variations. Therefore, in this paper, we propose to decompose the shading component into direct (illumination) and indirect shading (ambient light and shadows) subcomponents. The aim is to distinguish strong photometric effects from reflectance variations. An end-to-end deep convolutional neural network (ShadingNet) is proposed that operates in a fine-to-coarse manner with a specialized fusion and refinement unit exploiting the fine-grained shading model. It is designed to learn specific reflectance cues separated from specific photometric effects to analyze the disentanglement capability. A large-scale dataset of scene-level synthetic images of outdoor natural environments is provided with fine-grained intrinsic image ground truths. Large-scale experiments show that our approach using fine-grained shading decompositions outperforms state-of-the-art algorithms utilizing unified shading on NED, MPI Sintel, GTA V, IIW, MIT Intrinsic Images, 3DRMS and SRD datasets.
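One plausible reading of the fine-grained shading model described above, shown as a small numpy sketch: the image is reconstructed as albedo times the sum of direct and indirect shading. The array names are hypothetical; a trained ShadingNet would predict these components.

```python
# Sketch of a fine-grained intrinsic model: the image is reconstructed from
# albedo and a shading term split into direct illumination and indirect
# effects (ambient light and shadows).
import numpy as np

def reconstruct(albedo: np.ndarray, direct: np.ndarray, indirect: np.ndarray) -> np.ndarray:
    """I = A * (S_direct + S_indirect), all arrays of shape (H, W, 3) in [0, 1]."""
    return np.clip(albedo * (direct + indirect), 0.0, 1.0)

# Toy check: a red surface under bright direct light plus a dim ambient term.
h, w = 4, 4
albedo = np.tile([0.8, 0.1, 0.1], (h, w, 1))
direct = np.full((h, w, 3), 0.9)
indirect = np.full((h, w, 3), 0.1)
image = reconstruct(albedo, direct, indirect)
print(image.shape, image.max())
```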


Complexity · 2020 · Vol 2020 · pp. 1-12
Author(s): Kongfan Zhu, Rundong Guo, Weifeng Hu, Zeqiang Li, Yujun Li

Legal judgment prediction (LJP), as an effective and critical application in legal assistant systems, aims to determine the judgment results from the information established during fact determination. In real-world scenarios, when handling criminal cases, judges rely not only on the fact description but also on external information, such as basic information about the defendant and the court view. However, most existing works take the fact description as the sole input for LJP and ignore this external information. We propose a Transformer-Hierarchical-Attention-Multi-Extra (THME) Network to make full use of the information surrounding the fact determination. We conduct experiments on a real-world large-scale dataset of criminal cases in the civil law system. Experimental results show that our method outperforms state-of-the-art LJP methods on all judgment prediction tasks.
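A minimal sketch, assuming a generic attention-based fusion rather than the authors' exact THME architecture, of how an encoded fact description can attend over external-information encodings (defendant profile, court view) before judgment classification. Module names and dimensions are illustrative.

```python
# Sketch of fusing an encoded fact description with encodings of external
# information through multi-head attention before a judgment classifier.
import torch
import torch.nn as nn

class FactExtraFusion(nn.Module):
    def __init__(self, dim: int = 128, num_labels: int = 10):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.classifier = nn.Linear(dim, num_labels)

    def forward(self, fact_tokens: torch.Tensor, extra_tokens: torch.Tensor) -> torch.Tensor:
        # Fact tokens attend over the external-information tokens.
        fused, _ = self.attn(query=fact_tokens, key=extra_tokens, value=extra_tokens)
        pooled = (fact_tokens + fused).mean(dim=1)   # residual + mean pooling
        return self.classifier(pooled)               # logits per judgment label

# Toy usage with random encodings: batch of 2 cases, 20 fact tokens, 8 extra tokens.
model = FactExtraFusion()
logits = model(torch.randn(2, 20, 128), torch.randn(2, 8, 128))
print(logits.shape)  # torch.Size([2, 10])
```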


2013 · Vol 21 (1) · pp. 3-47
Author(s): Idan Szpektor, Hristo Tanev, Ido Dagan, Bonaventura Coppola, Milen Kouylekov

Entailment recognition is a primary generic task in natural language inference, whose focus is to detect whether the meaning of one expression can be inferred from the meaning of the other. Accordingly, many NLP applications would benefit from high-coverage knowledge bases of paraphrases and entailment rules. To this end, learning such knowledge bases from the Web is especially appealing due to its huge size as well as its highly heterogeneous content, allowing for more scalable rule extraction across various domains. However, the scalability of state-of-the-art entailment rule acquisition approaches from the Web is still limited. We present a fully unsupervised learning algorithm for Web-based extraction of entailment relations. We focus on increased scalability and generality with respect to prior work, with the potential of building a large-scale Web-based knowledge base. Our algorithm takes as its input a lexical–syntactic template and searches the Web for syntactic templates that participate in an entailment relation with the input template. Experiments show promising results, achieving performance similar to a state-of-the-art unsupervised algorithm operating over an offline corpus, but with the benefit of learning rules for different domains with no additional effort.
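A toy sketch of the anchor-based intuition behind Web-based entailment rule acquisition, under heavy simplifying assumptions: a tiny in-memory corpus and regex-level template matching stand in for Web search and syntactic parsing, and this is not the authors' algorithm.

```python
# Instantiate the input template to collect argument anchors, then rank other
# templates by how many of the same anchor pairs they connect in the corpus.
import re
from collections import Counter

corpus = [
    "Pfizer acquired Wyeth in 2009.",
    "Pfizer bought Wyeth for 68 billion dollars.",
    "Google acquired YouTube in 2006.",
    "Google purchased YouTube.",
    "Google praised YouTube.",
]

def anchors(template: str):
    """Collect (X, Y) argument pairs that instantiate a template like 'X acquired Y'."""
    pattern = re.escape(template).replace("X", r"(\w+)").replace("Y", r"(\w+)")
    return {m.groups() for s in corpus for m in re.finditer(pattern, s)}

def candidate_templates(anchor_pairs):
    """Count templates 'X <verb> Y' that connect the same anchor pairs."""
    counts = Counter()
    for x, y in anchor_pairs:
        for s in corpus:
            for m in re.finditer(rf"{x}\s+(\w+)\s+{y}", s):
                counts[f"X {m.group(1)} Y"] += 1
    return counts

pairs = anchors("X acquired Y")
print(candidate_templates(pairs).most_common())
```

Templates that connect the same anchors ('X bought Y', 'X purchased Y', ...) become candidate entailment rules for 'X acquired Y'; a real system applies additional statistical evidence to filter spurious co-occurrences such as 'X praised Y'.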


Author(s): S. Nagaraju, B. Prabhakara Reddy

Mental stress is harmful to human health, and prolonged abnormal stress can undermine mental well-being, calling for proactive care. With the popularity of social media, people share their daily activities and interact with friends on social media platforms, which makes it feasible to use online social network data for stress detection. We find that a user's stress state is closely associated with that of his/her friends in social media, and we employ a large-scale dataset from real-world social platforms to systematically study the relationship between users' stress states and their social interactions. We first define a set of stress-related attributes drawn from comments, images, and social behavior, and then propose a detection model. Experimental results show that the proposed model improves detection performance. Building on this, we provide a website where users can estimate their stress level and review other related activities.


2012 · Vol 204-208 · pp. 4922-4927
Author(s): Zhen Pei Li, Ze Gen Wang, Hong Liang Jia

With the development of network and information technology, it is highly desirable to create 3D geoscience applications in the network environment. Constructing a web-based 3D terrain model is one of the foundational tasks of such applications. This paper presents a new method for constructing a large-scale 3D terrain model based on Extensible 3D (X3D). With this method, we first use the Geospatial component of X3D to build the 3D terrain model from a DEM data source, and then publish the model to the Internet using a Browser/Server architecture and plug-in technology. To improve the display performance of the large-scale 3D terrain model, we present an approach that uses Level of Detail (LOD) technology to simplify the model. The application results show that the presented method is easy to integrate with other network systems, thereby facilitating data sharing and interoperation, and that it offers fast display speed and a high degree of interactivity. It is an effective method for constructing large-scale 3D terrain models in a network environment.
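A minimal sketch of the LOD idea under stated assumptions: a pyramid of progressively downsampled DEM grids from which coarser terrain models can be served as the viewer moves away. Serialization of each level into X3D geospatial nodes is omitted, and the DEM here is synthetic.

```python
# Build a pyramid of progressively downsampled DEM grids so that a browser
# can switch to a coarser grid when the viewer is far from the terrain.
import numpy as np

def build_lod_pyramid(dem: np.ndarray, levels: int) -> list[np.ndarray]:
    """Return `levels` grids, each halving the resolution of the previous one."""
    pyramid = [dem]
    for _ in range(levels - 1):
        pyramid.append(pyramid[-1][::2, ::2])  # simple decimation; averaging also works
    return pyramid

dem = np.random.rand(257, 257) * 1000.0     # synthetic elevations in meters
for level, grid in enumerate(build_lod_pyramid(dem, levels=4)):
    print(f"LOD {level}: {grid.shape[0]} x {grid.shape[1]} vertices")
```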


2016 · pp. 1046-1065
Author(s): Byounghyun Yoo

This chapter investigates how the visualization of sensor resources on a 3D Web-based globe organized by level-of-detail can enhance search and exploration of information by easing the formulation of geospatial queries against the metadata of sensor systems. The case study provides an approach inspired by geographical mashups in which freely available functionality and data are flexibly combined. The authors use PostgreSQL, PostGIS, PHP, X3D-Earth, and X3DOM to allow the Web3D standard and its geospatial component to be used for visual exploration and level-of-detail control of a dynamic scene. The proposed approach facilitates the dynamic exploration of the Sensor Web and allows the user to seamlessly focus in on a particular sensor system from a set of registered sensor networks deployed across the globe. In this chapter, the authors present a prototype metadata exploration system featuring levels-of-detail for a multi-scaled Sensor Web and use it to visually explore sensor data of weather stations.
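For illustration, a hedged sketch of the kind of geospatial metadata query such a mashup might issue against PostGIS from Python: fetch the sensor systems whose location falls inside the current view extent. The table and column names (sensor_systems, name, geom) are hypothetical; only the PostGIS functions used are standard.

```python
# Query hypothetical sensor-system metadata stored in PostGIS for the current
# map extent, using a bounding-box overlap against the geometry column.
import psycopg2

def sensors_in_view(conn, min_lon, min_lat, max_lon, max_lat):
    query = """
        SELECT name, ST_X(geom) AS lon, ST_Y(geom) AS lat
        FROM sensor_systems
        WHERE geom && ST_MakeEnvelope(%s, %s, %s, %s, 4326);
    """
    with conn.cursor() as cur:
        cur.execute(query, (min_lon, min_lat, max_lon, max_lat))
        return cur.fetchall()

# Usage (connection parameters are placeholders):
# conn = psycopg2.connect("dbname=sensorweb user=viewer")
# print(sensors_in_view(conn, -10.0, 35.0, 30.0, 60.0))
```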


2019 · Vol 66 · pp. 243-278
Author(s): Shashi Narayan, Shay B. Cohen, Mirella Lapata

We introduce "extreme summarization," a new single-document summarization task which aims at creating a short, one-sentence news summary answering the question "What is the article about?". We argue that extreme summarization, by nature, is not amenable to extractive strategies and requires an abstractive modeling approach. In the hope of driving research on this task further: (a) we collect a real-world, large-scale dataset by harvesting online articles from the British Broadcasting Corporation (BBC); and (b) we propose a novel abstractive model which is conditioned on the article's topics and based entirely on convolutional neural networks. We demonstrate experimentally that this architecture captures long-range dependencies in a document and recognizes pertinent content, outperforming an oracle extractive system and state-of-the-art abstractive approaches when evaluated automatically and by humans on the extreme summarization dataset.
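A minimal sketch, not the authors' model, of the idea of conditioning a convolutional encoder on an article's topic distribution: the topic vector is concatenated to every word embedding before the 1D convolution. All sizes and module names are illustrative.

```python
# Topic-conditioned convolutional encoder: each word embedding is augmented
# with the document's topic distribution before the convolutional layer.
import torch
import torch.nn as nn

class TopicConditionedConvEncoder(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=128, num_topics=50, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim + num_topics, hidden, kernel_size=3, padding=1)

    def forward(self, token_ids: torch.Tensor, topic_dist: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, seq_len); topic_dist: (batch, num_topics)
        x = self.embed(token_ids)                                   # (B, T, E)
        topics = topic_dist.unsqueeze(1).expand(-1, x.size(1), -1)  # (B, T, K)
        x = torch.cat([x, topics], dim=-1).transpose(1, 2)          # (B, E+K, T)
        return torch.relu(self.conv(x)).transpose(1, 2)             # (B, T, H)

encoder = TopicConditionedConvEncoder()
out = encoder(torch.randint(0, 10000, (2, 40)), torch.softmax(torch.randn(2, 50), dim=-1))
print(out.shape)  # torch.Size([2, 40, 256])
```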


Author(s): Zhou Zhao, Lingtao Meng, Jun Xiao, Min Yang, Fei Wu, ...

Retweet prediction is a challenging problem in social media sites (SMS). In this paper, we study the problem of image retweet prediction in social media, i.e., predicting whether a user will repost image tweets from their followees. Unlike previous studies, we learn a user preference ranking model from users' past retweeted image tweets in SMS. We first propose a heterogeneous image retweet modeling (IRM) network that exploits users' past retweeted image tweets with associated contexts, their following relations in SMS, and the preferences of their followees. We then develop a novel attentional multi-faceted ranking network learning framework with multi-modal neural networks for the proposed heterogeneous IRM network to learn joint image tweet representations and user preference representations for the prediction task. Extensive experiments on a large-scale dataset from Twitter show that our method achieves better performance than other state-of-the-art solutions to this problem.
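As a rough illustration of the preference-ranking objective (not the paper's attentional multi-faceted network), here is a BPR-style pairwise loss that scores a user's retweeted image tweet above a non-retweeted one; the representations and sizes are placeholders.

```python
# Pairwise preference-ranking loss: encourage score(user, retweeted image tweet)
# to exceed score(user, non-retweeted image tweet).
import torch
import torch.nn.functional as F

def pairwise_ranking_loss(user_vec, pos_tweet_vec, neg_tweet_vec):
    """BPR-style loss over dot-product scores of user and image-tweet vectors."""
    pos_score = (user_vec * pos_tweet_vec).sum(dim=-1)
    neg_score = (user_vec * neg_tweet_vec).sum(dim=-1)
    return -F.logsigmoid(pos_score - neg_score).mean()

# Toy usage with random joint representations of users and image tweets.
user = torch.randn(16, 64)
pos, neg = torch.randn(16, 64), torch.randn(16, 64)
print(pairwise_ranking_loss(user, pos, neg).item())
```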


Sensors · 2019 · Vol 19 (9) · pp. 2040
Author(s): Antoine d’Acremont, Ronan Fablet, Alexandre Baussard, Guillaume Quin

Convolutional neural networks (CNNs) have rapidly become the state-of-the-art models for image classification applications. They usually require large ground-truthed datasets for training. Here, we address object identification and recognition in the wild for infrared (IR) imaging in defense applications, where no such large-scale dataset is available. With a focus on robustness issues, especially viewpoint invariance, we introduce a compact and fully convolutional CNN architecture with global average pooling. We show that this model, trained on realistic simulation datasets, reaches state-of-the-art performance compared with other CNNs, without data augmentation or fine-tuning steps. We also demonstrate a significant improvement in robustness to viewpoint changes with respect to an operational support vector machine (SVM)-based scheme.
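A minimal sketch of a compact, fully convolutional classifier with global average pooling, the architectural idea highlighted above; the layer sizes are illustrative rather than the paper's exact configuration.

```python
# Fully convolutional classifier: a 1x1 convolution produces one feature map
# per class and global average pooling replaces the dense classification head.
import torch
import torch.nn as nn

class FullyConvGAPNet(nn.Module):
    def __init__(self, num_classes: int = 8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, num_classes, 1),   # 1x1 conv maps to one map per class
        )
        self.gap = nn.AdaptiveAvgPool2d(1)   # global average pooling, no dense layers

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.gap(self.features(x)).flatten(1)  # (batch, num_classes) logits

# Single-channel IR-like input of arbitrary spatial size works because of the GAP head.
logits = FullyConvGAPNet()(torch.randn(4, 1, 96, 96))
print(logits.shape)  # torch.Size([4, 8])
```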

