data subset
Recently Published Documents


TOTAL DOCUMENTS: 29 (FIVE YEARS: 11)

H-INDEX: 6 (FIVE YEARS: 1)

2021 ◽  
Vol 906 (1) ◽  
pp. 012091
Author(s):  
Petr Kalvoda ◽  
Jakub Nosek ◽  
Petra Kalvodova

Abstract Mobile mapping systems (MMS) have become widely used in standard geodetic tasks in recent years. The paper focuses on the influence of the number and configuration of control points (CPs) on mobile laser scanning (MLS) accuracy. The MLS data were acquired with a RIEGL VMX-450 MMS. The resulting point cloud was compared with two different reference data sets. The first consisted of a high-accuracy test point field (TPF) measured with a Trimble R8s GNSS system and a Trimble S8 HP total station. The second was a point cloud from terrestrial laser scanning (TLS) using two Faro Focus3D X 130 laser scanners. The coordinates of both reference data sets were determined with significantly higher accuracy than those of the tested MLS point cloud. The accuracy testing is based on coordinate differences between the reference data set and the tested MLS point cloud. Based on the MLS trajectory length, a minimum of 6–7 CPs is required in our scanned area to achieve the relative trajectory-positioning accuracy declared in the RIEGL datasheet. We tested two types of ground control point (GCP) configuration for 7 GCPs, using the TPF reference data: a trajectory-based CP configuration and a geometry-based CP configuration. The accuracy differences between the MLS point clouds produced with the two configurations are not statistically significant. From a practical perspective, a geometry-based CP configuration is more advantageous in a nonlinear urban area such as ours. The following analyses are therefore performed on geometry-based CP configuration variants. We tested the influence of moving two CPs from the ground to a roof; no effect of the vertical CP configuration on the accuracy of the tested MLS point cloud was demonstrated. We also tested the effect of the number of CPs on the accuracy of the MLS point cloud. In the overall statistics using the TPF, accuracy increases significantly as the number of GCPs grows to 6, which corresponds to the manufacturer's requirement. Although further increasing the number of CPs does not significantly improve global accuracy, the comparison with the TLS reference point cloud shows that local accuracy keeps improving up to 10 CPs (average spacing 50 m). The accuracy test of the MLS point cloud was divided into a horizontal accuracy test on the façade data subset and a vertical accuracy test on the road data subset, using the TLS reference point cloud. The results of this paper can help improve the efficiency and accuracy of the mobile mapping process in geodetic practice.
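The core of the accuracy test described above is summarising coordinate differences between a tested point cloud and higher-accuracy reference coordinates. The following is a minimal sketch of such a comparison, not the authors' exact procedure: the array names and the nearest-neighbour pairing of points are illustrative assumptions.

```python
# Sketch: compare a tested MLS point cloud against reference coordinates
# (e.g., TPF points) by pairing each reference point with its nearest
# MLS point and summarising the per-axis coordinate differences.
import numpy as np
from scipy.spatial import cKDTree

def mls_accuracy_stats(mls_points: np.ndarray, reference_points: np.ndarray) -> dict:
    """mls_points, reference_points: (N, 3) arrays of X, Y, Z coordinates."""
    tree = cKDTree(mls_points)
    _, idx = tree.query(reference_points)        # nearest MLS point per reference point
    diffs = mls_points[idx] - reference_points   # coordinate differences, per axis
    rmse = np.sqrt((diffs ** 2).mean(axis=0))    # RMSE in X, Y, Z
    horizontal = np.hypot(diffs[:, 0], diffs[:, 1])
    return {
        "rmse_xyz": rmse,
        "rmse_horizontal": np.sqrt((horizontal ** 2).mean()),  # cf. facade subset
        "rmse_vertical": rmse[2],                               # cf. road subset
    }
```

Splitting the statistics into a horizontal component (as on the façade subset) and a vertical component (as on the road subset) mirrors the division used in the paper.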


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Siwei Wu ◽  
Shan Xiao ◽  
Yihua Di ◽  
Cheng Di

In this paper, the latest virtual reconstruction technology is used to study 3D movie animation image acquisition and feature processing in depth. The paper first proposes a time-division multiplexing method based on subpixel multiplexing to improve the resolution of reconstructed images in integrated imaging. By studying the degradation affecting the reconstruction process of a 3D integrated imaging system, it proposes to improve display resolution by increasing the pixel information carried by a fixed display array unit. Based on subpixel multiplexing, an algorithm reuses the pixel information of the 3D scene's elemental images to obtain an elemental image array with new information; this array is then output rapidly on a high-frame-rate light-emitting diode (LED) screen, exploiting the persistence-of-vision effect so that the group of elemental image arrays is perceived as a single planar display. In this way, the information capacity of the finite display array is increased and the display resolution of the reconstructed image is improved. For face reconstruction, we first use a classification algorithm to determine the gender and expression attributes of the face in the input image and filter the corresponding 3D face data subset in the database according to those attributes; we then use sparse representation theory to select prototype faces similar to the target face from that data subset, construct a sparse deformation model from the selected prototype face samples, and finally reconstruct the target 3D face by matching the model to the feature points of the target face. The experimental results show that the algorithm reconstructs faces with high realism and accuracy, including faces with expressions.
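The prototype-selection step can be illustrated with a sparse coding of the target face over a dictionary of prototype faces. The sketch below is an assumption about how such a step could look, not the authors' implementation; shapes and variable names are illustrative.

```python
# Sketch: code a target face (feature vector) as a sparse combination of
# prototype faces from the filtered data subset; prototypes with non-zero
# weights are the candidates for building the deformation model.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def select_prototypes(prototype_faces: np.ndarray, target_face: np.ndarray, k: int = 10):
    """prototype_faces: (n_dims, n_prototypes) dictionary, one prototype per column;
    target_face: (n_dims,) feature vector of the input face."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k)
    omp.fit(prototype_faces, target_face)   # sparse code of the target over the prototypes
    weights = omp.coef_
    selected = np.flatnonzero(weights)      # indices of prototypes actually used
    return selected, weights[selected]
```

Filtering the database by gender and expression first keeps the dictionary small, so the sparse solve stays cheap and the selected prototypes share the target's attributes.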


2021 ◽  
pp. 666-681
Author(s):  
Soumi Das ◽  
Arshdeep Singh ◽  
Saptarshi Chatterjee ◽  
Suparna Bhattacharya ◽  
Sourangshu Bhattacharya

Author(s):  
Kashyap Chitta ◽  
Jose M. Alvarez ◽  
Elmar Haussmann ◽  
Clement Farabet

2021 ◽  
Vol 251 ◽  
pp. 01047
Author(s):  
Lei Qiong

Big data has become a new factor of production in the era of digital governance. It has changed the thinking and mode of decision-making, enabled governance decision-making, and created a new decision-making paradigm. In practice, however, limits on data availability, time, cost, and cognitive and psychological factors mean that the available data subset is often inconsistent with the panoramic (full) data set. Judging and resolving this inconsistency between the data subset and the data set is a prerequisite for scientific decision-making. From the perspective of "consistency", this paper constructs the basic assumptions of governance decision-making, reviews and explains approaches to achieving consistency between the available data subset and the expected data set, broadens the research perspective on measuring data similarity, and strengthens the feasibility of data-driven governance decision-making.
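The paper treats subset-to-dataset consistency conceptually; one concrete way to operationalise it, offered here purely as an illustrative assumption and not as the paper's method, is a per-feature two-sample test between the available subset and the full data set.

```python
# Sketch: flag features where the available subset's distribution deviates
# from the full (panoramic) data set, using a two-sample KS test per feature.
import numpy as np
from scipy.stats import ks_2samp

def subset_consistency(full_data: np.ndarray, subset: np.ndarray, alpha: float = 0.05):
    """full_data: (N, d); subset: (n, d). Returns per-feature p-values and an
    overall verdict: True if no feature deviates at significance level alpha."""
    p_values = np.array([
        ks_2samp(full_data[:, j], subset[:, j]).pvalue
        for j in range(full_data.shape[1])
    ])
    return p_values, bool((p_values > alpha).all())
```

A check of this kind would tell a decision-maker which dimensions of the available data can be trusted to represent the whole before the subset is used to drive a governance decision.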


2020 ◽  
Author(s):  
Mario Locati ◽  
Francesco Mariano Mele ◽  
Vincenzo Romano ◽  
Placido Montalto ◽  
Valentino Lauciani ◽  
...  

The Istituto Nazionale di Geofisica e Vulcanologia (INGV) has a long tradition of sharing scientific data, established well before the Open Science paradigm was conceived. In the last thirty years, a great deal of geophysical data generated by research projects and monitoring activities was published on the Internet, though encoded in multiple formats and made accessible through various technologies.

To organise such a complex scenario, a working group (PoliDat) on implementing an institutional data policy operated from 2015 to 2018. PoliDat published three documents: in 2016, the data policy principles; in 2017, the rules for scientific publications; in 2018, the rules for scientific data management. These documents are available online in Italian and English (https://data.ingv.it/docs/).

According to a preliminary data survey performed between 2016 and 2017, nearly 300 different types of INGV-owned data were identified. In the survey, the compilers were asked to declare all available scientific data, differentiated by level of intellectual contribution: level 0 identifies raw data generated by fully automated procedures, level 1 identifies data products generated by semi-automated procedures, level 2 relates to data resulting from scientific investigations, and level 3 is associated with integrated data resulting from complex analyses.

A Data Management Office (DMO) was established in November 2018 to put the data policy into practice. The DMO's first goal was to design and establish a Data Registry able to satisfy the highly differentiated requirements of both internal and external users, at both scientific and managerial levels. The Data Registry is defined as a metadata catalogue, i.e., a container of data descriptions, not of the data themselves. In addition, the DMO supports other activities dealing with scientific data, such as checking contracts, advising the legal office in case of litigation, interacting with the INGV Data Transparency Office, and, more generally, supporting the adoption of Open Science principles.

An extensive set of metadata has been identified to accommodate multiple metadata standards. First, a preliminary set of metadata describing each dataset is compiled by the authors using a web-based interface; the metadata are then validated by the DMO; finally, a DataCite DOI is minted for each dataset, if not already present. The Data Registry is publicly accessible via a dedicated web portal (https://data.ingv.it). A pilot phase to test the Data Registry was carried out in 2019 and involved a limited number of contributors. To this end, a top-priority data subset was identified according to the relevance of the data to the mission of INGV and the completeness of the information already available. The Directors of the Departments of Earthquakes, Volcanoes, and Environment supervised the selection of the data subset.

The pilot phase helped test and adjust the decisions made and the procedures adopted during the planning phase, and allowed us to fine-tune the data management tools. During the next year, the Data Registry will enter its production phase and will be open to contributions from all INGV employees.
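As a rough illustration of the kind of record such a metadata catalogue might hold, the sketch below combines the intellectual-contribution levels and the DOI workflow described above. The field names are assumptions for illustration; the actual INGV Data Registry schema is not shown here.

```python
# Sketch: a hypothetical Data Registry entry -- a metadata record describing
# a dataset (not the data itself), with a contribution level and a DataCite DOI.
from dataclasses import dataclass
from typing import Optional

@dataclass
class RegistryEntry:
    title: str
    authors: list                 # dataset authors, compiled via the web interface
    contribution_level: int       # 0 = raw, 1 = semi-automated product,
                                  # 2 = research result, 3 = integrated analysis
    doi: Optional[str]            # DataCite DOI; minted after DMO validation if absent
    landing_page: str             # e.g., a page under https://data.ingv.it

entry = RegistryEntry(
    title="Example seismic waveform dataset",  # hypothetical example
    authors=["A. Author"],
    contribution_level=0,
    doi=None,                                  # to be minted on validation
    landing_page="https://data.ingv.it/",
)
```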


Author(s):  
Umanga Bista ◽  
Alexander Mathews ◽  
Minjeong Shin ◽  
Aditya Krishna Menon ◽  
Lexing Xie

This paper considers extractive summarisation in a comparative setting: given two or more document groups (e.g., separated by publication time), the goal is to select a small number of documents that are representative of each group and also maximally distinguishable from the other groups. We formulate a set of new objective functions for this problem that connect recent literature on document summarisation, interpretable machine learning, and data subset selection. In particular, by casting the problem as binary classification amongst the different groups, we derive objectives based on the notion of maximum mean discrepancy, as well as a simple yet effective gradient-based optimisation strategy. Our new formulation allows scalable evaluations of comparative summarisation as a classification task, both automatically and via crowd-sourcing. To this end, we evaluate comparative summarisation methods on a newly curated collection of controversial news topics spanning 13 months. We observe that gradient-based optimisation outperforms discrete and baseline approaches in 15 out of 24 different automatic evaluation settings. In crowd-sourced evaluations, summaries from gradient optimisation elicit 7% more accurate classification from human workers than discrete optimisation. Our result contrasts with recent literature on submodular data subset selection that favours discrete optimisation. We posit that our formulation of comparative summarisation will prove useful in a diverse range of use cases, such as comparing content sources, authors, related topics, or distinct viewpoints.
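The maximum mean discrepancy (MMD) objective mentioned above can be illustrated with a small scoring function over document embeddings. The RBF kernel and the "close to own group, far from the other group" scoring below are illustrative choices, not necessarily the paper's exact formulation.

```python
# Sketch: score a candidate summary (a small subset of documents, as
# embedding vectors) by its MMD to its own group versus the other group.
import numpy as np

def rbf_kernel(a: np.ndarray, b: np.ndarray, gamma: float = 1.0) -> np.ndarray:
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(x: np.ndarray, y: np.ndarray, gamma: float = 1.0) -> float:
    """Squared MMD between two samples of embedding vectors, shape (n, d)."""
    return (rbf_kernel(x, x, gamma).mean()
            - 2.0 * rbf_kernel(x, y, gamma).mean()
            + rbf_kernel(y, y, gamma).mean())

def comparative_score(summary: np.ndarray, own_group: np.ndarray,
                      other_group: np.ndarray) -> float:
    # A good comparative summary is close to its own group (small MMD)
    # and far from the other group (large MMD); lower score is better.
    return mmd2(summary, own_group) - mmd2(summary, other_group)
```

A discrete optimiser would search over document subsets to minimise this score, whereas the paper's gradient-based strategy relaxes the selection so the objective can be optimised continuously.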


2019 ◽  
Vol 30 (7) ◽  
pp. 2212-2221 ◽  
Author(s):  
Meng Fang ◽  
Tianyi Zhou ◽  
Jie Yin ◽  
Yang Wang ◽  
Dacheng Tao
