A redesign of OGC Symbology Encoding standard for sharing cartography

Author(s):  
Erwan Bocher ◽  
Olivier Ertz

Although most Spatial Data Infrastructures offer service-based visualization of geospatial data, the requirements are often so basic that they lead to poor-quality maps. This is a general observation for any geospatial architecture as soon as open standards such as those of the Open Geospatial Consortium (OGC) are applied. To improve the situation, this paper focuses on improvements on the portrayal interoperability side by considering standardization aspects. We propose two major redesign recommendations: first, to consolidate the cartographic theory at the core of the OGC Symbology Encoding standard; second, to build the standard in a modular way so that it is ready to be extended with upcoming cartographic requirements. We start by defining portrayal interoperability by means of typical use cases that frame the concept of sharing cartography. We then highlight the strengths and limits of the relevant open standards to consider in this context. Finally, we propose a set of recommendations to overcome these limits and make the use cases a reality. Even if a cartography-oriented standard cannot act as a complete cartographic design framework by itself, we argue that pushing forward the standardization work dedicated to cartography is a way to share and disseminate good practices and, ultimately, to improve the quality of visualizations.
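To make the notion of "symbology encoding" concrete, the hand-written fragment below sketches what a rule-based style looks like in the XML grammar of OGC Symbology Encoding 1.1 (the rule name and colour values are invented for illustration; they do not come from the paper):

```xml
<!-- Illustrative SE 1.1 fragment (hand-written, not from the paper):
     one FeatureTypeStyle with a single rule that fills polygons
     and draws a thin grey outline. -->
<FeatureTypeStyle xmlns="http://www.opengis.net/se" version="1.1.0">
  <Rule>
    <Name>UrbanAreas</Name>
    <PolygonSymbolizer>
      <Fill>
        <SvgParameter name="fill">#D8E8C8</SvgParameter>
      </Fill>
      <Stroke>
        <SvgParameter name="stroke">#6E6E6E</SvgParameter>
        <SvgParameter name="stroke-width">0.5</SvgParameter>
      </Stroke>
    </PolygonSymbolizer>
  </Rule>
</FeatureTypeStyle>
```

Such a document can be shared between a map client and an OGC portrayal service, which is the interoperability scenario the paper's redesign recommendations target.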



2016 ◽  
Author(s):  
Erwan Bocher ◽  
Olivier Ertz

Although most Spatial Data Infrastructures offer service-based visualization of geospatial data, the requirements are often so basic that they lead to poor-quality maps. This is a general observation for any geospatial architecture as soon as open standards such as those of the Open Geospatial Consortium (OGC) have to be applied. To improve this situation, this paper focuses on improvements on the (inter)operability side by considering standardization aspects. We propose two major redesign recommendations: first, to consolidate the cartographic design knowledge at the core of the OGC Symbology Encoding standard; second, to build the standard in a modular way so that it is ready to host upcoming cartographic requirements. We start by defining the main portrayal interoperability use cases that frame the concept of sharing cartography. We then highlight the strengths and limits of the relevant open standards to consider in this context. Finally, we outline a set of recommendations to overcome these limits and make the use cases a reality. Even if a cartography-oriented standard cannot act as a complete cartographic design framework by itself, we argue that pushing forward the standardization work dedicated to cartography is a way to share and disseminate good practices and, ultimately, to improve the quality of visualizations.


2018 ◽  
Vol 4 ◽  
pp. e143
Author(s):  
Erwan Bocher ◽  
Olivier Ertz

Despite most Spatial Data Infrastructures offering service-based visualization of geospatial data, requirements are often at a very basic level, leading to poor-quality maps. This is a general observation for any geospatial architecture as soon as open standards such as those of the Open Geospatial Consortium (OGC) are applied. To improve the situation, this paper focuses on improvements on the portrayal interoperability side by considering standardization aspects. We propose two major redesign recommendations: first, to consolidate the cartographic theory at the core of the OGC Symbology Encoding standard; second, to build the standard in a modular way so that it is ready to be extended with upcoming cartographic requirements. We start by defining portrayal interoperability by means of typical use cases that frame the concept of sharing cartography. We then highlight the strengths and limits of the relevant open standards to consider in this context. Finally, we propose a set of recommendations to overcome these limits and make the use cases a reality. Even if a cartography-oriented standard cannot act as a complete cartographic design framework by itself, we argue that pushing forward the standardization work dedicated to cartography is a way to share and disseminate good practices and, ultimately, to improve the quality of visualizations.




2011 ◽  
Vol 2011 ◽  
pp. 1-13 ◽  
Author(s):  
Claire L. Donohoe ◽  
Aoife M. Ryan ◽  
John V. Reynolds

Cachexia is a multifactorial process of skeletal muscle and adipose tissue atrophy resulting in progressive weight loss. It is associated with poor quality of life, poor physical function, and poor prognosis in cancer patients. It involves multiple pathways: procachectic and proinflammatory signals from tumour cells, systemic inflammation in the host, and widespread metabolic changes (increased resting energy expenditure and alterations in the metabolism of protein, fat, and carbohydrate). Whether it is primarily driven by the tumour or by the host response to the tumour has yet to be fully elucidated. Cachexia is compounded by anorexia, and the relationship between these two entities has not been fully clarified. Inconsistencies in the definition of cachexia have limited the epidemiological characterisation of the condition, and there has been slow progress in identifying therapeutic agents and trialling them in the clinical setting. Understanding the complex interplay of tumour and host factors will uncover new therapeutic targets.


2021 ◽  
Vol 6 (2(62)) ◽  
pp. 30-36
Author(s):  
Volodymyr Kvasnikov ◽  
Dmytro Ornatskyi ◽  
Valerii Dostavalov

The object of this research is refining the linear dimensions of 3D models obtained by scanning and reducing the number of errors introduced when building the model. At present there is no accurate method for transferring the actual dimensions of an object to its 3D model. One of the most problematic aspects of existing size-transfer methods is the error in the placement of dimensional markers caused by the inaccuracy, or poor quality, of the scanned surface. A model of the instrument complex is used to implement an improved 3D scanning method based on photogrammetry. The proposed technology for constructing and measuring 3D models combines existing scanning methods: photographs are matched on the stereo-pair principle and combined with image projection. It also introduces new functionality, such as preserving the actual dimensions of an object, its textures, and its colour and light characteristics, while improving the accuracy of linear dimensions. By using a reference standard, reference projections, and a new method of comparing photographs to build the 3D model, a 60 % increase in the accuracy of linear dimensions was achieved. This is because the proposed combined method incorporates the most important aspects of existing scanning techniques and adds a number of features, such as the detection of boundary surfaces, automatic sizing, and the detection and processing of glass and mirror surfaces. As a result, the method eliminates the main disadvantages of the conventional photogrammetric approach: inaccuracies in the surface quality of the models and in the transfer of linear dimensions. It is estimated that the combined method will transfer the real dimensions of simple objects to 3D with an accuracy of 99.97 % of the actual size of the object, and will improve the quality of complex surfaces (boundary, glass, mirror) by at least 40–60 % compared with other existing methods.
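The core of the size-transfer idea described above can be reduced to a simple scale calibration: a reference standard of known physical length appears in the scan, and its measured length in model units yields the factor used to convert every model dimension to real-world units. The sketch below is a minimal illustration of that arithmetic only (function names and values are invented, not taken from the paper):

```python
# Minimal sketch of size transfer via a reference standard:
# a physical bar of known length is scanned alongside the object,
# and its length in model units gives the model-to-real scale factor.

def scale_factor(reference_real_mm: float, reference_model_units: float) -> float:
    """Millimetres of real length per model unit."""
    return reference_real_mm / reference_model_units

def to_real_size(model_length: float, factor: float) -> float:
    """Convert a model-space length to real-world millimetres."""
    return model_length * factor

f = scale_factor(100.0, 40.0)    # a 100 mm bar measures 40 model units
print(to_real_size(8.0, f))      # 8 model units -> 20.0 mm
```

In the paper's method the markers are placed automatically, which is precisely where surface quality matters: a noisy surface shifts the measured reference length and hence the scale factor.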


2014 ◽  
Vol 7 (4) ◽  
pp. 506-523 ◽  
Author(s):  
Andrea Ciaramella ◽  
Alberto Celani

Purpose – The aim of the article is to identify the limitations and critical issues in how information in the Italian real estate sector is currently managed, and to propose the principles of a method for providing information on, and comparison of, the phenomena of over-supply and non-rational land use. The study rests on a series of assumptions, the first of which is a definition of "unsold", deemed to mean "the amount of new housing units neither occupied nor sold nor rented". In effect, unsold stock can be considered over-supply of construction. Design/methodology/approach – The article identifies the critical aspects in determining unsold real estate in Italy, starting from the available data and research already carried out, whose results are often contradictory. A comparison with the building-production programming systems adopted in other countries allows the identification of guidelines that can be used to better understand and combat the phenomenon. Findings – The assessment of the state of the art provides a clear picture of the shortcomings and potential of the tools used to date to study a complex phenomenon with many obscure points. The empirical analysis reveals inefficiencies due to the poor quality of information, as well as the reluctance of institutional and market players to adopt data-sharing and data-integration procedures. Research limitations/implications – The research produces solutions addressed to the Italian situation, but it identifies systems and methods used in other countries. Practical implications – The article suggests collection and management information systems that can support a more accurate knowledge of unsold real estate. Social implications – The article focuses on some of the limits of the Italian real estate market, highlighting the need for greater transparency and how this can contribute to a more conscious approach to the market.
Originality/value – The article seeks to answer those who must understand the causes of effects harmful to the market, such as overproduction; models focused on three areas – the procedures, the organization, and the market – are also proposed.


2015 ◽  
Vol 12 (3) ◽  
pp. 51-61
Author(s):  
N M Nenasheva

The proportion of patients with controlled asthma has increased over the last 15 years; however, a significant proportion of patients still do not achieve control of the disease and therefore have a high risk of exacerbations, hospitalizations, and poor quality of life. Patients with severe asthma, for whom additional pharmacotherapy options have been limited, are the major problem. For the first time in recent years, a new class of drugs has appeared in the treatment of asthma: long-acting anticholinergics, namely tiotropium, which was included in the stepwise therapy of asthma by GINA 2015. The article describes the definition of severe asthma, the role of the cholinergic nervous system in bronchial asthma, and the mechanism of action and clinical efficacy of tiotropium in adults with severe bronchial asthma.


2019 ◽  
Vol 43 (6) ◽  
pp. 1021-1029 ◽  
Author(s):  
S.V. Eremeev ◽  
D.E. Andrianov ◽  
V.S. Titov

The article considers the problem of automatically comparing spatial objects on maps of the same locality at different scales. It is proposed to solve this problem using methods of topological data analysis. The input of the algorithm consists of spatial objects that can be obtained from maps at different scales and may have undergone deformations and distortions. Persistent homology allows us to identify the general structure of such objects in the form of topological features; the main topological features in the study are connected components and holes in objects. The paper gives a mathematical description of the persistent homology method for representing spatial objects and defines a barcode for spatial data, which describes an object in terms of its topological features. An algorithm for comparing feature barcodes was developed; it finds the common structure of objects by analysing the barcode data, and an index of object similarity in terms of topological features is introduced. Results are shown for comparing maps of natural and municipal objects at different scales, with generalization and deformation. The experiments confirm the high quality of the proposed algorithm: the similarity percentage for natural objects, taking scale and deformation into account, ranges from 85 to 92, and for municipal objects, after stretching and distortion of their parts, from 74 to 87. Advantages of the proposed approach over analogues are demonstrated for the comparison of objects with significant deformation at different scales and after distortion.
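As a rough illustration of the barcode-comparison idea, the sketch below represents each topological feature as a persistence interval (birth, death), greedily matches intervals between two barcodes, and scores the total matched overlap. The abstract does not specify the authors' actual similarity index, so the matching strategy, the scoring rule, and all names here are assumptions made for illustration:

```python
# Illustrative sketch (NOT the authors' algorithm): compare two persistence
# barcodes by greedily matching intervals and scoring their overlap.
# An interval (birth, death) stands for one topological feature,
# e.g. an H0 connected component or an H1 hole.

def interval_overlap(a, b):
    """Length of the overlap between two (birth, death) intervals."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def barcode_similarity(bar1, bar2):
    """Assumed similarity index in [0, 1]: total matched overlap
    divided by the larger of the two total bar lengths."""
    if not bar1 and not bar2:
        return 1.0
    used = set()
    matched = 0.0
    for a in bar1:
        # Greedily pick the unused interval in bar2 with the largest overlap.
        best, best_ov = None, 0.0
        for j, b in enumerate(bar2):
            if j in used:
                continue
            ov = interval_overlap(a, b)
            if ov > best_ov:
                best, best_ov = j, ov
        if best is not None:
            used.add(best)
            matched += best_ov
    total = max(sum(d - b for b, d in bar1), sum(d - b for b, d in bar2))
    return matched / total if total else 0.0

# Example: a map object and its rescaled counterpart share most features,
# while the finer map contributes one extra short-lived (noise) feature.
coarse = [(0.0, 1.0), (0.2, 0.8)]
fine = [(0.0, 0.9), (0.25, 0.8), (0.6, 0.65)]
print(barcode_similarity(coarse, fine))
```

Greedy matching is only a stand-in; production TDA libraries typically compare diagrams with bottleneck or Wasserstein distances, which solve the matching optimally.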

