Data Sources, Management, and Presentation

Author(s):  
Troy L. Holcombe ◽
Carla J. Moore

The previous chapters have outlined the various techniques for delimiting the continental shelf. However, many continental shelf claims will be developed largely on the basis of existing information. A coastal State should therefore begin its article 76 implementation by assembling and reviewing all available information that is relevant to determining the outer limit of the continental shelf and to assessing the resource potential beyond 200 nautical miles (M). Data compilation activities tend to be labor-intensive, and the time needed for their successful execution depends largely upon the quantity and condition of the data sets, the skill and experience of the compilation staff, and the data-handling facilities at their disposal. Nevertheless, it is reasonably safe to assume that almost any compilation of existing data will be less expensive than mobilizing and executing a field program to collect new data, so it is usually more cost-effective to begin with a compilation. Even if the compilation serves primarily to demonstrate the inadequacy of existing data, it will still be useful in identifying specifically where, and what kind of, new information is needed. To satisfy the requirements of article 76, and to provide a foundation for understanding the resources within the continental shelf, we are concerned primarily with data in the fields of hydrography, geodesy, geology, geophysics, and geochemistry and their subdisciplines. Such data are usually characterized by spatial variations, in two or three dimensions, that are of far greater magnitude than any temporal changes, as in the case of gravity anomaly data, for example. The temporal variation of some geoscience parameters is nonetheless becoming increasingly important as an indicator of environmental change. Because of the importance of these spatial variations for delineating the continental shelf, the traditional form of presentation of geoscience data has been the map. While maps provide an excellent visualization of the data field, they may not be sufficient for the analysis needed to satisfy article 76, and digital data, profiles, and other data forms are increasingly necessary.

Author(s):  
Ron Macnab

The previous chapters have outlined the various techniques for acquiring data on the continental shelf and adjacent areas. We now need to consider how to draw those various data sets together most effectively. This chapter describes a generic procedure for determining whether a coastal State is likely to be entitled to establish a continental shelf limit beyond 200 nautical miles (M), in order to circumscribe an area where it may exercise sovereign rights over natural resources of the seabed and subsoil. In most cases, this procedure will begin with the assembly and analysis of existing information, with the objectives of provisionally determining the outer limit of the continental shelf and of assessing the long-term economic potential of seabed resources beyond 200 M. If the analysis of available information is satisfactory in all respects and justifies such action, the coastal State may proceed directly to the preparation of a claim for submission to the UN Commission on the Limits of the Continental Shelf. If, on the other hand, the result of the investigation is inconclusive or otherwise unsatisfactory on account of inaccurate or incomplete information, the coastal State may opt to acquire new information that enhances existing data holdings and to repeat some or all of the analyses. These steps are illustrated in the generic flow diagram of figure 16.1, outlined in table 16.1, and discussed in some detail in the remainder of this chapter. The essence of article 76 is to define a procedure whereby a coastal State with a wide continental margin may claim jurisdiction over certain resources of the seabed beyond the 200-M limit. It follows that the location of the 200-M limit should be known with a reasonable degree of reliability. The limit is portrayed on the official charts of many nations; however, not all of these charts are constructed at scales or on projections that readily lend themselves to the visualization and analysis of information, such as sounding profiles and seabed morphology, that may need to be examined in conjunction with the 200-M limit. From time to time, therefore, it may be necessary to portray the 200-M limit on a chart that is custom-built or that covers a more restricted area.
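
The iterative loop just described can be summarized in a short, self-contained sketch. The function and dataset names below are illustrative placeholders only, and the stub "analysis" stands in for the article 76 formula work discussed in earlier chapters; this is not an implementation of the actual procedure.

```python
# Minimal sketch of the decision loop described above (cf. figure 16.1).
# Names and the stub analysis are illustrative only.

def analyse(datasets):
    """Stub: a real analysis would apply the article 76 formulae to the data."""
    adequate = "new_survey_data" in datasets
    return {"conclusive": adequate, "entitled_beyond_200M": adequate}

datasets = ["existing_bathymetry", "existing_seismic"]   # assemble existing holdings
result = analyse(datasets)
while not result["conclusive"]:
    datasets.append("new_survey_data")                   # acquire targeted new data
    result = analyse(datasets)                           # repeat the analysis

print("Prepare a submission to the Commission"
      if result["entitled_beyond_200M"] else "No limit beyond 200 M")
```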


Author(s):  
Alan D. Chockie ◽  
M. Robin Graybeal ◽  
Scott D. Kulat

The risk-informed inservice inspection (RI-ISI) process provides a structured and systematic framework for allocating inspection resources in a cost-effective manner while improving plant safety. It helps focus inspections where failure mechanisms are likely to be present and where enhanced inspections are warranted. To date, over eighty-five percent of US nuclear plants and a number of non-US plants have implemented, or are in the process of implementing, RI-ISI programs. Many are already involved in the periodic update of their RI-ISI program. The development of RI-ISI methodologies in the US has been a long and involved process. The risk-informed procedures and rules were developed to take full advantage of probabilistic risk assessment (PRA) data, industry and plant experience, information on specific damage mechanisms, and other available information. An important feature of the risk-informed methodologies is the requirement to make modifications and improvements to the plant’s RI-ISI application as new information and insights become available. The nuclear industry, ASME Section XI, and the Nuclear Regulatory Commission have all worked together to take advantage of the lessons learned over the years to refine and expand the use of risk-informed methodologies. This paper examines the lessons learned and the benefits realized from the application and refinement of risk-informed inservice inspection programs. It also reviews how this information and these insights have been used to improve the risk-informed methodologies.
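
The core idea behind RI-ISI can be illustrated with a toy ranking exercise: combine a consequence-of-failure rank with a failure-potential rank, and concentrate examinations on the highest-risk segments. The sketch below is a simplified illustration under assumed ranks and segment names, not the ASME Section XI or EPRI procedure itself.

```python
# Illustrative sketch of the risk-matrix idea behind RI-ISI: rank piping
# segments by consequence and failure potential, then inspect the riskiest.
# Segment IDs, ranks, and the selection rule are hypothetical.

RISK_MATRIX = {
    ("high", "high"): "high",
    ("high", "low"): "medium",
    ("low", "high"): "medium",
    ("low", "low"): "low",
}

# (segment id, consequence rank, failure-potential rank)
segments = [
    ("RCS-01", "high", "high"),   # e.g. thermal-fatigue susceptibility
    ("FW-12",  "high", "low"),
    ("CVC-07", "low",  "high"),
    ("SW-33",  "low",  "low"),
]

# Toy selection rule: include all high- and medium-risk segments in scope.
for seg_id, cons, pot in segments:
    category = RISK_MATRIX[(cons, pot)]
    if category in ("high", "medium"):
        print(f"{seg_id}: risk category {category} -> include in inspection scope")
```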


Water ◽  
2019 ◽  
Vol 11 (4) ◽  
pp. 758 ◽  
Author(s):  
Jia ◽  
Sitzenfrei ◽  
Rauch ◽  
Liang ◽  
Liu

The development of urban drainage systems is challenged by rapid urbanization; however, little attention has been paid to urban form and its effects on these systems. This study develops an integrated city-drainage model that configures typical urban forms and their associated drainage infrastructure, specifically domestic wastewater and rainwater systems, to analyze the relationship between them. Three typical urban forms were investigated: the square, the star, and the strip. Virtual cities were designed first, with the corresponding drainage systems generated automatically and then linked to the Storm Water Management Model (SWMM). Evaluation was based on 200 random configurations of wastewater/rainwater systems with different structures or attributes. The results show that urban form plays a more important role in three dimensions of performance, namely economic efficiency, effectiveness, and adaptability, for the rainwater systems than for the wastewater systems. Cost is positively correlated with the effectiveness of rainwater systems across the different urban forms, while adaptability is negatively correlated with the other two performance dimensions. Based on the virtual cities we investigated, it is difficult for a city of any form to make its drainage systems simultaneously cost-effective, efficient, and adaptable. By recognizing the pros and cons of different macroscale urban forms, this study can inform urban planning aimed at making the drainage infrastructure of both built-up and yet-to-be-built areas more sustainable.
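
Coupling an automatically generated drainage network to SWMM is typically scripted. The sketch below uses the open-source pyswmm wrapper to run a SWMM input file and accumulate a crude flooding indicator; the input-file name is a placeholder and this is not the authors' actual evaluation code.

```python
# Minimal sketch of driving a SWMM input file from Python with pyswmm.
# 'virtual_city.inp' is a placeholder for an auto-generated drainage network.

from pyswmm import Simulation, Nodes

flooding_proxy = 0.0
with Simulation('virtual_city.inp') as sim:
    nodes = Nodes(sim)
    for _ in sim:                                        # step through the design storm
        flooding_proxy += sum(n.flooding for n in nodes) # node flooding rate, summed per step

print('Summed node flooding over all steps (crude surcharge indicator):', flooding_proxy)
```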


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Eleanor F. Miller ◽  
Andrea Manica

Background: Today an unprecedented amount of genetic sequence data is stored in publicly available repositories. For decades now, mitochondrial DNA (mtDNA) has been the workhorse of genetic studies, and as a result there is a large volume of mtDNA data available in these repositories for a wide range of species. Indeed, whilst whole-genome sequencing is an exciting prospect for the future, for most non-model organisms classical markers such as mtDNA remain widely used. By compiling existing data from multiple original studies, it is possible to build powerful new datasets capable of exploring many questions in ecology, evolution and conservation biology. One key question that these data can help inform is what happened in a species’ demographic past. However, compiling data in this manner is not trivial; there are many complexities associated with data extraction, data quality and data handling.

Results: Here we present the mtDNAcombine package, a collection of tools developed to manage some of the major decisions associated with handling multi-study sequence data, with a particular focus on preparing sequence data for Bayesian skyline plot demographic reconstructions.

Conclusions: There is now more genetic information available than ever before, and large multi-study data sets offer great opportunities to explore new and exciting avenues of research. However, compiling multi-study datasets remains a technically challenging prospect. The mtDNAcombine package provides a pipeline to streamline the process of downloading, curating, and analysing sequence data, guiding the process of compiling data sets from the online database GenBank.
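
The mtDNAcombine package itself is written in R; purely as a hedged illustration of the generic first step it automates, the Python sketch below pulls mtDNA records for a species from GenBank with Biopython's Entrez module. The e-mail address, species, and marker query are placeholders, and this is not the package's own API.

```python
# Hedged sketch of downloading mtDNA sequences from GenBank with Biopython.
# Query terms and e-mail are placeholders; mtDNAcombine (R) is not reproduced here.

from Bio import Entrez, SeqIO

Entrez.email = "you@example.org"                        # required by NCBI; placeholder
query = "Fringilla coelebs[Organism] AND cytb[Gene]"    # hypothetical species/marker

handle = Entrez.esearch(db="nucleotide", term=query, retmax=200)
ids = Entrez.read(handle)["IdList"]
handle.close()

handle = Entrez.efetch(db="nucleotide", id=ids, rettype="gb", retmode="text")
records = list(SeqIO.parse(handle, "genbank"))
handle.close()

print(f"Downloaded {len(records)} records; downstream steps would align, "
      "quality-check, and export alignments for a Bayesian skyline analysis.")
```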


Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 5204
Author(s):  
Anastasija Nikiforova

Nowadays, governments launch open government data (OGD) portals that provide data that can be accessed and used by everyone for their own needs. Although the potential economic value of open (government) data is assessed in the millions and billions, not all open data are reused. Moreover, the open (government) data initiative, as well as users’ intentions for open (government) data, is changing continuously, and today, in line with IoT and smart-city trends, real-time and sensor-generated data are of greater interest to users. These “smarter” open (government) data are also considered one of the crucial drivers of a sustainable economy, and they might have an impact on information and communication technology (ICT) innovation and become a creativity bridge in developing a new ecosystem in Industry 4.0 and Society 5.0. The paper inspects the OGD portals of 60 countries in order to understand how well their content corresponds to Society 5.0 expectations. It reports on the extent to which countries provide such data, focusing on open (government) data success factors for both the portal in general and data sets of interest in particular. The presence of “smarter” data, their level of accessibility, availability, currency and timeliness, as well as support for users, are analyzed, and lists of the most competitive countries by data category are provided. This makes it possible to understand which OGD portals react to users’ needs and to Industry 4.0 and Society 5.0 requests by opening and updating data for further potential reuse, which is essential in the digital, data-driven world.
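
A currency/timeliness check of the kind described can be automated where a portal exposes a machine-readable catalogue API. The sketch below assumes a CKAN-based portal (many, though not all, national OGD portals are CKAN-based); the portal URL, query, and staleness threshold are placeholders, not the paper's actual assessment procedure.

```python
# Hedged sketch of one timeliness check against a CKAN-style OGD portal API.
# Portal URL, query terms, and the one-year threshold are assumptions.

import requests
from datetime import datetime, timezone

PORTAL = "https://data.example.gov"                      # placeholder OGD portal
resp = requests.get(f"{PORTAL}/api/3/action/package_search",
                    params={"q": "sensor OR real-time", "rows": 50}, timeout=30)
resp.raise_for_status()

now = datetime.now(timezone.utc)
stale = 0
for ds in resp.json()["result"]["results"]:
    modified = datetime.fromisoformat(ds["metadata_modified"]).replace(tzinfo=timezone.utc)
    if (now - modified).days > 365:                      # crude staleness threshold
        stale += 1

print(f"{stale} of the matched 'smart' datasets have not been updated in over a year.")
```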


Author(s):  
Ned Augenblick ◽  
Matthew Rabin

Abstract When a Bayesian learns new information and changes her beliefs, she must on average become concomitantly more certain about the state of the world. Consequently, it is rare for a Bayesian to frequently shift beliefs substantially while remaining relatively uncertain, or, conversely, become very confident with relatively little belief movement. We formalize this intuition by developing specific measures of movement and uncertainty reduction given a Bayesian’s changing beliefs over time, showing that these measures are equal in expectation and creating consequent statistical tests for Bayesianess. We then show connections between these two core concepts and four common psychological biases, suggesting that the test might be particularly good at detecting these biases. We provide support for this conclusion by simulating the performance of our test and other martingale tests. Finally, we apply our test to data sets of individual, algorithmic, and market beliefs.
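
The central identity, that expected belief movement equals expected uncertainty reduction, can be checked numerically. The simulation below is a hedged illustration with a simple binary-state, symmetric-signal setup, not the authors' code or their formal test: movement is the sum of squared belief changes and uncertainty reduction is the fall in pi*(1-pi) from prior to final belief, and their averages across many simulated belief streams should roughly coincide.

```python
# Hedged numerical illustration of the movement-equals-uncertainty-reduction
# identity for a Bayesian updater with binary state and symmetric signals.

import random

def simulate_stream(n_signals=20, accuracy=0.6):
    state = random.random() < 0.5                  # true binary state
    pi = 0.5                                       # prior belief that state is True
    movement, u0 = 0.0, pi * (1 - pi)
    for _ in range(n_signals):
        correct = random.random() < accuracy
        signal = state if correct else (not state)
        # Bayes update given the observed symmetric binary signal
        like_true = accuracy if signal else (1 - accuracy)
        like_false = (1 - accuracy) if signal else accuracy
        new_pi = like_true * pi / (like_true * pi + like_false * (1 - pi))
        movement += (new_pi - pi) ** 2
        pi = new_pi
    reduction = u0 - pi * (1 - pi)
    return movement, reduction

results = [simulate_stream() for _ in range(50_000)]
avg_m = sum(m for m, _ in results) / len(results)
avg_r = sum(r for _, r in results) / len(results)
print(f"average movement  = {avg_m:.4f}")
print(f"average reduction = {avg_r:.4f}")          # the two should be close
```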


1987 ◽  
Vol 65 (11) ◽  
pp. 2822-2824 ◽  
Author(s):  
W. A. Montevecchi ◽  
J. F. Piatt

We present evidence to indicate that dehydration of prey transported by seabirds from capture sites at sea to chicks at colonies inflates estimates of wet weight energy densities. These findings and a comparison of wet and dry weight energy densities reported in the literature emphasize the importance of (i) accurate measurement of the fresh weight and water content of prey, (ii) use of dry weight energy densities in comparisons among species, seasons, and regions, and (iii) cautious interpretation and extrapolation of existing data sets.
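
For illustration, with hypothetical numbers: a prey item of 100 g fresh mass containing 100 kJ has a wet-weight energy density of 1.0 kJ/g; if 10 g of water evaporates between capture at sea and weighing at the colony, the same 100 kJ is divided by only 90 g, inflating the apparent wet-weight density to about 1.1 kJ/g, while the dry-weight density (100 kJ over the unchanged dry mass) is unaffected.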

