Mapping malaria by sharing spatial information between incidence and prevalence data sets

Author(s):  
Tim C. D. Lucas ◽  
Anita K. Nandi ◽  
Elisabeth G. Chestnutt ◽  
Katherine A. Twohig ◽  
Suzanne H. Keddie ◽  
...


Author(s):  
Steve Adam

Computer hardware and software have played a significant role in supporting the design and maintenance of pipeline systems. CAD systems allowed designers and drafters to compile drawings and make edits at a pace unmatched by manual pen drawings. Although CAD continues to provide the environment for much pipeline design, Geographic Information Systems (GIS) are also innovating pipeline design through routines such as automated alignment sheet generation. What we have seen over the past two or three decades is an evolution in how we manage the data and information required for decision making in pipeline design and system operation. CAD provided designers and engineers a rapid electronic method for capturing information in a drawing, editing it, and sharing it. As the amount of digital data available to users grows rapidly, however, CAD has been unable to adequately exploit that abundance, and managing change in a CAD environment is cumbersome. GIS and spatial data management have proven to be the next evolution in situations where engineering, integrity, environmental, and other spatial data sets dominate the information required for design and operational decision making. It is conceivable that GIS too will crumble under the weight of its own data usage as centralized databases grow larger and larger. The Geoweb is likely to emerge as the geospatial world's next evolution: it implies the merging of spatial information with the abstract information that currently dominates the Internet. This paper and presentation discuss this fascinating innovation, its force as a disruptive technology, and its oil and gas applications.


2020 ◽  
Vol 12 (23) ◽  
pp. 4007
Author(s):  
Kasra Rafiezadeh Shahi ◽  
Pedram Ghamisi ◽  
Behnood Rasti ◽  
Robert Jackisch ◽  
Paul Scheunders ◽  
...  

The increasing amount of information acquired by imaging sensors in the Earth sciences results in the availability of a multitude of complementary data (e.g., spectral, spatial, elevation) for monitoring the Earth's surface. Many studies have investigated the use of multi-sensor data sets to improve the performance of supervised learning-based approaches on various tasks (e.g., classification and regression), while unsupervised learning-based approaches have received less attention. In this paper, we propose a new approach to fuse multiple data sets from imaging sensors using a multi-sensor sparse-based clustering algorithm (Multi-SSC). A technique for the extraction of spatial features (i.e., morphological profiles (MPs) and invariant attribute profiles (IAPs)) is applied to high-spatial-resolution data to derive spatial and contextual information. This information is then fused with spectrally rich data such as multi- or hyperspectral data. In order to fuse multi-sensor data sets, a hierarchical sparse subspace clustering approach is employed. More specifically, a lasso-based binary algorithm is used to fuse the spectral and spatial information prior to automatic clustering. The proposed framework ensures that the generated clustering map is smooth and preserves the spatial structures of the scene. In order to evaluate the generalization capability of the proposed approach, we investigate its performance not only on diverse scenes but also on different sensors and data types. The first two data sets are geological data sets consisting of hyperspectral and RGB data. The third data set is the well-known benchmark Trento data set, which includes hyperspectral and LiDAR data. Experimental results indicate that this novel multi-sensor clustering algorithm provides an accurate clustering map compared to state-of-the-art sparse subspace-based clustering algorithms.
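As a rough illustration of the sparse subspace clustering step described above (not the authors' Multi-SSC implementation), the sketch below builds a lasso-based self-representation of stacked spectral and spatial features and clusters the resulting affinity matrix; the feature matrix X and the number of clusters are assumed inputs.

```python
# Minimal sparse-subspace-clustering sketch (illustrative only, not Multi-SSC itself).
# Assumes X is an (n_samples, n_features) matrix of stacked spectral + spatial features.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import SpectralClustering

def sparse_subspace_clustering(X, n_clusters, alpha=0.01):
    n = X.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        # Represent sample i as a sparse combination of all other samples.
        mask = np.arange(n) != i
        lasso = Lasso(alpha=alpha, max_iter=5000)
        lasso.fit(X[mask].T, X[i])
        C[i, mask] = lasso.coef_
    # Symmetric affinity built from the sparse coefficients.
    W = np.abs(C) + np.abs(C).T
    labels = SpectralClustering(n_clusters=n_clusters,
                                affinity="precomputed").fit_predict(W)
    return labels
```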


Author(s):  
WALDEMAR IZDEBSKI ◽  
ZBIGNIEW MALINOWSKI

The INSPIRE Directive entered into force in May 2007 and changed the way local governments think about spatial data. The Directive was transposed into Polish legislation through the Act on Spatial Information Infrastructure of 4 March 2010, which indicates the need for the computerization of spatial data sets (including land-use planning). This act intensified thinking about the computerization of spatial data but, according to the authors, the needs and aspirations for digital land-use planning had crystallized already before the INSPIRE Directive, as a result of technological development and growing user awareness. The authors analyze the current state of land-use planning data computerization in local governments. The analysis was conducted on a group of more than 1,700 local governments that use the eGmina spatial data management (GIS) technology.


Author(s):  
Ricardo Oliveira ◽  
Rafael Moreno

Federal, state, and local government agencies in the USA are investing heavily in the dissemination of the Open Data sets each of them produces. The main driver behind this thrust is to increase agencies' transparency and accountability, as well as to improve citizens' awareness. However, not all Open Data sets are easy to access and integrate with other Open Data sets, even those available from the same agency. The City and County of Denver Open Data Portal distributes several types of geospatial datasets, one of which is the city parcel layer containing 224,256 records. Although this data layer contains many pieces of information, it is incomplete for some custom purposes. Open-source software was used first to collect data from diverse City of Denver Open Data sets and then to upload them to a repository in the cloud, where they were processed using a cloud-hosted PostgreSQL installation and Python scripts. Our method was able to extract non-spatial information from a 'not-ready-to-download' source that could then be combined with the initial data set to enhance its potential use.
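A minimal sketch of the kind of open-source ingestion workflow described above; the abstract does not give the actual scripts, endpoint, or schema, so the URL and table name here are hypothetical placeholders.

```python
# Illustrative open-data ingestion step (hypothetical URL and table name).
import csv
import io
import json
import requests
import psycopg2

CSV_URL = "https://example.org/denver/parcels.csv"  # placeholder, not the real portal URL

def load_open_data(conn_params, url=CSV_URL):
    """Download a CSV open data set and load it into a cloud PostgreSQL table."""
    rows = list(csv.DictReader(io.StringIO(requests.get(url, timeout=60).text)))
    with psycopg2.connect(**conn_params) as conn, conn.cursor() as cur:
        cur.execute("""
            CREATE TABLE IF NOT EXISTS parcels_raw (
                parcel_id TEXT PRIMARY KEY,
                attributes JSONB
            )
        """)
        for row in rows:
            cur.execute(
                "INSERT INTO parcels_raw (parcel_id, attributes) "
                "VALUES (%s, %s::jsonb) ON CONFLICT (parcel_id) DO NOTHING",
                (row.get("PARCEL_ID"), json.dumps(row)),
            )
    return len(rows)
```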


Author(s):  
Yun-Young Hwang et al.

In order to make public data more useful, it is necessary to provide relevant data sets that meet the needs of users. We introduce a method for linking datasets: a method for deriving linkages between the fields of structured datasets provided by public data portals. We define a dataset and the connectivity between datasets; this connectivity is based on the metadata of the dataset and on the linkage between the actual data field names and values. We constructed standard field names and, based on this standard, established the relationships between the datasets. This paper covers 31,692 structured datasets (as of May 31, 2020) from the public data portal, from which we extracted 1,185,846 field names. Analysis of the field names showed that those related to spatial information were the most common, at 35%. This paper verifies the method of deriving relations between data sets, focusing on the field names classified as spatial information. To this end, we defined standard spatial field names. To derive similar field names, we extracted field names related to space, such as locations, coordinates, addresses, and zip codes used in public datasets. The standard field names for spatial information were designed and achieved a linkage rate of 43% across the 31,692 datasets. In the future, we plan to apply additional similar field names to improve the linkage rate of the spatial information standard.
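A minimal sketch of the field-name normalization such a linkage step relies on; the actual standard field names and matching rules are not specified in the abstract, so the mapping below is hypothetical. Variant spatial field names are mapped to a standard name so that fields in different datasets can be connected.

```python
# Hypothetical mapping from variant spatial field names to standard field names.
STANDARD_SPATIAL_FIELDS = {
    "latitude": {"lat", "latitude", "y", "y_coord"},
    "longitude": {"lon", "lng", "longitude", "x", "x_coord"},
    "address": {"address", "addr", "street_address", "road_address"},
    "zip_code": {"zip", "zipcode", "zip_code", "postal_code"},
}

def normalize_field(name: str) -> str | None:
    """Return the standard spatial field name for a raw field name, if any."""
    key = name.strip().lower().replace(" ", "_")
    for standard, variants in STANDARD_SPATIAL_FIELDS.items():
        if key == standard or key in variants:
            return standard
    return None

def link_datasets(fields_a: list[str], fields_b: list[str]) -> set[str]:
    """Standard spatial fields shared by two datasets, i.e., candidate join keys."""
    norm_a = {normalize_field(f) for f in fields_a} - {None}
    norm_b = {normalize_field(f) for f in fields_b} - {None}
    return norm_a & norm_b
```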


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Zhen Yu ◽  
Cuihuan Tian ◽  
Shiyong Ji ◽  
Benzheng Wei ◽  
Yilong Yin

Most traditional superpixel segmentation methods use binary logic to generate superpixels for natural images. When these methods are applied to images with significantly fuzzy characteristics, boundary pixels sometimes cannot be correctly classified. To solve this problem, this paper proposes a Superpixel Method Based on Fuzzy Theory (SMBFT), which uses fuzzy theory as a guide and the traditional fuzzy c-means clustering algorithm as a baseline. This method makes full use of the advantage of fuzzy clustering in dealing with images that have fuzzy characteristics: boundary pixels, which have higher uncertainty, are classified with maximum probability, so that each superpixel contains homogeneous pixels. Meanwhile, the method also uses surrounding neighborhood pixels as a spatial constraint, which effectively alleviates the negative effects of noise. The method is tested on images from the Berkeley database and on brain MR images from BrainWeb. In addition, this paper proposes a comprehensive criterion to weight two kinds of criteria when choosing superpixel methods for color images, and an evaluation criterion for medical image data sets that employs the internal entropy of superpixels, inspired by the concept of entropy in information theory. The experimental results show that this method outperforms traditional methods on both natural images and medical images.
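As a rough sketch of the fuzzy c-means baseline the method builds on (not the authors' SMBFT itself), the following computes fuzzy membership degrees for pixel feature vectors; a spatial constraint such as the neighborhood term mentioned above would be added on top of this.

```python
# Minimal fuzzy c-means sketch (illustrative baseline, not the SMBFT method itself).
import numpy as np

def fuzzy_c_means(X, n_clusters, m=2.0, n_iter=100, eps=1e-6):
    """X: (n_pixels, n_features) array; returns membership matrix U and centers V."""
    n = X.shape[0]
    rng = np.random.default_rng(0)
    U = rng.random((n, n_clusters))
    U /= U.sum(axis=1, keepdims=True)             # memberships sum to 1 per pixel
    for _ in range(n_iter):
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]  # fuzzy-weighted cluster centers
        d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / (d ** (2 / (m - 1)))        # standard FCM membership update
        U_new /= U_new.sum(axis=1, keepdims=True)
        converged = np.abs(U_new - U).max() < eps
        U = U_new
        if converged:
            break
    return U, V
```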


Author(s):  
Qianli Ma ◽  
Lifeng Shen ◽  
Enhuan Chen ◽  
Shuai Tian ◽  
Jiabing Wang ◽  
...  

Recognizing human actions represented by 3D trajectories of skeleton joints is a challenging machine learning task. In this paper, 3D skeleton sequences are regarded as multivariate time series, and their dynamics and multiscale features are efficiently learned from action echo states. Specifically, the skeleton data from the limbs and trunk are first projected into five high-dimensional nonlinear spaces that are randomly generated by five dynamic, training-free recurrent networks, i.e., the reservoirs of echo state networks (ESNs). In this way, the history of the time series is represented as nonlinear echo states of actions. We then use a single multiscale convolutional layer to extract multiscale features from the echo states and maintain multiscale temporal invariance with a max-over-time pooling layer. We propose two multi-step fusion strategies to integrate the spatial information over the five parts of the human physical structure. Finally, we learn the label distribution using softmax. With one training-free recurrent layer and only one convolutional layer, our Convolutional Echo State Network (ConvESN) is a very efficient end-to-end model and achieves state-of-the-art performance on four skeleton benchmark data sets.
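A minimal sketch of the training-free reservoir projection an ESN performs (a generic echo-state update, not the exact ConvESN configuration): a fixed random recurrent layer maps each multivariate time step into a high-dimensional echo state that a later convolutional layer could consume.

```python
# Generic echo-state-network reservoir sketch (fixed random weights, no training).
import numpy as np

def reservoir_states(X, n_reservoir=256, spectral_radius=0.9, leak=0.3, seed=0):
    """X: (T, n_inputs) multivariate time series; returns (T, n_reservoir) echo states."""
    rng = np.random.default_rng(seed)
    T, n_in = X.shape
    W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_in))
    W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))  # scale for echo state property
    h = np.zeros(n_reservoir)
    states = np.empty((T, n_reservoir))
    for t in range(T):
        pre = W_in @ X[t] + W @ h
        h = (1 - leak) * h + leak * np.tanh(pre)   # leaky-integrator reservoir update
        states[t] = h
    return states
```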


2018 ◽  
Vol 11 (1) ◽  
pp. 29 ◽  
Author(s):  
Xinwei Jiang ◽  
Xin Song ◽  
Yongshan Zhang ◽  
Junjun Jiang ◽  
Junbin Gao ◽  
...  

Dimensionality reduction (DR) models are essential for extracting low-dimensional features for hyperspectral image (HSI) data analysis, where there are many noisy and redundant spectral features. Among many DR techniques, the graph-embedding discriminant analysis framework has demonstrated its effectiveness for HSI feature reduction. Based on this framework, many representation-based models have been developed to learn the similarity graphs, but most of these methods ignore spatial information, resulting in unsatisfactory performance of the DR models. In this paper, we first propose a novel supervised DR algorithm termed Spatial-aware Collaborative Graph for Discriminant Analysis (SaCGDA) by introducing a simple but efficient spatial constraint into Collaborative Graph-based Discriminant Analysis (CGDA), inspired by the recently developed Spatial-aware Collaborative Representation (SaCR). In order to make the representation of samples on the data manifold smoother, i.e., so that similar pixels share similar representations, we further add spectral Laplacian regularization and propose the Laplacian-regularized SaCGDA (LapSaCGDA), where the spectral and spatial constraints can efficiently exploit the intrinsic geometric structures embedded in HSIs. Experiments on three HSI data sets verify that the proposed SaCGDA and LapSaCGDA outperform other state-of-the-art methods.
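A rough sketch of the collaborative-representation graph that CGDA-style methods build (a generic closed-form ridge solution, not the authors' spatial-aware variant): each sample is represented by the other samples of its class, and the coefficients serve as graph weights for the subsequent graph-embedding DR step.

```python
# Collaborative-representation graph sketch (generic CGDA-style weights, illustrative only).
import numpy as np

def collaborative_graph(X, y, lam=1e-2):
    """X: (n_samples, n_bands) HSI pixels; y: class labels; returns an (n, n) weight matrix."""
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.where((y == y[i]) & (np.arange(n) != i))[0]  # same-class dictionary
        D = X[idx].T                                           # (n_bands, n_i)
        # Closed-form ridge (collaborative) representation of sample i.
        coef = np.linalg.solve(D.T @ D + lam * np.eye(len(idx)), D.T @ X[i])
        W[i, idx] = coef
    return (np.abs(W) + np.abs(W).T) / 2   # symmetrize for graph embedding
```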


2019 ◽  
Vol 11 (24) ◽  
pp. 2897 ◽  
Author(s):  
Yuhui Zheng ◽  
Feiyang Wu ◽  
Hiuk Jae Shim ◽  
Le Sun

Hyperspectral unmixing is a key preprocessing technique for hyperspectral image analysis. To further improve unmixing performance, in this paper a nonlocal low-rank prior is integrated with spatial smoothness and spectral collaborative sparsity for unmixing hyperspectral data. The proposed method is based on the fact that hyperspectral images exhibit self-similarity in the nonlocal sense and smoothness in the local sense. To exploit the spatial self-similarity, nonlocal cubic patches are grouped together to compose a low-rank matrix. Then, within the linear mixing model framework, a nuclear-norm constraint is imposed on the abundance matrix of these similar patches to enforce the low-rank property. In addition, local spatial information and spectral characteristics are taken into account by introducing total variation (TV) regularization and a collaborative sparsity term, respectively. Finally, experiments on two simulated data sets and two real data sets show that the proposed algorithm performs better than other state-of-the-art algorithms.
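Read together, the priors above suggest an objective of roughly the following form; this is a hedged reconstruction from the abstract, not the authors' exact formulation. Here Y is the observed data, E the endmember matrix, A the abundance matrix, and A_g the abundances of the g-th group of similar nonlocal patches.

```latex
\min_{A \ge 0}\ \tfrac{1}{2}\lVert Y - EA \rVert_F^2
  \;+\; \lambda_{\mathrm{lr}} \sum_{g} \lVert A_g \rVert_*
  \;+\; \lambda_{\mathrm{tv}}\, \mathrm{TV}(A)
  \;+\; \lambda_{\mathrm{cs}} \lVert A \rVert_{2,1}
```

The nuclear norm on each patch group encodes the nonlocal low-rank prior, the TV term encodes local spatial smoothness, and the l2,1 norm encodes collaborative sparsity across pixels.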

