Beyond Data: Handling Spatial and Analytical Contexts with Genetics-Based Machine Learning

Author(s):  
Catherine Dibble

Geographic information systems (GISs) are fairly good at handling three types of data: locational, attribute, and topological. Recent work holds promise for adding temporal data to this list as well (e.g., see Langran, 1992). Yet the unprecedentedly vast resources of geographically referenced data continue to outstrip our ability to derive meaningful information from such databases, despite dramatic improvements in computer processing power, algorithm efficiency, and parallel processing. In part this is because such research has emphasized improvements in processing efficiency rather than effectiveness.

We humans are slow-minded compared with our silicon inventions, yet our analytical capabilities remain far more powerful, primarily because we have evolved elaborate cognitive infrastructures devoted to ensuring that we leverage our limited processing power by focusing our attention on the events and information most likely to be relevant. In GIS use, so far only human perception provides the requisite integration of spatial context, and human attention directs the determination of relevance and the selection of geographic features and related analyses. Understanding of spatial context and analytical purpose exists only in the minds of the humans working with the GIS or viewing the displays and maps it produces. We still extract information from our geographic data systems primarily through long series of relatively tedious and complex spatial operations, performed (or at least explicitly preprogrammed) by a human, in order to derive each answer.

Human integration of analytical purpose with spatial and attribute contexts is perhaps the most essential and yet the most invisible component of any geographic analysis; it is also perhaps the most fundamental missing link in any GIS. Only humans can glance at a map of toxic waste dumps next to school yards, or oil spills upstream from fisheries, and recognize the potential threat of such proximity; human cartographers understand the importance of emphasizing either road or stream networks depending on the purpose of a map; and humans understand that "near" operates at different scales for corner stores versus cities, or tropical jungle habitat versus open savannah. Given a GIS with the capability to deluge any inquiry with myriad layers of extraneous data, this natural human ability to filter data and manipulate only the meaningful elements is essential.
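A short sketch can make the scale-dependence of "near" concrete. The Python fragment below is illustrative only, not from the article: the feature names, coordinates, and thresholds are invented, and shapely is used for geometry simply because it is a common choice. The point is that the proximity threshold itself is part of the analytical context.

```python
# Hypothetical sketch: flagging proximity threats with a context-dependent
# notion of "near". All names, coordinates, and thresholds are illustrative.
from shapely.geometry import Point

# "Near" depends on context: metres for a schoolyard, kilometres for a fishery.
NEAR_THRESHOLDS_M = {
    "schoolyard": 500,    # assumed threat radius around a school
    "fishery": 20_000,    # assumed threat radius upstream of a fishery
}

def proximity_threats(hazards, targets, context):
    """Return (hazard, target) pairs closer than the context's threshold."""
    limit = NEAR_THRESHOLDS_M[context]
    return [
        (h_name, t_name)
        for h_name, h_geom in hazards
        for t_name, t_geom in targets
        if h_geom.distance(t_geom) <= limit
    ]

# Example: a toxic waste dump 300 m from a school yard is flagged.
dumps = [("dump_a", Point(0, 0))]
schools = [("school_1", Point(300, 0))]
print(proximity_threats(dumps, schools, "schoolyard"))  # [('dump_a', 'school_1')]
```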

2011 · Vol 108 (2) · pp. 27-30
Author(s):  
S. Paulikas ◽  
P. Sargautis ◽  
V. Banevicius

The problem of estimating the video quality experienced by the end user of mobile video streaming is addressed. The spread of mobile communication systems and increasing data transmission rates are expanding the variety of multimedia services, one of which is video streaming, so it is important to assess the quality of this service. Since the consumers of video streaming are humans, quality assessment must account for the characteristics of human perception. Existing methods for estimating user-experienced video quality usually use the bit-error rate as a quality metric, which correlates poorly with video quality as perceived by humans. More advanced methods usually require either more processing power than handheld mobile devices can provide, or intrusion into device firmware and/or hardware to obtain the required data. However, recent research shows that the channel throughput dedicated to a given service (e.g., video streaming) can be tied to the QoS perceived by the end user. This paper presents research on the impact of wireless channel parameters, namely throughput and jitter, on the quality of video streaming. These parameters can be obtained easily by monitoring IP-level data streams in the end user's device with a fairly simple software agent, allowing video streaming QoS to be indicated. Ill. 5, bibl. 10 (in English; abstracts in English and Lithuanian). http://dx.doi.org/10.5755/j01.eee.108.2.138
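A minimal sketch of the kind of lightweight agent described here, assuming packet arrivals are already visible as (timestamp, size) pairs at the IP level; the observation window and the jitter definition (mean absolute deviation of inter-arrival times) are assumptions of convenience, not the paper's exact formulas.

```python
# Sketch of a simple software agent deriving throughput and jitter from
# observed IP-level packets, given as (arrival_time_s, payload_bytes) pairs.
from statistics import mean

def throughput_bps(packets, window_s):
    """Bits per second over the observation window."""
    total_bits = sum(size * 8 for _, size in packets)
    return total_bits / window_s

def jitter_ms(packets):
    """Mean absolute deviation of inter-arrival times, in milliseconds."""
    times = [t for t, _ in packets]
    gaps = [b - a for a, b in zip(times, times[1:])]
    if len(gaps) < 2:
        return 0.0
    avg = mean(gaps)
    return mean(abs(g - avg) for g in gaps) * 1000.0

# Synthetic observations for illustration.
packets = [(0.00, 1400), (0.02, 1400), (0.05, 1400), (0.07, 1400)]
print(throughput_bps(packets, window_s=0.07))  # 640000.0 bps, ~640 kbit/s
print(jitter_ms(packets))                      # ~4.4 ms inter-arrival jitter
```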


2018 · Vol 10 (1) · pp. 34-42
Author(s):  
Muzafar Ahmad Bhat ◽  
Amit Jain

The Internet of Things (IoT) has arisen as a novel prospect in recent years, presenting the notion that devices such as smartphones, public services, conveyance facilities, and home appliances can all be viewed as data-creating devices. The cloud computing framework for IoT highlighted in this work has the potential to act as a data storage system for IoT devices, improving data processing efficiency and offering a substantial competitive advantage to IoT applications. The purpose of this study is to examine the work done on IoT using big data and data mining methods, in order to identify areas that need further attention. Particular focus is placed on data mining technologies integrated with IoT for decision-making support and system optimization, since data mining involves discovering novel, interesting, and potentially useful patterns in data and applying algorithms to extract hidden information.
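As an illustration of the kind of data-mining step the survey has in mind, the sketch below counts frequently co-occurring IoT device events, a simple form of pattern discovery. The devices, event windows, and support threshold are all synthetic and not taken from the paper.

```python
# Illustrative sketch: mining frequently co-occurring device events from
# IoT data. Each record is the set of devices reporting in one time window.
from itertools import combinations
from collections import Counter

windows = [
    {"thermostat", "motion_sensor", "smart_lock"},
    {"thermostat", "motion_sensor"},
    {"smart_lock", "doorbell"},
    {"thermostat", "motion_sensor", "doorbell"},
]

def frequent_pairs(records, min_support):
    """Return device pairs appearing together in >= min_support windows."""
    counts = Counter(
        pair for rec in records for pair in combinations(sorted(rec), 2)
    )
    return {pair: n for pair, n in counts.items() if n >= min_support}

print(frequent_pairs(windows, min_support=3))
# {('motion_sensor', 'thermostat'): 3}
```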


Informatics · 2019 · Vol 6 (2) · pp. 20
Author(s):  
Patricia Martin-Rodilla ◽  
Cesar Gonzalez-Perez

Research in the digital humanities often involves vague information, either because our objects of study lack clearly defined boundaries or because our knowledge about them is incomplete or hypothetical, which is especially true in disciplines concerned with our past (such as history, archaeology, and classical studies). Most techniques used to represent data vagueness emerged from the natural sciences and lack the expressiveness that would be ideal for humanistic contexts. Building on previous work, we present here a conceptual framework based on the ConML modelling language for the expression of information vagueness in the digital humanities. In addition, we propose an implementation on non-relational data stores, which are becoming popular within the digital humanities. Having clear implementation guidelines allows us to employ search engines or big data systems (commonly implemented using non-relational approaches) to handle the vague aspects of information. The proposed implementation guidelines have been validated in practice, and show how we can query a vagueness-aware system without a large penalty in analytical and processing power.
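A minimal sketch of the implementation idea, under stated assumptions: each value in a document-style (non-relational) record carries an explicit vagueness qualifier, and queries filter on it. The field names and qualifier vocabulary below are illustrative inventions, not ConML's actual constructs.

```python
# Sketch: vagueness-aware documents in a non-relational store, here mimicked
# with plain dicts. Every value records how vague it is, and queries decide
# which vagueness levels they are willing to accept.
documents = [
    {"site": "Tumulus A", "date": {"value": -750, "vagueness": "approximate"}},
    {"site": "Villa B",   "date": {"value": -120, "vagueness": "certain"}},
    {"site": "Fort C",    "date": {"value": -400, "vagueness": "hypothetical"}},
]

def query(docs, field, predicate, allowed_vagueness):
    """Match on the value while respecting each datum's vagueness level."""
    return [
        d for d in docs
        if d[field]["vagueness"] in allowed_vagueness
        and predicate(d[field]["value"])
    ]

# Sites dated before 300 BCE, accepting approximate but not hypothetical data.
hits = query(documents, "date", lambda v: v < -300, {"certain", "approximate"})
print([d["site"] for d in hits])  # ['Tumulus A']
```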


2021 · Vol 30 (1) · pp. 34-42
Author(s):  
Tamara V. Dudar ◽  
Olga V. Titarenko ◽  
Alla N. Nekos ◽  
Olena V. Vysotska ◽  
Andrii P. Porvan

Some aspects of environmental hazard within uranium mining areas are considered. The uranium content in environmental components (rocks, soils, underground and surface waters) of the central part of the Ukrainian Shield, within and beyond the uranium mining area, is analyzed using the example of the Michurinske ore field. It is emphasized that man-made sources of natural origin should be considered more broadly than just the waste dumps of uranium mining and processing enterprises: they are sources of ionizing radiation of natural origin that have been concentrated, or made more accessible, by anthropogenic activity, forming additional irradiation on top of the natural radiation background. Waste dumps of uranium mining are considered as sources of potential dust pollution of the surface layers of the atmosphere with fine dust containing uranium, its decay products, and associated elements. The area of the waste dumps is calculated using space images. Uranium accumulates in the dust fraction, where its content is 0.01-0.06%. Given the geological and geochemical characteristics of uranium deposits, radioactive elements, heavy metals, and other elements associated with uranium mineralization are carried out of the dumps by winds and atmospheric waters, with subsequent migration into environmental components. A mathematical model of potential dust pollution of the air in the area of the oldest, long-operating uranium mine is presented for the summer of 2019. In total, 15 factors influencing the potential threat of air dust pollution are considered and analyzed. The mathematical model is developed on the basis of the method of discriminant functions. To assess the informativeness of the model parameters, one-factor covariance analysis is used, which allows assessing the degree of influence of a single factor on the prediction result. The developed model takes into account the area of the waste dumps, the uranium content in the dust fraction, and the southeast and/or east wind directions, which are the most hazardous for the study area. The model correctly determines the level of potential threat of air dust pollution in 96.3% ± 3.6% of all cases.
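A hedged sketch of a discriminant-function classifier of this general shape, using three of the factors named in the abstract (dump area, uranium content of the dust fraction, hazardous wind direction). The training data are invented for illustration, and scikit-learn's linear discriminant analysis stands in for the authors' actual 15-factor model.

```python
# Illustrative only: a discriminant-function threat classifier with three
# of the abstract's factors. Data values and the use of sklearn's LDA are
# assumptions, not the paper's model.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Features: [dump_area_ha, uranium_in_dust_pct, hazardous_wind (1 = SE/E)]
X = np.array([
    [12.0, 0.06, 1],
    [10.5, 0.05, 1],
    [ 3.0, 0.01, 0],
    [ 4.2, 0.02, 0],
    [ 9.8, 0.04, 1],
    [ 2.5, 0.01, 1],
])
y = np.array([1, 1, 0, 0, 1, 0])  # 1 = elevated dust-pollution threat

model = LinearDiscriminantAnalysis().fit(X, y)
print(model.predict([[11.0, 0.05, 1]]))        # predicted threat class
print(model.predict_proba([[11.0, 0.05, 1]]))  # class probabilities
```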


2010 · Vol 31 (3) · pp. 130-137
Author(s):  
Hagen C. Flehmig ◽  
Michael B. Steinborn ◽  
Karl Westhoff ◽  
Robert Langner

Previous research suggests a relationship between neuroticism (N) and the speed-accuracy tradeoff in speeded performance: High-N individuals were observed performing less efficiently than low-N individuals and compensatorily overemphasizing response speed at the expense of accuracy. This study examined N-related performance differences in the serial mental addition and comparison task (SMACT) in 99 individuals, comparing several performance measures (i.e., response speed, accuracy, and variability), retest reliability, and practice effects. N was negatively correlated with mean reaction time but positively correlated with error percentage, indicating that high-N individuals tended to be faster but less accurate in their performance than low-N individuals. The strengthening of the relationship after practice demonstrated the reliability of the findings. There was, however, no relationship between N and distractibility (assessed via measures of reaction time variability). Our main findings are in line with the processing efficiency theory, extending the relationship between N and working style to sustained self-paced speeded mental addition.
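The reported pattern (a negative N-RT correlation alongside a positive N-error correlation, i.e., a speed-accuracy tradeoff) can be illustrated with synthetic data. The effect sizes below are invented, and computing Pearson's r via SciPy is an assumption of convenience, not the study's analysis pipeline.

```python
# Synthetic illustration of the speed-accuracy tradeoff pattern: higher
# neuroticism (N) goes with faster responses but more errors.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_scores = rng.normal(0, 1, 99)                      # N for 99 subjects
rt_ms = 900 - 40 * n_scores + rng.normal(0, 60, 99)  # higher N -> faster
err_pct = 4 + 1.5 * n_scores + rng.normal(0, 2, 99)  # higher N -> more errors

print(pearsonr(n_scores, rt_ms))    # negative r, as reported
print(pearsonr(n_scores, err_pct))  # positive r, as reported
```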


2017 · Vol 131 (1) · pp. 19-29
Author(s):  
Marianne T. E. Heberlein ◽  
Dennis C. Turner ◽  
Marta B. Manser
