Interaction with geospatial data

2015 ◽  
Vol 57 (1) ◽  
Author(s):  
Johannes Schöning

Abstract: My research interest lies at the intersection of human-computer interaction (HCI) and geoinformatics. I am interested in developing new methods and novel user interfaces for navigating spatial information. This article gives a brief overview of my past and current research topics and streams. Generally speaking, geography is playing an increasingly important role in computer science, and in HCI in particular, ranging from social computing to natural user interfaces (NUIs). At the same time, research in geography has focused more and more on technology-mediated interaction with spatiotemporal phenomena. By bridging the two fields, my aim is to exploit this fruitful intersection and to develop, design and evaluate user interfaces that help people solve their daily tasks more enjoyably and effectively.

2015 ◽  
Vol 2015 ◽  
pp. 1-10 ◽  
Author(s):  
Jiangfan Feng ◽  
Yanhong Liu

Context-aware user interfaces play an important role in many human-computer interaction tasks of location-based services (LBS). Although spatial models for context-aware systems have been studied extensively, how to locate specific spatial information for users is still not well resolved; this matters in the mobile environment, where LBS users are constrained by device limitations. Better context-aware human-computer interaction models for mobile LBS are needed not just to predict performance outcomes, such as whether people will be able to find the information needed to complete an interaction task, but also to understand the human processes at work in spatial query, which in turn informs the detailed design of better user interfaces for mobile LBS. In this study, a context-aware adaptive model for the mobile LBS interface is proposed, consisting of three major sections: purpose, adjustment, and adaptation. Based on this model, we describe the process of user operation and interface adaptation through the dynamic interaction between users and the interface. We then show how the model accommodates users' demands in a complicated environment and demonstrate its feasibility with experimental results.
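
The abstract describes the model conceptually rather than giving an implementation, so the following is only a minimal Python sketch of how the three sections (purpose, adjustment, adaptation) could interact. All class names, context fields, and thresholds below are hypothetical illustrations, not the authors' design.

```python
from dataclasses import dataclass

@dataclass
class Context:
    """Snapshot of the mobile usage context relevant to an LBS query (illustrative fields)."""
    location: tuple          # (latitude, longitude) of the user
    screen_width_px: int     # device limitation: available display width
    moving: bool             # whether the user is currently in motion

@dataclass
class Purpose:
    """What the user is trying to find (the 'purpose' section of the model)."""
    query: str               # e.g. "coffee shop"
    max_results: int = 20

def adjust(purpose: Purpose, context: Context) -> Purpose:
    """'Adjustment': tighten the query so it fits the current context."""
    # Assumption: on a narrow screen or while moving, fewer results are easier to scan.
    if context.screen_width_px < 400 or context.moving:
        return Purpose(purpose.query, max_results=min(purpose.max_results, 5))
    return purpose

def adapt_interface(purpose: Purpose, context: Context) -> dict:
    """'Adaptation': choose how the adjusted result set is presented."""
    return {
        "view": "list" if context.moving else "map",
        "results_to_show": purpose.max_results,
        "center": context.location,
    }

# One pass of the interaction loop: user purpose + context -> adapted interface.
ctx = Context(location=(29.56, 106.55), screen_width_px=360, moving=True)
ui = adapt_interface(adjust(Purpose("coffee shop"), ctx), ctx)
print(ui)   # {'view': 'list', 'results_to_show': 5, 'center': (29.56, 106.55)}
```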


Author(s):  
Angelo Salatino ◽  
Francesco Osborne ◽  
Enrico Motta

Abstract: Classifying scientific articles, patents, and other documents according to the relevant research topics is an important task, which enables a variety of functionalities, such as categorising documents in digital libraries, monitoring and predicting research trends, and recommending papers relevant to one or more topics. In this paper, we present the latest version of the CSO Classifier (v3.0), an unsupervised approach for automatically classifying research papers according to the Computer Science Ontology (CSO), a comprehensive taxonomy of research areas in the field of Computer Science. The CSO Classifier takes as input the textual components of a research paper (usually title, abstract, and keywords) and returns a set of research topics drawn from the ontology. This new version includes a new component for discarding outlier topics and offers improved scalability. We evaluated the CSO Classifier on a gold standard of manually annotated articles, demonstrating a significant improvement over alternative methods. We also present an overview of applications adopting the CSO Classifier and describe how it can be adapted to other fields.
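
The CSO Classifier is distributed as an open-source Python package, and the short sketch below only illustrates the input and output shape stated in the abstract (title, abstract, and keywords in, ontology topics out). The import path, constructor arguments, and method name are assumptions and should be checked against the package documentation.

```python
# Assumed usage of the CSO Classifier; names are illustrative and should be
# verified against the released cso-classifier package.
from cso_classifier import CSOClassifier   # assumption: the package exposes this class

paper = {
    "title": "A context-aware adaptive user interface for mobile services",
    "abstract": "We study how interfaces for location-based services can adapt ...",
    "keywords": "human-computer interaction, adaptive interfaces, ontologies",
}

classifier = CSOClassifier()        # assumption: default configuration is sufficient
topics = classifier.run(paper)      # expected: research topics drawn from the CSO ontology
print(topics)
```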


2021 ◽  
Vol 1 ◽  
pp. 283-292
Author(s):  
Jakob Harlan ◽  
Benjamin Schleich ◽  
Sandro Wartzack

Abstract: The increased availability of affordable virtual reality hardware in recent years has boosted research and development of such systems for many fields of application. While extended reality systems are well established for the visualization of product data, immersive authoring tools that can create and modify that data have yet to see widespread productive use. By making use of building blocks, such tools could allow even non-expert users to express spatial concepts quickly. Optical hand-tracking technology allows this immersive modeling to be implemented with natural user interfaces, in which users manipulate the virtual objects with their bare hands. In this work, we present a systematic collection of natural interactions suited for immersive building-block-based modeling systems. The interactions are conceptually described and categorized by the task they fulfil.
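
The interactions in the paper are described conceptually, so as a concrete illustration the sketch below shows how one such interaction, a bare-hand pinch-and-grab of a building block, could be detected from tracked fingertip positions. The thresholds, the metre units, and the per-frame update structure are assumptions for the sketch, not the authors' implementation.

```python
import numpy as np

PINCH_THRESHOLD_M = 0.025   # assumed thumb-index distance below which a pinch is registered
GRAB_RADIUS_M = 0.10        # assumed reach within which a block can be grabbed

def detect_pinch(thumb_tip: np.ndarray, index_tip: np.ndarray) -> bool:
    """True if the tracked thumb and index fingertips are close enough to count as a pinch."""
    return np.linalg.norm(thumb_tip - index_tip) < PINCH_THRESHOLD_M

def pick_block(pinch_point: np.ndarray, block_positions: list):
    """Return the index of the nearest block within reach, or None."""
    if not block_positions:
        return None
    dists = [np.linalg.norm(p - pinch_point) for p in block_positions]
    i = int(np.argmin(dists))
    return i if dists[i] < GRAB_RADIUS_M else None

def update(thumb_tip, index_tip, blocks, grabbed):
    """Per-frame update: while the pinch is held, the grabbed block follows the hand."""
    pinch_point = (thumb_tip + index_tip) / 2
    if detect_pinch(thumb_tip, index_tip):
        if grabbed is None:
            grabbed = pick_block(pinch_point, blocks)
        if grabbed is not None:
            blocks[grabbed] = pinch_point   # block tracks the hand while pinched
    else:
        grabbed = None                      # releasing the pinch drops the block
    return grabbed

# Toy frame: the hand pinches right next to the second block and grabs it.
blocks = [np.array([0.0, 1.0, 0.5]), np.array([0.3, 1.1, 0.4])]
grabbed = update(np.array([0.29, 1.1, 0.41]), np.array([0.31, 1.1, 0.40]), blocks, None)
print(grabbed)   # 1
```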


2021 ◽  
Vol 10 (7) ◽  
pp. 489
Author(s):  
Kaihua Hou ◽  
Chengqi Cheng ◽  
Bo Chen ◽  
Chi Zhang ◽  
Liesong He ◽  
...  

As the amount of collected spatial information (2D/3D) increases, real-time processing of these massive data has become an urgent issue. Discretizing the physical earth into a digital gridded earth and assigning an integral, computable code to each grid cell has become an effective way to accelerate real-time processing. Researchers have proposed optimization algorithms for spatial calculations in specific scenarios, but a complete set of algorithms for real-time processing using grid coding is still lacking. To address this issue, a carefully designed, integral grid-coding algebraic operation framework for GeoSOT-3D (a multilayer latitude and longitude grid model) is proposed. By converting traditional floating-point calculations based on latitude and longitude into binary operations, the complexity of the algorithms is greatly reduced. We then present the detailed algorithms that were designed, including basic operations, vector operations, code conversion operations, spatial operations, metric operations, topological relation operations, and set operations. To verify the feasibility and efficiency of these algorithms, we developed an experimental platform in C++ (covering the major algorithms; more may be added in the future), generated random data, and conducted experiments. The experimental results show that the computing framework is feasible and can significantly improve the efficiency of spatial processing. The algebraic operation framework is expected to support retrieval and analysis of large geospatial data and, on top of parallel and distributed computing, to find renewed relevance in the era of big geospatial data.
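
GeoSOT-3D's actual code layout and operation set are more elaborate than can be shown here, but the core idea, replacing floating-point latitude/longitude arithmetic with integer bit operations, can be sketched with a simplified Z-order-style encoding. The quantization ranges, the normalized altitude, and the operations below are illustrative assumptions, not the GeoSOT-3D specification.

```python
def encode(lat: float, lon: float, alt_norm: float, level: int) -> int:
    """Quantize each coordinate to `level` bits and interleave them (Z-order style)."""
    def quantize(value: float, lo: float, hi: float) -> int:
        cells = 1 << level
        i = int((value - lo) / (hi - lo) * cells)
        return min(max(i, 0), cells - 1)

    ilat = quantize(lat, -90.0, 90.0)
    ilon = quantize(lon, -180.0, 180.0)
    ialt = quantize(alt_norm, 0.0, 1.0)     # altitude normalized to [0, 1] for this sketch

    code = 0
    for b in range(level):                  # interleave one bit from each axis per level
        code |= ((ilon >> b) & 1) << (3 * b)
        code |= ((ilat >> b) & 1) << (3 * b + 1)
        code |= ((ialt >> b) & 1) << (3 * b + 2)
    return code

def parent(code: int) -> int:
    """The enclosing cell one level up is a pure bit shift, no float math needed."""
    return code >> 3

def contains(ancestor: int, descendant: int, level_gap: int) -> bool:
    """Topological containment reduces to an integer comparison after shifting."""
    return (descendant >> (3 * level_gap)) == ancestor

# Example: two nearby points share the same ancestor cell four levels up.
a = encode(39.90, 116.40, 0.5, level=16)
b = encode(39.91, 116.41, 0.5, level=16)
coarse = a
for _ in range(4):
    coarse = parent(coarse)
print(contains(coarse, b, level_gap=4))   # True
```

Because parent/child and containment relations become shifts and integer comparisons, such codes are cheap to sort, index, and distribute across parallel workers, which is the efficiency gain the abstract points to.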


Author(s):  
Paul Green

An HFES Task Force is considering if, when, and which HFES research publications should require the citation of relevant standards, policies, and practices to help translate research into practice. To support the Task Force's activities, papers and reports are being written about how to find relevant standards produced by various organizations (e.g., the International Organization for Standardization, ISO) and about the content of those standards. This paper describes the human-computer interaction standards produced by ISO/IEC Joint Technical Committee 1 (Information Technology), Subcommittees 7 (Software and Systems Engineering) and 35 (User Interfaces), and by Technical Committee 159, Subcommittee 4 (Ergonomics of Human-System Interaction), in particular the contents of the ISO 9241 series and the ISO 2506x series. Also included are instructions on how to find standards using the ISO Browsing Tool and Technical Committee listings, and references to other materials on finding standards and on standards-related teaching materials.


2019 ◽  
Vol 122 (1) ◽  
pp. 681-699 ◽  
Author(s):  
E. Tattershall ◽  
G. Nenadic ◽  
R. D. Stevens

Abstract: Research topics rise and fall in popularity over time, some more swiftly than others. The fastest rising topics are typically called bursts; for example "deep learning", "internet of things" and "big data". Being able to automatically detect and track bursty terms in the literature could give insight into how scientific thought evolves over time. In this paper, we take a trend detection algorithm from stock market analysis and apply it to over 30 years of computer science research abstracts, treating the prevalence of each term in the dataset like the price of a stock. Unlike previous work in this domain, we use the free text of abstracts and titles, resulting in a finer-grained analysis. We report a list of bursty terms, and then use historical data to build a classifier to predict whether they will rise or fall in popularity in the future, obtaining accuracy in the region of 80%. The proposed methodology can be applied to any time-ordered collection of text to yield past and present bursty terms and predict their probable fate.
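
For readers unfamiliar with stock-market trend indicators, the sketch below applies a MACD-style score to a toy prevalence series (the fraction of abstracts per year containing a term). Whether these parameters match the paper's exact configuration is an assumption; the point is only that a term whose prevalence takes off produces a positive recent score.

```python
import numpy as np

def ema(series: np.ndarray, span: int) -> np.ndarray:
    """Exponential moving average, the building block of MACD-style indicators."""
    alpha = 2.0 / (span + 1.0)
    out = np.empty_like(series, dtype=float)
    out[0] = series[0]
    for t in range(1, len(series)):
        out[t] = alpha * series[t] + (1 - alpha) * out[t - 1]
    return out

def burst_score(prevalence: np.ndarray, fast: int = 12, slow: int = 26, signal: int = 9):
    """Treat yearly term prevalence like a stock price and compute a MACD-style score.

    prevalence[t] = fraction of abstracts in year t that contain the term.
    A strongly positive recent value suggests the term is 'bursting'.
    """
    macd = ema(prevalence, fast) - ema(prevalence, slow)
    return macd - ema(macd, signal)

# Toy example over 31 years: flat for two decades, then a steep rise.
prevalence = np.concatenate([np.full(20, 0.001), np.linspace(0.001, 0.04, 11)])
print(burst_score(prevalence)[-1] > 0)   # True: the recent rise registers as a burst
```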

