Geospatial Artificial Intelligence (GeoAI)

Geography ◽  
2021 ◽  
Author(s):  
Song Gao

Nowadays, artificial intelligence (AI) is bringing tremendous new opportunities and challenges to geospatial research. Its rapid development is powered by theoretical advances, big data, computer hardware (e.g., the graphics processing unit, or GPU), and high-performance computing platforms that support the development, training, and deployment of AI models within a reasonable amount of time. Recent years have witnessed significant advances in geospatial artificial intelligence (GeoAI), the integration of geospatial studies and AI, especially machine learning, deep learning, and the latest AI technologies, in both academia and industry. GeoAI can be regarded as a subject of study that develops intelligent computer programs to mimic the processes of human perception, spatial reasoning, and discovery about geographical phenomena and dynamics; to advance our knowledge; and to solve problems in human environmental systems and their interactions, with a focus on spatial contexts and roots in geography and geographic information science (GIScience). Competence in GeoAI research therefore requires knowledge of AI theory, programming and computational practice, and geographic domain knowledge. Collaborative GeoAI studies are increasingly common across GIScience, remote sensing, the physical environment, and human society. It is a good time to provide a key reference list for educators, students, researchers, and practitioners who want to keep up with the latest GeoAI research topics. This bibliographical entry first reviews the historical roots of AI in geography and GIScience and then lists up to ten selected recent works per topic of interest, with annotations that briefly describe their importance, across a GeoAI landscape ranging from fundamental spatial representation learning to spatial prediction and on to advances in cartography, earth observation, social sensing, and geospatial semantics.
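As a concrete illustration of the spatial-statistics primitives on which GeoAI representation learning and spatial prediction build, the following minimal sketch computes Moran's I, a classic measure of spatial autocorrelation. The toy values and weight matrix are invented for illustration only:

```python
# Moran's I: a classic measure of spatial autocorrelation.
# Values near +1 indicate spatial clustering, near -1 dispersion,
# and near 0 spatial randomness.

def morans_i(values, weights):
    """values: list of n observations; weights: n x n spatial weight matrix."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    w_sum = sum(sum(row) for row in weights)
    num = sum(weights[i][j] * dev[i] * dev[j]
              for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    return (n / w_sum) * (num / den)

# Toy example: four cells on a line with rook adjacency;
# low values sit next to low, high next to high (positive autocorrelation).
vals = [1.0, 2.0, 8.0, 9.0]
W = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
print(round(morans_i(vals, W), 3))  # → 0.4
```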

Author(s):  
Indar Sugiarto ◽  
Doddy Prayogo ◽  
Henry Palit ◽  
Felix Pasila ◽  
Resmana Lim ◽  
...  

This paper describes a prototype of a computing platform dedicated to artificial intelligence explorations. The platform, dubbed PakCarik, is essentially a high-throughput computing platform with GPU (graphics processing unit) acceleration. PakCarik is an Indonesian acronym for Platform Komputasi Cerdas Ramah Industri Kreatif, which can be translated as “Creative Industry friendly Intelligence Computing Platform”. The platform aims to provide a complete development and production environment for AI-based projects, especially those that rely on machine learning and multiobjective optimization paradigms. PakCarik was constructed by assembling commercial off-the-shelf hardware and was tested on several AI-related application scenarios. The tests included High-Performance Linpack (HPL) benchmarking, message passing interface (MPI) benchmarking, and TensorFlow (TF) benchmarking. From the experiments, the authors observe that PakCarik's performance is quite similar to that of commonly used cloud computing services such as Google Compute Engine and Amazon EC2, even though it falls slightly behind dedicated AI platforms such as the Nvidia DGX-1 used in the benchmarking experiment. Its maximum computing performance was measured at 326 Gflops. The authors conclude that PakCarik is ready to be deployed in real-world applications and can be made even more powerful by adding more GPU cards.
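A Gflops figure like the one reported for PakCarik is obtained by timing a dense computation and dividing the floating-point operation count by the elapsed time. The sketch below shows the idea with a pure-Python matrix multiply for self-containment; real benchmarks such as HPL run tuned BLAS kernels on the GPU/CPU and report vastly higher numbers:

```python
# Sketch of a Gflops measurement: an N x N matrix multiply performs
# 2*N^3 floating-point operations (one multiply + one add per inner step).
import random
import time

def matmul(a, b, n):
    c = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for k in range(n):
            aik = a[i][k]
            for j in range(n):
                c[i][j] += aik * b[k][j]
    return c

def measure_gflops(n=80):
    a = [[random.random() for _ in range(n)] for _ in range(n)]
    b = [[random.random() for _ in range(n)] for _ in range(n)]
    t0 = time.perf_counter()
    matmul(a, b, n)
    elapsed = time.perf_counter() - t0
    return 2.0 * n ** 3 / elapsed / 1e9  # Gflops

# Interpreted Python lands far below BLAS-backed figures such as 326 Gflops.
print(f"{measure_gflops():.4f} Gflops")
```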


2017 ◽  
Vol 2017 ◽  
pp. 1-12
Author(s):  
Zhe Song ◽  
Xing Mu ◽  
Hou-Xing Zhou

Fast and accurate numerical analysis of large-scale objects and complex structures is essential to electromagnetic (EM) simulation and design. Alongside the mathematical development of EM algorithms, their realization in software is equally significant and must keep pace with evolving hardware architectures. Unlike earlier parallel algorithms implemented on multicore CPUs with OpenMP or on computer clusters with MPI, the graphics processing unit (GPU), a new type of large-scale parallel processor, has shown impressive ability in various supercomputing scenarios, and its application to computational electromagnetics is particularly promising. This paper introduces our recent work on high-performance computing on a GPU/CPU heterogeneous platform and its application to EM scattering problems and planar multilayered medium structures, including a novel realization of OpenMP-CUDA-MLFMM, a developed ACA method, and a deeply optimized CG-FFT method. Numerical examples demonstrate clear gains in efficiency, motivating continued in-depth investigation of computer hardware and its operating mechanisms.
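The core idea behind CG-FFT can be shown in a few lines: when the system matrix is circulant, as arises from discretized convolution kernels in EM scattering, the matrix-vector product inside conjugate gradient costs O(N log N) via the FFT instead of O(N²). The sketch below is schematic, not the authors' optimized GPU implementation; the radix-2 FFT and the test kernel are illustrative:

```python
# CG-FFT sketch: conjugate gradient where A*x is a circular convolution
# evaluated as IFFT(FFT(c) * FFT(x)), c being the first column of the
# circulant matrix A.
import cmath

def fft(x, inverse=False):
    n = len(x)
    if n == 1:
        return x[:]
    even, odd = fft(x[0::2], inverse), fft(x[1::2], inverse)
    sign = 1 if inverse else -1
    out = [0j] * n
    for k in range(n // 2):
        tw = cmath.exp(sign * 2j * cmath.pi * k / n) * odd[k]
        out[k], out[k + n // 2] = even[k] + tw, even[k] - tw
    return out

def circulant_matvec(kernel_hat, x):
    y_hat = [kh * xh for kh, xh in zip(kernel_hat, fft(x))]
    n = len(x)
    return [v / n for v in fft(y_hat, inverse=True)]  # un-normalized IFFT / n

def cg_fft(kernel, b, iters=64, tol=1e-10):
    kernel_hat = fft(kernel)
    x = [0j] * len(b)
    r, p = b[:], b[:]
    rs = sum((v.conjugate() * v).real for v in r)
    for _ in range(iters):
        ap = circulant_matvec(kernel_hat, p)
        alpha = rs / sum((pi.conjugate() * a).real for pi, a in zip(p, ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * a for ri, a in zip(r, ap)]
        rs_new = sum((v.conjugate() * v).real for v in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# Symmetric positive-definite circulant kernel (first column of A);
# every row of A sums to 6, so A x = 1 has the solution x = 1/6.
c = [4.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0]
x = cg_fft(c, [1.0 + 0j] * 8)
print(round(x[0].real, 4))  # → 0.1667
```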


Author(s):  
Hiroshi Yamamoto ◽  
Yasufumi Nagai ◽  
Shinichi Kimura ◽  
Hiroshi Takahashi ◽  
Satoko Mizumoto ◽  
...  

2020 ◽  
Vol 96 (3s) ◽  
pp. 585-588
Author(s):  
С.Е. Фролова ◽  
Е.С. Янакова

Methods are proposed for building prototyping platforms for high-performance systems-on-chip (SoC) for artificial intelligence tasks. The requirements for platforms of this class and the principles for adapting an SoC design for implementation in a prototype are described, along with methods for debugging designs on the prototyping platform. Results are presented from computer vision algorithms using neural network technologies running on an FPGA prototype of the ELcore semantic cores.


Author(s):  
Yuchen Luo ◽  
Yi Zhang ◽  
Ming Liu ◽  
Yihong Lai ◽  
Panpan Liu ◽  
...  

Abstract Background and aims Improving the rate of polyp detection is an important measure for preventing colorectal cancer (CRC). Real-time automatic polyp detection systems, built on deep learning methods, can learn and perform specific endoscopic tasks previously performed by endoscopists. The purpose of this study was to explore whether a high-performance, real-time automatic polyp detection system could improve the polyp detection rate (PDR) in the actual clinical environment. Methods The selected patients underwent same-day, back-to-back colonoscopies in random order, with either traditional colonoscopy or artificial intelligence (AI)-assisted colonoscopy performed first by different experienced endoscopists (> 3000 colonoscopies). The primary outcome was the PDR. The trial was registered with clinicaltrials.gov (NCT047126265). Results In this study, we randomized 150 patients. The AI system significantly increased the PDR (34.0% vs 38.7%, p < 0.001). In addition, AI-assisted colonoscopy increased the detection of polyps smaller than 6 mm (69 vs 91, p < 0.001), but no difference was found for larger lesions. Conclusions A real-time automatic polyp detection system can increase the PDR, primarily for diminutive polyps. However, a larger sample size is still needed in a follow-up study to further verify this conclusion. Trial Registration clinicaltrials.gov Identifier: NCT047126265
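A back-to-back paired design like this one is typically analyzed with a paired test such as McNemar's. The sketch below makes the computation concrete; the discordant-pair counts are hypothetical illustrations, since the abstract reports only the PDR percentages:

```python
# McNemar's test sketch for a paired (back-to-back) design.
# b: lesions found only in the AI-assisted arm; c: only in the traditional arm.
# Counts below are hypothetical, for illustration only.

def mcnemar_chi2(b, c):
    """Chi-square statistic with continuity correction, 1 degree of freedom."""
    return (abs(b - c) - 1) ** 2 / (b + c)

stat = mcnemar_chi2(b=12, c=3)  # hypothetical discordant pairs
# Compare against the chi-square(1 df) critical value 3.841 for p < 0.05.
print(stat > 3.841)  # → True
```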


2021 ◽  
Vol 13 (5) ◽  
pp. 124
Author(s):  
Jiseong Son ◽  
Chul-Su Lim ◽  
Hyoung-Seop Shim ◽  
Ji-Sun Kang

Despite the development of various technologies and systems that use artificial intelligence (AI) to solve disaster-related problems, difficult challenges remain. Data are the foundation for solving diverse disaster problems with AI, big data analysis, and related methods, so these varied data deserve close attention. Disaster data are domain-specific by disaster type, heterogeneous, and lack interoperability. Open disaster data in particular raise several issues: their sources and formats differ because they are collected by different organizations, and the vocabularies used in each domain are inconsistent. This study proposes a knowledge graph to resolve the heterogeneity among various disaster data and to provide interoperability among domains. Among disaster domains, we describe a knowledge graph for flooding disasters built from Korean open datasets and cross-domain knowledge graphs. The proposed knowledge graph is then used to assist in solving and managing disaster problems.
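The integration step the abstract describes can be sketched as loading heterogeneous records into one triple-based graph while mapping each source's vocabulary onto shared predicates, so that cross-domain queries become possible. All dataset names, field names, and mappings below are invented for illustration:

```python
# Minimal knowledge-graph sketch: subject-predicate-object triples plus a
# vocabulary mapping that reconciles field names from different sources.
triples = set()

def add(subject, predicate, obj):
    triples.add((subject, predicate, obj))

# Two sources describe the same flood event with different field names;
# both are mapped to one shared predicate.
VOCAB = {"floodArea": "affectedRegion", "damage_zone": "affectedRegion"}

source_a = {"id": "flood:2020-07", "floodArea": "Seomjin River basin"}
source_b = {"id": "flood:2020-07", "damage_zone": "Gurye-gun"}

for record in (source_a, source_b):
    for field, value in record.items():
        if field in VOCAB:
            add(record["id"], VOCAB[field], value)

# A cross-source query now uses one predicate instead of two field names.
regions = sorted(o for s, p, o in triples
                 if s == "flood:2020-07" and p == "affectedRegion")
print(regions)  # → ['Gurye-gun', 'Seomjin River basin']
```

In a production system the same pattern would be expressed in RDF with shared ontology terms rather than a Python set, but the reconciliation idea is identical.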


2019 ◽  
Vol 3 (2) ◽  
pp. 34
Author(s):  
Hiroshi Yamakawa

In a human society with emergent technology, the destructive actions of some pose a danger to the survival of all of humankind, increasing the need to maintain peace by overcoming universal conflicts. However, human society has not yet achieved complete global peacekeeping. Fortunately, a new possibility for peacekeeping among human societies using the appropriate interventions of an advanced system will be available in the near future. To achieve this goal, an artificial intelligence (AI) system must operate continuously and stably (condition 1) and have an intervention method for maintaining peace among human societies based on a common value (condition 2). However, as a premise, it is necessary to have a minimum common value upon which all of human society can agree (condition 3). In this study, an AI system to achieve condition 1 was investigated. This system was designed as a group of distributed intelligent agents (IAs) to ensure robust and rapid operation. Even if common goals are shared among all IAs, each autonomous IA acts on each local value to adapt quickly to each environment that it faces. Thus, conflicts between IAs are inevitable, and this situation sometimes interferes with the achievement of commonly shared goals. Even so, they can maintain peace within their own societies if all the dispersed IAs think that all other IAs aim for socially acceptable goals. However, communication channel problems, comprehension problems, and computational complexity problems are barriers to realization. This problem can be overcome by introducing an appropriate goal-management system in the case of computer-based IAs. Then, an IA society could achieve its goals peacefully, efficiently, and consistently. Therefore, condition 1 will be achievable. In contrast, humans are restricted by their biological nature and tend to interact with others similar to themselves, so the eradication of conflicts is more difficult.
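The goal-management idea for computer-based IAs can be sketched as follows: each agent ranks actions by its own local value, but a goal manager vets every proposed action against the commonly shared goal before it executes, so locally motivated choices cannot violate the shared goal. All names and the scoring scheme are illustrative assumptions, not the paper's design:

```python
# Toy sketch: a goal manager filters each IA's locally preferred actions
# through a shared, socially acceptable goal.
SHARED_GOAL = lambda action: action["harm"] == 0  # socially acceptable

class IntelligentAgent:
    def __init__(self, name, local_value):
        self.name = name
        self.local_value = local_value  # scores actions by local preference

    def propose(self, actions):
        return max(actions, key=self.local_value)

def goal_manager(agent, actions):
    """Let the agent pick its best action among those meeting the shared goal."""
    acceptable = [a for a in actions if SHARED_GOAL(a)]
    return agent.propose(acceptable) if acceptable else None

actions = [
    {"name": "seize-resource", "gain": 9, "harm": 2},
    {"name": "negotiate", "gain": 5, "harm": 0},
    {"name": "wait", "gain": 1, "harm": 0},
]
greedy = IntelligentAgent("IA-1", local_value=lambda a: a["gain"])
chosen = goal_manager(greedy, actions)
print(chosen["name"])  # → negotiate (best local gain among harmless actions)
```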


2007 ◽  
Vol 61 (1) ◽  
pp. 45-62 ◽  
Author(s):  
Hui Yu ◽  
Enrique Aguado ◽  
Gary Brodin ◽  
John Cooper ◽  
David Walsh ◽  
...  

In densely populated cities and indoor environments, limited visibility to satellites and severe multipath effects significantly affect the accuracy and reliability of satellite-based positioning systems. To meet the need for “seamless navigation” in these challenging environments, an advanced terrestrial positioning system is under development. The system is based on Ultra-Wideband (UWB) technology, a promising candidate for this application owing to its fine time-domain resolution and immunity to multipath. This paper presents a detailed analysis of two key aspects of the UWB signal design that allow it to serve as the basis of such a high-performance positioning system: the modulation scheme and the multiple access technique. These two aspects are evaluated in terms of spectral efficiency and synchronisation performance over multipath channels. The paper thus identifies optimal modulation and multiple access techniques for a long-range, high-performance terrestrial positioning system using UWB.
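The reason UWB's fine time resolution matters for positioning can be made concrete: accurate time-of-arrival measurements give ranges to known anchors, from which a 2-D position follows by linearized trilateration. The anchor layout and ranges below are illustrative, not from the paper:

```python
# Time-of-arrival trilateration sketch: subtract the first range equation
# from the rest to linearize, then solve the 2x2 normal equations.
import math

def trilaterate(anchors, ranges):
    """Least-squares (x, y) from >= 3 anchors with measured ranges."""
    (x0, y0), r0 = anchors[0], ranges[0]
    rows, rhs = [], []
    for (xi, yi), ri in zip(anchors[1:], ranges[1:]):
        rows.append((2 * (xi - x0), 2 * (yi - y0)))
        rhs.append(r0**2 - ri**2 + xi**2 - x0**2 + yi**2 - y0**2)
    # Normal equations A^T A p = A^T b, solved by hand for the 2x2 case.
    a11 = sum(a * a for a, _ in rows)
    a12 = sum(a * b for a, b in rows)
    a22 = sum(b * b for _, b in rows)
    b1 = sum(a * r for (a, _), r in zip(rows, rhs))
    b2 = sum(b * r for (_, b), r in zip(rows, rhs))
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = (3.0, 4.0)
ranges = [math.dist(a, true_pos) for a in anchors]  # noise-free for clarity
x, y = trilaterate(anchors, ranges)
print(round(x, 3), round(y, 3))  # → 3.0 4.0
```

Ranging error scales with timing error times the speed of light, which is why UWB's nanosecond-scale resolution and multipath immunity translate directly into position accuracy.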

