Using deep learning for precipitation forecasting based on spatio-temporal information: a case study

2021 ◽  
Author(s):  
Weide Li ◽  
Xi Gao ◽  
Zihan Hao ◽  
Rong Sun
2018 ◽  
Vol 1 ◽  
pp. 1-3
Author(s):  
Edyta P. Bogucka ◽  
Mathias Jahnke

In this contribution, we introduce geographic concepts to the humanities and present the results of a space-time visualization of historic buildings across the last centuries. The techniques and approaches used are grounded in cartographic research on visualizing spatio-temporal information. As a case study, we applied cartographic styling techniques to a model of the Royal Castle in Warsaw and its different spatial elements, which were constructed and destroyed over its eventful history. In our case, the space-time cube approach proved to be the most suitable representation of this spatio-temporal information. We therefore digitized the castle's different footprints across past centuries, as well as the surrounding landscape structure, and annotated them with monarchies, epochs, and dates. During the digitization process, we had to cope with difficulties such as source maps at varying scales and in different map projections, which resulted in varying accuracies. The results were stored in KML to support a wide variety of visualization platforms.
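
To illustrate how such time-annotated footprints could be stored in KML, here is a minimal Python sketch. It is not the authors' actual pipeline: the coordinates, date range, epoch/monarchy labels, and the helper name footprint_to_kml are hypothetical placeholders. It writes one digitized footprint as a KML Placemark whose TimeSpan carries the temporal annotation, which is what lets space-time-cube viewers position the geometry along the time axis.

```python
# Minimal sketch: writing one time-annotated building footprint to KML.
# Coordinates, dates, and metadata below are hypothetical placeholders,
# not the digitized Royal Castle data from the study.

KML_TEMPLATE = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Document>
    <Placemark>
      <name>{name}</name>
      <description>Epoch: {epoch}; Monarchy: {monarchy}</description>
      <TimeSpan><begin>{begin}</begin><end>{end}</end></TimeSpan>
      <Polygon>
        <outerBoundaryIs><LinearRing><coordinates>
          {coords}
        </coordinates></LinearRing></outerBoundaryIs>
      </Polygon>
    </Placemark>
  </Document>
</kml>"""

def footprint_to_kml(name, epoch, monarchy, begin, end, ring):
    # KML expects "lon,lat,alt" triplets; the ring must close on itself.
    coords = " ".join(f"{lon},{lat},0" for lon, lat in ring + ring[:1])
    return KML_TEMPLATE.format(name=name, epoch=epoch, monarchy=monarchy,
                               begin=begin, end=end, coords=coords)

# Hypothetical rectangular footprint near the castle's location in Warsaw.
ring = [(21.0147, 52.2478), (21.0155, 52.2478),
        (21.0155, 52.2473), (21.0147, 52.2473)]
kml = footprint_to_kml("Royal Castle footprint (state A)",
                       "Baroque", "House of Vasa", "1598", "1619", ring)

with open("castle_footprint.kml", "w", encoding="utf-8") as f:
    f.write(kml)
```

One footprint per construction or destruction phase, each with its own TimeSpan, is enough for most KML-capable viewers to animate the building's evolution through time.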


2020 ◽  
Vol 7 (1) ◽  
Author(s):  
Hamidreza Bolhasani ◽  
Somayyeh Jafarali Jassbi

Abstract In recent years, deep learning has become one of the most important topics in computer science. Deep learning is a growing trend at the cutting edge of technology, and its applications now appear in many areas of everyday life, such as object detection, speech recognition, and natural language processing. Currently, almost all major sciences and technologies benefit from the advantages of deep learning, such as high accuracy, speed, and flexibility; any effort to improve the performance of the underlying techniques is therefore valuable. Deep learning accelerators are hardware architectures designed and optimized to increase the speed, efficiency, and accuracy of computers running deep learning algorithms. In this paper, after reviewing the background on deep learning, we investigate a well-known accelerator architecture named MAERI (Multiply-Accumulate Engine with Reconfigurable Interconnects). The performance of a deep learning task is measured and compared under two different dataflow strategies, NLR (No Local Reuse) and NVDLA (NVIDIA Deep Learning Accelerator), using an open-source tool called MAESTRO (Modeling Accelerator Efficiency via Spatio-Temporal Resource Occupancy). The measured performance indicators show that the novel optimized architecture, NVDLA, achieves higher L1 and L2 computation reuse and a lower total runtime (in cycles) than NLR.
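
As a rough intuition for why a reuse-oriented dataflow such as NVDLA's beats NLR, the sketch below counts operand fetches for a toy 1-D convolution under two schedules: one that re-fetches every operand from a modeled L2 for each multiply-accumulate (no local reuse), and one that keeps the kernel weights stationary in a small L1 buffer. This access-counting model is our own simplification for illustration; it is not MAESTRO's actual cost model, and the function name conv1d_accesses is hypothetical.

```python
# Toy access-counting model for a 1-D convolution (output size n_out, kernel k).
# Illustrative simplification, not MAESTRO's cost model: it only counts operand
# fetches to contrast "no local reuse" (NLR) with a weight-stationary schedule
# that buffers the k weights in L1.

def conv1d_accesses(n_out: int, k: int) -> dict:
    macs = n_out * k  # one multiply-accumulate per (output, tap) pair

    # NLR-style schedule: every MAC re-fetches its weight and input from L2.
    nlr_l2 = 2 * macs

    # Weight-stationary schedule: weights are loaded into L1 once (k fetches
    # from L2), then each MAC reads its weight from L1 and its input from L2.
    ws_l2 = k + macs      # k weight loads + one input fetch per MAC
    ws_l1 = macs          # weight reads served from the local buffer

    return {"MACs": macs, "NLR L2 accesses": nlr_l2,
            "WS L2 accesses": ws_l2, "WS L1 accesses": ws_l1}

if __name__ == "__main__":
    # Example: 1024 outputs, 16-tap kernel. The weight-stationary schedule
    # roughly halves L2 traffic by serving weight reads from L1 instead.
    for key, val in conv1d_accesses(1024, 16).items():
        print(f"{key:18s} {val}")
```

The same accounting generalizes to real accelerator dataflows: the more operand reads a schedule can serve from local buffers (higher L1/L2 reuse), the less traffic reaches the slower memory levels, which is the effect behind NVDLA's lower cycle count in the measurements above.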


2019 ◽  
Vol 28 (7) ◽  
pp. 1863-1883 ◽  
Author(s):  
Agustín Molina Sánchez ◽  
Patricia Delgado ◽  
Antonio González-Rodríguez ◽  
Clementina González ◽  
A. Francisco Gómez-Tagle Rojas ◽  
...  

Author(s):  
Álvaro Briz-Redón ◽  
Adina Iftimi ◽  
Juan Francisco Correcher ◽  
Jose De Andrés ◽  
Manuel Lozano ◽  
...  

GeoJournal ◽  
2021 ◽  
Author(s):  
R. Nasiri ◽  
S. Akbarpour ◽  
AR. Zali ◽  
N. Khodakarami ◽  
MH. Boochani ◽  
...  
