Supplementary material to "LGHAP: a Long-term Gap-free High-resolution Air Pollutants concentration dataset derived via tensor flow based multimodal data fusion"

Author(s):  
Kaixu Bai ◽  
Ke Li ◽  
Mingliang Ma ◽  
Kaitao Li ◽  
Zhengqiang Li ◽  
...  
2021 ◽  

Abstract. Developing a big data analytics framework for generating a Long-term Gap-free High-resolution Air Pollutants concentration dataset (abbreviated as LGHAP) is of great significance for environmental management and Earth system science. By synergistically integrating multimodal aerosol data acquired from diverse sources via a tensor-flow-based data fusion method, a gap-free aerosol optical depth (AOD) dataset with daily 1 km resolution covering the period 2000–2020 in China was generated. Specifically, data gaps in daily AOD imagery from MODIS aboard Terra were reconstructed from a set of AOD data tensors acquired from satellites, numerical analyses, and in situ air quality data, combining spatial pattern recognition for high-dimensional gridded image analysis with knowledge transfer in statistical data mining. To our knowledge, this is the first long-term gap-free high-resolution AOD dataset in China, from which spatially contiguous PM2.5 and PM10 concentrations were estimated using an ensemble learning approach. Ground validation indicates that the LGHAP AOD data are in good agreement with in situ AOD observations from AERONET, with an R of 0.91 and an RMSE of 0.21. PM2.5 and PM10 estimates also agreed well with ground measurements, with R values of 0.95 and 0.94 and RMSEs of 12.03 and 19.56 μg m−3, respectively. Overall, LGHAP provides a suite of long-term, gap-free, high-resolution gridded maps for examining aerosol changes in China over the past two decades, revealing three distinct periods of haze-pollution variation. Additionally, the proportion of the population exposed to unhealthy PM2.5 levels across China increased from 50.60 % in 2000 to 63.81 % in 2014 and then fell sharply to 34.03 % in 2020. The generated LGHAP aerosol dataset thus has great potential to support multidisciplinary applications in Earth observation, climate change, public health, ecosystem assessment, and environmental management. The daily AOD, PM2.5, and PM10 datasets can be publicly accessed at https://doi.org/10.5281/zenodo.5652257 (Bai et al., 2021a), https://doi.org/10.5281/zenodo.5652265 (Bai et al., 2021b), and https://doi.org/10.5281/zenodo.5652263 (Bai et al., 2021c), respectively. Monthly and annual mean datasets can be found at https://doi.org/10.5281/zenodo.5655797 (Bai et al., 2021d) and https://doi.org/10.5281/zenodo.5655807 (Bai et al., 2021e), respectively. Python, MATLAB, R, and IDL code is also provided to help users read and visualize these data.
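Since the abstract notes that Python code accompanies the dataset, the following is a minimal sketch of opening and plotting one daily LGHAP AOD file with xarray; the file name and the "AOD" variable name are assumptions, so consult the documentation attached to the Zenodo records for the actual conventions.

```python
# Minimal sketch: read and quick-look one daily LGHAP AOD granule.
# The file name and variable name ("AOD") are assumptions, not the
# dataset's confirmed conventions; inspect the file first via print(ds).
import xarray as xr
import matplotlib.pyplot as plt

ds = xr.open_dataset("LGHAP_AOD_D1K_20200101.nc")  # hypothetical file name
print(ds)  # list the available variables, coordinates, and attributes

aod = ds["AOD"]  # hypothetical variable name for the gap-free AOD field
aod.plot(cmap="viridis", vmin=0, vmax=1.5)  # quick-look map of the grid
plt.title("LGHAP gap-free AOD (illustrative)")
plt.show()
```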


Author(s):  
Wen Qi ◽  
Hang Su ◽  
Ke Fan ◽  
Ziyang Chen ◽  
Jiehao Li ◽  
...  

The widespread adoption of robot-assisted minimally invasive surgery (RAMIS) is advancing human-machine interaction (HMI). Recognizing surgeons' behaviors, including hand gestures and whole-body activities, can enhance RAMIS procedures performed with a redundant robot by bridging intelligent robot control and activity recognition in the operating room. In this paper, to improve recognition in dynamic situations, we propose a multimodal data fusion framework that combines complementary information sources for higher accuracy. First, a multi-sensor hardware architecture is designed to capture heterogeneous data from several devices, including a depth camera and a smartphone. The robot control mechanism can then switch automatically across different surgical tasks. Experimental results demonstrate the effectiveness of the multimodal framework for RAMIS by comparing it with a single-sensor system. An implementation on the KUKA LWR4+ in a surgical robot environment indicates that such surgical robot systems can work alongside medical staff in the future.
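As a rough illustration of the feature-level fusion this abstract describes, the sketch below concatenates hypothetical depth-camera and smartphone-IMU features and trains a single classifier; the feature dimensions, gesture labels, and classifier choice are illustrative assumptions, not the authors' actual pipeline.

```python
# Feature-level (early) fusion sketch: concatenate per-sensor features
# and train one classifier. All data here is synthetic and illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples = 600
depth_feats = rng.normal(size=(n_samples, 63))  # e.g., 21 hand keypoints x 3D
imu_feats = rng.normal(size=(n_samples, 24))    # e.g., accel/gyro statistics
labels = rng.integers(0, 5, size=n_samples)     # five hypothetical gestures

fused = np.hstack([depth_feats, imu_feats])     # concatenate sensor features
X_train, X_test, y_train, y_test = train_test_split(
    fused, labels, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

Training the same classifier on either feature block alone and comparing scores mirrors the paper's single-sensor baseline comparison.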


2020 ◽  
Vol 64 ◽  
pp. 149-187 ◽  
Author(s):  
Yu-Dong Zhang ◽  
Zhengchao Dong ◽  
Shui-Hua Wang ◽  
Xiang Yu ◽  
Xujing Yao ◽  
...  

2018 ◽  
Author(s):  
Daniel T. McCoy ◽  
Paul R. Field ◽  
Gregory S. Elsaesser ◽  
Alejandro Bodas-Salcedo ◽  
Brian H. Kahn ◽  
...  

2016 ◽  
Vol 64 (18) ◽  
pp. 4830-4844 ◽  
Author(s):  
Rodrigo Cabral Farias ◽  
Jeremy Emile Cohen ◽  
Pierre Comon

2020 ◽  
Vol 32 (5) ◽  
pp. 829-864 ◽  
Author(s):  
Jing Gao ◽  
Peng Li ◽  
Zhikui Chen ◽  
Jianing Zhang

With the wide deployment of heterogeneous networks, huge amounts of data characterized by high volume, high variety, high velocity, and high veracity are generated. These data, referred to as multimodal big data, contain abundant intermodality and cross-modality information and pose significant challenges to traditional data fusion methods. In this review, we present pioneering deep learning models for fusing multimodal big data. As exploration of multimodal big data grows, several challenges remain to be addressed. This review therefore surveys deep learning for multimodal data fusion to provide readers, regardless of their original community, with the fundamentals of multimodal deep learning fusion and to motivate new deep-learning-based fusion techniques. Specifically, widely used representative architectures are summarized as background for understanding multimodal deep learning, followed by the current pioneering multimodal data fusion models. Finally, open challenges and future research topics for multimodal data fusion with deep learning are described.
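To make the surveyed fusion pattern concrete, here is a minimal sketch of intermediate fusion in PyTorch: each modality gets its own encoder, and the learned representations are concatenated before a shared classification head. The two-modality setup and all dimensions are illustrative assumptions, not a model from the review.

```python
# Intermediate-fusion sketch: per-modality encoders, concatenated
# representations, shared head. Dimensions are illustrative.
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    def __init__(self, dim_a=128, dim_b=64, hidden=32, n_classes=10):
        super().__init__()
        self.enc_a = nn.Sequential(nn.Linear(dim_a, hidden), nn.ReLU())  # modality A encoder
        self.enc_b = nn.Sequential(nn.Linear(dim_b, hidden), nn.ReLU())  # modality B encoder
        self.head = nn.Linear(2 * hidden, n_classes)  # head over fused features

    def forward(self, x_a, x_b):
        # Fuse the learned representations, not the raw inputs.
        z = torch.cat([self.enc_a(x_a), self.enc_b(x_b)], dim=-1)
        return self.head(z)

model = FusionNet()
logits = model(torch.randn(4, 128), torch.randn(4, 64))
print(logits.shape)  # torch.Size([4, 10])
```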

