Laboratory evaluation of a low-cost, real-time, aerosol multi-sensor

2018 ◽  
Vol 15 (7) ◽  
pp. 559-567 ◽  
Author(s):  
Robert J. Vercellino ◽  
Darrah K. Sleeth ◽  
Rodney G. Handy ◽  
Kyeong T. Min ◽  
Scott C. Collingwood
Atmosphere ◽  
2021 ◽  
Vol 12 (2) ◽  
pp. 270
Author(s):  
Wen-Cheng Vincent Wang ◽  
Shih-Chun Candice Lung ◽  
Chun-Hu Liu ◽  
Tzu-Yao Julia Wen ◽  
Shu-Chuan Hu ◽  
...  

Small low-cost sensing (LCS) devices enable assessment of close-to-reality PM2.5 exposures, though their data quality remains a challenge. This work evaluates the precision, accuracy, wearability and stability of a wearable particle LCS device, the Location-Aware Sensing System (LASS, with Plantower PMS3003), which is 104 × 66 × 46 mm³ in size and weighs less than 162 g. Real-time particulate matter (PM) exposures in six major Asian transportation modes were assessed. Side-by-side laboratory evaluation of PM2.5 between a GRIMM aerosol spectrometer and the sensors yielded a correlation of 0.98 and a mean absolute error of 0.85 µg/m3. LASS readings collected in the summer of 2016 in Taiwan were converted to GRIMM-comparable values. Mean PM2.5 concentrations obtained from GRIMM and converted LASS values across the six transportation microenvironments were 16.9 ± 11.7 (n = 1774) and 17.0 ± 9.5 (n = 3399) µg/m3, respectively, showing a correlation of 0.93. The average one-hour PM2.5 exposure increments (concentration increase above ambient levels) from converted LASS values for Mass Rapid Transit (MRT), bus, car, scooter, bike and walk were 15.6, 6.7, −19.2, 8.1, 6.1 and 7.1 µg/m3, respectively, very close to those obtained from GRIMM. This work is one of the earliest studies applying wearable PM LCS devices to exposure assessment across different transportation modes.
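The conversion of raw LASS readings to GRIMM-comparable values and the computation of exposure increments can be sketched as follows. This is a minimal illustration assuming a simple linear calibration against collocated reference data; the paper's actual conversion coefficients and procedure are not given in the abstract, and all function names and data here are hypothetical.

```python
import numpy as np

def fit_calibration(sensor, reference):
    """Least-squares slope/intercept mapping raw sensor readings
    to reference-instrument (e.g. GRIMM) values."""
    slope, intercept = np.polyfit(sensor, reference, 1)
    return slope, intercept

def convert(sensor, slope, intercept):
    """Apply the linear calibration to raw sensor readings."""
    return slope * np.asarray(sensor, dtype=float) + intercept

def exposure_increment(converted, ambient_mean):
    """Mean in-transit concentration minus the ambient level,
    i.e. the exposure increment reported per transport mode."""
    return float(np.mean(converted) - ambient_mean)
```

A negative increment, as reported for the car mode, simply means the in-cabin mean fell below the ambient level.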


Author(s):  
Gabriel de Almeida Souza ◽  
Larissa Barbosa ◽  
Glênio Ramalho ◽  
Alexandre Zuquete Guarato

2007 ◽  
Author(s):  
R. E. Crosbie ◽  
J. J. Zenor ◽  
R. Bednar ◽  
D. Word ◽  
N. G. Hingorani

2019 ◽  
Vol 2019 ◽  
pp. 1-14 ◽  
Author(s):  
Yong He ◽  
Hong Zeng ◽  
Yangyang Fan ◽  
Shuaisheng Ji ◽  
Jianjian Wu

In this paper, we propose an approach to detecting oilseed rape pests based on deep learning, which improves the mean average precision (mAP) to 77.14%, an increase of 9.7% over the original model. We ported this model to a mobile platform so that every farmer can use the program, which diagnoses pests in real time and provides suggestions on pest control. We designed an oilseed rape pest imaging database covering 12 typical oilseed rape pests and compared the performance of five models; SSD with Inception was chosen as the optimal model. Moreover, to achieve the high mAP, we used data augmentation (DA) and added a dropout layer. The experiments were performed on the Android application we developed, and the results show that our approach clearly surpasses the original model and is helpful for integrated pest management. This application improves environmental adaptability, response speed, and accuracy relative to past works, and has the advantages of low cost and simple operation, making it suitable for pest-monitoring missions with drones and the Internet of Things (IoT).
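The mAP metric used above is the mean, over the 12 pest classes, of the per-class average precision (AP). A minimal sketch of AP from ranked detections is shown below; the all-point interpolation style and the inputs are common-practice assumptions, not the paper's exact evaluation protocol.

```python
import numpy as np

def average_precision(scores, is_tp, num_gt):
    """AP for one class: scores are detection confidences, is_tp marks
    which detections matched a ground-truth box, num_gt is the number
    of ground-truth boxes for the class."""
    order = np.argsort(-np.asarray(scores, dtype=float))
    tp = np.asarray(is_tp, dtype=float)[order]
    cum_tp = np.cumsum(tp)
    cum_fp = np.cumsum(1.0 - tp)
    recall = cum_tp / num_gt
    precision = cum_tp / (cum_tp + cum_fp)
    # Precision envelope: make precision monotone non-increasing in recall
    prec_env = np.maximum.accumulate(precision[::-1])[::-1]
    # Area under the precision-recall envelope (all-point interpolation)
    r = np.concatenate(([0.0], recall))
    return float(np.sum((r[1:] - r[:-1]) * prec_env))
```

mAP is then `np.mean([average_precision(...) for each class])`.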


2021 ◽  
Vol 11 (11) ◽  
pp. 4940
Author(s):  
Jinsoo Kim ◽  
Jeongho Cho

Research on video data faces the difficulty of extracting not only spatial but also temporal features, and human action recognition (HAR) is a representative field that applies convolutional neural networks (CNNs) to video data. Although action recognition performance has improved, model complexity still limits real-time operation. Therefore, a lightweight CNN-based single-stream HAR model that can operate in real time is proposed. The proposed model extracts spatial feature maps by applying a CNN to the images that compose the video and uses the frame change rate of sequential images as temporal information. Spatial feature maps are weighted-averaged by frame change, transformed into spatiotemporal features, and fed into multilayer perceptrons, which have relatively lower complexity than other HAR models; thus, the method has high utility in a single embedded system connected to CCTV. Evaluation of action recognition accuracy and data-processing speed on the challenging UCF-101 action recognition benchmark showed higher accuracy than a HAR model using long short-term memory with a small number of video frames, and the fast data-processing speed confirmed the possibility of real-time operation. In addition, the performance of the proposed weighted-mean-based HAR model was verified on a Jetson Nano to confirm its usability in low-cost GPU-based embedded systems.
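The weighted-mean fusion described above can be sketched as follows: per-frame feature maps are averaged with weights derived from the frame-to-frame change rate, yielding a single spatiotemporal feature for the MLP classifier. The change-rate definition (mean absolute pixel difference) and all shapes here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def frame_change_weights(frames):
    """Weight each frame by its mean absolute pixel change versus the
    previous frame, normalized to sum to 1. frames: (T, H, W, C)."""
    diffs = np.abs(np.diff(frames.astype(float), axis=0)).mean(axis=(1, 2, 3))
    # The first frame has no predecessor; reuse the first diff as a stand-in.
    change = np.concatenate(([diffs[0]], diffs))
    return change / change.sum()

def spatiotemporal_feature(feature_maps, weights):
    """Weighted average of per-frame CNN features (T, D) -> one fused
    spatiotemporal feature vector (D,)."""
    return np.tensordot(weights, feature_maps, axes=1)
```

Frames with more motion thus contribute more to the fused feature, which is how temporal information enters without recurrent layers.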


Author(s):  
Cheyma BARKA ◽  
Hanen MESSAOUDI-ABID ◽  
Houda BEN ATTIA SETTHOM ◽  
Afef BENNANI-BEN ABDELGHANI ◽  
Ilhem SLAMA-BELKHODJA ◽  
...  

Author(s):  
Donya Khaledyan ◽  
Abdolah Amirany ◽  
Kian Jafari ◽  
Mohammad Hossein Moaiyeri ◽  
Abolfazl Zargari Khuzani ◽  
...  
