PM2.5 concentrations forecasting in Beijing through deep learning with different inputs, model structures and forecast time

2021 ◽  
pp. 101168
Author(s):  
Jie Yang ◽  
Rui Yan ◽  
Mingyue Nong ◽  
Jiaqiang Liao ◽  
Feipeng Li ◽  
...  


Author(s):  
Zhaoliang He ◽  
Hongshan Li ◽  
Zhi Wang ◽  
Shutao Xia ◽  
Wenwu Zhu

With the growth of computer vision-based applications, an explosive number of images has been uploaded to cloud servers that host such online computer vision algorithms, usually in the form of deep learning models. JPEG has been the de facto compression and encapsulation method for images. However, the standard JPEG configuration does not always perform well for compressing images that are to be processed by a deep learning model. For example, the standard quality level of JPEG incurs a 50% size overhead (compared with the best quality-level selection) on ImageNet under the same inference accuracy in popular computer vision models (e.g., InceptionNet and ResNet). Even knowing this, designing a better JPEG configuration for online computer vision-based services remains extremely challenging. First, cloud-based computer vision models are usually a black box to end users; thus, it is difficult to design a JPEG configuration without knowing their model structures. Second, the "optimal" JPEG configuration is not fixed; instead, it is determined by confounding factors, including the characteristics of the input images and the model, the expected accuracy and image size, and so forth. In this article, we propose a reinforcement learning (RL)-based adaptive JPEG configuration framework, AdaCompress. In particular, we design an edge (i.e., user-side) RL agent that learns the optimal compression quality level to achieve an expected inference accuracy and upload image size, only from the online inference results, without knowing details of the model structures. Furthermore, we design an explore-exploit mechanism that lets the framework quickly switch agents when it detects a performance degradation, mainly due to input changes (e.g., images captured across daytime and night).
Our evaluation experiments using real-world online computer vision-based APIs from Amazon Rekognition, Face++, and Baidu Vision show that our approach outperforms existing baselines, reducing the size of images by one-half to one-third while the overall classification accuracy decreases only slightly. Meanwhile, AdaCompress promptly re-trains or re-loads the RL agent to maintain performance.
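The core idea of learning a compression quality level purely from observed feedback can be sketched as a simple bandit-style agent. This is a minimal illustrative sketch, not the paper's actual method: the quality levels, the reward function, and the simulated API feedback below are all hypothetical stand-ins for the black-box inference results AdaCompress would observe.

```python
import random

random.seed(0)  # deterministic run for this sketch

# Hypothetical candidate JPEG quality levels (arms of the bandit)
QUALITY_LEVELS = [95, 85, 75, 65, 55, 45]

def reward(accuracy, size_kb, size_penalty=0.0006):
    """Illustrative reward: higher accuracy is good, larger uploads are penalized."""
    return accuracy - size_penalty * size_kb

class EpsilonGreedyAgent:
    """Learns the best quality level only from observed rewards."""
    def __init__(self, arms, epsilon=0.1):
        self.arms = arms
        self.epsilon = epsilon
        self.counts = {a: 0 for a in arms}
        self.values = {a: 0.0 for a in arms}  # running mean reward per arm

    def select(self):
        if random.random() < self.epsilon:
            return random.choice(self.arms)                   # explore
        return max(self.arms, key=lambda a: self.values[a])   # exploit

    def update(self, arm, r):
        self.counts[arm] += 1
        self.values[arm] += (r - self.values[arm]) / self.counts[arm]

agent = EpsilonGreedyAgent(QUALITY_LEVELS)
for _ in range(2000):
    q = agent.select()
    # Simulated black-box feedback: accuracy saturates at high quality,
    # while file size grows roughly linearly with quality.
    import math
    acc = 0.95 - 1.2 * math.exp(-0.06 * q)
    size_kb = 2.0 * q
    agent.update(q, reward(acc, size_kb))

best = max(QUALITY_LEVELS, key=lambda a: agent.values[a])
print(best)
```

Under these simulated curves the agent settles on an intermediate quality level, mirroring the paper's observation that neither the highest nor the lowest JPEG quality is optimal once both accuracy and upload size matter.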


2021 ◽  
Vol 8 ◽  
Author(s):  
Aziza Alzadjali ◽  
Mohammed H. Alali ◽  
Arun Narenthiran Veeranampalayam Sivakumar ◽  
Jitender S. Deogun ◽  
Stephen Scott ◽  
...  

The timing of flowering plays a critical role in determining the productivity of agricultural crops. If a crop flowers too early, it matures before the end of the growing season, losing the opportunity to capture and use large amounts of light energy. If it flowers too late, it may be killed by the change of seasons before it is ready to harvest. Maize flowering is one of the most important periods, during which even small amounts of stress can significantly alter yield. In this work, we developed and compared two deep-learning methods for automatic tassel detection in imagery collected from an unmanned aerial vehicle. The first was a customized framework for tassel detection based on a convolutional neural network (TD-CNN). The second was a state-of-the-art object detection technique, the faster region-based CNN (Faster R-CNN), serving as a baseline for detection accuracy. The evaluation criteria for tassel detection were customized to correctly reflect the needs of tassel detection in an agricultural setting. Although detecting thin tassels in aerial imagery is challenging, our results showed promising accuracy: the TD-CNN achieved an F1 score of 95.9% and the Faster R-CNN achieved 97.9%. More CNN-based model structures can be investigated in the future for improved accuracy, speed, and generalizability in aerial tassel detection.
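The F1 score used to compare the two detectors combines precision and recall over detected tassels. A short sketch of the computation, where the true-positive, false-positive, and false-negative counts are hypothetical examples and not the paper's data:

```python
# F1 score for object detection: harmonic mean of precision and recall.
def f1_score(tp, fp, fn):
    precision = tp / (tp + fp)   # fraction of detections that are real tassels
    recall = tp / (tp + fn)      # fraction of real tassels that were detected
    return 2 * precision * recall / (precision + recall)

# e.g. 480 correctly detected tassels, 10 false detections, 30 missed
print(round(f1_score(480, 10, 30), 3))  # → 0.96
```

Equivalently, F1 = 2·TP / (2·TP + FP + FN), which with these toy counts is 960/1000 = 0.96.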


SPE Journal ◽  
2021 ◽  
pp. 1-27
Author(s):  
Zhi Zhong ◽  
Alexander Y. Sun ◽  
Bo Ren ◽  
Yanyong Wang

Summary This paper presents a deep-learning-based proxy modeling approach to efficiently forecast reservoir pressure and fluid saturation in heterogeneous reservoirs during waterflooding. The proxy model is built on a recently developed deep-learning framework, the coupled generative adversarial network (Co-GAN), to learn the joint distribution of multidomain, high-dimensional image data. In our formulation, the inputs include reservoir static properties (permeability), injection rates, and forecast time, while the outputs include the reservoir dynamic states (i.e., reservoir pressure and fluid saturation) corresponding to the forecast time. Training data obtained from full-scale numerical reservoir simulations were used to train the Co-GAN proxy model, and testing data were then used to evaluate the accuracy and generalization ability of the trained model. Results indicate that the Co-GAN proxy model can predict reservoir pressure and fluid saturation with high accuracy, which, in turn, enables accurate predictions of well production rates. Moreover, the Co-GAN proxy model is also robust in extrapolating dynamic reservoir states. The deep-learning proxy models developed in this work provide a new and fast alternative for estimating reservoir production in real time.
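Evaluating such a proxy against the full-physics simulator typically reduces to comparing predicted and simulated state fields on held-out test cases. A minimal sketch of one common metric, the relative L2 error per field; the error function and the toy "pressure" values below are illustrative assumptions, not the paper's metric or data:

```python
import math

def relative_l2_error(simulated, predicted):
    """Relative L2 error between a simulated field and a proxy's prediction."""
    num = math.sqrt(sum((s - p) ** 2 for s, p in zip(simulated, predicted)))
    den = math.sqrt(sum(s ** 2 for s in simulated))
    return num / den

# Toy flattened "pressure" grid from a simulator vs. a proxy's prediction (psi)
sim_pressure = [3000.0, 2950.0, 2900.0, 2875.0]
proxy_pressure = [3010.0, 2940.0, 2905.0, 2880.0]

err = relative_l2_error(sim_pressure, proxy_pressure)
print(round(err, 4))
```

A small relative error on both pressure and saturation fields is what would justify using the proxy in place of repeated full-scale simulation runs.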


Author(s):  
C.L. Woodcock ◽  
R.A. Horowitz ◽  
D. P. Bazett-Jones ◽  
A.L. Olins

In the eukaryotic nucleus, DNA is packaged into nucleosomes, and the nucleosome chain is folded into '30 nm' chromatin fibers. A number of different model structures, each with a specific location of nucleosomal and linker DNA, have been proposed for the arrangement of nucleosomes within the fiber. We are exploring two strategies for testing the models by localizing DNA within chromatin: electron spectroscopic imaging (ESI) of phosphorus atoms, and osmium ammine (OSAM) staining, a method based on the DNA-specific Feulgen reaction. Sperm were obtained from Patiria miniata (starfish), fixed in 2% GA in 150 mM NaCl, 15 mM HEPES pH 8.0, and embedded in Lowicryl K11M at −55°C. For OSAM staining, sections 100 nm to 150 nm thick were treated as described, and stereo pairs were recorded at 40,000× and 100 kV using a Philips CM10 TEM. (The new osmium ammine-B stain is available from Polysciences Inc.) Uranyl-lead (U-Pb) staining was as described. ESI was carried out on unstained, very thin (<30 nm) beveled sections at 80 kV using a Zeiss EM902. Images were recorded at 20,000× and 30,000× with median energy losses of 110 eV, 120 eV, and 160 eV, and a window of 20 eV.


Author(s):  
Stellan Ohlsson
Keyword(s):  

2019 ◽  
Vol 53 (3) ◽  
pp. 281-294
Author(s):  
Jean-Michel Foucart ◽  
Augustin Chavanne ◽  
Jérôme Bourriau

Many applications of Artificial Intelligence (AI) are envisaged in medicine. In orthodontics, several automated solutions have been available for some years in X-ray imaging (automated cephalometric analysis, automated airway analysis) and, more recently, for digital models (automatic analysis of digital models, automated set-up; CS Model +, Carestream Dental™). The objective of this two-part study is to evaluate the reliability of automated model analysis, both in terms of digitization and of segmentation. Comparing the model-analysis results obtained automatically with those obtained by several orthodontists demonstrates the reliability of the automatic analysis: the measurement error ultimately ranges between 0.08 and 1.04 mm, which is not significant and is comparable to the inter-observer measurement errors reported in the literature. These results open new perspectives on the contribution of AI to orthodontics which, based on deep learning and big data, should in the medium term enable a move toward a more preventive and more predictive orthodontics.


2020 ◽  
Author(s):  
L Pennig ◽  
L Lourenco Caldeira ◽  
C Hoyer ◽  
L Görtz ◽  
R Shahzad ◽  
...  
Keyword(s):  
