Fusion of rain radar images and wind forecasts in a deep learning model applied to rain nowcasting

Author(s):  
Anastase Charantonis ◽  
Vincent Bouget ◽  
Dominique Béréziat ◽  
Julien Brajard ◽  
Arthur Filoche

Short- or mid-term rainfall forecasting is a major task with several environmental applications such as agricultural management or flood risk monitoring. Existing data-driven approaches, especially deep learning models, have shown significant skill at this task using only rainfall radar images as inputs. To determine whether other meteorological parameters such as wind would improve forecasts, we trained a deep learning model on a fusion of rainfall radar images and wind velocity produced by a weather forecast model. The network was compared to a similar architecture trained only on radar data, to a basic persistence model, and to an approach based on optical flow. Our network outperforms by 8% the F1-score of the optical-flow approach on moderate and heavier rain events for forecasts at a horizon of 30 minutes. Furthermore, it outperforms by 7% the same architecture trained using only rainfall radar images. Merging rain and wind data also proved to stabilize the training process and enabled significant improvement, especially on the difficult-to-predict heavy rainfalls. These results can also be found in: Bouget, V., Béréziat, D., Brajard, J., Charantonis, A., & Filoche, A. (2020). Fusion of rain radar images and wind forecasts in a deep learning model applied to rain nowcasting. arXiv preprint arXiv:2012.05015.
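The persistence baseline and the thresholded F1-score used for the comparison above can be sketched in a few lines. This is a minimal illustration, not the authors' code: the 1 mm/h rain/no-rain threshold, the array shapes, and the toy advected rain cell are assumptions chosen for clarity.

```python
import numpy as np

def persistence_forecast(radar_frames):
    """Persistence baseline: the forecast at any horizon is simply
    the last observed radar frame."""
    return radar_frames[-1]

def rain_f1(pred, obs, threshold=1.0):
    """F1-score of rain/no-rain detection after thresholding both the
    predicted and observed rainfall maps (threshold in mm/h, assumed)."""
    p = pred >= threshold
    o = obs >= threshold
    tp = np.sum(p & o)
    fp = np.sum(p & ~o)
    fn = np.sum(~p & o)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy example: a rain cell that moved one pixel to the right.
past = np.zeros((3, 4, 4)); past[:, 1, 1] = 5.0   # three past radar frames
future = np.zeros((4, 4));  future[1, 2] = 5.0    # observed 30 min later
forecast = persistence_forecast(past)
print(rain_f1(forecast, future))                  # 0.0: persistence misses the moved cell
```

Persistence scores zero here because it cannot track advection, which is exactly the gap the optical-flow and learned models are meant to close.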

2021 ◽  
Vol 13 (2) ◽  
pp. 246
Author(s):  
Vincent Bouget ◽  
Dominique Béréziat ◽  
Julien Brajard ◽  
Anastase Charantonis ◽  
Arthur Filoche



2021 ◽  
Vol 36 (Supplement_1) ◽  
Author(s):  
C I Lee ◽  
Y R Su ◽  
C H Chen ◽  
T A Chang ◽  
E E S Kuo ◽  
...  

Abstract

Study question: Our retrospective study investigates whether an end-to-end deep learning model can identify ploidy status from raw time-lapse video.

Summary answer: Our deep learning model demonstrates a proof of concept and shows potential in recognizing ploidy status.

What is known already: Since time-lapse systems were introduced into the IVF lab, the relationship between morphokinetic parameters and ploidy status has often been discussed. However, no unified conclusion has been reached, owing to limitations such as human labeling. Beyond statistical approaches, deep learning models have also been used for ploidy prediction, but because those approaches are single-image based, their performance has remained as unpromising as that of earlier statistics-based research. To move further toward clinical application, better research designs and approaches are needed.

Study design, size, duration: A retrospective analysis of the time-lapse videos and chromosomal status of 690 biopsied blastocysts cultured in a time-lapse incubator (EmbryoScope+, Vitrolife) between January 2017 and August 2018 at Lee Women's Hospital. The ploidy status of each blastocyst was derived from PGT-A using high-resolution next-generation sequencing (hr-NGS). Embryo videos were obtained after normal fertilization through intracytoplasmic sperm injection or conventional insemination.

Participants/materials, setting, methods: The data were randomly split into 80% and 20%. We developed a deep learning model based on the Two-Stream Inflated 3D ConvNets (I3D) network, trained on the 80% split of time-lapse videos and their PGT-A results. The remaining 20% was used for testing, taking a time-lapse video as input and producing a PGT-A prediction as output. Ploidy status was classified as Group 1 (aneuploidy) or Group 2 (euploidy and mosaicism).

Main results and the role of chance: Time-lapse videos were divided into three time partitions: day 1, day 1 to 3, and day 1 to 5. The model was fed RGB frames and optical flow. Combining the three time partitions with RGB, optical flow, and the fusion of RGB and optical flow, we obtained nine sets of test results. The longest time partition with the fusion method achieved the highest AUC, 0.74, exceeding the other eight experimental settings by up to 0.17.

Limitations, reasons for caution: The present study is retrospective; future prospective research would help identify more key factors and improve the model. Expanding the sample size and adding cross-center validation will also be considered in future work.

Wider implications of the findings: The Group 1 versus Group 2 approach enables deselection of aneuploid embryos; future deep learning approaches distinguishing high mosaicism, low mosaicism, and euploidy will be needed to provide better clinical application.

Trial registration number: CS18082
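The two ingredients of the best-performing setting, late fusion of the RGB and optical-flow stream scores and AUC evaluation, can be sketched as follows. This is an illustrative sketch, not the study's pipeline: the equal fusion weight, the toy per-video scores, and the labels are assumptions (the AUC helper also assumes no tied scores).

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) statistic.
    Assumes no tied scores; labels are 0/1."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores), dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def fuse(rgb_scores, flow_scores, w=0.5):
    """Late fusion: weighted average of the per-video scores produced
    by the RGB and optical-flow streams (equal weight assumed)."""
    return w * rgb_scores + (1 - w) * flow_scores

rgb = np.array([0.1, 0.6, 0.4, 0.8])    # hypothetical RGB-stream scores
flow = np.array([0.5, 0.2, 0.7, 0.6])   # hypothetical flow-stream scores
labels = np.array([0, 0, 1, 1])         # 1 = Group 2 (euploidy/mosaicism)
print(auc(rgb, labels))                  # 0.75
print(auc(flow, labels))                 # 1.0
print(auc(fuse(rgb, flow), labels))      # 1.0
```

In this toy case fusion inherits the stronger stream's ranking; in the study the fused scores beat either stream alone.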


2020 ◽  
Vol 13 (4) ◽  
pp. 627-640 ◽  
Author(s):  
Avinash Chandra Pandey ◽  
Dharmveer Singh Rajpoot

Background: Sentiment analysis is contextual mining of text that determines users' viewpoints on topics commonly discussed on social networking websites. Twitter is one such site, where people express their opinion on any topic in the form of tweets. These tweets can be examined using various sentiment classification methods to find the opinion of users. Traditional sentiment analysis methods use manually extracted features for opinion classification; manual feature extraction is a complicated task since it requires predefined sentiment lexicons. Deep learning methods, by contrast, automatically extract relevant features from data and hence provide better performance and richer representational capacity than traditional methods.

Objective: The main aim of this paper is to improve sentiment classification accuracy while reducing computational cost.

Method: To achieve this objective, a hybrid deep learning model based on a convolutional neural network and a bidirectional long short-term memory (BiLSTM) network is introduced.

Results: The proposed sentiment classification method achieves the highest accuracy on most of the datasets. Statistical analysis further validates the efficacy of the proposed method.

Conclusion: Sentiment classification accuracy can be improved by building well-designed hybrid models. Moreover, performance can be further enhanced by tuning the hyperparameters of deep learning models.
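The forward pass of such a hybrid CNN + BiLSTM pipeline can be sketched in plain numpy. All sizes, the random weights, the single weight set shared by both LSTM directions, and the final linear head are illustrative assumptions, not the authors' configuration (real BiLSTMs use separate trained weights per direction).

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels):
    """1-D convolution over a sequence of word embeddings, with ReLU.
    x: (seq_len, emb_dim); kernels: (n_filters, width, emb_dim)."""
    n_f, width, _ = kernels.shape
    out = np.zeros((x.shape[0] - width + 1, n_f))
    for t in range(out.shape[0]):
        out[t] = np.einsum('we,fwe->f', x[t:t + width], kernels)
    return np.maximum(out, 0.0)

def lstm_pass(x, Wx, Wh, b, reverse=False):
    """One directional LSTM pass; returns the final hidden state.
    Gates are stacked [input, forget, cell, output] along the rows."""
    hidden = Wh.shape[1]
    h, c = np.zeros(hidden), np.zeros(hidden)
    steps = reversed(range(len(x))) if reverse else range(len(x))
    for t in steps:
        z = Wx @ x[t] + Wh @ h + b
        i, f, g, o = np.split(z, 4)
        i, f, o = 1/(1+np.exp(-i)), 1/(1+np.exp(-f)), 1/(1+np.exp(-o))
        c = f * c + i * np.tanh(g)
        h = o * np.tanh(c)
    return h

# Hypothetical sizes: 10-word tweet, 8-dim embeddings, 6 conv filters
# of width 3, BiLSTM hidden size 4, linear sentiment head.
emb_dim, n_filters, width, hidden = 8, 6, 3, 4
x = rng.normal(size=(10, emb_dim))                  # embedded tweet
feats = conv1d(x, rng.normal(size=(n_filters, width, emb_dim)))
Wx = rng.normal(size=(4 * hidden, n_filters))
Wh = rng.normal(size=(4 * hidden, hidden))
b = np.zeros(4 * hidden)
h_fwd = lstm_pass(feats, Wx, Wh, b)                 # forward direction
h_bwd = lstm_pass(feats, Wx, Wh, b, reverse=True)   # backward direction
h = np.concatenate([h_fwd, h_bwd])                  # BiLSTM summary, shape (8,)
logit = rng.normal(size=2 * hidden) @ h             # linear sentiment score
print(h.shape)                                      # (8,)
```

The convolution extracts local n-gram features and the BiLSTM summarizes them in both directions, which is the division of labor the hybrid design relies on.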


2021 ◽  
Vol 296 ◽  
pp. 126564
Author(s):  
Md Alamgir Hossain ◽  
Ripon K. Chakrabortty ◽  
Sondoss Elsawah ◽  
Michael J. Ryan
