Forecasting monthly numbers of hot days in Europe with a convolutional neural network

2021 ◽  
Author(s):  
Matti Kämäräinen ◽  
Kirsti Jylhä ◽  
Natalia Korhonen ◽  
Otto Hyvärinen

Hot days, defined here as days exceeding the local 90th temperature percentile in summer months, pose an increasing threat to societies as summers warm with the changing climate. Early warning of hot days and heat waves would therefore be beneficial. To address this need, we fit a convolutional neural network model to the global spatial distributions of the ERA5 reanalysis data to forecast the number of hot days over the upcoming 30-day period in Europe.

A large set of candidate input variables was explored, including variables from the stratosphere and from the surface layers. Three-fold cross-validation was used to find the optimal subset for forecasting. In addition to the input variables themselves, we use their temporal differences as predictors. The amount of fitting data was increased stepwise to study the sensitivity of the modelling to the number of fitting years. Finally, to emulate real forecasting, time series hindcasting was applied by fitting a new model for each forecasted year, using only the years prior to that year for fitting.

The target variable – the number of hot days during the upcoming month – is strongly season-dependent. The non-linear forecasting model can take this into account, and both the grid-cell-based numbers of hot days and, especially, the mean numbers inside sub-regions show that the model is capable of reproducing them. The skill, measured by the anomaly correlation coefficient, increases rapidly and steadily with an increasing number of fitting years. Interestingly, the skill curve does not level off, implying the model could be enhanced further by increasing the fitting data.
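The hindcasting scheme described in the abstract — refitting a model for each forecast year using only the years before it — can be sketched as an expanding-window loop. The data, predictors, and the least-squares regressor standing in for the CNN below are all illustrative assumptions, not the authors' setup:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1990, 2021)
X = rng.normal(size=(len(years), 5))                 # toy predictors, one row per year
beta_true = np.array([1.0, -0.5, 0.3, 0.0, 2.0])
y = X @ beta_true + rng.normal(scale=0.1, size=len(years))  # toy hot-day counts

predictions = {}
min_train = 10                                       # years needed before the first hindcast
for i in range(min_train, len(years)):
    X_fit, y_fit = X[:i], y[:i]                      # only years prior to the target year
    beta, *_ = np.linalg.lstsq(X_fit, y_fit, rcond=None)
    predictions[years[i]] = float(X[i] @ beta)       # out-of-sample hindcast

# Skill measured as the correlation between hindcast anomalies and truth
pred = np.array([predictions[yr] for yr in years[min_train:]])
truth = y[min_train:]
acc = np.corrcoef(pred - truth.mean(), truth - truth.mean())[0, 1]
```

Each forecast is strictly out of sample, which is what makes the skill-versus-training-years curve in the abstract meaningful.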

2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Haibin Chang ◽  
Ying Cui

Image materials are used in more and more industries these days, so retrieving useful images from large collections has become a pressing problem. Convolutional neural networks (CNN) have achieved good results in certain image classification tasks, but problems remain, such as weak classification ability, low accuracy, and slow convergence. This article presents research on an image classification algorithm (ICA) based on multilabel learning with an improved convolutional neural network, along with ideas for improving such algorithms. The proposed method covers the image classification process, the convolutional network algorithm, and the multilabel learning algorithm. The conclusions show that the improved CNN reaches an average maximum classification accuracy of 90.63% and performs better overall, which helps improve the efficiency of image classification. The improved CNN network structure reaches an accuracy of 91.47% on the CIFAR-10 data set, much higher than the traditional CNN algorithm.
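The multilabel side of such an approach is commonly implemented with independent per-label sigmoid outputs and a per-label decision threshold (binary relevance). The abstract does not give these details, so the sketch below is a generic illustration, including an example-based (Jaccard) accuracy metric:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict_labels(logits, threshold=0.5):
    """Binary-relevance decision: each label is decided independently."""
    return (sigmoid(logits) >= threshold).astype(int)

def multilabel_accuracy(y_true, y_pred):
    """Example-based accuracy: Jaccard index averaged over samples."""
    inter = np.logical_and(y_true, y_pred).sum(axis=1)
    union = np.logical_or(y_true, y_pred).sum(axis=1)
    return float(np.mean(np.where(union == 0, 1.0, inter / np.maximum(union, 1))))

logits = np.array([[2.0, -1.0, 0.3],      # per-label network outputs, 2 samples
                   [-3.0, 0.8, -0.2]])
y_pred = predict_labels(logits)
y_true = np.array([[1, 0, 1], [0, 1, 0]])
score = multilabel_accuracy(y_true, y_pred)
```

Unlike single-label softmax classification, each image here may carry any subset of the labels, which is what "multilabel learning" refers to.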


2018 ◽  
Author(s):  
Yimeng Zhang ◽  
Tai Sing Lee ◽  
Ming Li ◽  
Fang Liu ◽  
Shiming Tang

In this study, we evaluated the convolutional neural network (CNN) method for modeling V1 neurons of awake macaque monkeys in response to a large set of complex pattern stimuli. CNN models outperformed all the other baseline models, such as Gabor-based standard models for V1 cells and various variants of generalized linear models. We then systematically dissected different components of the CNN and found two key factors that made CNNs outperform other models: thresholding nonlinearity and convolution. In addition, we fitted our data using a pre-trained deep CNN via transfer learning. The deep CNN’s higher layers, which encode more complex patterns, outperformed lower ones, and this result was consistent with our earlier work on the complexity of V1 neural code. Our study systematically evaluates the relative merits of different CNN components in the context of V1 neuron modeling.
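The two components the study singles out — convolution and a thresholding nonlinearity — can be shown in isolation. The toy image and kernel below are illustrative only, not the authors' stimuli or fitted filters:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Plain 2-D valid convolution (cross-correlation, as in CNN layers)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r+kh, c:c+kw] * kernel)
    return out

def relu(x, threshold=0.0):
    """Thresholding nonlinearity: responses below the threshold are silenced."""
    return np.maximum(x - threshold, 0.0)

image = np.arange(25, dtype=float).reshape(5, 5)   # toy 5x5 intensity ramp
kernel = np.array([[-1.0, 0.0],
                   [0.0, 1.0]])                    # simple diagonal gradient filter
response = relu(conv2d_valid(image, kernel))       # conv followed by thresholding
```

Stacking many such filter-then-threshold stages is what distinguishes a CNN from the Gabor-based and generalized linear baselines mentioned above.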


2020 ◽  
Vol 31 (4) ◽  
pp. 43
Author(s):  
Nuha Mohammed Khassaf ◽  
Shaimaa Hameed Shaker

At present, images are used in many different fields, such as geographic maps, medical images, images obtained by camera, microscope, or telescope, agricultural field photos, paintings, industrial part drawings, and space photos. Content Based Image Retrieval (CBIR) is the efficient retrieval of relevant images from databases based on features extracted from the images. The proposed system retrieves images related to a query image from a large set of images, using an approach that extracts the texture features present in the image with statistical methods (PCA, MAD, GLCM, and their fusion) after pre-processing of the images. The proposed system was trained using a 1D CNN on the Corel10k dataset, which is widely used for experimental evaluation of CBIR performance. The results show that the highest accuracy, 97.5%, is obtained using the fusion of PCA and MAD, compared with 95% using MAD and 90% using PCA alone. This performance is acceptable compared to previous work.
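One of the statistical texture descriptors named above, the median absolute deviation (MAD), can be sketched as a per-block signature followed by nearest-neighbour retrieval. The block size, distance metric, and toy database below are assumptions for illustration, not the paper's configuration:

```python
import numpy as np

def mad_features(image, block=4):
    """Per-block median absolute deviation as a simple texture signature."""
    h, w = image.shape
    feats = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            patch = image[r:r+block, c:c+block]
            feats.append(np.median(np.abs(patch - np.median(patch))))
    return np.array(feats)

def retrieve(query, database, k=3):
    """Return indices of the k database images closest to the query."""
    q = mad_features(query)
    dists = [np.linalg.norm(q - mad_features(img)) for img in database]
    return np.argsort(dists)[:k]

rng = np.random.default_rng(1)
smooth = [np.full((8, 8), v, dtype=float) for v in (10.0, 20.0)]   # flat textures
noisy = [rng.normal(0, 50, (8, 8)) for _ in range(2)]              # rough textures
database = smooth + noisy
ranked = retrieve(np.full((8, 8), 15.0), database, k=2)            # flat query
```

Because MAD captures local variability rather than absolute intensity, the flat query matches the flat database images regardless of their brightness.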


2022 ◽  
pp. 99-118
Author(s):  
Seema S. ◽  
Sowmya B. J. ◽  
Chandrika P. ◽  
Kumutha D. ◽  
Nikitha Krishna

Facial expression recognition (FER) is an important topic in computer vision and artificial intelligence due to its academic and commercial potential. The authors implement deep-learning-based FER approaches that use deep networks to allow end-to-end learning. The chapter focuses on developing a cutting-edge hybrid deep-learning approach that combines one convolutional neural network (CNN) for prediction and another for classification. It proposes a new methodology to analyze and implement a model that predicts facial expressions from a sequence of images. Taking linguistic and psychological considerations into account, an intermediate symbolic representation is developed. Recognition of six facial expressions is demonstrated using a large set of image sequences. This analysis can serve as a guide for newcomers to the field of FER, providing essential background and an overall understanding of the most recent state-of-the-art studies, as well as promising directions for future work for experienced researchers.
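The final classification stage of such a pipeline typically scores the six expressions with a softmax over per-class network outputs. The six labels below are Ekman's basic emotions, a common choice in FER work, but an assumption here since the chapter does not list them:

```python
import numpy as np

# Hypothetical label set; the chapter does not specify which six expressions.
EXPRESSIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

def softmax(z):
    z = z - np.max(z)            # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def classify(scores):
    """Map raw per-class network outputs to (label, confidence)."""
    probs = softmax(np.asarray(scores, dtype=float))
    return EXPRESSIONS[int(np.argmax(probs))], float(np.max(probs))

# Toy per-class scores, e.g. from the final layer of the classification CNN
label, confidence = classify([0.1, -1.2, 0.0, 3.5, 0.2, 1.1])
```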


2021 ◽  
Author(s):  
Marina Corradini ◽  
Ian McBrearty ◽  
Claudio Satriano ◽  
Daniel Trugman ◽  
Paul Johnson ◽  
...  

The retrieval of earthquake finite-fault kinematic parameters after the occurrence of an earthquake is a crucial task in observational seismology. Routinely used source inversion techniques are challenged by limited data coverage and computational effort, and are subject to a variety of assumptions and constraints that restrict the range of possible solutions. Back-projection (BP) imaging techniques do not need prior knowledge of the rupture extent and propagation, and can track the high-frequency (HF) radiation emitted during the rupture process. While classic source inversion methods work at lower frequencies and return an image of the slip over the fault, the BP method highlights fault areas radiating HF seismic energy. HF radiation is attributed to the spatial and temporal complexity of the rupture process (e.g., slip heterogeneities, changes in rupture speed and in slip velocity). However, the quantitative link between the BP image of an earthquake and its rupture kinematics remains unclear. Our work aims to reduce the gap between theoretical studies on the generation of HF radiation by earthquake complexity and the observation of HF emissions in BP images. To do so, we proceed in two stages, in each case analyzing synthetic rupture scenarios in which the rupture process is fully known. We first investigate the influence that spatial heterogeneities in slip and rupture velocity have on the rupture process and its radiated wave field using the BP technique. We simulate different rupture processes using a 1D line source model. For each rupture model, we calculate synthetic seismograms at three teleseismic arrays and apply the BP technique to identify the sources of HF radiation. This procedure allows us to compare the BP images with the causative rupture, and thus to interpret HF emissions in terms of the along-fault variation of the three kinematic parameters controlling the synthetic model: rise time, final slip, and rupture velocity.
Our results show that the HF peaks retrieved from BP analysis are better associated with space-time heterogeneities of slip acceleration. We then build on these findings by testing whether the kinematic rupture parameters along the fault can be retrieved from the BP image alone. We apply a machine-learning approach, a convolutional neural network (CNN), to the BP images of a large set of simulated 1D rupture processes to assess the network's ability to retrieve the kinematic parameters of the rupture from the progression of HF emissions in space and time. These rupture simulations include along-strike heterogeneities of variable size, within which the rise time, final slip, and rupture velocity differ from the surrounding rupture. We show that a CNN trained on 40,000 pairs of BP images and kinematic parameters returns excellent predictions of the rise time and the rupture velocity along the fault, as well as good predictions of the central location and length of the heterogeneous segment. Our results also show that the network is insensitive to the final slip value, as expected from a theoretical standpoint.
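The 1D kinematic parameterisation described above can be sketched directly: each point along strike slips by a final slip over a rise time, starting when the rupture front arrives, and the HF proxy is the slip acceleration, which concentrates where the parameters change abruptly. The boxcar slip-rate shape and all numbers below are simplifying assumptions, not the authors' source model:

```python
import numpy as np

def slip_rate(t, t_onset, rise_time, final_slip):
    """Boxcar slip-rate function: constant rate over the rise time."""
    active = (t >= t_onset) & (t < t_onset + rise_time)
    return np.where(active, final_slip / rise_time, 0.0)

x = np.linspace(0.0, 30.0, 61)                       # along-strike position (km)
t = np.linspace(0.0, 20.0, 2001)                     # time (s)
patch = (x > 10) & (x < 20)                          # heterogeneous segment
v_rupt = np.where(patch, 2.0, 3.0)                   # slower rupture in the patch (km/s)
rise = np.where(patch, 2.0, 1.0)                     # longer rise time in the patch (s)
slip = np.full_like(x, 1.0)                          # final slip (m); CNN found insensitive

# Rupture-front arrival time: integrate 1/v along strike
dx = x[1] - x[0]
t_onset = np.concatenate(([0.0], np.cumsum(dx / v_rupt[:-1])))

# Space-time slip-rate field; its time derivative is the HF (acceleration) proxy
rate = np.stack([slip_rate(t, t0, rt, s) for t0, rt, s in zip(t_onset, rise, slip)])
accel = np.abs(np.diff(rate, axis=1)) / (t[1] - t[0])
```

In this toy field, the acceleration proxy is nonzero only at the onsets and arrests of slip, so its space-time pattern traces the boundaries of the heterogeneous patch, which is the intuition behind associating BP peaks with slip acceleration.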


2020 ◽  
Author(s):  
S Kashin ◽  
D Zavyalov ◽  
A Rusakov ◽  
V Khryashchev ◽  
A Lebedev

2020 ◽  
Vol 2020 (10) ◽  
pp. 181-1-181-7
Author(s):  
Takahiro Kudo ◽  
Takanori Fujisawa ◽  
Takuro Yamaguchi ◽  
Masaaki Ikehara

Image deconvolution has been an important issue recently. It has two kinds of approaches: non-blind and blind. Non-blind deconvolution is a classic image-deblurring problem that assumes the PSF is known and spatially uniform. Recently, convolutional neural networks (CNNs) have been used for non-blind deconvolution. Although CNNs can deal with complex changes in unknown images, some conventional CNN-based methods can only handle small PSFs and do not consider the large PSFs found in the real world. In this paper we propose a non-blind deconvolution framework based on a CNN that can remove large-scale ringing from a deblurred image. Our method has three key points. The first is a network architecture able to preserve both large and small features in the image. The second is a training dataset created to preserve details. The third is that we extend the images to minimize the effects of large ringing at the image borders. In our experiments, we used three kinds of large PSFs and observed high-precision results from our method, both quantitatively and qualitatively.
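The third key point — extending the image so border ringing lands in the margin — can be sketched with reflection padding around a classical Wiener deconvolution. The padding mode, the Wiener filter standing in for the CNN, and all parameters below are illustrative assumptions; the paper's actual extension scheme is not specified here:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-2):
    """Frequency-domain Wiener deconvolution with noise-to-signal ratio nsr."""
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    F = np.conj(H) / (np.abs(H) ** 2 + nsr) * G
    return np.real(np.fft.ifft2(F))

def deconvolve_with_extension(blurred, psf, pad):
    """Pad before deconvolving so ringing falls in the margin, then crop."""
    extended = np.pad(blurred, pad, mode="reflect")
    restored = wiener_deconvolve(extended, psf)
    return restored[pad:-pad, pad:-pad]        # crop back to the original size

rng = np.random.default_rng(2)
image = rng.random((32, 32))
psf = np.ones((5, 5)) / 25.0                   # large uniform blur kernel
# Circular blur for the toy example (real-world blur is linear, not circular)
blurred = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(psf, s=image.shape)))
restored = deconvolve_with_extension(blurred, psf, pad=8)
```

The crop discards the padded margin, which is where the strongest border ringing accumulates, mirroring the border-handling idea described in the abstract.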

