Impact of Weather Predictions on COVID-19 Infection Rate by Using Deep Learning Models

Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Yogesh Gupta ◽  
Ghanshyam Raghuwanshi ◽  
Abdullah Ali H. Ahmadini ◽  
Utkarsh Sharma ◽  
Amit Kumar Mishra ◽  
...  

Nowadays, the whole world is facing a pandemic in the form of coronavirus disease (COVID-19). In connection with the spread of COVID-19 confirmed cases and deaths, various researchers have analysed the impact of temperature and humidity on the spread of the coronavirus. In this paper, an exhaustive deep transfer learning-based analysis is performed by evaluating the influence of different weather factors, including temperature, sunlight hours, and humidity. Two data sets are used for all the experiments: one, taken from Kaggle, consists of official COVID-19 case reports, and the other contains weather data. Moreover, the COVID-19 data are also tested and validated using deep transfer learning models. The experimental results show that temperature, wind speed, and sunlight hours have a significant impact on COVID-19 cases and deaths, whereas humidity does not affect coronavirus cases significantly. It is concluded that the convolutional neural network performs better than the competing models.
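A minimal sketch of the kind of model this points to, assuming PyTorch; the 14-day window, three-factor feature set, and layer sizes are illustrative assumptions, not the authors' configuration:

```python
import torch
import torch.nn as nn

class WeatherCNN(nn.Module):
    """Minimal 1D CNN mapping a window of daily weather factors
    (e.g., temperature, wind speed, sunlight hours) to a case-count estimate."""
    def __init__(self, n_features=3, window=14):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, 1)   # predicted daily cases

    def forward(self, x):              # x: (batch, n_features, window)
        return self.head(self.conv(x).squeeze(-1))

model = WeatherCNN()
x = torch.randn(8, 3, 14)              # 8 samples, 3 weather factors, 14-day window
print(model(x).shape)                  # torch.Size([8, 1])
```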

2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Yahya Albalawi ◽  
Jim Buckley ◽  
Nikola S. Nikolov

Abstract. This paper presents a comprehensive evaluation of data pre-processing and word embedding techniques in the context of Arabic document classification in the domain of health-related communication on social media. We evaluate 26 text pre-processing techniques applied to Arabic tweets within the process of training a classifier to identify health-related tweets. For this task we use the (traditional) machine learning classifiers KNN, SVM, Multinomial NB, and Logistic Regression. Furthermore, we report experimental results with the deep learning architectures BLSTM and CNN for the same text classification problem. Since word embeddings are more typically used as the input layer in deep networks, in the deep learning experiments we evaluate several state-of-the-art pre-trained word embeddings with the same text pre-processing applied. To achieve these goals, we use two data sets: one for both training and testing, and another for testing the generality of our models only. Our results point to the conclusion that only four of the 26 pre-processing techniques improve the classification accuracy significantly. For the first data set of Arabic tweets, we found that Mazajak CBOW pre-trained word embeddings as the input to a BLSTM deep network led to the most accurate classifier, with an F1 score of 89.7%. For the second data set, Mazajak Skip-Gram pre-trained word embeddings as the input to a BLSTM led to the most accurate model, with an F1 score of 75.2% and accuracy of 90.7%, compared to an F1 score of 90.8% achieved by Mazajak CBOW for the same architecture but with a lower accuracy of 70.89%. Our results also show that the performance of the best traditional classifier we trained is comparable to that of the deep learning methods on the first data set, but significantly worse on the second.
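As a hedged illustration of the winning configuration, the sketch below wires a pre-trained embedding matrix into a BLSTM classifier in PyTorch; the vocabulary size, hidden width, and use of the last time step are assumptions, and the random matrix merely stands in for the actual Mazajak vectors:

```python
import torch
import torch.nn as nn

class BLSTMClassifier(nn.Module):
    """BiLSTM text classifier with a pre-trained embedding matrix
    (e.g., Mazajak CBOW/Skip-Gram vectors) as a frozen input layer."""
    def __init__(self, embedding_matrix, hidden=128, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding.from_pretrained(embedding_matrix, freeze=True)
        self.lstm = nn.LSTM(embedding_matrix.size(1), hidden,
                            batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, token_ids):           # (batch, seq_len) of token indices
        out, _ = self.lstm(self.embed(token_ids))
        return self.fc(out[:, -1, :])       # last time step -> class logits

vocab = torch.randn(10_000, 300)            # stand-in for 300-d Mazajak vectors
model = BLSTMClassifier(vocab)
logits = model(torch.randint(0, 10_000, (4, 30)))
print(logits.shape)                         # torch.Size([4, 2])
```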


Author(s):  
Jianping Ju ◽  
Hong Zheng ◽  
Xiaohang Xu ◽  
Zhongyuan Guo ◽  
Zhaohui Zheng ◽  
...  

Abstract. Although convolutional neural networks have achieved success in the field of image classification, challenges remain in agricultural product quality sorting, such as machine vision-based jujube defect detection. The performance of jujube defect detection depends mainly on the feature extraction and the classifier used. Owing to the diversity of jujube materials and the variability of the testing environment, traditional manual feature extraction often fails to meet the requirements of practical application. In this paper, a jujube sorting model for small data sets, based on a convolutional neural network and transfer learning, is proposed to meet the practical demands of jujube defect detection. First, the original images collected from an actual jujube sorting production line were pre-processed, and the data were augmented to establish a data set of five categories of jujube defects. The original CNN model was then improved by embedding the SE module and replacing the softmax loss function with the triplet loss function and the center loss function. Finally, a model pre-trained on the ImageNet data set was trained on the jujube defects data set, so that the pre-trained parameters could fit the parameter distribution of the jujube defect images; this distribution was transferred to the jujube defects data set to complete the transfer of the model and realize the detection and classification of jujube defects. Classification results are visualized by heatmap, and classification accuracy and confusion matrices are analysed against comparison models. The experimental results show that the SE-ResNet50-CL model optimizes the fine-grained classification problem of jujube defect recognition, reaching a test accuracy of 94.15%. The model is stable and achieves high recognition accuracy in complex environments.
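The SE (Squeeze-and-Excitation) module the paper embeds into ResNet50 is a standard, compact block; a minimal PyTorch sketch follows (the reduction ratio of 16 is the common default, assumed here rather than taken from the paper):

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation channel-attention module of the kind
    embedded into ResNet50 (reduction ratio is an assumption)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                           # x: (B, C, H, W)
        w = x.mean(dim=(2, 3))                      # squeeze: global average pool
        w = self.fc(w).unsqueeze(-1).unsqueeze(-1)  # excitation: channel weights
        return x * w                                # recalibrate feature maps

feat = torch.randn(2, 64, 28, 28)
print(SEBlock(64)(feat).shape)                      # torch.Size([2, 64, 28, 28])
```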


2021 ◽  
pp. 1-13
Author(s):  
Hailin Liu ◽  
Fangqing Gu ◽  
Zixian Lin

Transfer learning methods exploit similarities between different data sets to improve the performance of the target task by transferring knowledge from source tasks to the target task. "What to transfer" is a main research issue in transfer learning. Existing transfer learning methods generally need to acquire the shared parameters by integrating human knowledge. However, in many real applications, it is unknown beforehand which parameters can be shared. A transfer learning model is essentially a special multi-objective optimization problem. Consequently, this paper proposes a novel auto-sharing parameter technique for transfer learning based on multi-objective optimization and solves the optimization problem using a multi-swarm particle swarm optimizer. Each task objective is simultaneously optimized by a sub-swarm. The current best particle from the sub-swarm of the target task is used to guide the search of the particles of the source tasks, and vice versa. The target and source tasks are jointly solved by sharing the information of the best particle, which works as an inductive bias. Experiments on several synthetic data sets and two real-world data sets (a school data set and a landmine data set) show that the proposed algorithm is effective.
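A toy sketch of the core idea in plain NumPy, under heavy assumptions (sphere objectives, fixed inertia and acceleration coefficients not taken from the paper): two sub-swarms each optimize their own task objective, while every particle is also pulled toward the best particle of the other sub-swarm, which acts as the cross-task inductive bias:

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x, shift):            # toy task objectives, not the paper's
    return np.sum((x - shift) ** 2, axis=1)

tasks = [lambda x: sphere(x, 0.0), lambda x: sphere(x, 0.5)]  # source, target
D, N = 5, 20
pos = [rng.uniform(-1, 1, (N, D)) for _ in tasks]
vel = [np.zeros((N, D)) for _ in tasks]
pbest = [p.copy() for p in pos]
pbest_f = [f(p) for f, p in zip(tasks, pos)]

for it in range(200):
    gbest = [pb[np.argmin(fv)] for pb, fv in zip(pbest, pbest_f)]
    for k, f in enumerate(tasks):
        other = gbest[1 - k]     # best particle of the other sub-swarm
        r1, r2, r3 = rng.random((3, N, D))
        vel[k] = (0.7 * vel[k]
                  + 1.5 * r1 * (pbest[k] - pos[k])
                  + 1.5 * r2 * (gbest[k] - pos[k])
                  + 0.5 * r3 * (other - pos[k]))    # cross-task guidance term
        pos[k] += vel[k]
        fv = f(pos[k])
        better = fv < pbest_f[k]
        pbest[k][better], pbest_f[k][better] = pos[k][better], fv[better]

print([fv.min() for fv in pbest_f])   # best objective value per task
```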


2015 ◽  
Vol 8 (1) ◽  
pp. 421-434 ◽  
Author(s):  
M. P. Jensen ◽  
T. Toto ◽  
D. Troyan ◽  
P. E. Ciesielski ◽  
D. Holdridge ◽  
...  

Abstract. The Midlatitude Continental Convective Clouds Experiment (MC3E) took place during the spring of 2011, centered in north-central Oklahoma, USA. The main goal of this field campaign was to capture the dynamical and microphysical characteristics of precipitating convective systems in the US Central Plains. A major component of the campaign was a six-site radiosonde array designed to capture the large-scale variability of the atmospheric state with the intent of deriving model forcing data sets. Over the course of the 46-day MC3E campaign, a total of 1362 radiosondes were launched from the enhanced sonde network. This manuscript provides details on the instrumentation used as part of the sounding array, the data processing activities, including quality checks and humidity bias corrections, and an analysis of the impacts of bias correction and algorithm assumptions on the determination of convective levels and indices. It is found that corrections for known radiosonde humidity biases and assumptions regarding the characteristics of the surface convective parcel result in significant differences in the derived values of convective levels and indices in many soundings. In addition, the impact of including the humidity corrections and quality controls on the thermodynamic profiles used in the derivation of a large-scale model forcing data set is investigated. The results show a significant impact on the derived large-scale vertical velocity field, illustrating the importance of addressing these humidity biases.
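To illustrate why humidity biases matter for derived convective levels, here is a back-of-the-envelope sketch using Espy's textbook approximation for the lifting condensation level; this is not the MC3E processing code, and the numbers are illustrative only:

```python
def lcl_height_m(t_surface_c, dewpoint_c):
    """Espy's approximation for the lifting condensation level (metres).
    A standard textbook formula, not the MC3E processing algorithm."""
    return 125.0 * (t_surface_c - dewpoint_c)

# A 1 C dry bias in radiosonde dewpoint raises the derived LCL noticeably
t, td = 30.0, 20.0
print(lcl_height_m(t, td))         # 1250.0 m with unbiased humidity
print(lcl_height_m(t, td - 1.0))   # 1375.0 m with a 1 C dry bias
```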


2021 ◽  
Author(s):  
David Cotton ◽  

Introduction

HYDROCOASTAL is a two-year project funded by ESA, with the objective of maximising the exploitation of SAR and SARin altimeter measurements in the coastal zone and inland waters, by evaluating and implementing new approaches to process SAR and SARin data from CryoSat-2 and SAR altimeter data from Sentinel-3A and Sentinel-3B. Optical data from the Sentinel-2 MSI and Sentinel-3 OLCI instruments will also be used in generating River Discharge products.

New SAR and SARin processing algorithms for the coastal zone and inland waters will be developed, implemented, and evaluated through an initial Test Data Set for selected regions. From the results of this evaluation, a processing scheme will be implemented to generate global coastal zone and river discharge data sets.

A series of case studies will assess these products in terms of their scientific impacts.

All the produced data sets will be available on request to external researchers, and full descriptions of the processing algorithms will be provided.

Objectives

The scientific objectives of HYDROCOASTAL are to enhance our understanding of interactions between inland waters and the coastal zone, between the coastal zone and the open ocean, and of the small-scale processes that govern these interactions. The project also aims to improve our capability to characterize the variation, at different time scales, of inland water storage, exchanges with the ocean, and the impact on regional sea-level changes.

The technical objectives are to develop and evaluate new SAR and SARin altimetry processing techniques in support of the scientific objectives, including stack processing, filtering, and retracking. An improved Wet Troposphere Correction will also be developed and evaluated.

Project Outline

There are four tasks to the project:
- Scientific Review and Requirements Consolidation: review the current state of the art in SAR and SARin altimeter data processing as applied to the coastal zone and to inland waters.
- Implementation and Validation: new processing algorithms will be implemented to generate a Test Data Set, which will be validated against models, in-situ data, and other satellite data sets. Selected algorithms will then be used to generate global coastal zone and river discharge data sets.
- Impacts Assessment: the impact of these global products will be assessed in a series of case studies.
- Outreach and Roadmap: outreach material will be prepared and distributed to engage the wider scientific community and provide recommendations for the development of future missions and future research.

Presentation

The presentation will provide an overview of the project, present the different SAR altimeter processing algorithms being evaluated in the first phase of the project, and report early results from the evaluation of the initial test data set.


2019 ◽  
Vol 8 (2S11) ◽  
pp. 3523-3526

This paper describes an efficient algorithm for classification in large data sets. While many classification algorithms exist, they do not scale well to larger data volumes or generalize across different data sets. Various ELM algorithms are available in the literature for working with large data sets. However, the existing algorithms use a fixed activation function, which may lead to deficiencies when working with large data. In this paper, we propose a novel ELM with a sigmoid activation function (ELM-S). The experimental evaluations demonstrate that our ELM-S algorithm performs better than ELM, SVM, and other state-of-the-art algorithms on large data sets.
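The abstract gives no implementation details, but a standard ELM with sigmoid activation is compact enough to sketch: hidden weights are drawn at random and never trained, and the output weights are solved in closed form with a pseudo-inverse. The hidden-layer size below is an arbitrary assumption:

```python
import numpy as np

def elm_train(X, Y, n_hidden=200, seed=0):
    """Extreme Learning Machine with sigmoid activation:
    random input weights, output weights by least squares."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random, never trained
    b = rng.normal(size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))        # sigmoid hidden layer
    beta = np.linalg.pinv(H) @ Y                  # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Toy usage on random data with one-hot labels
X = np.random.rand(500, 10)
Y = np.eye(3)[np.random.randint(0, 3, 500)]
W, b, beta = elm_train(X, Y)
pred = elm_predict(X, W, b, beta).argmax(axis=1)
print((pred == Y.argmax(axis=1)).mean())          # training accuracy
```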


2009 ◽  
Vol 2 (1) ◽  
pp. 87-98 ◽  
Author(s):  
C. Lerot ◽  
M. Van Roozendael ◽  
J. van Geffen ◽  
J. van Gent ◽  
C. Fayt ◽  
...  

Abstract. Total O3 columns have been retrieved from six years of SCIAMACHY nadir UV radiance measurements using SDOAS, an adaptation of the GDOAS algorithm previously developed at BIRA-IASB for the GOME instrument. GDOAS and SDOAS have been implemented by the German Aerospace Center (DLR) in version 4 of the GOME Data Processor (GDP) and in version 3 of the SCIAMACHY Ground Processor (SGP), respectively. The processors are run at the DLR processing centre on behalf of the European Space Agency (ESA). We first focus on the description of the SDOAS algorithm, with particular attention to the impact of uncertainties in the reference O3 absorption cross-sections. Second, the resulting SCIAMACHY total ozone data set is evaluated globally through large-scale comparisons with results from GOME and OMI as well as with ground-based correlative measurements. The various total ozone data sets are found to agree within 2% on average. However, a negative trend of 0.2–0.4%/year has been identified in the SCIAMACHY O3 columns; this probably originates from instrumental degradation effects that have not yet been fully characterized.
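The reported drift is a simple quantity to reproduce in principle; a minimal sketch of estimating a percent-per-year trend from a column time series follows, with purely synthetic data standing in for the SCIAMACHY record:

```python
import numpy as np

def trend_percent_per_year(t_years, o3_columns):
    """Linear least-squares drift of a column time series,
    expressed in percent of the series mean per year."""
    slope, _ = np.polyfit(t_years, o3_columns, 1)
    return 100.0 * slope / np.mean(o3_columns)

# Synthetic 6-year series with a -0.3 %/yr drift, for illustration only
t = np.linspace(0, 6, 72)                        # monthly samples
o3 = 300.0 * (1 - 0.003 * t) + np.random.normal(0, 1.0, t.size)
print(round(trend_percent_per_year(t, o3), 2))   # about -0.3
```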


2021 ◽  
Author(s):  
Ahmed Attia ◽  
Matthew Lawrence

Abstract. Distributed Fiber Optics (DFO) technology has become the new face of unconventional well diagnostics. This technology focuses on Distributed Acoustic Sensing (DAS) and Distributed Temperature Sensing (DTS) measurements to give an in-depth understanding of well productivity pre- and post-stimulation. Many different completion design strategies, both on surface and downhole, are used to obtain the best fracture network outcome; however, with complex geological features, different fracture designs, and fracture-driven interactions (FDIs) affecting nearby wells, it is difficult to gain a full understanding of completion design performance for each well. Validating completion designs and building on the learnings from each data set should be the foundation of developing each field. Capturing a data set with strong evidence of what works and what doesn't can help the operator make better engineering decisions, drill more efficient wells, and gauge the spacing between wells. This paper focuses on a few case studies in the Bakken which vividly show how infill wells greatly interfered with production output. A DFO was deployed on a 0.6-in. OD, 23,000-foot-long carbon fiber rod to acquire DAS and DTS for post-frac flow, completion, and interference evaluation. This paper dives into the DFO measurements taken post-frac to explain the effects of interference from infill wells on completion designs; the learnings taken from the DFO post-frac were applied to further improve the understanding and awareness of how infill wells will perform on future pad sites. A showcase of three separate data sets from the Bakken identifies how effective DFO technology can be in evaluating and making informed decisions on future frac completions. We also show and discuss how DFO can measure FDI events in real time and what measures can be taken to lessen the impact of negative interference caused by infill wells.


2021 ◽  
Author(s):  
Gunta Kalvāne ◽  
Andis Kalvāns ◽  
Agrita Briede ◽  
Ilmārs Krampis ◽  
Dārta Kaupe ◽  
...  

According to the Köppen climate classification, almost the entire area of Latvia belongs to the same climate type, Dfb, characterized by a humid continental climate with warm (sometimes hot) summers and cold winters. In recent decades, however, weather conditions on the western coast of Latvia have increasingly been characteristic of a temperate maritime climate, and a transition (still ongoing) to climate type Cfb has taken place in this area.

Temporal and spatial changes in the temperature and precipitation regime have been examined across the whole territory to identify the breaking point of the climate type shift. We used two types of climatological data sets: gridded daily temperature from the E-OBS data set version 21.0e (Cornes et al., 2018) and direct observations from meteorological stations (data source: Latvian Environment, Geology and Meteorology Centre). The temperature and precipitation regime has changed significantly over the last century, and seasonal and regional differences can be observed within the territory of Latvia.

We have digitized and analysed more than 47 thousand phenological records made by volunteers in the period 1970-2018. The study has shown that significant seasonal changes have taken place across the Latvian landscape due to climate change (Kalvāne and Kalvāns, 2021). The largest changes have been recorded for the unfolding (BBCH11) and flowering (BBCH61) phases of plants: almost 90% of the data included in the database demonstrate a negative trend. The winter of 1988/1989 may be considered the breaking point; since then many phases, particularly spring phases, have begun earlier, while abiotic autumn phases have been characterized by later onsets.

The study gives an overview of climate change (including the climate type shift) impacts on ecosystems in Latvia, particularly on forests and semi-natural grasslands, and of temporal and spatial changes in vegetation structure and distribution areas.

This study was carried out within the framework of the Impact of Climate Change on Phytophenological Phases and Related Risks in the Baltic Region (No. 1.1.1.2/VIAA/2/18/265) ERDF project and the Climate Change and Sustainable Use of Natural Resources institutional research grant of the University of Latvia (No. AAP2016/B041//ZD2016/AZ03).

Cornes, R. C., van der Schrier, G., van den Besselaar, E. J. M. and Jones, P. D.: An Ensemble Version of the E-OBS Temperature and Precipitation Data Sets, J. Geophys. Res. Atmos., 123(17), 9391–9409, doi:10.1029/2017JD028200, 2018.

Kalvāne, G. and Kalvāns, A.: Phenological trends of multi-taxonomic groups in Latvia, 1970-2018, Int. J. Biometeorol., doi:10.1007/s00484-020-02068-8, 2021.
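The reported phenological trends are, at their core, least-squares slopes of onset dates against years; a minimal sketch on synthetic data (not the digitized records) follows:

```python
import numpy as np

def phenological_trend(years, day_of_year):
    """Least-squares trend of a phenophase onset date (days per decade).
    A negative value means the phase begins earlier over time."""
    slope, _ = np.polyfit(years, day_of_year, 1)
    return 10.0 * slope

# Synthetic flowering (BBCH61) onset advancing ~2 days per decade
years = np.arange(1970, 2019)
onset = 140 - 0.2 * (years - 1970) + np.random.normal(0, 3, years.size)
print(round(phenological_trend(years, onset), 1))   # about -2.0
```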


Big Data ◽  
2016 ◽  
pp. 261-287
Author(s):  
Keqin Wu ◽  
Song Zhang

While uncertainty in scientific data attracts increasing research interest in the visualization community, two critical issues remain insufficiently studied: (1) visualizing the impact of the uncertainty of a data set on its features and (2) interactively exploring 3D or large 2D data sets with uncertainties. In this chapter, a suite of feature-based techniques is developed to address these issues. First, an interactive visualization tool for exploring scalar data with data-level, contour-level, and topology-level uncertainties is developed. Second, a framework for visualizing feature-level uncertainty is proposed to study uncertain feature deviations in both scalar and vector data sets. With quantified representation and interactive capability, the proposed feature-based visualizations provide new insights into the uncertainties of both data and their features, which would otherwise remain unknown with the visualization of data uncertainties alone.
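One common way to convey contour-level uncertainty, in the spirit of the techniques described, is a spaghetti plot of the same isovalue across ensemble members; the sketch below uses matplotlib on synthetic fields and is not the chapter's tool:

```python
import numpy as np
import matplotlib.pyplot as plt

# Ensemble of noisy scalar fields standing in for uncertain data
x, y = np.meshgrid(np.linspace(-2, 2, 100), np.linspace(-2, 2, 100))
base = np.exp(-(x**2 + y**2))
members = [base + np.random.normal(0, 0.03, base.shape) for _ in range(20)]

# Spaghetti plot: the spread of the 0.5-level contour across members
# conveys contour-level uncertainty
for field in members:
    plt.contour(x, y, field, levels=[0.5], colors="steelblue", alpha=0.3)
plt.title("Contour-level uncertainty of the 0.5 isovalue (20 members)")
plt.show()
```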

