A Data Augmentation-Based Evaluation System for Regional Direct Economic Losses of Storm Surge Disasters

Author(s):  
Hai Sun ◽  
Jin Wang ◽  
Wentao Ye

The accurate prediction of storm surge disasters' direct economic losses provides critical support for disaster-prevention decision-making and management. Previous research on storm surge disaster loss assessment paid little attention to the overfitting caused by data scarcity and excessive model complexity. To address these problems, this paper proposes a new evaluation system for forecasting the regional direct economic loss of storm surge disasters, consisting of three parts. First, a comprehensive assessment index system was established based on the formation mechanism of storm surge disasters and the corresponding risk-management theory. Second, a novel data augmentation technique, k-nearest neighbor Gaussian noise (KNN-GN), was presented to overcome data scarcity. Third, the ensemble learning algorithm XGBoost was used as the regression model to produce the final forecasts. To verify the best combined model, KNN-GN-based XGBoost, we conducted cross-contrast experiments with several data augmentation techniques and several widely used ensemble learning models, using traditional prediction models as baselines for the optimized forecasting system. The experimental results show that the KNN-GN-based XGBoost model yields more precise predictions than the traditional models, with a 64.1% average improvement in mean absolute percentage error (MAPE). The proposed evaluation system can also be extended and applied to other geography-related fields.
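The general KNN-GN idea can be sketched as follows: synthesize extra training samples by perturbing existing points with Gaussian noise scaled to the local spread of their k nearest neighbours. This is a minimal illustration under our own assumptions (function name, noise scaling and target interpolation are ours), not the paper's exact algorithm.

```python
import numpy as np

def knn_gn_augment(X, y, k=3, n_new=50, rng=None):
    """Sketch of KNN-Gaussian-noise augmentation: for a randomly picked
    sample, add Gaussian noise whose per-feature scale is the spread of
    its k nearest neighbours, and average the neighbours' targets."""
    rng = np.random.default_rng(rng)
    X, y = np.asarray(X, float), np.asarray(y, float)
    X_new, y_new = [], []
    for _ in range(n_new):
        i = rng.integers(len(X))
        d = np.linalg.norm(X - X[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]          # k nearest, excluding the point itself
        sigma = X[nbrs].std(axis=0) + 1e-12    # local feature-wise spread
        X_new.append(X[i] + rng.normal(0.0, sigma))
        y_new.append(y[nbrs].mean())           # target from the neighbour average
    return np.vstack([X, X_new]), np.concatenate([y, y_new])
```

Augmenting a scarce disaster-loss dataset this way densifies the feature space before a regressor such as XGBoost is fitted, which is the overfitting countermeasure the abstract describes.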

2013 ◽  
Vol 709 ◽  
pp. 928-935
Author(s):  
Ling Di Zhao ◽  
Qing Hao

Taking Guangdong Province as an example, this paper uses statistical data from twenty storm surges between 2003 and 2010 to evaluate the disasters and predict their economic losses, aiming to provide sound references and evidence for decision-makers seeking to mitigate storm surges. Using economic indicators such as direct economic losses, collapsed houses and damaged farmland area, the paper applies the entropy method and factor analysis to grade the storm surges into four levels: mild, moderate, serious and extra-serious disasters. Evaluation and prediction models of direct economic losses were established using BP neural networks and the grey prediction method. Comparing the results of the two methods shows that the neural network is more applicable and accurate for predicting the economic losses of storm surges.
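The entropy method used for grading can be sketched as follows: indicators whose values vary more across disaster events carry more information and receive larger weights, and a composite severity score then ranks the events. This is a generic entropy-weight implementation; the normalisation details and indicator set are illustrative, not necessarily those of the paper.

```python
import numpy as np

def entropy_weights(M):
    """Entropy weight method over an (events x indicators) matrix of
    positive loss indicators: lower entropy (more divergence) across
    events means a larger weight for that indicator."""
    M = np.asarray(M, float)
    P = M / M.sum(axis=0)                               # column-wise proportions
    P = np.where(P > 0, P, 1e-12)                       # avoid log(0)
    e = -(P * np.log(P)).sum(axis=0) / np.log(len(M))   # entropy per indicator
    d = 1.0 - e                                         # degree of divergence
    return d / d.sum()                                  # normalised weights

def severity(M):
    """Composite severity score per event (max-normalised indicators,
    weighted by entropy weights); higher means a more serious surge."""
    M = np.asarray(M, float)
    return (M / M.max(axis=0)) @ entropy_weights(M)
```

The resulting scores can then be cut into intervals to assign the four disaster levels.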


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Li-Hsin Cheng ◽  
Te-Cheng Hsu ◽  
Che Lin

Abstract Breast cancer is a heterogeneous disease. To guide proper treatment decisions for each patient, robust prognostic biomarkers, which allow reliable prognosis prediction, are necessary. Gene feature selection based on microarray data is an approach for discovering potential biomarkers systematically. However, standard purely statistical feature selection approaches often fail to incorporate prior biological knowledge and select genes that lack biological insight. Moreover, owing to the high dimensionality and low sample size of microarray data, selecting robust gene features is an intrinsically challenging problem. We therefore combined systems-biology feature selection with ensemble learning in this study, aiming to select genes with biological insight and robust prognostic predictive power. In addition, to capture breast cancer's complex molecular processes, we adopted a multi-gene approach to predict prognosis status using deep learning classifiers. We found that all ensemble approaches improved feature selection robustness, with the hybrid ensemble approach leading to the most robust result. Among all prognosis prediction models, the bimodal deep neural network (DNN) achieved the highest test performance, further verified by survival analysis. In summary, this study demonstrates the potential of combining ensemble learning and bimodal DNNs in guiding precision medicine.
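One common way ensemble learning stabilises feature selection, as studied in this abstract, is to repeat the selection on bootstrap resamples and keep genes chosen most often. The sketch below uses a plain correlation filter as the base selector; the study's actual selectors also incorporate systems-biology scores, so this is only a generic illustration.

```python
import numpy as np

def selection_frequency(X, y, n_boot=50, top_k=10, rng=0):
    """Ensemble (bootstrap) feature selection: on each resample, rank
    features by absolute correlation with the label and count how often
    each feature lands in the top k. Higher frequency = more robust."""
    rng = np.random.default_rng(rng)
    X, y = np.asarray(X, float), np.asarray(y, float)
    n, d = X.shape
    freq = np.zeros(d)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)            # bootstrap resample
        Xc = X[idx] - X[idx].mean(axis=0)
        yc = y[idx] - y[idx].mean()
        denom = np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12
        score = np.abs(Xc.T @ yc) / denom      # |Pearson correlation| per feature
        freq[np.argsort(score)[-top_k:]] += 1
    return freq / n_boot
```

A gene selected in (say) more than 80% of resamples is a far safer biomarker candidate than one selected once on the full data.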


Sensors ◽  
2021 ◽  
Vol 21 (17) ◽  
pp. 5777
Author(s):  
Esraa Eldesouky ◽  
Mahmoud Bekhit ◽  
Ahmed Fathalla ◽  
Ahmad Salah ◽  
Ahmed Ali

The use of underwater wireless sensor networks (UWSNs) for collaborative monitoring and marine data collection is rapidly increasing. One of the major challenges in building these networks is handover prediction, because the mobility model of the sensor nodes differs from that of ground-based wireless sensor network (WSN) devices. Handover prediction is therefore the focus of the present work. Efforts to address the handover prediction problem in UWSNs, and to apply ensemble learning to it, have so far been limited. We hence propose simulating sensor node mobility using real marine data collected by the Korea Hydrographic and Oceanographic Agency, including water current speed and direction. The proposed simulation comprises a large number of sensor nodes and base stations in a UWSN. We then collected the handover events from the simulation and used them as a dataset for the handover prediction task. Finally, we applied four machine learning algorithms (gradient boosting, decision tree (DT), Gaussian naive Bayes (GNB) and k-nearest neighbor (KNN)) to predict handover events from historically collected ones. The obtained prediction accuracy rates were above 95%, whereas the best accuracy achieved by the state-of-the-art method on any UWSN was 56%. Moreover, when the proposed models were evaluated on further performance metrics, the measured scores emphasized their high quality. While the ensemble learning model outperformed the GNB and KNN models, its performance was almost identical to that of the decision tree model.
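As a concrete illustration of one of the four classifiers named above, Gaussian naive Bayes can be written in a few lines of NumPy. The handover features in the usage test are synthetic stand-ins; the study's real features derive from the KHOA current data and simulated node positions.

```python
import numpy as np

class GaussianNB:
    """Minimal Gaussian naive Bayes: per-class feature means/variances
    plus class priors; predicts the class with the highest log-posterior."""

    def fit(self, X, y):
        X, y = np.asarray(X, float), np.asarray(y)
        self.classes_ = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        self.var = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes_])
        self.prior = np.array([(y == c).mean() for c in self.classes_])
        return self

    def predict(self, X):
        X = np.asarray(X, float)[:, None, :]   # (n, 1, d) vs class params (c, d)
        ll = -0.5 * (np.log(2 * np.pi * self.var)
                     + (X - self.mu) ** 2 / self.var).sum(-1)
        return self.classes_[np.argmax(ll + np.log(self.prior), axis=1)]
```

Trained on logged handover events (features such as node velocity or distance to the serving base station, labels handover/no-handover), the same interface applies to each of the four algorithms compared in the paper.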


2019 ◽  
Vol 91 (sp1) ◽  
pp. 31
Author(s):  
Yeong-Yeon Kwon ◽  
Jung-Woon Choi ◽  
HoJin Kim ◽  
Seonjeong Kim ◽  
Jae-Il Kwon

Plant Methods ◽  
2020 ◽  
Vol 16 (1) ◽  
Author(s):  
Emina Mulaosmanovic ◽  
Tobias U. T. Lindblom ◽  
Marie Bengtsson ◽  
Sofia T. Windstam ◽  
Lars Mogren ◽  
...  

Abstract
Background: Field-grown leafy vegetables can be damaged by biotic and abiotic factors, or mechanically damaged by farming practices. Available methods to evaluate leaf tissue damage mainly rely on colour differentiation between healthy and damaged tissues. Alternatively, sophisticated equipment such as microscopy and hyperspectral cameras can be employed. Depending on the causal factor, colour change in the wounded area is not always induced and, by the time symptoms become visible, a plant can already be severely affected. To accurately detect and quantify damage on the leaf scale, including microlesions, reliable differentiation between healthy and damaged tissue is essential. We stained whole leaves with trypan blue dye, which traverses compromised cell membranes but is not absorbed by viable cells, followed by automated quantification of damage on the leaf scale.
Results: We present a robust, fast and sensitive method for leaf-scale visualisation and accurate automated extraction and measurement of damaged area on leaves of leafy vegetables. The image analysis pipeline we developed automatically identifies the leaf area and individual stained (lesion) areas down to the cell level. As proof of principle, we tested the methodology for damage detection and quantification on two field-grown leafy vegetable species, spinach and Swiss chard.
Conclusions: Our novel lesion quantification method can be used to detect large (macro) or single-cell (micro) lesions on the leaf scale, enabling quantification of lesions at any stage and without requiring symptoms to be in the visible spectrum. Quantifying the wounded area on the leaf scale is necessary for generating prediction models for economic losses and produce shelf-life. In addition, risk assessments are based on accurate prediction of the relationship between leaf damage and infection rates by opportunistic pathogens, and our method helps determine the severity of leaf damage at fine resolution.
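The core quantification step, measuring the stained fraction of the leaf area, can be sketched as a simple thresholding operation. This is an illustrative simplification under our own assumptions (a single stain-intensity channel and a fixed threshold); the authors' pipeline performs full colour segmentation down to cell level.

```python
import numpy as np

def lesion_fraction(img, leaf_mask, stain_threshold=0.5):
    """Fraction of the leaf area whose pixels exceed a stain-intensity
    threshold. `img` is a 2-D array of trypan-blue channel intensities
    in [0, 1]; `leaf_mask` marks which pixels belong to the leaf."""
    img = np.asarray(img, float)
    leaf_mask = np.asarray(leaf_mask, bool)
    stained = (img > stain_threshold) & leaf_mask   # damaged pixels on the leaf
    return stained.sum() / leaf_mask.sum()
```

The returned damaged-area fraction is exactly the kind of per-leaf quantity that feeds the economic-loss and shelf-life prediction models mentioned in the conclusions.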


Author(s):  
Ruchika Malhotra ◽  
Anuradha Chug

Software maintenance is an expensive activity that consumes a major portion of the total project cost. Activities carried out during maintenance include the addition of new features, deletion of obsolete code, correction of errors, etc. Software maintainability is the ease with which these operations can be carried out. If maintainability can be measured in the early phases of software development, it helps in better planning and optimum resource utilization. Measuring design properties such as coupling and cohesion in the early phases of development often lets us derive the corresponding maintainability with the help of prediction models. In this paper, we performed a systematic review of existing studies on software maintainability from January 1991 to October 2015. In total, 96 primary studies were identified, of which 47 were from journals, 36 from conference proceedings and 13 from other sources. All studies were compiled in structured form and analyzed from numerous perspectives, such as the use of design metrics, prediction models, tools, data sources, prediction accuracy, etc. According to the review results, the use of machine learning algorithms in predicting maintainability has increased since 2005, and the use of evolutionary algorithms in related sub-fields began around 2010. We observed that design metrics are still the most favored option for capturing the characteristics of a given software system before deploying it in a prediction model to determine maintainability. A significant increase in the use of public datasets for building prediction models was also observed; in this regard, the two public datasets User Interface Management System (UIMS) and Quality Evaluation System (QUES) proposed by Li and Henry are quite popular among researchers.
Although machine learning algorithms are still the most popular methods, we suggest that researchers working in the software maintainability area experiment with open source datasets and hybrid algorithms. More empirical studies on a large number of datasets are also required so that a generalized theory can be formed. The current paper will be beneficial for practitioners, researchers and developers, who can use these models and metrics to create benchmarks and standards. The findings of this extensive review will also be useful for novices in the field of software maintainability, as it not only provides explicit definitions but also lays a foundation for further research by providing a quick link to all important studies in the field. Finally, this study compiles current trends and emerging sub-fields and identifies various opportunities for future research in software maintainability.
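To make concrete how design metrics feed a maintainability prediction model, a minimal least-squares fit from metric values to a maintenance-effort proxy might look like this. The metric names (WMC, CBO, etc., from the Chidamber-Kemerer suite commonly used with the UIMS/QUES datasets) and all numbers in the test are hypothetical, chosen only for illustration.

```python
import numpy as np

def fit_maintainability(X, y):
    """Least-squares linear model predicting a maintenance-effort proxy
    (e.g., a CHANGE count) from design metrics such as WMC, DIT, CBO,
    LCOM. Returns the coefficient vector (intercept first)."""
    X = np.column_stack([np.ones(len(X)), np.asarray(X, float)])  # add intercept
    beta, *_ = np.linalg.lstsq(X, np.asarray(y, float), rcond=None)
    return beta

def predict_change(beta, X):
    """Apply the fitted coefficients to new metric vectors."""
    X = np.column_stack([np.ones(len(X)), np.asarray(X, float)])
    return X @ beta
```

The review's point is that more sophisticated learners (machine learning, evolutionary and hybrid algorithms) occupy the same slot as this linear model: design metrics in, maintainability estimate out.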

