Spatial fraction verification of high resolution model

2021 ◽  
Vol 893 (1) ◽  
pp. 012036
Author(s):  
I Made Kembar Tirtanegara ◽  
Fitria Puspita Sari

Abstract The Fractions Skill Score (FSS) is a spatial verification method for evaluating model performance across spatial scales. The method was applied to assess the Weather Research and Forecasting (WRF) model at 2 km (MODEL2KM) and 6 km (MODEL6KM) grid sizes. Cloud Top Temperature (CTT) data from the Himawari-8 satellite were utilised as ground truth. This study aims to evaluate model performance by FSS with absolute and percentile thresholds on convective cloud simulation for three heavy rain events. The results show no significant change in the FSS value for the increased resolution of MODEL2KM compared to MODEL6KM. Also, the heavy rain events with a lower CTT generate a higher FSS value for the absolute threshold. The percentile thresholds for the three cases give greater FSS values, though they cannot provide information about the absolute CTT value.
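As an illustration of the FSS used here, a minimal pure-NumPy sketch under the standard definition (not the authors' code; window handling and names are assumptions, and for CTT the event condition would be a field value at or below the threshold, since convective tops are cold):

```python
import numpy as np

def neighborhood_fractions(binary_field, n):
    """Fraction of event pixels inside an n x n window centred on each pixel."""
    padded = np.pad(binary_field, n // 2, mode="constant")
    rows, cols = binary_field.shape
    frac = np.empty((rows, cols))
    for i in range(rows):
        for j in range(cols):
            frac[i, j] = padded[i:i + n, j:j + n].mean()
    return frac

def fss(forecast, observed, threshold, n):
    """Fractions Skill Score for one threshold and one neighbourhood size n."""
    pf = neighborhood_fractions((forecast >= threshold).astype(float), n)
    po = neighborhood_fractions((observed >= threshold).astype(float), n)
    mse = np.mean((pf - po) ** 2)
    mse_ref = np.mean(pf ** 2) + np.mean(po ** 2)  # worst-case reference MSE
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan
```

A perfect forecast gives FSS = 1; displaced features score below 1 but recover as the neighbourhood size n grows, which is what makes the score scale-aware.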

Sensors ◽  
2021 ◽  
Vol 21 (12) ◽  
pp. 4050
Author(s):  
Dejan Pavlovic ◽  
Christopher Davison ◽  
Andrew Hamilton ◽  
Oskar Marko ◽  
Robert Atkinson ◽  
...  

Monitoring cattle behaviour is core to the early detection of health and welfare issues and to optimising the fertility of large herds. Accelerometer-based sensor systems that provide activity profiles are now used extensively on commercial farms and have evolved to identify behaviours such as the time spent ruminating and eating at an individual animal level. Acquiring this information at scale is central to informing on-farm management decisions. The paper presents the development of a Convolutional Neural Network (CNN) that classifies cattle behavioural states ('rumination', 'eating' and 'other') using data generated from neck-mounted accelerometer collars. During three farm trials in the United Kingdom (Easter Howgate Farm, Edinburgh, UK), 18 steers were monitored to provide raw acceleration measurements, with ground truth data provided by muzzle-mounted pressure sensor halters. A range of neural network architectures is explored and rigorous hyper-parameter searches are performed to optimise the network. The computational complexity and memory footprint of CNN models are not readily compatible with deployment on low-power processors, which are both memory and energy constrained. Thus, progressive reductions of the CNN were executed with minimal loss of performance in order to address the practical implementation challenges, defining the trade-off between model performance and computational complexity and memory footprint to permit deployment on micro-controller architectures. The proposed methodology achieves a compression factor of 14.30 compared to the unpruned architecture but is nevertheless able to accurately classify cattle behaviours with an overall F1 score of 0.82 for both FP32 and FP16 precision, while achieving a reasonable battery lifetime in excess of 5.7 years.
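The F1 score reported above combines per-class precision and recall; a minimal sketch of the per-class computation from paired label sequences (toy labels, not the farm trial output):

```python
def f1_per_class(y_true, y_pred, labels):
    """Per-class F1 from paired true/predicted label sequences."""
    scores = {}
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        scores[c] = (2 * precision * recall / (precision + recall)
                     if precision + recall else 0.0)
    return scores
```

An overall score such as the 0.82 reported would then aggregate these per-class values (e.g. a macro or weighted average; the abstract does not specify which).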


2012 ◽  
Vol 16 (8) ◽  
pp. 2801-2811 ◽  
Author(s):  
M. T. Vu ◽  
S. V. Raghavan ◽  
S. Y. Liong

Abstract. Many research studies that focus on basin hydrology have applied the SWAT model using station data to simulate runoff. However, over regions lacking robust station data, applying the model to study hydrological responses is problematic. For some countries and remote areas, rainfall data availability may be a constraint for many reasons, such as a lack of technology, wartime, and financial limitations, making it difficult to construct runoff data. To overcome such limitations, this research study uses some of the available globally gridded high resolution precipitation datasets to simulate runoff. Five popular gridded observation precipitation datasets: (1) Asian Precipitation Highly Resolved Observational Data Integration Towards the Evaluation of Water Resources (APHRODITE), (2) Tropical Rainfall Measuring Mission (TRMM), (3) Precipitation Estimation from Remote Sensing Information using Artificial Neural Network (PERSIANN), (4) Global Precipitation Climatology Project (GPCP), (5) a modified version of the Global Historical Climatology Network (GHCN2), and one reanalysis dataset, National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR), are used to simulate runoff over the Dak Bla river (a small tributary of the Mekong River) in Vietnam. Wherever possible, available station data are also used for comparison. Bilinear interpolation of these gridded datasets is used to input the precipitation data at the grid points closest to the station locations. Sensitivity analysis and auto-calibration are performed for the SWAT model. The Nash-Sutcliffe Efficiency (NSE) and coefficient of determination (R2) indices are used to benchmark the model performance. Results indicate that the APHRODITE dataset performed very well on a daily-scale simulation of discharge, with a good NSE of 0.54 and R2 of 0.55, when compared to the discharge simulation using station data (0.68 and 0.71).
The GPCP proved to be the next best dataset applied to the runoff modelling, with NSE and R2 of 0.46 and 0.51, respectively. The runoff driven by the PERSIANN and TRMM rainfall data did not show good agreement with the station data, as both the NSE and R2 indices showed low values of about 0.3. GHCN2 and NCEP also did not show good correlations. The varied results obtained using these datasets indicate that, although the gauge-based and satellite-gauge merged products use some ground truth data, the different interpolation techniques and merging algorithms could also be a source of uncertainty. This entails a good understanding of the response of the hydrological model to different datasets and a quantification of the uncertainties in these datasets. Such a methodology is also useful for planning rainfall-runoff modelling and even reservoir/river management at both rural and urban scales.
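The two benchmark indices used here have standard closed forms; a minimal NumPy sketch (an illustration, not the study's calibration code):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 minus SSE over variance of obs about its mean.
    1 is perfect; 0 means no better than predicting the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def r_squared(obs, sim):
    """Coefficient of determination as the squared Pearson correlation."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return np.corrcoef(obs, sim)[0, 1] ** 2
```

Note the asymmetry: R2 rewards linear association regardless of bias, while NSE also penalises bias and amplitude errors, which is why both are reported together.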


Sensors ◽  
2020 ◽  
Vol 20 (21) ◽  
pp. 6136
Author(s):  
Janez Podobnik ◽  
David Kraljić ◽  
Matjaž Zadravec ◽  
Marko Munih

Estimation of the centre of pressure (COP) is an important part of gait analysis, for example, when evaluating the functional capacity of individuals affected by motor impairment. Inertial measurement units (IMUs) and force sensors are commonly used to measure gait characteristics of healthy and impaired subjects. We present a methodology for estimating the COP solely from raw gyroscope, accelerometer, and magnetometer data from IMUs using statistical modelling. We demonstrate the viability of the method using two example models: a linear model and a non-linear Long Short-Term Memory (LSTM) neural network model. Models were trained on COP ground truth data measured using an instrumented treadmill and achieved an average intra-subject root mean square (RMS) error between estimated and ground truth COP of 12.3 mm and an average inter-subject RMS error of 23.7 mm, which is comparable to or better than similar studies so far. We show that the calibration procedure on the instrumented treadmill can be as short as a couple of minutes without a decrease in model performance. We also show that the magnetic component of the recorded IMU signal, which is most sensitive to environmental changes, can be safely dropped without a significant decrease in model performance. Finally, we show that the number of IMUs can be reduced to five without deterioration in model performance.
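The reported intra- and inter-subject errors are RMS distances between estimated and ground-truth COP trajectories; a minimal sketch of that metric (assuming N x 2 arrays of planar COP coordinates in mm, which is our reading, not the authors' code):

```python
import numpy as np

def cop_rms_error(cop_est, cop_true):
    """RMS of the per-sample Euclidean distance between two COP trajectories,
    given as N x 2 arrays of (x, y) coordinates."""
    d = np.linalg.norm(np.asarray(cop_est, float) - np.asarray(cop_true, float), axis=1)
    return float(np.sqrt(np.mean(d ** 2)))
```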


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Luciana Nieto ◽  
Raí Schwalbert ◽  
P. V. Vara Prasad ◽  
Bradley J. S. C. Olson ◽  
Ignacio A. Ciampitti

Abstract Efficient, more accurate reporting of maize (Zea mays L.) phenology, crop condition, and progress is crucial for agronomists and policy makers. Integration of satellite imagery with machine learning models has shown great potential to improve crop classification and facilitate in-season phenological reports. However, crop phenology classification precision must be substantially improved to transform data into actionable management decisions for farmers and agronomists. An integrated approach utilizing ground truth field data for maize crop phenology (2013–2018 seasons), satellite imagery (Landsat 8), and weather data was explored with the following objectives: (i) model training and validation—identify the best combination of spectral bands, vegetation indices (VIs), weather parameters, geolocation, and ground truth data, resulting in a model with the highest accuracy across years at each season segment (step one) and (ii) model testing—post-selection model performance evaluation for each phenology class with unseen data (hold-out cross-validation) (step two). The best model performance for classifying maize phenology was documented when VIs (NDVI, EVI, GCVI, NDWI, GVMI) and vapor pressure deficit (VPD) were used as input variables. This study supports the integration of field ground truth, satellite imagery, and weather data to classify maize crop phenology, thereby facilitating foundational decision making and agricultural interventions for the different members of the agricultural chain.
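The input variables named above are derived from band reflectances and weather records; a minimal sketch of two of the indices (NDVI, GCVI) and of VPD via the Tetens approximation (standard formulas; variable names and units are our assumptions, not the study's code):

```python
import math

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

def gcvi(nir, green):
    """Green Chlorophyll Vegetation Index: NIR over green, minus one."""
    return nir / green - 1.0

def vpd_kpa(temp_c, rh_pct):
    """Vapour pressure deficit (kPa) from air temperature (deg C) and relative
    humidity (%), with saturation vapour pressure from the Tetens approximation."""
    es = 0.6108 * math.exp(17.27 * temp_c / (temp_c + 237.3))
    return es * (1.0 - rh_pct / 100.0)
```

EVI, NDWI, and GVMI follow analogous band-ratio forms with their own coefficients.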


2021 ◽  
Vol 11 (3-4) ◽  
pp. 1-38
Author(s):  
Rita Sevastjanova ◽  
Wolfgang Jentner ◽  
Fabian Sperrle ◽  
Rebecca Kehlbeck ◽  
Jürgen Bernard ◽  
...  

Linguistic insight in the form of high-level relationships and rules in text builds the basis of our understanding of language. However, the data-driven generation of such structures often lacks labeled resources that can be used as training data for supervised machine learning. The creation of such ground-truth data is a time-consuming process that often requires domain expertise to resolve text ambiguities and characterize linguistic phenomena. Furthermore, the creation and refinement of machine learning models is often challenging for linguists, as the models are often complex, opaque, and difficult to understand. To tackle these challenges, we present a visual analytics technique for interactive data labeling that applies concepts from gamification and explainable Artificial Intelligence (XAI) to support complex classification tasks. The visual-interactive labeling interface promotes the creation of effective training data. Visual explanations of learned rules unveil the decisions of the machine learning model and support iterative and interactive optimization. The gamification-inspired design guides the user through the labeling process and provides feedback on the model performance. As an instance of the proposed technique, we present QuestionComb, a workspace tailored to the task of question classification (i.e., into information-seeking vs. non-information-seeking questions). Our evaluation studies confirm that gamification concepts are beneficial to engage users through continuous feedback, offering an effective visual analytics technique when combined with active learning and XAI.
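The interactive labeling loop described above pairs XAI with active learning, in which the model proposes the unlabeled items it is least sure about; a minimal sketch of one common query strategy, least-confidence sampling (function name and data shapes are illustrative assumptions, not the QuestionComb implementation):

```python
def least_confident(probabilities, k):
    """Return the indices of the k items whose highest class probability is
    lowest, i.e. the items the classifier is least confident about.
    `probabilities` is a list of per-item class-probability lists."""
    ranked = sorted(range(len(probabilities)), key=lambda i: max(probabilities[i]))
    return ranked[:k]
```

In each labeling round, the selected items would be shown to the user, their new labels added to the training set, and the model retrained.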


2016 ◽  
Vol 36 (1) ◽  
pp. 3-15 ◽  
Author(s):  
Will Maddern ◽  
Geoffrey Pascoe ◽  
Chris Linegar ◽  
Paul Newman

We present a challenging new dataset for autonomous driving: the Oxford RobotCar Dataset. Over the period of May 2014 to December 2015 we traversed a route through central Oxford twice a week on average using the Oxford RobotCar platform, an autonomous Nissan LEAF. This resulted in over 1000 km of recorded driving with almost 20 million images collected from 6 cameras mounted to the vehicle, along with LIDAR, GPS and INS ground truth. Data was collected in all weather conditions, including heavy rain, night, direct sunlight and snow. Road and building works over the period of a year significantly changed sections of the route from the beginning to the end of data collection. By frequently traversing the same route over the period of a year we enable research investigating long-term localization and mapping for autonomous vehicles in real-world, dynamic urban environments. The full dataset is available for download at: http://robotcar-dataset.robots.ox.ac.uk


2009 ◽  
Vol 137 (10) ◽  
pp. 3351-3372 ◽  
Author(s):  
Craig S. Schwartz ◽  
John S. Kain ◽  
Steven J. Weiss ◽  
Ming Xue ◽  
David R. Bright ◽  
...  

Abstract During the 2007 NOAA Hazardous Weather Testbed (HWT) Spring Experiment, the Center for Analysis and Prediction of Storms (CAPS) at the University of Oklahoma produced convection-allowing forecasts from a single deterministic 2-km model and a 10-member 4-km-resolution ensemble. In this study, the 2-km deterministic output was compared with forecasts from the 4-km ensemble control member. Other than the difference in horizontal resolution, the two sets of forecasts featured identical Advanced Research Weather Research and Forecasting model (ARW-WRF) configurations, including vertical resolution, forecast domain, initial and lateral boundary conditions, and physical parameterizations. Therefore, forecast disparities were attributed solely to differences in horizontal grid spacing. This study is a follow-up to similar work that was based on results from the 2005 Spring Experiment. Unlike the 2005 experiment, however, model configurations were more rigorously controlled in the present study, providing a more robust dataset and a cleaner isolation of the dependence on horizontal resolution. Additionally, in this study, the 2- and 4-km outputs were compared with 12-km forecasts from the North American Mesoscale (NAM) model. Model forecasts were analyzed using objective verification of mean hourly precipitation and visual comparison of individual events, primarily during the 21- to 33-h forecast period to examine the utility of the models as next-day guidance. On average, both the 2- and 4-km model forecasts showed substantial improvement over the 12-km NAM. However, although the 2-km forecasts produced more-detailed structures on the smallest resolvable scales, the patterns of convective initiation, evolution, and organization were remarkably similar to the 4-km output. Moreover, on average, metrics such as equitable threat score, frequency bias, and fractions skill score revealed no statistical improvement of the 2-km forecasts compared to the 4-km forecasts. 
These results, based on the 2007 dataset, corroborate previous findings, suggesting that decreasing horizontal grid spacing from 4 to 2 km provides little added value as next-day guidance for severe convective storm and heavy rain forecasters in the United States.
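The categorical metrics cited above (equitable threat score and frequency bias) are computed from a 2x2 contingency table of thresholded forecast/observation pairs; a minimal sketch under the standard definitions (an illustration, not the experiment's verification code):

```python
def ets_and_bias(hits, misses, false_alarms, correct_negatives):
    """Equitable Threat Score and frequency bias from contingency-table counts.
    ETS is the threat score corrected for hits expected by random chance;
    bias > 1 means over-forecasting the event, bias < 1 under-forecasting."""
    total = hits + misses + false_alarms + correct_negatives
    hits_random = (hits + misses) * (hits + false_alarms) / total
    ets = (hits - hits_random) / (hits + misses + false_alarms - hits_random)
    bias = (hits + false_alarms) / (hits + misses)
    return ets, bias
```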


2021 ◽  
Vol 13 (10) ◽  
pp. 1966
Author(s):  
Christopher W Smith ◽  
Santosh K Panda ◽  
Uma S Bhatt ◽  
Franz J Meyer ◽  
Anushree Badola ◽  
...  

In recent years, there have been rapid improvements in both remote sensing methods and satellite image availability that have the potential to massively improve burn severity assessments of the Alaskan boreal forest. In this study, we utilized recent pre- and post-fire Sentinel-2 satellite imagery of the 2019 Nugget Creek and Shovel Creek burn scars located in Interior Alaska to both assess burn severity across the burn scars and test the effectiveness of several remote sensing methods for generating accurate map products: Normalized Difference Vegetation Index (NDVI), Normalized Burn Ratio (NBR), and Random Forest (RF) and Support Vector Machine (SVM) supervised classification. We used 52 Composite Burn Index (CBI) plots from the Shovel Creek burn scar and 28 from the Nugget Creek burn scar for training classifiers and product validation. For the Shovel Creek burn scar, the RF and SVM machine learning (ML) classification methods outperformed the traditional spectral indices that use linear regression to separate burn severity classes (RF and SVM accuracy, 83.33%, versus NBR accuracy, 73.08%). However, for the Nugget Creek burn scar, the NDVI product (accuracy: 96%) outperformed the other indices and ML classifiers. In this study, we demonstrated that when sufficient ground truth data are available, ML classifiers can be very effective for reliable mapping of burn severity in the Alaskan boreal forest. Since the performance of ML classifiers depends on the quantity of ground truth data, the ML classification methods are better suited to assessing burn severity when sufficient ground truth data are available, whereas the traditional spectral indices are better suited when ground truth data are limited. We also examined the relationship between burn severity, fuel type, and topography (aspect and slope) and found that the relationship is site-dependent.
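The spectral indices compared here follow standard band-ratio definitions; a minimal sketch of NBR and the pre/post differenced form dNBR (variable names are assumptions; for Sentinel-2 the NIR and SWIR inputs would typically come from the B8/B8A and B12 bands):

```python
def nbr(nir, swir):
    """Normalized Burn Ratio from NIR and shortwave-infrared reflectance.
    Healthy vegetation gives high values; burned surfaces give low values."""
    return (nir - swir) / (nir + swir)

def dnbr(nir_pre, swir_pre, nir_post, swir_post):
    """Differenced NBR: pre-fire minus post-fire NBR.
    Larger positive values indicate higher burn severity."""
    return nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)
```

The dNBR values would then be binned into severity classes, e.g. via thresholds calibrated against the CBI field plots.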


2021 ◽  
Author(s):  
Ali Abdolali ◽  
Andre van der Westhuysen ◽  
Zaizhong Ma ◽  
Avichal Mehra ◽  
Aron Roland ◽  
...  

Abstract Various uncertainties exist in a hindcast due to the inability of numerical models to resolve all the complicated atmosphere-sea interactions and the lack of certain ground truth observations. Here, a comprehensive analysis of an atmospheric model's performance in hindcast mode (Hurricane Weather Research and Forecasting model, HWRF) and its 40 ensemble members during severe events is conducted, evaluating the model accuracy and uncertainty for hurricane track parameters and for wind speed collected along satellite altimeter tracks and at stationary point observations. Subsequently, the downstream spectral wave model WAVEWATCH III is forced by two sets of wind field data, each including 40 members. The first set is randomly extracted from the original HWRF simulations and the second is based on the spread of best track parameters. The atmospheric model spread and the wave model error along satellite altimeter tracks and at stationary point observations are estimated. The study of Hurricane Irma reveals that wind and wave observations during this extreme event lie within the ensemble spreads. While both models have wide spreads over areas with landmass, the maximum uncertainty in the atmospheric model is at the hurricane eye, in contrast to the wave model.
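Checking whether observations fall within an ensemble spread, as reported for Hurricane Irma, can be sketched as follows (a NumPy illustration with assumed array shapes of members x points, not the authors' workflow):

```python
import numpy as np

def ensemble_spread(members):
    """Sample standard deviation across ensemble members at each point
    (axis 0 indexes the members)."""
    return np.std(np.asarray(members, float), axis=0, ddof=1)

def within_spread(obs, members, k=1.0):
    """True wherever an observation lies within k spreads of the ensemble mean."""
    arr = np.asarray(members, float)
    mean = arr.mean(axis=0)
    spread = np.std(arr, axis=0, ddof=1)
    return np.abs(np.asarray(obs, float) - mean) <= k * spread
```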

