Classification of Cattle Behaviours Using Neck-Mounted Accelerometer-Equipped Collars and Convolutional Neural Networks

Sensors ◽  
2021 ◽  
Vol 21 (12) ◽  
pp. 4050
Author(s):  
Dejan Pavlovic ◽  
Christopher Davison ◽  
Andrew Hamilton ◽  
Oskar Marko ◽  
Robert Atkinson ◽  
...  

Monitoring cattle behaviour is core to the early detection of health and welfare issues and to optimising the fertility of large herds. Accelerometer-based sensor systems that provide activity profiles are now used extensively on commercial farms and have evolved to identify behaviours such as the time spent ruminating and eating at an individual animal level. Acquiring this information at scale is central to informing on-farm management decisions. This paper presents the development of a Convolutional Neural Network (CNN) that classifies cattle behavioural states ('rumination', 'eating' and 'other') using data generated from neck-mounted accelerometer collars. During three farm trials in the United Kingdom (Easter Howgate Farm, Edinburgh, UK), 18 steers were monitored to provide raw acceleration measurements, with ground truth data provided by muzzle-mounted pressure sensor halters. A range of neural network architectures is explored and rigorous hyper-parameter searches are performed to optimise the network. The computational complexity and memory footprint of CNN models are not readily compatible with deployment on low-power processors, which are both memory- and energy-constrained. Thus, progressive reductions of the CNN were executed with minimal loss of performance to address these practical implementation challenges, defining the trade-off between model performance and computational complexity and memory footprint, so as to permit deployment on micro-controller architectures. The proposed methodology achieves a compression factor of 14.30 compared to the unpruned architecture yet still classifies cattle behaviours accurately, with an overall F1 score of 0.82 for both FP32 and FP16 precision, while achieving a battery lifetime in excess of 5.7 years.
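The overall F1 score reported for the three behavioural states is the macro average of the per-class F1 scores. A minimal sketch, with hypothetical per-window labels (not the authors' data):

```python
def f1_per_class(y_true, y_pred, label):
    """F1 score for one class: harmonic mean of precision and recall."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == label)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def macro_f1(y_true, y_pred, labels):
    """Unweighted mean of the per-class F1 scores."""
    return sum(f1_per_class(y_true, y_pred, l) for l in labels) / len(labels)

# Hypothetical classifier output for six accelerometer windows
true = ["rumination", "eating", "other", "eating", "rumination", "other"]
pred = ["rumination", "eating", "other", "other", "rumination", "eating"]
print(round(macro_f1(true, pred, ["rumination", "eating", "other"]), 2))  # 0.67
```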

2021 ◽  
pp. 0021955X2110210
Author(s):  
Alejandro E Rodríguez-Sánchez ◽  
Héctor Plascencia-Mora

Traditional modeling of mechanical energy absorption under compressive loading in expanded polystyrene foams involves mathematical descriptions derived from stress/strain continuum mechanics models. Nevertheless, most of those models are constrained to strain as the only variable at large deformation regimes and usually neglect parameters that are important for energy absorption, such as the material density or the rate of the applied load. This work presents a neural-network-based approach that produces models capable of mapping the compressive stress response and energy absorption parameters of an expanded polystyrene foam by considering its deformation, compressive loading rates, and different densities. The models are trained with ground-truth data obtained in compressive tests. Two methods to select neural network architectures are also presented, one of which is based on a Design of Experiments strategy. The results show that it is possible to obtain a single artificial neural network model that can abstract the stress and energy absorption solution spaces for the conditions studied in the material. Additionally, such a model is compared with a phenomenological model, and the results show that the neural network model outperforms it in terms of prediction capability, with errors of around 2% relative to experimental data. In this sense, it is demonstrated that by following the presented approach it is possible to obtain a model capable of reproducing compressive polystyrene foam stress/strain data and, consequently, of simulating its energy absorption parameters.
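One of the two architecture-selection methods mentioned above is based on a Design of Experiments strategy. A minimal full-factorial sketch over hypothetical architecture factors (the factor names and levels here are illustrative, not the authors' actual design):

```python
from itertools import product

# Hypothetical factor levels for a full-factorial design over
# network architecture hyper-parameters
hidden_layers = [1, 2]          # number of hidden layers
neurons = [8, 16, 32]           # neurons per hidden layer
activations = ["relu", "tanh"]  # activation function

# Every combination of factor levels is one candidate architecture
design = list(product(hidden_layers, neurons, activations))
print(len(design))  # 2 * 3 * 2 = 12 candidate architectures
for layers, units, act in design[:3]:
    print(layers, units, act)
```

Each row of the design would then be trained and scored on the compressive-test data, and the best-performing combination retained.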


Author(s):  
J. Sánchez ◽  
F. Camacho ◽  
R. Lacaze ◽  
B. Smets

This study investigates the scientific quality of the GEOV1 Leaf Area Index (LAI), Fraction of Absorbed Photosynthetically Active Radiation (FAPAR) and Fraction of Vegetation Cover (FCover) products based on PROBA-V observations. The procedure follows, as much as possible, the guidelines, protocols and metrics defined by the Land Product Validation (LPV) group of the Committee on Earth Observation Satellites (CEOS) for the validation of satellite-derived land products. This study focuses on the consistency of the SPOT/VGT and PROBA-V GEOV1 products developed in the framework of the Copernicus Global Land Service, providing an early validation of PROBA-V GEOV1 products during the SPOT/VGT overlap period (November 2013-May 2014). The first natural year of PROBA-V GEOV1 products (2014) was considered for the rest of the quality assessment, including comparisons with MODIS C5. Several performance criteria were evaluated, including product completeness, spatial consistency, temporal consistency, intra-annual precision and accuracy. First, an inter-comparison with reference satellite products (SPOT/VGT GEOV1 and MODIS C5), evaluating both spatial and temporal consistency, is presented over a network of sites (BELMANIP2.1). Second, the accuracy of PROBA-V GEOV1 products is evaluated against ground measurements from a number of concomitant agricultural sites. The ground data were collected and up-scaled using high resolution imagery in the context of the FP7 ImagineS project in support of the evolution of the Copernicus Land Service. Our results demonstrate that PROBA-V GEOV1 products are spatially and temporally consistent with similar products (SPOT/VGT, MODIS C5) and show good agreement with the limited ground truth data, with an accuracy (RMSE) of 0.52 for LAI, 0.11 for FAPAR and 0.14 for FCover, and a slight bias for FCover at higher values.
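The accuracies quoted above are root-mean-square errors between the satellite products and the up-scaled ground measurements. A minimal sketch with hypothetical LAI values (not the study's data):

```python
import math

def rmse(estimates, reference):
    """Root-mean-square error between product estimates and ground truth."""
    return math.sqrt(sum((e - r) ** 2 for e, r in zip(estimates, reference))
                     / len(estimates))

# Hypothetical up-scaled ground-truth vs. satellite LAI values
lai_ground = [1.2, 2.5, 3.1, 0.8]
lai_product = [1.0, 2.9, 3.5, 1.1]
print(round(rmse(lai_product, lai_ground), 2))  # 0.34
```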


Geosciences ◽  
2018 ◽  
Vol 8 (12) ◽  
pp. 446 ◽  
Author(s):  
Evangelos Alevizos ◽  
Jens Greinert

This study presents a novel approach, based on high-dimensionality hydro-acoustic data, for improving the performance of angular response analysis (ARA) on multibeam backscatter data in terms of acoustic class separation and spatial resolution. This approach is based on the hyper-angular cube (HAC) data structure which offers the possibility to extract one angular response from each cell of the cube. The HAC consists of a finite number of backscatter layers, each representing backscatter values corresponding to single-incidence angle ensonifications. The construction of the HAC layers can be achieved either by interpolating dense soundings from highly overlapping multibeam echo-sounder (MBES) surveys (interpolated HAC, iHAC) or by producing several backscatter mosaics, each being normalized at a different incidence angle (synthetic HAC, sHAC). The latter approach can be applied to multibeam data with standard overlap, thus minimizing the cost for data acquisition. The sHAC is as efficient as the iHAC produced by actual soundings, providing distinct angular responses for each seafloor type. The HAC data structure increases acoustic class separability between different acoustic features. Moreover, the results of angular response analysis are applied on a fine spatial scale (cell dimensions) offering more detailed acoustic maps of the seafloor. Considering that angular information is expressed through high-dimensional backscatter layers, we further applied three machine learning algorithms (random forest, support vector machine, and artificial neural network) and one pattern recognition method (sum of absolute differences) for supervised classification of the HAC, using a limited amount of ground truth data (one sample per seafloor type). Results from supervised classification were compared with results from an unsupervised method for inter-comparison of the supervised algorithms. 
It was found that all algorithms (for both the iHAC and the sHAC) produced very similar results, with good agreement (kappa > 0.5) with the unsupervised classification. Only the artificial neural network required the total amount of ground truth data to produce results comparable with the remaining algorithms.
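The sum-of-absolute-differences method works with a single reference response per seafloor type: each cell's angular response is assigned to the class whose reference it differs from least. A minimal sketch with hypothetical angular backscatter responses (illustrative values, not the study's data):

```python
def sad(a, b):
    """Sum of absolute differences between two angular response vectors."""
    return sum(abs(x - y) for x, y in zip(a, b))

def classify_sad(response, references):
    """Assign the class whose single reference response has the smallest
    sum of absolute differences to the observed response."""
    return min(references, key=lambda label: sad(response, references[label]))

# Hypothetical angular backscatter responses (dB), one sample per seafloor type
references = {
    "mud":  [-32.0, -35.0, -38.0],
    "sand": [-24.0, -27.0, -30.0],
    "rock": [-15.0, -17.0, -19.0],
}
observed = [-25.0, -26.5, -31.0]
print(classify_sad(observed, references))  # sand
```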


Author(s):  
Carla Sendra-Balcells ◽  
Ricardo Salvador ◽  
Juan B. Pedro ◽  
M C Biagi ◽  
Charlène Aubinet ◽  
...  

Abstract. The segmentation of structural MRI data is an essential step for deriving geometrical information about brain tissues. One important application is in transcranial electrical stimulation (e.g., tDCS), a non-invasive neuromodulatory technique where head modeling is required to determine the electric field (E-field) generated in the cortex to predict and optimize its effects. Here we propose a deep learning-based model (StarNEt) to automatize white matter (WM) and gray matter (GM) segmentation and compare its performance with FreeSurfer, an established tool. Since good definition of sulci and gyri in the cortical surface is an important requirement for E-field calculation, StarNEt is specifically designed to output masks at a higher resolution than that of the original input T1w-MRI. StarNEt uses a residual network as the encoder (ResNet) and a fully convolutional neural network with U-net skip connections as the decoder to segment an MRI slice by slice. Slice vertical location is provided as an extra input. The model was trained on scans from 425 patients in the open-access ADNI+IXI datasets, using FreeSurfer segmentation as ground truth. Model performance was evaluated using the Dice Coefficient (DC) in a separate subset (N=105) of ADNI+IXI and in two extra testing sets not involved in training. In addition, FreeSurfer and StarNEt were compared to manual segmentations of the MRBrainS18 dataset, also unseen by the model. To study performance in real use cases, we first created electrical head models derived from the FreeSurfer and StarNEt segmentations and used them for montage optimization with a common target region using a standard algorithm (Stimweaver). Second, we used StarNEt to successfully segment the brains of minimally conscious state (MCS) patients who had suffered brain trauma, a scenario where FreeSurfer typically fails.
Our results indicate that StarNEt matches FreeSurfer performance on the trained tasks while reducing computation time from several hours to a few seconds, and with the potential to evolve into an effective technique even when patients present large brain abnormalities.
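The Dice Coefficient used above measures the overlap between two binary segmentation masks: twice the intersection divided by the total number of positive voxels in both masks. A minimal sketch with hypothetical flattened masks:

```python
def dice_coefficient(mask_a, mask_b):
    """Dice Coefficient between two binary segmentation masks (flat lists):
    2 * |A ∩ B| / (|A| + |B|); 1.0 means perfect overlap."""
    intersection = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2 * intersection / total if total else 1.0

# Hypothetical flattened white-matter masks from two segmentations
pred = [1, 1, 0, 1, 0, 0, 1, 0]
truth = [1, 1, 0, 0, 0, 1, 1, 0]
print(round(dice_coefficient(pred, truth), 2))  # 0.75
```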


2012 ◽  
Vol 16 (8) ◽  
pp. 2801-2811 ◽  
Author(s):  
M. T. Vu ◽  
S. V. Raghavan ◽  
S. Y. Liong

Abstract. Many research studies that focus on basin hydrology have applied the SWAT model using station data to simulate runoff. Over regions lacking robust station data, however, applying the model to study hydrological responses is problematic. For some countries and remote areas, rainfall data availability may be constrained for many reasons, such as lack of technology, wartime and financial limitations, making it difficult to construct runoff data. To overcome this limitation, this study uses some of the available globally gridded high resolution precipitation datasets to simulate runoff. Five popular gridded observation precipitation datasets: (1) Asian Precipitation Highly Resolved Observational Data Integration Towards the Evaluation of Water Resources (APHRODITE), (2) Tropical Rainfall Measuring Mission (TRMM), (3) Precipitation Estimation from Remote Sensing Information using Artificial Neural Network (PERSIANN), (4) Global Precipitation Climatology Project (GPCP), (5) a modified version of the Global Historical Climatology Network (GHCN2), and one reanalysis dataset, National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR), are used to simulate runoff over the Dak Bla river (a small tributary of the Mekong River) in Vietnam. Wherever possible, available station data are also used for comparison. Bilinear interpolation of these gridded datasets is used to input the precipitation data at the closest grid points to the station locations. Sensitivity analysis and auto-calibration are performed for the SWAT model. The Nash-Sutcliffe Efficiency (NSE) and Coefficient of Determination (R2) indices are used to benchmark the model performance. Results indicate that the APHRODITE dataset performed very well on a daily scale simulation of discharge, with a good NSE of 0.54 and R2 of 0.55, compared to the discharge simulation using station data (0.68 and 0.71).
The GPCP proved to be the next best dataset applied to the runoff modelling, with NSE and R2 of 0.46 and 0.51, respectively. The PERSIANN- and TRMM-driven runoff did not show good agreement with the station data, as both the NSE and R2 indices showed a low value of 0.3. GHCN2 and NCEP also did not show good correlations. The varied results obtained using these datasets indicate that although the gauge-based and satellite-gauge merged products use some ground truth data, the different interpolation techniques and merging algorithms could also be a source of uncertainty. This entails a good understanding of the response of the hydrological model to different datasets and a quantification of the uncertainties in these datasets. Such a methodology is also useful for planning rainfall-runoff and reservoir/river management at both rural and urban scales.
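The NSE index used to benchmark the simulations compares the residual variance of the simulated discharge against the variance of the observations around their mean (1 is a perfect fit, 0 means the simulation is no better than the observed mean). A minimal sketch with hypothetical discharge values:

```python
def nse(simulated, observed):
    """Nash-Sutcliffe Efficiency: 1 minus the ratio of residual variance
    to the variance of the observations around their mean."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1 - ss_res / ss_tot

# Hypothetical daily discharge values (m^3/s)
obs = [10.0, 12.0, 14.0, 16.0, 18.0]
sim = [11.0, 11.5, 14.5, 15.0, 19.0]
print(round(nse(sim, obs), 2))  # 0.91
```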


Sensors ◽  
2020 ◽  
Vol 20 (21) ◽  
pp. 6136
Author(s):  
Janez Podobnik ◽  
David Kraljić ◽  
Matjaž Zadravec ◽  
Marko Munih

Estimation of the centre of pressure (COP) is an important part of gait analysis, for example, when evaluating the functional capacity of individuals affected by motor impairment. Inertial measurement units (IMUs) and force sensors are commonly used to measure gait characteristics of healthy and impaired subjects. We present a methodology for estimating the COP solely from raw gyroscope, accelerometer, and magnetometer data from IMUs using statistical modelling. We demonstrate the viability of the method using two example models: a linear model and a non-linear Long-Short-Term Memory (LSTM) neural network model. Models were trained on COP ground truth data measured using an instrumented treadmill, achieving an average intra-subject root-mean-square (RMS) error between estimated and ground truth COP of 12.3 mm and an average inter-subject RMS error of 23.7 mm, which is comparable to or better than similar studies to date. We show that the calibration procedure on the instrumented treadmill can be as short as a couple of minutes without a decrease in model performance. We also show that the magnetic component of the recorded IMU signal, which is most sensitive to environmental changes, can be safely dropped without a significant decrease in model performance. Finally, we show that the number of IMUs can be reduced to five without deterioration in model performance.
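The RMS errors reported above compare estimated and ground-truth COP positions sample by sample. A minimal sketch of such an error over a 2D COP trajectory, with hypothetical coordinates (not the study's data):

```python
import math

def rms_error_2d(estimated, reference):
    """RMS of the Euclidean distance between estimated and ground-truth
    centre-of-pressure positions (mm), one (x, y) pair per sample."""
    sq = [(ex - rx) ** 2 + (ey - ry) ** 2
          for (ex, ey), (rx, ry) in zip(estimated, reference)]
    return math.sqrt(sum(sq) / len(sq))

# Hypothetical COP trajectories (mm)
truth = [(0.0, 0.0), (5.0, 10.0), (10.0, 20.0)]
model = [(3.0, 4.0), (5.0, 22.0), (10.0, 20.0)]
print(round(rms_error_2d(model, truth), 1))  # 7.5
```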


Author(s):  
Ruohan Gong ◽  
Zuqi Tang

Purpose – This paper aims to investigate an approach combining deep learning (DL) and the finite element method (FEM) for the magneto-thermal coupled problem. Design/methodology/approach – To achieve DL of electrical devices under the hypothesis of a small dataset, with ground truth data obtained from FEM analysis, U-net, a highly efficient convolutional neural network (CNN), is used to extract hidden features and is trained in a supervised manner to predict the magneto-thermal coupled analysis results for different topologies. Using part of the FEM results as training samples, the DL model obtained from effective off-line training can be used to predict the distribution of the magnetic field and temperature field of other cases. Findings – The possibility and feasibility of the proposed approach are investigated by discussing the influence of various network parameters; in particular, the four most important factors are training sample size, learning rate, batch size and optimization algorithm. It is shown that DL based on U-net can be used as an efficient tool in multi-physics analysis and achieve good performance with only small datasets. Originality/value – It is shown that DL based on U-net can be used as an efficient tool in multi-physics analysis and achieve good performance with only small datasets.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Luciana Nieto ◽  
Raí Schwalbert ◽  
P. V. Vara Prasad ◽  
Bradley J. S. C. Olson ◽  
Ignacio A. Ciampitti

Abstract. Efficient, more accurate reporting of maize (Zea mays L.) phenology, crop condition, and progress is crucial for agronomists and policy makers. Integration of satellite imagery with machine learning models has shown great potential to improve crop classification and facilitate in-season phenological reports. However, crop phenology classification precision must be substantially improved to transform data into actionable management decisions for farmers and agronomists. An integrated approach utilizing ground truth field data for maize crop phenology (2013-2018 seasons), satellite imagery (Landsat 8), and weather data was explored with the following objectives: (i) model training and validation: identify the best combination of spectral bands, vegetation indices (VIs), weather parameters, geolocation, and ground truth data, resulting in the model with the highest accuracy across years at each season segment (step one); and (ii) model testing: post-selection evaluation of model performance for each phenology class with unseen data (hold-out cross-validation) (step two). The best model performance for classifying maize phenology was documented when VIs (NDVI, EVI, GCVI, NDWI, GVMI) and vapor pressure deficit (VPD) were used as input variables. This study supports the integration of field ground truth, satellite imagery, and weather data to classify maize crop phenology, thereby facilitating foundational decision making and agricultural interventions for the different members of the agricultural chain.
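Two of the vegetation indices named above, NDVI and NDWI, are normalised band differences computed per pixel from surface reflectance. A minimal sketch with hypothetical reflectance values for one pixel (not the study's data):

```python
def ndvi(nir, red):
    """Normalised Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

def ndwi(nir, swir):
    """Normalised Difference Water Index from NIR and SWIR reflectance."""
    return (nir - swir) / (nir + swir)

# Hypothetical Landsat 8 surface reflectance values for one maize pixel
nir, red, swir = 0.45, 0.05, 0.15
print(round(ndvi(nir, red), 2))   # 0.8
print(round(ndwi(nir, swir), 2))  # 0.5
```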


AI ◽  
2021 ◽  
Vol 2 (3) ◽  
pp. 444-463
Author(s):  
Daniel Weber ◽  
Clemens Gühmann ◽  
Thomas Seel

Inertial-sensor-based attitude estimation is a crucial technology in various applications, from human motion tracking to autonomous aerial and ground vehicles. Application scenarios differ in characteristics of the performed motion, presence of disturbances, and environmental conditions. Since state-of-the-art attitude estimators do not generalize well over these characteristics, their parameters must be tuned for the individual motion characteristics and circumstances. We propose RIANN, a ready-to-use, neural network-based, parameter-free, real-time-capable inertial attitude estimator, which generalizes well across different motion dynamics, environments, and sampling rates, without the need for application-specific adaptations. We gather six publicly available datasets of which we exploit two datasets for the method development and the training, and we use four datasets for evaluation of the trained estimator in three different test scenarios with varying practical relevance. Results show that RIANN outperforms state-of-the-art attitude estimation filters in the sense that it generalizes much better across a variety of motions and conditions in different applications, with different sensor hardware and different sampling frequencies. This is true even if the filters are tuned on each individual test dataset, whereas RIANN was trained on completely separate data and has never seen any of these test datasets. RIANN can be applied directly without adaptations or training and is therefore expected to enable plug-and-play solutions in numerous applications, especially when accuracy is crucial but no ground-truth data is available for tuning or when motion and disturbance characteristics are uncertain. We made RIANN publicly available.
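Attitude estimators such as those compared above are commonly scored by the smallest rotation angle between the estimated and reference attitudes. A minimal sketch of that metric for unit quaternions, with a hypothetical 10-degree offset (the abstract does not specify RIANN's error metric):

```python
import math

def attitude_error_deg(q_est, q_ref):
    """Smallest rotation angle (degrees) between two unit quaternions
    (w, x, y, z), a common accuracy metric for attitude estimators."""
    dot = abs(sum(a * b for a, b in zip(q_est, q_ref)))
    dot = min(1.0, dot)  # guard against floating-point overshoot
    return math.degrees(2 * math.acos(dot))

# Hypothetical estimate vs. reference: a 10-degree rotation about z
half = math.radians(5)
q_ref = (1.0, 0.0, 0.0, 0.0)
q_est = (math.cos(half), 0.0, 0.0, math.sin(half))
print(round(attitude_error_deg(q_est, q_ref), 1))  # 10.0
```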

