OPTIMIZING RADIOMETRIC PROCESSING AND FEATURE EXTRACTION OF DRONE BASED HYPERSPECTRAL FRAME FORMAT IMAGERY FOR ESTIMATION OF YIELD QUANTITY AND QUALITY OF A GRASS SWARD

Author(s):  
R. Näsi ◽  
N. Viljanen ◽  
R. Oliveira ◽  
J. Kaivosoja ◽  
O. Niemeläinen ◽  
...  

Light-weight 2D-format hyperspectral imagers operable from unmanned aerial vehicles (UAVs) have become common in various remote sensing tasks in recent years. Using these technologies, the area of interest is covered by multiple overlapping hypercubes, in other words multiview hyperspectral photogrammetric imagery, and each object point appears in many, even tens of, individual hypercubes. The common practice is to calculate hyperspectral orthomosaics utilizing only the most nadir areas of the images. However, the redundancy of the data offers potential for much more versatile and thorough feature extraction. We investigated various options for extracting spectral features in the grass sward quantity evaluation task. In addition to the various sets of spectral features, we used photogrammetry-based ultra-high-density point clouds to extract features describing the canopy 3D structure. A machine learning technique based on the Random Forest algorithm was used to estimate the fresh biomass. Results showed high accuracies for all investigated feature sets. Estimation using the multiview data provided approximately 10 % better results than using the most nadir orthophotos. Utilizing the photogrammetric 3D features improved estimation accuracy by approximately 40 % compared to approaches where only spectral features were applied. The best estimation RMSE of 239 kg/ha (6.0 %) was obtained with the multiview anisotropy-corrected data set combined with the 3D features.
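For readers unfamiliar with this kind of pipeline, a minimal sketch of Random Forest biomass regression from combined spectral and 3D features is given below. It assumes scikit-learn and hypothetical inputs (a plot table "sward_plots.csv" with `band_*`, `canopy_height_p90`, `canopy_volume` and `fresh_biomass_kg_ha` columns); it is an illustration of the estimation step, not the authors' actual processing chain.

```python
# Minimal sketch (not the authors' pipeline): Random Forest regression of fresh
# biomass from spectral and photogrammetric 3D features. File and column names
# are hypothetical placeholders.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import mean_squared_error

plots = pd.read_csv("sward_plots.csv")                     # one row per sample plot
spectral_cols = [c for c in plots.columns if c.startswith("band_")]
structural_cols = ["canopy_height_p90", "canopy_volume"]   # 3D point-cloud features

X = plots[spectral_cols + structural_cols].values
y = plots["fresh_biomass_kg_ha"].values

rf = RandomForestRegressor(n_estimators=500, random_state=0)
y_pred = cross_val_predict(rf, X, y, cv=5)                 # held-out predictions for honest RMSE
rmse = np.sqrt(mean_squared_error(y, y_pred))
print(f"RMSE: {rmse:.0f} kg/ha ({100 * rmse / y.mean():.1f} %)")
```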

2022 ◽  
Vol 14 (2) ◽  
pp. 248
Author(s):  
Stefano Barbieri ◽  
Saverio Di Fabio ◽  
Raffaele Lidori ◽  
Francesco L. Rossi ◽  
Frank S. Marzano ◽  
...  

Meteorological radar networks are suited to remotely providing atmospheric precipitation retrievals over a wide geographic area for severe weather monitoring and near-real-time nowcasting. However, blockage due to buildings, hills, and mountains can hamper the potential of an operational weather radar system. The Abruzzo region in the central Italian Apennines, whose hydro-geological risks are further exacerbated by its complex orography, is monitored by a heterogeneous system of three microwave radars at the C and X bands with different features. This work presents a systematic intercomparison of operational radar mosaicking methods, based on bi-dimensional rainfall products and dealing with both C and X bands as well as single- and dual-polarization systems. The considered mosaicking methods can take into account spatial radar-gauge adjustment as well as different spatial combination approaches. A data set of 16 precipitation events during the years 2018–2020 in the central Apennines was collected (with a total of 32,750 samples) to show the potentials and limitations of the considered operational mosaicking approaches, using a geospatially interpolated dense network of regional rain gauges as a benchmark. Results show that the radar-network pattern mosaicking, based on anisotropic radar-gauge adjustment and spatial averaging of composite data, outperforms the conventional maximum-value merging approach. The overall analysis confirms that heterogeneous weather radar mosaicking can overcome the issues of single-frequency fixed radars in mountainous areas, guaranteeing better spatial coverage and a more uniform rainfall estimation accuracy over the area of interest.
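As an illustration of the two combination strategies compared here, the sketch below contrasts maximum-value merging with gauge-adjusted weighted averaging of co-located radar rainfall grids. The arrays, weights and adjustment factors are hypothetical; this is not the operational mosaicking code.

```python
# Illustrative sketch of two composite strategies: per-pixel maximum-value
# merging versus gauge-adjusted, weighted spatial averaging. NaN marks pixels
# blocked or uncovered for a given radar.
import numpy as np

def max_value_mosaic(rain_fields):
    """Per-pixel maximum across radars (list of 2D arrays on a common grid)."""
    return np.nanmax(np.stack(rain_fields), axis=0)

def averaged_mosaic(rain_fields, gauge_adjustment, weights=None):
    """Per-pixel weighted mean of gauge-adjusted fields; NaNs are ignored."""
    stack = np.stack([f * g for f, g in zip(rain_fields, gauge_adjustment)])
    weights = np.ones(len(rain_fields)) if weights is None else np.asarray(weights)
    w = weights[:, None, None] * ~np.isnan(stack)          # zero weight where no data
    total = (w * np.nan_to_num(stack)).sum(axis=0)
    wsum = w.sum(axis=0)
    return np.divide(total, wsum, out=np.full(wsum.shape, np.nan), where=wsum > 0)
```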


2020 ◽  
Vol 19 (6) ◽  
pp. 944-959 ◽  
Author(s):  
Tsung-Heng Tsai ◽  
Meena Choi ◽  
Balazs Banfai ◽  
Yansheng Liu ◽  
Brendan X. MacLean ◽  
...  

In bottom-up mass spectrometry-based proteomics, relative protein quantification is often achieved with data-dependent acquisition (DDA), data-independent acquisition (DIA), or selected reaction monitoring (SRM). These workflows quantify proteins by summarizing the abundances of all the spectral features of the protein (e.g. precursor ions, transitions or fragments) in a single value per protein per run. When abundances of some features are inconsistent with the overall protein profile (for technological reasons such as interferences, or for biological reasons such as post-translational modifications), the protein-level summaries and the downstream conclusions are undermined. We propose a statistical approach that automatically detects spectral features with such inconsistent patterns. The detected features can be separately investigated, and if necessary, removed from the data set. We evaluated the proposed approach on a series of benchmark-controlled mixtures and biological investigations with DDA, DIA and SRM data acquisitions. The results demonstrated that it could facilitate and complement manual curation of the data. Moreover, it can improve the estimation accuracy, sensitivity and specificity of detecting differentially abundant proteins, and reproducibility of conclusions across different data processing tools. The approach is implemented as an option in the open-source R-based software MSstats.
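The sketch below illustrates one simple way such inconsistent features could be flagged, by correlating each feature's run-to-run profile with a protein-level consensus profile. It is a hedged illustration of the idea only, not the statistical model implemented in MSstats.

```python
# Illustrative sketch only (not the MSstats model): flag spectral features whose
# run-to-run profile deviates from the protein's consensus profile.
# `intensities` is a features x runs array of log-transformed abundances.
import numpy as np

def flag_inconsistent_features(intensities, min_corr=0.5):
    """Return indices of features poorly correlated with the median profile."""
    consensus = np.nanmedian(intensities, axis=0)          # per-run consensus profile
    flagged = []
    for i, profile in enumerate(intensities):
        ok = ~np.isnan(profile) & ~np.isnan(consensus)     # runs observed in both
        if ok.sum() < 3:
            continue                                       # too few shared runs to judge
        r = np.corrcoef(profile[ok], consensus[ok])[0, 1]
        if r < min_corr:
            flagged.append(i)
    return flagged
```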


2020 ◽  
Vol 27 (4) ◽  
pp. 313-320 ◽  
Author(s):  
Xuan Xiao ◽  
Wei-Jie Chen ◽  
Wang-Ren Qiu

Background: Information on the quaternary structure attributes of proteins is very important because it is closely related to their biological functions. With the rapid development of next-generation sequencing technology, we face a challenge: how to automatically identify the quaternary-structure attributes of new polypeptide chains from their sequence information (i.e., whether they form a monomer, a homo-oligomer, or a hetero-oligomer). Objective: In this article, our goal is to find a new way to represent protein sequences, thereby improving the prediction rate of protein quaternary structure. Methods: We developed a prediction system for the protein quaternary structural type in which a protein sequence is expressed by combining Pfam functional-domain and Gene Ontology annotations to turn protein features into numerical sequences, and the quaternary structure is then predicted through specific machine learning and validation algorithms. Results: Our data set contains 5495 protein samples. With the method provided in this paper, we classify proteins as monomers, homo-oligomers, or hetero-oligomers, and the prediction rate is 74.38%, which is 3.24% higher than that of previous studies. Through this new feature extraction method, we can further classify the quaternary structure of proteins, and the results also improve correspondingly. Conclusion: After applying the new prediction system, we have successfully improved the prediction rate compared with previous results. We have reason to believe that the feature extraction method in this paper is more practical and can serve as a reference for other protein classification problems.
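The following sketch illustrates the general encoding idea under stated assumptions: each protein is represented as a binary vector over its Pfam and Gene Ontology annotations and passed to an off-the-shelf classifier. The annotation sets, labels and classifier choice are placeholders, not the authors' exact method.

```python
# Hedged sketch: encode proteins as binary Pfam/GO annotation vectors and train
# a generic classifier on quaternary-structure labels. Annotations and labels
# are hypothetical examples.
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.ensemble import RandomForestClassifier

annotations = [
    {"PF00069", "GO:0004672"},     # hypothetical protein 1
    {"PF00069", "GO:0005524"},     # hypothetical protein 2
    {"PF07714"},                   # hypothetical protein 3
]
labels = ["monomer", "homo-oligomer", "hetero-oligomer"]

encoder = MultiLabelBinarizer()
X = encoder.fit_transform(annotations)             # one column per Pfam/GO term
clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X, labels)

new_protein = encoder.transform([{"PF00069", "GO:0005524"}])
print(clf.predict(new_protein))                    # predicted quaternary class
```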


Energies ◽  
2021 ◽  
Vol 14 (4) ◽  
pp. 924
Author(s):  
Zhenzhen Huang ◽  
Qiang Niu ◽  
Ilsun You ◽  
Giovanni Pau

Wearable devices used for human body monitoring have broad applications in smart homes, sports, security and other fields, and they provide an extremely convenient way to collect large amounts of human motion data. In this paper, a method for extracting human body acceleration features with wearable devices is studied. First, a Butterworth filter is used to filter the data. Then, to ensure that the extracted feature values are more accurate, abnormal data must be removed at the source. This paper combines the Kalman filter with a genetic algorithm, using the genetic algorithm to encode the parameters of the Kalman filter. We use the Standard Deviation (SD), Interval of Peaks (IoP) and Difference between Adjacent Peaks and Troughs (DAPT) to analyze seven kinds of acceleration. Finally, the SisFall data set, a publicly available data set for study and experiments, is used to verify the effectiveness of our method. Based on the simulation results, we conclude that our method can distinguish different activities clearly.
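A minimal sketch of the signal-processing front end described above is given below, using SciPy's Butterworth filter and peak detection to compute SD, IoP and DAPT. The sampling rate, filter order and cut-off frequency are illustrative assumptions, and the genetic-algorithm tuning of the Kalman filter is omitted.

```python
# Sketch of the acceleration front end: low-pass Butterworth filtering followed
# by simple peak-based features (SD, IoP, DAPT). Parameters are illustrative.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def acceleration_features(acc, fs=200.0, cutoff=5.0):
    b, a = butter(4, cutoff / (fs / 2), btype="low")   # 4th-order low-pass
    smooth = filtfilt(b, a, acc)                       # zero-phase filtering

    peaks, _ = find_peaks(smooth)
    troughs, _ = find_peaks(-smooth)

    sd = np.std(smooth)                                                    # SD
    iop = np.mean(np.diff(peaks)) / fs if len(peaks) > 1 else 0.0          # IoP in seconds
    n = min(len(peaks), len(troughs))                                      # pair peaks with troughs
    dapt = np.mean(smooth[peaks[:n]] - smooth[troughs[:n]]) if n else 0.0  # DAPT
    return sd, iop, dapt
```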


Author(s):  
Suyong Yeon ◽  
ChangHyun Jun ◽  
Hyunga Choi ◽  
Jaehyeon Kang ◽  
Youngmok Yun ◽  
...  

Purpose – The authors aim to propose a novel plane extraction algorithm for geometric 3D indoor mapping with range scan data. Design/methodology/approach – The proposed method uses a divide-and-conquer step to efficiently handle huge amounts of point-cloud data, not as a whole but in the form of separate sub-groups with similar plane parameters. The method adopts robust principal component analysis to enhance estimation accuracy. Findings – Experimental results verify that the method not only shows enhanced performance in plane extraction, but also broadens the domain of interest of plane registration to information-poor environments (such as simple indoor corridors), whereas the previous method works adequately only in information-rich environments (such as a space with many features). Originality/value – The proposed algorithm has three advantages over the current state-of-the-art method: it is fast, it utilizes more inlier sensor data without being contaminated by severe sensor noise, and it extracts more accurate plane parameters.
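As an illustration of the core estimation step, the sketch below fits a plane to one sub-group of points with plain PCA, taking the normal as the smallest-eigenvalue eigenvector of the covariance matrix. The robust weighting and the divide-and-conquer grouping of the proposed method are deliberately omitted here.

```python
# Illustrative sketch: least-squares plane fit to a sub-group of points via PCA.
# The robust (outlier-resistant) variant used in the paper is not reproduced.
import numpy as np

def fit_plane_pca(points):
    """points: (N, 3) array; returns (unit normal n, offset d) with n·x + d = 0."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    normal = eigvecs[:, 0]                      # direction of smallest variance
    d = -normal @ centroid
    return normal, d
```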


2019 ◽  
Vol 13 (11) ◽  
pp. 3045-3059 ◽  
Author(s):  
Nick Rutter ◽  
Melody J. Sandells ◽  
Chris Derksen ◽  
Joshua King ◽  
Peter Toose ◽  
...  

Abstract. Spatial variability in snowpack properties negatively impacts our capacity to make direct measurements of snow water equivalent (SWE) using satellites. A comprehensive data set of snow microstructure (94 profiles at 36 sites) and snow layer thickness (9000 vertical profiles across nine trenches) collected over two winters at Trail Valley Creek, NWT, Canada, was applied in synthetic radiative transfer experiments. This allowed for robust assessment of the impact of estimation accuracy of unknown snow microstructural characteristics on the viability of SWE retrievals. Depth hoar layer thickness varied over the shortest horizontal distances, controlled by subnivean vegetation and topography, while variability in total snowpack thickness approximated that of wind slab layers. Mean horizontal correlation lengths of layer thickness were less than a metre for all layers. Depth hoar was consistently ∼30 % of total depth, and with increasing total depth the proportion of wind slab increased at the expense of the decreasing surface snow layer. Distinct differences were evident between distributions of layer properties; a single median value represented density and specific surface area (SSA) of each layer well. Spatial variability in microstructure of depth hoar layers dominated SWE retrieval errors. A depth hoar SSA estimate of around 7 % under the median value was needed to accurately retrieve SWE. In shallow snowpacks <0.6 m, depth hoar SSA estimates of ±5 %–10 % around the optimal retrieval SSA allowed SWE retrievals within a tolerance of ±30 mm. Where snowpacks were deeper than ∼30 cm, accurate values of representative SSA for depth hoar became critical as retrieval errors were exceeded if the median depth hoar SSA was applied.


2014 ◽  
Vol 10 (S309) ◽  
pp. 297-297
Author(s):  
Flor Allaert

Abstract. Each component of a galaxy plays its own unique role in regulating the galaxy's evolution. In order to understand how galaxies form and evolve, it is therefore crucial to study the distribution and properties of each of the various components, and the links between them, both radially and vertically. The latter is only possible in edge-on systems. We present the HEROES project, which aims to investigate the 3D structure of the interstellar gas, dust, stars and dark matter in a sample of 7 massive early-type spiral galaxies based on a multi-wavelength data set including optical, NIR, FIR and radio data.


2017 ◽  
Vol 10 (3) ◽  
pp. 310-331 ◽  
Author(s):  
Sudeep Thepade ◽  
Rik Das ◽  
Saurav Ghosh

Purpose Current practices in data classification and retrieval have experienced a surge in the use of multimedia content. Identifying desired information in huge image databases has become increasingly complex, demanding an efficient feature extraction process. Conventional approaches to image classification with text-based image annotation face assorted limitations due to erroneous interpretation of vocabulary and the huge time consumption of manual annotation. Content-based image recognition has emerged as an alternative to combat these limitations. However, exploring the rich feature content of an image with a single technique is less likely to extract meaningful signatures than multi-technique feature extraction. Therefore, the purpose of this paper is to explore the possibilities of enhanced content-based image recognition by fusing classification decisions obtained using diverse feature extraction techniques. Design/methodology/approach Three novel feature extraction techniques are introduced in this paper and tested with four different classifiers individually. The four classifiers used for performance testing were the K-nearest-neighbor (KNN) classifier, the RIDOR classifier, an artificial neural network classifier and a support vector machine classifier. Thereafter, the classification decisions obtained with the KNN classifier for the different feature extraction techniques were integrated by Z-score normalization and feature scaling to create a fusion-based framework of image recognition. This was followed by the introduction of a fusion-based retrieval model to validate the retrieval performance with classified queries. Earlier works on content-based image identification have adopted fusion-based approaches; however, to the best of the authors' knowledge, fusion-based query classification is addressed for the first time in this work as a precursor of retrieval. Findings The proposed fusion techniques outclass the state-of-the-art techniques in classification and retrieval performance. Four public data sets, namely, the Wang data set, the Oliva and Torralba (OT-scene) data set, the Corel data set and the Caltech data set, comprising 22,615 images in total, are used for evaluation. Originality/value To the best of the authors' knowledge, fusion-based query classification is addressed for the first time in this work as a precursor of retrieval. The novel idea of exploring rich image features by fusing multiple feature extraction techniques has also encouraged further research on dimensionality reduction of feature vectors for enhanced classification results.
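The sketch below illustrates the decision-fusion idea under simple assumptions: per-class scores produced by different feature extraction techniques are Z-score normalized and summed before the class decision is taken. The score matrices are placeholders, not the classifiers used in the paper.

```python
# Hedged sketch of Z-score based decision fusion: standardize each technique's
# class-score matrix, sum them, and take the argmax as the fused decision.
import numpy as np

def zscore(scores):
    std = scores.std()
    return (scores - scores.mean()) / (std if std > 0 else 1.0)

def fuse_decisions(score_matrices):
    """score_matrices: list of (n_queries, n_classes) arrays, one per technique."""
    fused = sum(zscore(s) for s in score_matrices)
    return fused.argmax(axis=1)                 # fused class label per query
```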


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Jiawei Lian ◽  
Junhong He ◽  
Yun Niu ◽  
Tianze Wang

Purpose Currently popular image-processing technologies based on convolutional neural networks involve heavy computation, high storage cost and low accuracy for tiny-defect detection, which conflicts with the high real-time performance and accuracy that industrial applications require under limited computing and storage resources. Therefore, an improved YOLOv4, named YOLOv4-Defect, is proposed to solve these problems. Design/methodology/approach On the one hand, this study performs multi-dimensional compression of the feature extraction network of YOLOv4 to simplify the model and improves the feature extraction ability of the model through knowledge distillation. On the other hand, a prediction scale with a finer receptive field is added to optimize the model structure, which improves the detection performance for tiny defects. Findings The effectiveness of the method is verified on the public data sets NEU-CLS and DAGM 2007 and on a steel ingot data set collected in an actual industrial setting. The experimental results demonstrate that the proposed YOLOv4-Defect method greatly improves recognition efficiency and accuracy while reducing the size and computation consumption of the model. Originality/value This paper proposes an improved YOLOv4, named YOLOv4-Defect, for surface-defect detection; it is suited to industrial scenarios with limited storage and computing resources and meets the requirements of high real-time performance and precision.
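For context, the sketch below shows a generic temperature-scaled knowledge-distillation loss in PyTorch, of the kind commonly used to transfer a teacher network's soft predictions to a compressed student model. It is an assumed, generic formulation, not the YOLOv4-Defect training code.

```python
# Generic knowledge-distillation loss sketch (not the paper's implementation):
# KL divergence between temperature-softened teacher and student class logits.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    # batchmean KL, rescaled by T^2 as is conventional for distillation
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * temperature ** 2
```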


2021 ◽  
Vol 87 (4) ◽  
pp. 283-293
Author(s):  
Wei Wang ◽  
Yuan Xu ◽  
Yingchao Ren ◽  
Gang Wang

Recently, performance improvement in facade parsing from 3D point clouds has been brought about by designing more complex network structures, which cost huge computing resources and do not take full advantage of prior knowledge of facade structure. Instead, from the perspective of data distribution, we construct a new hierarchical mesh multi-view data domain based on the characteristics of facade objects to achieve fusion of deep-learning models and prior knowledge, thereby significantly improving segmentation accuracy. We comprehensively evaluate the current mainstream method on the RueMonge 2014 data set and demonstrate the superiority of our method. The mean intersection-over-union index on the facade-parsing task reached 76.41%, which is 2.75% higher than the current best result. In addition, through comparative experiments, the reasons for the performance improvement of the proposed method are further analyzed.

