New active learning algorithms for near-infrared spectroscopy in agricultural applications

2021 ◽  
Vol 69 (4) ◽  
pp. 297-306
Author(s):  
Julius Krause ◽  
Maurice Günder ◽  
Daniel Schulz ◽  
Robin Gruna

Abstract: The selection of training data determines the quality of a chemometric calibration model. In order to cover the entire parameter space of known influencing parameters, an experimental design is usually created. Nevertheless, even with a carefully prepared Design of Experiment (DoE), redundant reference analyses are often performed during the analysis of agricultural products. Because the number of possible reference analyses is usually very limited, the presented active learning approaches are intended to provide a tool for better selection of training samples.
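
As a rough illustration of how such a selection tool could work (the abstract does not specify the exact query strategy, so the criterion below is an assumption), a pool-based sketch in Python might query the candidate spectra that lie farthest from the current training set in PLS score space:

```python
# Minimal sketch of pool-based sample selection for a chemometric calibration
# model (assumed setup; the paper's specific active-learning criteria are not
# given in the abstract). Candidate spectra far from the current training set
# in PLS score space are queried for reference analysis first.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def select_next_samples(X_labeled, y_labeled, X_pool, n_queries=5, n_components=10):
    """Return indices of pool spectra to send to reference analysis."""
    pls = PLSRegression(n_components=n_components)
    pls.fit(X_labeled, y_labeled)
    # Project labeled and unlabeled spectra into the latent (score) space.
    T_labeled = pls.transform(X_labeled)
    T_pool = pls.transform(X_pool)
    # Distance of each candidate to its nearest labeled sample.
    dists = np.min(
        np.linalg.norm(T_pool[:, None, :] - T_labeled[None, :, :], axis=2), axis=1
    )
    # Query the most "novel" candidates, i.e. those least covered by training data.
    return np.argsort(dists)[-n_queries:]

# Usage (shapes are illustrative): X_* hold spectra, y_labeled reference values.
# next_idx = select_next_samples(X_labeled, y_labeled, X_pool)
```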

2021 ◽  
Vol 13 (19) ◽  
pp. 3859
Author(s):  
Joby M. Prince Czarnecki ◽  
Sathishkumar Samiappan ◽  
Meilun Zhou ◽  
Cary Daniel McCraine ◽  
Louis L. Wasson

The radiometric quality of remotely sensed imagery is crucial for precision agriculture applications because estimations of plant health rely on the underlying quality. Sky conditions, and specifically shadowing from clouds, are critical determinants in the quality of images that can be obtained from low-altitude sensing platforms. In this work, we first compare common deep learning approaches to classify sky conditions with regard to cloud shadows in agricultural fields using a visible spectrum camera. We then develop an artificial-intelligence-based edge computing system to fully automate the classification process. Training data consisting of 100 oblique angle images of the sky were provided to a convolutional neural network and two deep residual neural networks (ResNet18 and ResNet34) to facilitate learning two classes, namely (1) good image quality expected, and (2) degraded image quality expected. The expectation of quality stemmed from the sky condition (i.e., density, coverage, and thickness of clouds) present at the time of the image capture. These networks were tested using a set of 13,000 images. Our results demonstrated that ResNet18 and ResNet34 classifiers produced better classification accuracy when compared to a convolutional neural network classifier. The best overall accuracy was obtained by ResNet34, which was 92% accurate, with a Kappa statistic of 0.77. These results demonstrate a low-cost solution to quality control for future autonomous farming systems that will operate without human intervention and supervision.
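
A minimal sketch of the fine-tuning setup described above, assuming a torchvision ResNet18, an ImageFolder-style data layout, and illustrative hyperparameters (none of these details come from the article):

```python
# Minimal sketch (not the authors' code): fine-tuning a ResNet18 on two sky
# classes, "good" vs. "degraded" expected image quality. The data path and
# hyperparameters are assumptions.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
train_ds = datasets.ImageFolder("sky_images/train", transform=tfm)  # hypothetical layout
loader = torch.utils.data.DataLoader(train_ds, batch_size=16, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)  # two output classes

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(10):
    for images, labels in loader:
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()
```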


2018 ◽  
Vol 150 ◽  
pp. 05005
Author(s):  
Nur Farha Bte Hassan ◽  
Saifullizam Bin Puteh ◽  
Amanina Binti Muhamad Sanusi

The application of technological innovation is rapidly increasing in industries and educational institutions. This phenomenon has led to the emergence of Technology Enabled/Enhanced Active Learning (TEAL), which emphasizes the use of various techniques and technologies. TEAL is a learning format that combines educational content from a lecturer, simulation, and students' experiences with technological tools to provide a rich collaborative learning experience. This approach supports academic professional development that brings innovation to the learning content through pedagogy, technology, and classroom design. TEAL supports the development of students' knowledge and skills in order to produce skilful workers with adequate employability skills. Technology is an effective tool for facilitating the teaching and learning process, which can in turn create an active environment in which students build their knowledge, skills, and experience. This paper determines the elements of TEAL based on interview sessions with expert academicians and a systematic literature review. The selection of TEAL elements for this study was carried out using a thematic analysis approach. Findings show that these TEAL elements can help institutions encourage students to engage in active learning, improving graduates' technical knowledge and thereby enhancing their employability skills.


Author(s):  
Liming Li ◽  
Xiaodong Chai ◽  
Shuguang Zhao ◽  
Shubin Zheng ◽  
Shengchao Su

This paper proposes an effective method to improve the performance of saliency detection via iterative bootstrap learning, which consists of two tasks: saliency optimization and saliency integration. Specifically, first, multiscale segmentation and feature extraction are performed successively on the input image. Second, prior saliency maps are generated using existing saliency models and are used to generate the initial saliency map. Third, the prior maps are fed into the saliency regressor together, where training samples are collected from the prior maps at multiple scales and a random forest regressor is learned from these training data. An integration of the initial saliency map and the output of the saliency regressor is used to generate the coarse saliency map. Finally, to improve the quality of the saliency map further, both the initial and coarse saliency maps are fed into the saliency regressor, and the output of the saliency regressor, the initial saliency map, and the coarse saliency map are integrated into the final saliency map. Experimental results on three public data sets demonstrate that the proposed method consistently achieves the best performance, and significant improvement can be obtained when applying our method to existing saliency models.
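
A simplified sketch of the bootstrap step, assuming per-region features and pseudo-labels drawn from the prior maps (the feature construction and integration weights here are illustrative, not the authors' exact formulation):

```python
# Simplified sketch of the bootstrap idea (not the authors' implementation):
# a random forest regressor is trained on per-region features with pseudo-labels
# sampled from the prior saliency maps, and its output is blended with the
# initial saliency map to form the coarse map.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def bootstrap_saliency(region_features, prior_saliency, initial_saliency, alpha=0.5):
    """
    region_features : (n_regions, n_features) features per superpixel/region
    prior_saliency  : (n_regions,) pseudo-labels collected from the prior maps
    initial_saliency: (n_regions,) saliency of each region in the initial map
    """
    rf = RandomForestRegressor(n_estimators=200, random_state=0)
    rf.fit(region_features, prior_saliency)      # learn from the prior maps
    refined = rf.predict(region_features)        # regressor's saliency estimate
    coarse = alpha * initial_saliency + (1 - alpha) * refined  # integration step
    return np.clip(coarse, 0.0, 1.0)
```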


2021 ◽  
Author(s):  
Khalil Boukthir ◽  
Abdulrahman M. Qahtani ◽  
Omar Almutiry ◽  
Habib Dhahri ◽  
Adel Alimi

- A novel approach is presented to reduce annotation effort, based on Deep Active Learning, for Arabic text detection in natural scene images.
- A new Arabic text image dataset (7k images), named TSVD, collected using the Google Street View service.
- A new semi-automatic method for generating natural scene text images from the streets.
- Training samples are reduced to 1/5 of the original training size on average.
- Much less training data is needed to achieve a better Dice index: 0.84.
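
A hypothetical sketch of the kind of annotation-reduction loop implied by these highlights (the actual query strategy is not described here): a detection model ranks unlabeled scene images by prediction uncertainty, and only the top candidates are sent for manual labeling.

```python
# Hypothetical active-learning query step (assumed, not from the paper):
# a text-detection model scores unlabeled scene images and the least confident
# ones are selected for annotation.
import torch

@torch.no_grad()
def rank_images_for_annotation(model, unlabeled_batches, k=100):
    """Return indices of the k images whose predictions are least confident."""
    model.eval()
    scores, index = [], 0
    for images in unlabeled_batches:          # each batch: (B, C, H, W) tensor
        probs = torch.sigmoid(model(images))  # per-pixel text probability map
        # Uncertainty = mean closeness of pixel probabilities to 0.5.
        uncertainty = (0.5 - (probs - 0.5).abs()).mean(dim=(1, 2, 3))
        for u in uncertainty:
            scores.append((u.item(), index))
            index += 1
    scores.sort(reverse=True)
    return [i for _, i in scores[:k]]
```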


2021 ◽  
Author(s):  
Rakesh Kumar Raigar ◽  
Shubhangi Srivast ◽  
Hari Niwas Mishra

Abstract: The possibility of rapid estimation of moisture, protein, fat, free fatty acid (FFA), and peroxide value (PV) content in peanut kernels was studied by Fourier transform near-infrared (FT-NIR) spectroscopy in diffuse reflectance mode, combined with chemometric techniques. The moisture, fat, and protein contents of fresh and damaged peanut seeds, ranging from 3 to 9%, 45 to 57%, and 23 to 27%, respectively, were used to build calibration models based on partial least squares (PLS) regression. The peanut samples showed major peaks at wavenumbers of 5308.53, 4954.98, 4464.03, 4070.85, 7475.63, 8230.21, and 6178.13 cm⁻¹. First- and second-derivative mathematical preprocessing was also applied to eliminate baseline variations for the different chemical quality parameters of peanut. FFA had the lowest calibration and validation errors (0.579 and 0.738), followed by protein (0.736 and 0.765). For peanut seed quality, the lowest root mean square error of cross-validation was 0.76 and the maximum correlation coefficient (R²) was 96.8. The comprehensive results signify that FT-NIR spectroscopy can be used for rapid, non-destructive quantification of quality parameters in peanuts.
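
An illustrative sketch of this chemometric workflow, assuming Savitzky-Golay derivative preprocessing and a PLS model evaluated by cross-validation (window length, polynomial order, and number of latent variables are assumptions):

```python
# Illustrative FT-NIR calibration workflow (assumed parameters): second-derivative
# Savitzky-Golay preprocessing of the spectra followed by a PLS calibration
# evaluated with cross-validation.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

def calibrate(spectra, reference_values, n_components=8):
    """spectra: (n_samples, n_wavenumbers); reference_values: e.g. protein %."""
    # Second derivative removes baseline offsets and linear slopes.
    X = savgol_filter(spectra, window_length=15, polyorder=2, deriv=2, axis=1)
    pls = PLSRegression(n_components=n_components)
    y_cv = cross_val_predict(pls, X, reference_values, cv=10)
    rmsecv = np.sqrt(np.mean((reference_values - y_cv.ravel()) ** 2))
    pls.fit(X, reference_values)
    return pls, rmsecv
```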


2021 ◽  
Vol 50 (3) ◽  
pp. 27-28
Author(s):  
Immanuel Trummer

Introduction. We have seen significant advances in the state of the art in natural language processing (NLP) over the past few years [20]. These advances have been driven by new neural network architectures, in particular the Transformer model [19], as well as the successful application of transfer learning approaches to NLP [13]. Typically, training for specific NLP tasks starts from large language models that have been pre-trained on generic tasks (e.g., predicting obfuscated words in text [5]) for which large amounts of training data are available. Using such models as a starting point reduces task-specific training cost as well as the number of required training samples by orders of magnitude [7]. These advances motivate new use cases for NLP methods in the context of databases.
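
A minimal sketch of this transfer-learning pattern using the Hugging Face Transformers library; the model name, toy data, and hyperparameters are illustrative and not taken from the article:

```python
# Minimal sketch of the transfer-learning pattern: start from a pre-trained
# language model and fine-tune it on a small task-specific dataset.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

texts = ["SELECT count(*) FROM orders", "the weather is nice today"]  # toy examples
labels = torch.tensor([1, 0])  # e.g. "database-related" vs. not (hypothetical task)

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):  # a few epochs suffice thanks to pre-training
    outputs = model(**batch, labels=labels)
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```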


2011 ◽  
Vol 317-319 ◽  
pp. 909-914
Author(s):  
Ying Lan Jiang ◽  
Ruo Yu Zhang ◽  
Jie Yu ◽  
Wan Chao Hu ◽  
Zhang Tao Yin

The quality of agricultural products, which includes intrinsic attributes and extrinsic characteristics, is closely related to consumer health and export cost. Currently, imaging (machine vision) and spectroscopy are the two main non-destructive inspection technologies in use. Hyperspectral imaging, an emerging technology developed in recent years for detecting the quality of food and agricultural products, combines conventional imaging and spectroscopy to obtain both spatial and spectral information from an object simultaneously. This paper compares the advantages and disadvantages of imaging, spectroscopy, and hyperspectral imaging techniques, and describes the basic principles and features of hyperspectral imaging systems as well as the calibration of hyperspectral reflectance images. In addition, recent advances in the application of hyperspectral imaging to agricultural product quality inspection, both in China and abroad, are reviewed.


2019 ◽  
Vol 809 ◽  
pp. 610-614
Author(s):  
Moritz Salzmann ◽  
Ralf Schledjewski

The quality of composite materials based on natural fibres is highly influenced by the moisture content of the fibres. For high product quality in the resin transfer moulding (RTM) process, a constant moisture content has to be achieved. As the moisture content of the fibres can change relatively quickly depending on the ambient humidity, measuring it in the mould is beneficial. Near-infrared spectroscopy (NIR) is a widely used tool for moisture measurement, allowing determination of the moisture content within seconds. To do so, a calibration model with good accuracy is required. To generate the calibration model, a dry woven flax fabric is placed in a climate chamber, and the weight change is recorded together with NIR spectra. By correlating the spectra with the weight increase, a model can be developed that allows moisture content to be assigned to spectra of samples with unknown weight. This not only allows the moisture content of natural fibres to be monitored within the mould; the moisture content can also be reduced to a desired value by applying vacuum to the preheated mould before starting the resin infusion.
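
A sketch of how such a calibration could be assembled, assuming the dry weight is known and that spectra and weights are recorded synchronously (the regression method and its settings are assumptions):

```python
# Sketch of the calibration procedure (assumptions: known dry weight, spectra
# and weights recorded synchronously): the weight gain is converted to moisture
# content and regressed against the NIR spectra.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def build_moisture_model(spectra, weights, dry_weight, n_components=5):
    """spectra: (n_measurements, n_wavelengths); weights: (n_measurements,) in g."""
    moisture_pct = 100.0 * (weights - dry_weight) / dry_weight  # moisture from weight gain
    model = PLSRegression(n_components=n_components)
    model.fit(spectra, moisture_pct)
    return model

# In the mould, the fitted model predicts moisture from a new spectrum:
# current_moisture = model.predict(new_spectrum.reshape(1, -1))[0, 0]
```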


Author(s):  
Ina Vernikouskaya ◽  
Dagmar Bertsche ◽  
Tillman Dahme ◽  
Volker Rasche

Abstract Purpose: Automatic identification of interventional devices in X-ray (XR) fluoroscopy offers the potential of improved navigation during transcatheter endovascular procedures. This paper presents a prototype implementation of fully automatic 3D reconstruction of a cryo-balloon catheter during pulmonary vein isolation (PVI) procedures using deep learning approaches. Methods: We employ convolutional neural networks (CNNs) to automatically identify the cryo-balloon XR marker and catheter shaft in 2D fluoroscopy during PVI. Training data are generated by exploiting established semiautomatic techniques, including template matching and analytical graph building. A first network of U-Net architecture uses a single grayscale XR image as input and yields the mask of the XR marker. A second network of similar architecture is trained using the mask of the XR marker as additional input to the grayscale XR image for segmentation of the cryo-balloon catheter shaft mask. The structures automatically identified in two 2D images with different angulations are then used to reconstruct the cryo-balloon in 3D. Results: Automatic identification of the XR marker was successful in 78% of test cases, and in 100% for the catheter shaft. Training of the model for prediction of the XR marker mask was successful with 3426 training samples. Incorporating the XR marker mask as additional input for the model predicting the catheter shaft allowed a good training result to be achieved with only 805 training samples. The average prediction time per frame was 14.47 ms for the XR marker and 78.22 ms for the catheter shaft. Localization accuracy for the XR marker was on average 1.52 pixels, or 0.56 mm. Conclusions: In this paper, we report a novel method for automatic detection and 3D reconstruction of the cryo-balloon catheter shaft and marker from 2D fluoroscopic images. Initial evaluation yields promising results, indicating the high potential of CNNs as alternatives to the current state-of-the-art solutions.
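
An illustrative sketch of the two-stage segmentation described in the Methods (not the authors' code); segmentation_models_pytorch U-Nets serve as stand-ins, and the second network receives the frame plus the predicted marker mask as a two-channel input:

```python
# Two-stage segmentation sketch (illustrative, not the authors' implementation):
# stage 1 predicts the XR marker mask from the grayscale frame; stage 2 takes
# the frame concatenated with that mask to segment the catheter shaft.
import torch
import segmentation_models_pytorch as smp

marker_net = smp.Unet(encoder_name="resnet18", in_channels=1, classes=1)
shaft_net = smp.Unet(encoder_name="resnet18", in_channels=2, classes=1)

@torch.no_grad()
def segment_frame(frame):
    """frame: (1, 1, H, W) grayscale fluoroscopy image, H and W divisible by 32."""
    marker_mask = torch.sigmoid(marker_net(frame))        # stage 1: XR marker
    shaft_input = torch.cat([frame, marker_mask], dim=1)   # image + marker mask
    shaft_mask = torch.sigmoid(shaft_net(shaft_input))     # stage 2: catheter shaft
    return marker_mask, shaft_mask
```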

