Precise Estimation of NDVI with a Simple NIR Sensitive RGB Camera and Machine Learning Methods for Corn Plants

Sensors ◽  
2020 ◽  
Vol 20 (11) ◽  
pp. 3208 ◽  
Author(s):  
Liangju Wang ◽  
Yunhong Duan ◽  
Libo Zhang ◽  
Tanzeel U. Rehman ◽  
Dongdong Ma ◽  
...  

The normalized difference vegetation index (NDVI) is widely used in remote sensing to monitor plant growth and chlorophyll levels. Usually, a multispectral camera (MSC) or hyperspectral camera (HSC) is required to obtain the near-infrared (NIR) and red bands for calculating NDVI. However, these cameras are expensive, heavy, difficult to geo-reference, and require professional training in imaging and data processing. On the other hand, the RGBN camera (an NIR-sensitive RGB camera, made by simply removing the NIR rejection filter from a standard RGB camera) has also been explored for measuring NDVI, but the results did not exactly match the NDVI from MSC or HSC solutions. This study demonstrates an improved NDVI estimation method using an RGBN camera-based imaging system (Ncam) and machine learning algorithms. The Ncam consisted of an RGBN camera, a filter, and a microcontroller, with a total cost of only $70–85. This new NDVI estimation solution was compared with a high-end hyperspectral camera in an experiment with corn plants under different nitrogen and water treatments. The results showed that the Ncam with a dual band-pass filter achieved high performance (R2 = 0.96, RMSE = 0.0079) in estimating NDVI with the machine learning model. Additional tests showed that, besides NDVI, this low-cost Ncam was also capable of precisely predicting corn plant nitrogen content. Thus, the Ncam is a potential alternative to MSCs and HSCs in plant phenotyping projects.
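NDVI itself is a simple per-pixel ratio of the NIR and red bands, NDVI = (NIR - Red)/(NIR + Red). A minimal sketch in Python (the function name and toy reflectance values are illustrative; the paper's machine learning model maps RGBN channels to this quantity rather than computing it directly):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized difference vegetation index, computed per pixel.

    `eps` guards against division by zero on dark pixels."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Toy reflectance values: healthy vegetation reflects strongly in NIR
# and absorbs red, so NDVI is high for vigorous plants.
print(ndvi([0.45, 0.50], [0.05, 0.08]))
```

Values close to 1 indicate dense green vegetation; values near 0 or below indicate soil, water, or stressed plants.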

2021 ◽  
Author(s):  
Aadhav Prabu

<p>Cardiopulmonary diseases are leading causes of death worldwide, accounting for nearly 15 million deaths annually. Accurate diagnosis and routine monitoring of these diseases by auscultation are crucial for early intervention and treatment. However, auscultation with a conventional stethoscope is low in amplitude and subjective, which can lead to missed or delayed treatment. My research aimed to develop a machine-learning-powered stethoscope called SmartScope to aid physicians in the rapid analysis, confirmation, and augmentation of cardiopulmonary auscultation. Additionally, SmartScope helps patients take personalized auscultation readings at home effectively, as it selects auscultation points interactively and quickly using a reinforcement learning agent (a Deep Q-Network). SmartScope consists of a Raspberry Pi-enabled device, machine-learning models, and an iOS app. Users initiate the auscultation process through the app. The app communicates with the device over MQTT messaging to record the auscultation, which is conditioned by an active band-pass filter and an amplifier. The auscultation readings are then refined by a Gaussian-shaped frequency filter and segmented by a Long Short-Term Memory network, and finally classified by two Convolutional Recurrent Neural Networks. The results are displayed in the app and on the device's LCD. After training, the machine-learning models achieved 90% accuracy for cardiopulmonary diseases, and the number of auscultation points was reduced threefold. SmartScope is an affordable, comprehensive, and user-friendly device that patients and physicians can use widely to monitor and accurately diagnose diseases such as COPD, COVID-19, asthma, and heart murmurs almost instantaneously, as time is a critical factor in saving lives.</p>
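The band-pass stage in SmartScope is an analog (active) circuit, but the same idea can be sketched digitally. A minimal zero-phase band-pass with SciPy, assuming a heart-sound band of roughly 20-400 Hz and a 4 kHz sampling rate (both are illustrative assumptions, not values from the abstract):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(signal, fs, low=20.0, high=400.0, order=4):
    """Zero-phase Butterworth band-pass; passband is an assumed heart-sound range."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, signal)  # filtfilt avoids phase distortion

fs = 4000  # Hz, hypothetical sampling rate
t = np.arange(0, 1.0, 1 / fs)
# Synthetic recording: a 100 Hz heart tone plus slow 2 Hz baseline drift.
raw = np.sin(2 * np.pi * 100 * t) + np.sin(2 * np.pi * 2 * t)
clean = bandpass(raw, fs)  # drift is removed, the tone passes through
```

In the actual device this role is played by the analog front end, with the Gaussian-shaped frequency filter applied afterwards in software.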


Drones ◽  
2020 ◽  
Vol 4 (3) ◽  
pp. 45
Author(s):  
Maria Angela Musci ◽  
Luigi Mazzara ◽  
Andrea Maria Lingua

Aircraft ground de-icing operations play a critical role in flight safety, but de-icing consumes a considerable quantity of de-icing fluid. Moreover, some pre-flight inspections are carried out with engines running; thus, a large amount of fuel is wasted and CO2 is emitted, with substantial economic and environmental impacts. In this context, the European project SEI (Spectral Evidence of Ice; reference call: MANUNET III 2018, project code: MNET18/ICT-3438) aims to provide innovative tools to identify ice on aircraft and improve the efficiency of the de-icing process. The project includes the design of a low-cost UAV (uncrewed aerial vehicle) platform and the development of a quasi-real-time ice detection methodology to make the activity faster and semi-automatic while reducing operating time and de-icing fluid use. The purpose of this work, developed within the project's activities, is to define and test the most suitable sensor using a radiometric approach and machine learning algorithms. The adopted methodology classifies ice from spectral imagery collected by two different sensors: a multispectral and a hyperspectral camera. Since the UAV prototype is under construction, the experimental analysis was performed on a simulation dataset acquired on the ground. A comparison between the two approaches and their related image-processing algorithms (random forest and support vector machine) is presented: the results show that ice can be identified in both cases. Nonetheless, the hyperspectral camera guarantees a more reliable solution, reaching a higher accuracy of classified iced surfaces.
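The two classifiers compared in the project, random forest and support vector machine, can be sketched on per-pixel spectra with scikit-learn (the band values and class separation below are invented for illustration and are not SEI project data):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical per-pixel reflectance in 5 bands: ice vs. bare surface.
ice = rng.normal(0.7, 0.05, (200, 5))
bare = rng.normal(0.4, 0.05, (200, 5))
X = np.vstack([ice, bare])
y = np.array([1] * 200 + [0] * 200)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# Fit both classifiers and report held-out accuracy.
for model in (RandomForestClassifier(random_state=0), SVC()):
    model.fit(Xtr, ytr)
    print(type(model).__name__, model.score(Xte, yte))
```

On real spectral imagery the classes overlap far more than in this toy example, which is where the hyperspectral camera's extra bands pay off.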


2020 ◽  
Vol 2020 ◽  
pp. 1-8
Author(s):  
Omer F. Akmese ◽  
Gul Dogan ◽  
Hakan Kor ◽  
Hasan Erbay ◽  
Emre Demir

Acute appendicitis is one of the most common emergency diseases in general surgery clinics, and it is most common between the ages of 10 and 30 years. Approximately 7% of the entire population is diagnosed with acute appendicitis at some point in their lives and requires surgery. This study aims to develop an easy, fast, and accurate estimation method for early acute appendicitis diagnosis using machine learning algorithms. Retrospective clinical records were analyzed with predictive data mining models, and the predictive success of the models obtained with various machine learning algorithms was compared. A total of 595 clinical records were used in the study, comprising 348 males (58.49%) and 247 females (41.51%). The gradient boosted trees algorithm achieved the best performance, with a prediction accuracy of 95.31%. In this study, an estimation method based on machine learning was developed to identify individuals with acute appendicitis. This method should benefit patients presenting with signs of appendicitis, especially in hospital emergency departments.
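As a sketch of the modeling step, a gradient-boosted-trees classifier can be trained and cross-validated in a few lines with scikit-learn. The synthetic dataset below merely stands in for the clinical records (same sample count, arbitrary features); it is not the study's data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for 595 clinical records with e.g. lab and exam features.
X, y = make_classification(n_samples=595, n_features=10,
                           n_informative=6, random_state=42)

# Gradient boosted trees, evaluated with 5-fold cross-validation.
clf = GradientBoostingClassifier(random_state=42)
scores = cross_val_score(clf, X, y, cv=5)
print(scores.mean())
```

Cross-validation like this is how the competing algorithms in such a study are typically compared on equal footing.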


2006 ◽  
Vol 24 (18_suppl) ◽  
pp. 15503-15503
Author(s):  
T. E. Johnson ◽  
G. A. Luiken ◽  
M. M. Quigley ◽  
M. Xu ◽  
R. M. Hoffman

Background: Surgery for medullary carcinoma of the thyroid can at times be technically challenging for the surgeon. Inducing the cancer cells to fluoresce would have the potential to improve the surgeon’s ability to quickly and accurately identify and excise all of the malignant tissue. We have previously demonstrated the feasibility of induced tumor fluorescence with fluorophore-tagged anti-tumor-antigen antibodies using human colon and breast cancer cell lines. We present here our results using a human medullary carcinoma of the thyroid cell line in the nude mouse model. Methods: A human medullary carcinoma of the thyroid cell line demonstrated to express CA 15-3 was used. Thyroid carcinoma cells were subcutaneously implanted in 4 nude mice (3 study mice and 1 control mouse). Three weeks after injection, tumor nodules were easily detectable. Using the tail vein method, the 3 study mice were injected with fluorophore-tagged anti-CA 15-3 and the 1 control mouse with fluorophore-tagged IgG. Mice were examined using a small-animal imaging system with a 470 nm light source and appropriate filters. They were also examined using a simple blue LED flashlight fitted with a fixed 470 nm band-pass filter for illumination and observed through filtered goggles. Results: Fluorescence of the tumor nodules in the study mice could be seen through the skin. On dissection and exposure of the tumor nodules, this fluorescence was intense and clearly distinguishable from the surrounding normal tissue using either the imaging system or the blue LED. The control mouse injected with fluorophore-tagged IgG and examined in a similar manner revealed no tumor fluorescence. Conclusions: When tumor antigens are known, fluorophore-tagged-antibody-induced fluorescence is simple, easy to perform, requires no technically complex equipment or operator expertise, and could be adapted to thyroid cancer surgery in the academic or community hospital setting. This technology would be indicated in patients undergoing initial resection of medullary carcinoma of the thyroid as well as in those undergoing resection of recurrent disease, where accurate identification of tumor tissue may be more difficult and time consuming. No significant financial relationships to disclose.


2006 ◽  
Vol 6 ◽  
pp. 691-699 ◽  
Author(s):  
Naoki Saitoh ◽  
Norimitsu Akiba

We studied fluorescence imaging of fingerprints on high-grade white paper in the deep-ultraviolet (UV) region with a nanosecond-pulsed Nd-YAG laser system consisting of a tunable laser and a cooled CCD camera. Clear fluorescence images were obtained by time-resolved imaging with a 255- to 425-nm band-pass filter, which cuts off the strong fluorescence of the paper. Although fluorescence can be imaged at any excitation wavelength between 220 and 290 nm, 230 and 280 nm are the best in terms of image quality; however, the damage due to laser illumination was smaller for 266-nm excitation than for 230- or 280-nm excitation. Absorption images of latent fingerprints on high-grade white paper were also obtained with our imaging system using 215- to 280-nm laser light. Shorter wavelengths produce better images, and the best image was obtained at 215 nm. Absorption images are also degraded slightly by laser illumination, but the damage is smaller than that for fluorescence images.


Symmetry ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 686
Author(s):  
Suliman Mohamed Fati ◽  
Amgad Muneer ◽  
Nur Arifin Akbar ◽  
Shakirah Mohd Taib

High blood pressure (BP) may lead to further health complications if not monitored and controlled, especially in critically ill patients. There are two types of blood pressure monitoring: invasive measurement, in which a central line is inserted into the patient’s body, which carries infection risks; and cuff-based measurement, which monitors BP by detecting blood-volume changes at the skin surface using a pulse oximeter or a wearable device such as a smartwatch. This paper aims to estimate blood pressure from photoplethysmogram (PPG) signals, obtained from cuff-based monitoring, using machine learning. To avoid common machine learning pitfalls such as improperly chosen classifiers and/or poorly selected features, this paper utilized the tree-based pipeline optimization tool (TPOT) to automate the machine learning pipeline and select the best regression models for estimating systolic BP (SBP) and diastolic BP (DBP) separately. As a pre-processing stage, a notch filter, a band-pass filter, and zero-phase filtering were applied by TPOT to eliminate noise inherent in the signal. Automated feature selection was then performed to select the best features for estimating BP; the SBP and DBP features were extracted using random forest (RF) and k-nearest neighbors (KNN), respectively. To train and test the model, the PhysioNet global dataset was used, which contains 32.061 million samples from 1000 subjects. Finally, the proposed approach was evaluated and validated using the mean absolute error (MAE). The errors obtained were 6.52 mmHg for SBP and 4.19 mmHg for DBP, which shows the superiority of the proposed model over related works.
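The pre-processing chain named above (notch filter, band-pass filter, zero-phase filtering) can be sketched with SciPy; the cutoff frequencies, mains frequency, and sampling rate below are assumptions for illustration, not values from the paper:

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

fs = 125  # Hz, a common PPG sampling rate (assumption)

def preprocess_ppg(x, fs):
    """Notch out powerline interference, then zero-phase band-pass the pulse band."""
    b, a = iirnotch(50.0, Q=30.0, fs=fs)  # 50 Hz mains notch (assumed region)
    x = filtfilt(b, a, x)
    # Pass roughly 0.5-8 Hz, covering resting-to-exercise pulse rates (assumption).
    b, a = butter(3, [0.5 / (fs / 2), 8.0 / (fs / 2)], btype="band")
    return filtfilt(b, a, x)  # filtfilt gives the zero-phase behavior

# Synthetic PPG: a 1.2 Hz pulse wave contaminated with 50 Hz interference.
t = np.arange(0, 10, 1 / fs)
raw = np.sin(2 * np.pi * 1.2 * t) + 0.5 * np.sin(2 * np.pi * 50 * t)
clean = preprocess_ppg(raw, fs)
```

Zero-phase filtering matters here because SBP/DBP features depend on the timing of pulse landmarks, which a causal filter would shift.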


Agronomy ◽  
2021 ◽  
Vol 11 (3) ◽  
pp. 583
Author(s):  
Ting Zhang ◽  
Yanbo Huang ◽  
Krishna N. Reddy ◽  
Pingting Yang ◽  
Xiaohu Zhao ◽  
...  

Glyphosate is the most widely used herbicide in crop production due to the widespread adoption of glyphosate-resistant (GR) crops. However, glyphosate spray drifting onto non-target crops from ground or aerial applications can cause severe injury to non-GR corn plants. To evaluate the damage to non-GR corn plants from glyphosate and the recoverability of the damaged plants, we used the hyperspectral imaging (HSI) technique in field experiments with different glyphosate application rates. This study investigated the spectral characteristics of corn plants and assessed corn plant damage from glyphosate. Based on HSI image analysis, a spectral variation pattern was observed at 1, 2, and 3 weeks after treatment (WAT) in the glyphosate-treated non-GR corn plants. It was further found that corn plants treated with glyphosate rates equal to or higher than 0.5X (where X = 0.866 kilograms of acid equivalents per hectare (kg ae/ha) is the recommended spray rate for GR corn) suffered unrecoverable damage. Using the Jeffries–Matusita distance as the spectral sensitivity criterion, three sensitive bands from the measured spectra were selected to create two spectral indices for differentiating crop recoverability, in band-ratio and normalized forms, respectively. With the two spectral indices, corn plants recoverable and unrecoverable from damage were classified with an overall accuracy greater than 95%. Then, three machine learning algorithms (k-nearest neighbors, random forest, and support vector machine) were each combined with the successive projections algorithm to create models relating selected feature spectral bands to glyphosate spray rates. The models achieved reasonable accuracy, especially for the group of recoverable plants. This study illustrates the potential of the hyperspectral imaging technique for evaluating crop damage from herbicides and the recoverability of injured plants, using different data analysis and machine learning modeling approaches, for practical weed management in crop fields.
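The band-selection criterion above, the Jeffries–Matusita (JM) distance, measures how separable two classes are in a given spectral band. A minimal univariate sketch, assuming each class's reflectance in a band is Gaussian (the function name and synthetic samples are illustrative, not from the paper):

```python
import numpy as np

def jm_distance(band_a, band_b):
    """Jeffries-Matusita separability of one band between two classes,
    using the univariate Gaussian form via the Bhattacharyya distance."""
    m1, m2 = np.mean(band_a), np.mean(band_b)
    v1, v2 = np.var(band_a), np.var(band_b)
    # Bhattacharyya distance between two 1-D Gaussians.
    b = (m1 - m2) ** 2 / (4 * (v1 + v2)) \
        + 0.5 * np.log((v1 + v2) / (2 * np.sqrt(v1 * v2)))
    return 2 * (1 - np.exp(-b))  # 0 = inseparable, 2 = fully separable

rng = np.random.default_rng(1)
recoverable = rng.normal(0.30, 0.02, 5000)    # toy reflectance samples
unrecoverable = rng.normal(0.70, 0.02, 5000)
print(jm_distance(recoverable, unrecoverable))  # close to the maximum of 2
```

Bands whose JM distance approaches 2 are the "sensitive" bands worth keeping for a recoverability index.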


2018 ◽  
Vol 58 (8) ◽  
pp. 1488 ◽  
Author(s):  
S. Rahman ◽  
P. Quin ◽  
T. Walsh ◽  
T. Vidal-Calleja ◽  
M. J. McPhee ◽  
...  

The objectives of the present study were to describe the approach used for classifying surface tissue and estimating fat depth in lamb short loins, and to validate the approach. Fat versus non-fat pixels were classified and then used to estimate the fat depth at each pixel of the hyperspectral image. Estimated reflectance, instead of image intensity or radiance, was used as the input feature for classification. The relationship between reflectance and the fat/non-fat classification label was learnt using support vector machines. Gaussian processes were used to learn a regression for fat depth as a function of reflectance. Data to train and test the machine learning algorithms were collected by scanning 16 short loins. The near-infrared hyperspectral camera captured lines of data of the side of the short loin (i.e. with the subcutaneous fat facing the camera). An advanced single-lens reflex camera took photos of the same cuts from above, such that a ground truth of fat depth could be semi-automatically extracted and associated with the hyperspectral data. A subset of the data was used to train the machine learning model, and the remainder to test it. Classifying pixels as either fat or non-fat achieved 96% accuracy. Fat depths of up to 12 mm were estimated, with an R2 of 0.59, a mean absolute bias of 1.72 mm and a root mean square error of 2.34 mm. The techniques developed and validated in the present study will be used to estimate fat coverage to predict total fat and, subsequently, lean meat yield in the carcass.
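The Gaussian-process regression step can be sketched with scikit-learn; the single reflectance feature, kernel choice, and synthetic depths below are illustrative assumptions, not the study's data:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
# Hypothetical training data: one reflectance feature per pixel vs. fat depth (mm).
X = rng.uniform(0, 1, (100, 1))
y = 12 * X.ravel() + rng.normal(0, 0.5, 100)  # synthetic depths up to ~12 mm

# RBF kernel for the smooth depth trend, WhiteKernel for measurement noise.
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X, y)

# GPs return a predictive mean and an uncertainty for each new pixel.
mean, std = gp.predict([[0.5]], return_std=True)
```

The per-pixel uncertainty is a practical advantage of GPs here: low-confidence depth estimates can be flagged rather than silently trusted.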



