CLASSIFYING APPLES BY THE MEANS OF FLUORESCENCE IMAGING

Author(s):  
MARIUS C. CODREA ◽  
OLLI S. NEVALAINEN ◽  
ESA TYYSTJÄRVI ◽  
MARTIN VANDEVEN ◽  
ROLAND VALCKE

Classification of harvested apples when predicting their storage potential is an important task. This paper describes how chlorophyll a fluorescence images, taken in blue light through a red filter, can be used to classify apples. In such an image, fluorescence appears as a relatively homogeneous area broken by a number of small non-fluorescing spots, corresponding to normal corky tissue patches, to lenticels, and to damaged areas that lower the quality of the apple. The damaged regions appear more elongated, curved, or boat-shaped compared to the round, regular lenticels. We propose an apple classification method that employs a hierarchy of two neural networks. The first network classifies each spot according to geometrical criteria, and the second network uses this information together with global attributes to classify the apple. The system reached 95% accuracy on test material classified by an expert into "bad" and "good" apples.
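The two-stage hierarchy described above can be sketched as follows. The geometric spot features, network sizes, and weights here are illustrative assumptions, not the authors' actual implementation; the paper trains both networks on expert-labelled material.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, W1, b1, W2, b2):
    # One-hidden-layer perceptron with a sigmoid output unit.
    h = np.tanh(x @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

def spot_features(pixels):
    # Geometric descriptors of one non-fluorescing spot: area and
    # elongation (ratio of the principal axes of the pixel covariance),
    # separating elongated damage from round lenticels.
    pts = np.asarray(pixels, dtype=float)
    area = float(len(pts))
    ev = np.sort(np.linalg.eigvalsh(np.cov(pts.T)))
    elongation = np.sqrt(ev[1] / max(ev[0], 1e-9))
    return np.array([area, elongation])

# Hypothetical untrained weights standing in for the trained networks.
W1s, b1s = rng.normal(size=(2, 4)), np.zeros(4)   # spot network
W2s, b2s = rng.normal(size=(4, 1)), np.zeros(1)
W1a, b1a = rng.normal(size=(3, 4)), np.zeros(4)   # apple network
W2a, b2a = rng.normal(size=(4, 1)), np.zeros(1)

def classify_apple(spots, global_attrs):
    # Stage 1: score every spot (0 = lenticel-like, 1 = damage-like).
    scores = [mlp_forward(spot_features(s), W1s, b1s, W2s, b2s)[0]
              for s in spots]
    # Stage 2: combine spot-level evidence with global apple attributes.
    x = np.concatenate([[np.mean(scores), np.max(scores)], global_attrs])
    return mlp_forward(x, W1a, b1a, W2a, b2a)[0]
```

A compact round spot and a collinear (elongated) spot produce very different elongation features, which is the signal the first network thresholds on.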

2009 ◽  
Vol 36 (11) ◽  
pp. 874 ◽  
Author(s):  
Atsumi Konishi ◽  
Akira Eguchi ◽  
Fumiki Hosoi ◽  
Kenji Omasa

Spatio-temporal effects of the herbicide 3-(3,4-dichlorophenyl)-1,1-dimethylurea (DCMU) on a whole melon (Cucumis melo L.) plant were three-dimensionally monitored using combined range and chlorophyll a fluorescence imaging. The herbicide was applied to the soil in a pot, and changes in chlorophyll a fluorescence images of the plant were captured over time. The time series of chlorophyll fluorescence images was combined with a 3D polygon model of the whole plant acquired with a high-resolution portable scanning lidar. From the resulting 3D chlorophyll fluorescence model, it was observed that the increase in chlorophyll fluorescence first appeared along leaf veins and gradually expanded to the mesophyll. In addition, detailed analysis of the images showed that invisible herbicide injury occurred earlier and more severely on mature leaves than on young and old leaves. Distance from the veins, whole-leaf area, and leaf inclination influenced the extent of the injury within the leaves. These results indicate that herbicide uptake from the soil depends on structural parameters of the leaves and their microenvironments, as well as on leaf age. The findings show that 3D monitoring using combined range and chlorophyll a fluorescence imaging can be utilised for understanding spatio-temporal changes of herbicide effects on a whole plant.
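Combining a lidar-derived mesh with a fluorescence image amounts to texturing each mesh vertex with the pixel it projects to. A minimal sketch under an assumed pinhole camera model; the intrinsic matrix `K` and the nearest-pixel sampling are illustrative assumptions, not the registration procedure used in the study.

```python
import numpy as np

def project_to_image(vertices, K):
    # Pinhole projection of 3D lidar vertices (N x 3, camera frame,
    # z > 0) into integer pixel coordinates using intrinsics K (3 x 3).
    pts = vertices / vertices[:, 2:3]   # normalise by depth
    uv = pts @ K.T                      # apply camera intrinsics
    return uv[:, :2].astype(int)

def texture_vertices(vertices, fluor_image, K):
    # Assign each mesh vertex the chlorophyll-fluorescence value of the
    # pixel it projects to, yielding one frame of a 3D fluorescence
    # model; repeating over the image time series gives the 4D record.
    uv = project_to_image(vertices, K)
    h, w = fluor_image.shape
    u = np.clip(uv[:, 0], 0, w - 1)
    v = np.clip(uv[:, 1], 0, h - 1)
    return fluor_image[v, u]
```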


Author(s):  
Biluo Shen ◽  
Zhe Zhang ◽  
Xiaojing Shi ◽  
Caiguang Cao ◽  
Zeyu Zhang ◽  
...  

Abstract
Purpose: Surgery is the predominant treatment modality for human glioma, but clearly identifying tumor boundaries in the clinic remains difficult. Conventional practice involves the neurosurgeon's visual evaluation and intraoperative histological examination of dissected tissues using frozen sections, which is time-consuming and complex. The aim of this study was to develop fluorescence imaging coupled with an artificial intelligence technique to quickly and accurately determine glioma in real time during surgery.
Methods: Glioma patients (N = 23) were enrolled and injected with indocyanine green for fluorescence image-guided surgery. Tissue samples (N = 1874) were harvested during surgery of these patients, and fluorescence images in the second near-infrared window (NIR-II, 1000–1700 nm) were obtained. Deep convolutional neural networks (CNNs) combined with NIR-II fluorescence imaging (termed FL-CNN) were explored to automatically provide a pathological diagnosis of glioma in situ in real time during patient surgery. The pathological examination results were used as the gold standard.
Results: The developed FL-CNN achieved an area under the curve (AUC) of 0.945. Compared to neurosurgeons' judgment, at the same level of specificity (>80%), FL-CNN achieved a much higher sensitivity (93.8% versus 82.0%, P < 0.001) with zero time overhead. Further experiments demonstrated that FL-CNN corrected >70% of the errors made by neurosurgeons. FL-CNN was also able to rapidly predict the grade and Ki-67 level (AUC 0.810 and 0.625) of tumor specimens intraoperatively.
Conclusion: Our study demonstrates that deep CNNs are better at capturing important information from fluorescence images than surgeons' evaluation during patient surgery. FL-CNN is highly promising for providing a pathological diagnosis intraoperatively and assisting neurosurgeons in obtaining maximum resection safely.
Trial registration: ChiCTR ChiCTR2000029402. Registered 29 January 2020, retrospectively registered.
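The core inference step of a CNN classifier over a fluorescence patch can be sketched in plain numpy. The kernel bank, pooling scheme, and output head below are illustrative assumptions, not the FL-CNN architecture; the actual model is trained on the ~1874 labelled NIR-II tissue images with pathology as ground truth.

```python
import numpy as np

rng = np.random.default_rng(1)

def conv2d(img, kernels):
    # Valid-mode 2D convolution of a single-channel image with a bank
    # of kernels; returns a (n_kernels, H-kh+1, W-kw+1) feature map.
    kh, kw = kernels.shape[1:]
    H, W = img.shape
    out = np.empty((kernels.shape[0], H - kh + 1, W - kw + 1))
    for k, ker in enumerate(kernels):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[k, i, j] = np.sum(img[i:i + kh, j:j + kw] * ker)
    return out

# Hypothetical untrained parameters standing in for a trained network.
kernels = rng.normal(size=(4, 3, 3)) * 0.1
w_out = rng.normal(size=4) * 0.1

def fl_cnn_score(patch):
    # Conv -> ReLU -> global average pool -> logistic tumour probability.
    feat = np.maximum(conv2d(patch, kernels), 0.0).mean(axis=(1, 2))
    return 1.0 / (1.0 + np.exp(-feat @ w_out))
```

Real-time intraoperative use then reduces to running this forward pass on each fluorescence frame, which is why the paper reports zero time overhead relative to the surgeon's own judgment.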


2018 ◽  
Vol 166 (11-12) ◽  
pp. 782-789
Author(s):  
Alessandro A. Fortunato ◽  
Daniel Debona ◽  
Carlos E. Aucique-Pérez ◽  
Emerson Fialho Corrêa ◽  
Fabrício A. Rodrigues

2015 ◽  
Vol 163 (11-12) ◽  
pp. 968-977 ◽  
Author(s):  
Jaime Honorato Júnior ◽  
Laércio Zambolim ◽  
Henrique Silva Silveira Duarte ◽  
Carlos Eduardo Aucique-Pérez ◽  
Fabrício Ávila Rodrigues

1997 ◽  
Vol 08 (01) ◽  
pp. 137-144 ◽  
Author(s):  
N. W. Campbell ◽  
B. T. Thomas ◽  
T. Troscianko

The paper describes how neural networks may be used to segment and label objects in images. A self-organising feature map is used for the segmentation phase, and we quantify the quality of the segmentations produced, as well as the contribution made by colour and texture features. A multi-layer perceptron is trained to label the regions produced by the segmentation process. It is shown that 91.1% of the image area is correctly classified into one of eleven categories, which include cars, houses, fences, roads, vegetation and sky.
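The segmentation phase above can be sketched as a small self-organising map over per-pixel feature vectors (e.g. colour plus local texture). The 1-D map topology, training schedule, and map size are illustrative assumptions, not the paper's configuration, and the subsequent multi-layer perceptron labelling stage is omitted.

```python
import numpy as np

def train_som(features, n_units=8, epochs=20, lr=0.5, seed=0):
    # Fit a 1-D self-organising map: each unit's weight vector drifts
    # toward the samples it (and its map neighbours) best match, so the
    # units end up tiling the feature space as segment prototypes.
    rng = np.random.default_rng(seed)
    units = features[rng.choice(len(features), n_units,
                                replace=False)].astype(float)
    for t in range(epochs):
        sigma = max(n_units / 2 * (1 - t / epochs), 0.5)
        for x in features[rng.permutation(len(features))]:
            bmu = np.argmin(np.linalg.norm(units - x, axis=1))
            # Gaussian neighbourhood pulls units near the winner too.
            h = np.exp(-((np.arange(n_units) - bmu) ** 2)
                       / (2 * sigma ** 2))
            units += lr * (1 - t / epochs) * h[:, None] * (x - units)
    return units

def segment(features, units):
    # Assign every pixel to its best-matching unit -> segment labels.
    d = np.linalg.norm(features[:, None, :] - units[None], axis=2)
    return d.argmin(axis=1)
```

In the paper's pipeline, the regions produced by this assignment would then be fed to the trained multi-layer perceptron for labelling into the eleven scene categories.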

