image dataset
Recently Published Documents


TOTAL DOCUMENTS

391
(FIVE YEARS 273)

H-INDEX

19
(FIVE YEARS 6)

2022 ◽  
pp. 1-17
Author(s):  
Saleh Albahli ◽  
Ghulam Nabi Ahmad Hassan Yar

Diabetic retinopathy is an eye condition that damages the retina in patients with diabetes mellitus, where high blood sugar levels may eventually lead to macular edema. The objective of this study is to design and compare several deep learning models that detect the severity of diabetic retinopathy, determine the risk of progression to macular edema, and segment different types of disease patterns using retina images. The Indian Diabetic Retinopathy Image Dataset (IDRiD) was used for disease grading and segmentation. Since the images in the dataset vary in brightness and contrast, we employed three techniques for generating processed images from the originals: brightness, color, and contrast (BCC) enhancement; color jitter (CJ); and contrast limited adaptive histogram equalization (CLAHE). After image preprocessing, we applied pre-trained ResNet50, VGG16, and VGG19 models to each set of preprocessed images to determine both the severity of the retinopathy and the risk of macular edema. UNet was also applied to segment different types of disease patterns. To train and test these models, the image dataset was divided into training, testing, and validation sets at 70%, 20%, and 10% ratios, respectively. During training, data augmentation was also applied to increase the number of training images. Results show that for detecting the severity of retinopathy and macular edema, ResNet50 achieved the best accuracy, 60.2% and 82.5% respectively, using BCC and original images on the validation dataset. In segmenting different types of disease patterns, UNet yielded the highest testing accuracies: 65.22% and 91.09% for microaneurysms and hard exudates using BCC images, 84.83% for the optic disc using CJ images, and 59.35% and 89.69% for hemorrhages and soft exudates using CLAHE images. Thus, image preprocessing can play an important role in improving the efficacy and performance of deep learning models.
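The BCC-style enhancement and the 70/20/10 split described above can be sketched as follows. This is a minimal illustration only: the study does not report its exact brightness/contrast parameters or shuffling procedure, so the `alpha`, `beta`, and `seed` values here are assumptions.

```python
import random

import numpy as np

def adjust_brightness_contrast(img, alpha=1.2, beta=20):
    """Linear brightness/contrast enhancement on a uint8 image:
    out = clip(alpha * img + beta, 0, 255). alpha and beta are
    illustrative values, not the study's settings."""
    return np.clip(alpha * img.astype(np.float32) + beta, 0, 255).astype(np.uint8)

def split_dataset(image_paths, train=0.7, test=0.2, val=0.1, seed=0):
    """Shuffle and split image paths into train/test/validation subsets
    at the 70/20/10 ratios used in the study."""
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)
    n_train = int(len(paths) * train)
    n_test = int(len(paths) * test)
    return (paths[:n_train],
            paths[n_train:n_train + n_test],
            paths[n_train + n_test:])
```

For example, `split_dataset([f"img_{i:03d}.jpg" for i in range(100)])` yields subsets of 70, 20, and 10 images.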


Author(s):  
D. L. Abeywardhana ◽  
C. D. Dangalle ◽  
Anupiya Nugaliyadde ◽  
Yashas Mallawarachchi

2022 ◽  
pp. 120-130
Author(s):  
Udaya C. S. ◽  
Usharani M.

Thousands of plant species exist worldwide, and many of them have medicinal value. Medicinal plants play a very active role in healthcare traditions, and Ayurveda is one of the oldest systems of medical science still in use today. Proper identification of medicinal plants therefore benefits not only medicine manufacturing but also forest department personnel, life scientists, physicians, medication laboratories, governments, and the public. Manual identification works well but is usually performed by skilled practitioners with expertise in the field, is time consuming, and carries a risk of misidentification, which can cause side effects and lead to serious problems. This chapter focuses on the creation of an image dataset using a mobile-based image-acquisition tool, which helps capture structured images and reduces the effort of data cleaning. The chapter also suggests that classification can be performed accurately with an ANN, CNN, or PNN classifier.
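Of the classifiers the chapter mentions, the PNN is the most compact to sketch: each training sample acts as a Gaussian kernel, and the class with the highest mean kernel response wins. The 2-D feature vectors, plant names, and `sigma` value below are illustrative stand-ins, not the chapter's actual features or data.

```python
import numpy as np

def pnn_classify(x, train_X, train_y, sigma=0.5):
    """Minimal Probabilistic Neural Network (PNN): score each class by the
    mean Gaussian-kernel response of its training samples around x, and
    return the highest-scoring class."""
    scores = {}
    for label in set(train_y):
        pts = train_X[np.array([y == label for y in train_y])]
        d2 = ((pts - x) ** 2).sum(axis=1)            # squared distances to x
        scores[label] = np.exp(-d2 / (2 * sigma ** 2)).mean()
    return max(scores, key=scores.get)
```

In practice the feature vectors would come from the captured leaf images (e.g., shape or texture descriptors) rather than raw coordinates.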


Author(s):  
Weiming Hu ◽  
Chen Li ◽  
Xiaoyan Li ◽  
Md Mamunur Rahaman ◽  
Jiquan Ma ◽  
...  

2022 ◽  
Vol 14 (1) ◽  
pp. 180
Author(s):  
Fang Zhou ◽  
Fengjie He ◽  
Changchun Gui ◽  
Zhangyu Dong ◽  
Mengdao Xing

A target detection method based on an improved single shot multibox detector (SSD) is proposed to address the shortage of training samples in synthetic aperture radar (SAR) target detection. We propose two strategies to improve the SSD: model structure optimization and small-sample augmentation. For model structure optimization, the first step is to extract deep features of the target with residual networks instead of VGGNet. Then, the aspect ratios of the default boxes are redesigned to match the sizes of the different targets. For small-sample augmentation, besides routine image processing methods such as rotating, translating, and mirroring, sufficient training samples are obtained based on saliency map theory from machine vision. Lastly, a simulated SAR image dataset called Geometric Objects (GO) is constructed, which contains dihedral angles, surface plates, and cylinders. The experimental results on the GO simulated image dataset and the MSTAR real image dataset demonstrate that the proposed method outperforms other detection methods in SAR target detection.
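The routine augmentations the paper lists (rotation, translation, mirroring) can be sketched as below. This is a minimal version: the authors' actual rotation angles and shift amounts are not given, so the 90-degree steps and 2-pixel wrap-around shifts are assumptions.

```python
import numpy as np

def augment(img):
    """Expand one SAR target chip into several training samples using the
    routine operations named in the paper: rotation, mirroring, and small
    translations (illustrative parameters)."""
    out = [np.rot90(img, k) for k in (1, 2, 3)]        # 90/180/270-degree rotations
    out.append(np.fliplr(img))                         # horizontal mirror
    out.append(np.flipud(img))                         # vertical mirror
    out += [np.roll(img, s, axis=1) for s in (-2, 2)]  # small horizontal shifts
    return out
```

A single chip thus becomes seven augmented samples; the saliency-map-based augmentation the paper adds on top of these is model-specific and not reproduced here.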


2021 ◽  
Vol 48 (12) ◽  
pp. 1329-1334
Author(s):  
Wonseok Oh ◽  
Kangmin Bae ◽  
Yuseok Bae

Electronics ◽  
2021 ◽  
Vol 10 (24) ◽  
pp. 3183
Author(s):  
Cheng Li ◽  
Fei Miao ◽  
Gang Gao

Deep Neural Networks (DNNs) are commonly used methods in computational intelligence. Most prevalent DNN-based image classification methods seek to improve performance by designing complicated network architectures with large numbers of model parameters, and these large-scale models are applied to all images uniformly. However, since there are meaningful differences between images, it is difficult to classify all images accurately with a single network architecture. For example, a deeper network suits images that are difficult to distinguish but may overfit on simple images. Therefore, different models should be used selectively for different images, similar to the human cognition mechanism, in which different levels of neurons are activated according to the difficulty of object recognition. To this end, we propose a Hierarchical Convolutional Neural Network (HCNN) for image classification in this paper. HCNNs comprise multiple sub-networks, which can be viewed as different levels of neurons in humans, and these sub-networks classify the images progressively. Specifically, we first initialize the weight of each image and each image category, and these images and initial weights are used to train the first sub-network. Then, according to the predictions of the first sub-network, the weights of misclassified images are increased while the weights of correctly classified images are decreased, and the images with updated weights are used to train the next sub-network. Similar operations are performed on all sub-networks. In the test stage, each image passes through the sub-networks in turn. If the prediction confidence in a sub-network is higher than a given threshold, the result is output directly. Otherwise, deeper visual features are learned successively by the subsequent sub-networks until a reliable classification result is obtained or the last sub-network is reached. Experimental results show that HCNNs obtain better results than classical CNNs and existing ensemble-learning models: HCNNs achieve 2.68% higher accuracy than Residual Network 50 (ResNet50) on the ultrasonic image dataset, 1.19% higher than ResNet50 on the chimpanzee facial image dataset, and 10.86% higher than Adaboost-CNN on the CIFAR-10 dataset. Furthermore, the HCNN is extensible, since the types of sub-networks and their combinations can be adjusted dynamically.
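The two HCNN mechanisms described above, boost-style reweighting during training and threshold-gated cascading at test time, can be sketched as follows. The reweighting `factor` and confidence `threshold` are assumed hyperparameters; the paper does not state its exact values, and the sub-networks are stubbed out here.

```python
def update_weights(weights, correct, factor=1.5):
    """Training-stage reweighting: increase the weights of misclassified
    images, decrease the weights of correctly classified ones, then
    renormalize so the weights sum to 1."""
    new = [w * factor if not ok else w / factor
           for w, ok in zip(weights, correct)]
    total = sum(new)
    return [w / total for w in new]

def cascade_predict(sub_networks, image, threshold=0.9):
    """Test-stage cascade: pass the image through the sub-networks in turn
    and return the first prediction whose confidence clears the threshold;
    otherwise fall through to the last sub-network's output."""
    for net in sub_networks:
        label, confidence = net(image)
        if confidence >= threshold:
            return label
    return label
```

Easy images exit at an early, shallow sub-network, while hard images fall through to the deeper ones, which is what keeps average inference cost below that of a uniformly deep model.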


Medicina ◽  
2021 ◽  
Vol 57 (12) ◽  
pp. 1378
Author(s):  
Miguel Mascarenhas Saraiva ◽  
Tiago Ribeiro ◽  
João Afonso ◽  
Patrícia Andrade ◽  
Pedro Cardoso ◽  
...  

Background and Objectives: Device-assisted enteroscopy (DAE) allows deep exploration of the small bowel and combines diagnostic and therapeutic capacities. Suspected mid-gastrointestinal bleeding is the most frequent indication for DAE, and vascular lesions, particularly angioectasia, are the most common etiology. Nevertheless, the diagnostic yield of DAE for the detection of these lesions is suboptimal. Deep learning algorithms have shown great potential for automatic detection of lesions in endoscopy. We aimed to develop an artificial intelligence (AI) model for the automatic detection of angioectasia in DAE images. Materials and Methods: A convolutional neural network (CNN) was developed using DAE images. Each frame was labeled as normal mucosa or angioectasia. The image dataset was split into training and validation datasets, and the latter was used to assess the performance of the CNN. Results: A total of 72 DAE exams were included, and 6740 images were extracted (5345 of normal mucosa and 1395 of angioectasia). The model had a sensitivity of 88.5%, a specificity of 97.1%, and an AUC of 0.988. The image processing speed was 6.4 ms/frame. Conclusions: The application of AI to DAE may have a significant impact on the management of patients with suspected mid-gastrointestinal bleeding.
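The sensitivity and specificity figures reported above follow directly from confusion-matrix counts. A minimal sketch, with illustrative counts chosen to reproduce the reported 88.5%/97.1% rather than the study's raw numbers:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity (recall on angioectasia frames), specificity (recall on
    normal-mucosa frames), and overall accuracy from confusion-matrix
    counts: true/false positives (tp/fp) and true/false negatives (tn/fn)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Illustrative counts: 200 lesion frames, 170 normal frames.
sens, spec, acc = diagnostic_metrics(tp=177, fp=5, tn=165, fn=23)
```

Here `sens` is 0.885 and `spec` is about 0.971, matching the reported performance.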


PhytoKeys ◽  
2021 ◽  
Vol 187 ◽  
pp. 93-128
Author(s):  
Peter Wilf ◽  
Scott L. Wing ◽  
Herbert W. Meyer ◽  
Jacob A. Rose ◽  
Rohit Saha ◽  
...  

Leaves are the most abundant and visible plant organ, both in the modern world and the fossil record. Identifying foliage to the correct plant family based on leaf architecture is a fundamental botanical skill that is also critical for isolated fossil leaves, which often, especially in the Cenozoic, represent extinct genera and species from extant families. Resources focused on leaf identification are remarkably scarce; however, the situation has improved due to the recent proliferation of digitized herbarium material, live-plant identification applications, and online collections of cleared and fossil leaf images. Nevertheless, the need remains for a specialized image dataset for comparative leaf architecture. We address this gap by assembling an open-access database of 30,252 images of vouchered leaf specimens vetted to family level, primarily of angiosperms, including 26,176 images of cleared and x-rayed leaves representing 354 families and 4,076 images of fossil leaves from 48 families. The images maintain original resolution, have user-friendly filenames, and are vetted using APG and modern paleobotanical standards. The cleared and x-rayed leaves include the Jack A. Wolfe and Leo J. Hickey contributions to the National Cleared Leaf Collection and a collection of high-resolution scanned x-ray negatives, housed in the Division of Paleobotany, Department of Paleobiology, Smithsonian National Museum of Natural History, Washington, D.C.; and the Daniel I. Axelrod Cleared Leaf Collection, housed at the University of California Museum of Paleontology, Berkeley. The fossil images include a sampling of Late Cretaceous to Eocene paleobotanical sites from the Western Hemisphere held at numerous institutions, especially from Florissant Fossil Beds National Monument (late Eocene, Colorado), as well as several other localities from the Late Cretaceous to Eocene of the Western USA and the early Paleogene of Colombia and southern Argentina.
The dataset facilitates new research and education opportunities in paleobotany, comparative leaf architecture, systematics, and machine learning.

