A SARS-CoV-2 Microscopic Image Dataset with Ground Truth Images and Visual Features

Author(s):  
Chen Li ◽  
Jiawei Zhang ◽  
Frank Kulwa ◽  
Shouliang Qi ◽  
Ziyu Qi
2019 ◽  
Vol 77 (4) ◽  
pp. 1427-1439 ◽  
Author(s):  
Qiong Li ◽  
Xin Sun ◽  
Junyu Dong ◽  
Shuqun Song ◽  
Tongtong Zhang ◽  
...  

Abstract Phytoplankton plays an important role in the marine ecological environment and in aquaculture. However, the recognition and detection of phytoplankton still rely on manual operations. As a foundation for automating this work and reducing human labour, we present PMID2019, a phytoplankton microscopic image dataset for automated phytoplankton detection. The PMID2019 dataset contains 10 819 phytoplankton microscopic images covering 24 categories. We used microscopes to collect images of phytoplankton in a laboratory environment, and each object in the images is manually labelled with a ground-truth bounding box and category. Living cells, however, move quickly, which makes them difficult to image. In order to generalize the dataset to in situ applications, we further use Cycle-GAN to perform domain migration between dead and living cell samples, building a synthetic dataset that generates the corresponding living cell samples from the original dead ones. The PMID2019 dataset will not only benefit the future development of phytoplankton microscopic vision technology, but can also be widely used to assess the performance of state-of-the-art object detection algorithms for phytoplankton recognition. Finally, we report the performance of several state-of-the-art object detection algorithms, which may provide new ideas for monitoring marine ecosystems.
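Because each object carries a bounding box and category label, scoring a detector against PMID2019 reduces to matching predictions to ground truth by overlap. The following is a minimal sketch of that matching step; the annotation layout (dicts with `box = [x1, y1, x2, y2]` and `label`) and the 0.5 IoU threshold are illustrative assumptions, not the dataset's published format.

```python
# Minimal sketch: greedy matching of predicted boxes to ground-truth boxes by IoU.
# The annotation layout and the 0.5 IoU threshold are assumptions for illustration.

def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)

def count_true_positives(predictions, ground_truth, iou_thr=0.5):
    """Count predictions matched one-to-one to a ground-truth box of the same class."""
    matched, tp = set(), 0
    for pred in predictions:
        best, best_iou = None, iou_thr
        for i, gt in enumerate(ground_truth):
            if i in matched or pred["label"] != gt["label"]:
                continue
            score = iou(pred["box"], gt["box"])
            if score >= best_iou:
                best, best_iou = i, score
        if best is not None:
            matched.add(best)
            tp += 1
    return tp
```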


2020 ◽  
Vol 64 (5) ◽  
pp. 50411-1-50411-8
Author(s):  
Hoda Aghaei ◽  
Brian Funt

Abstract For research in the field of illumination estimation and color constancy, there is a need for ground-truth measurement of the illumination color at many locations within multi-illuminant scenes. A practical approach to obtaining such ground-truth illumination data is presented here. The proposed method involves using a drone to carry a gray ball of known percent surface spectral reflectance throughout a scene while photographing it frequently during the flight using a calibrated camera. The captured images are then post-processed. In the post-processing step, machine vision techniques are used to detect the gray ball within each frame. The camera RGB of light reflected from the gray ball provides a measure of the illumination color at that location. In total, the dataset contains 30 scenes with 100 illumination measurements on average per scene. The dataset is available for download free of charge.
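A rough sketch of the post-processing step described above: locate a roughly circular gray ball in a frame and take the mean camera RGB inside it as the local illumination estimate. The Hough-circle detection and all parameter values are placeholders for illustration, not the authors' actual pipeline.

```python
# Illustrative sketch only: detect a circular gray ball in a frame and use the
# mean RGB inside it as the illumination colour at that location. The detection
# method and parameter values are assumptions, not the authors' pipeline.
import cv2
import numpy as np

def illumination_from_gray_ball(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=200,
                               param1=100, param2=40, minRadius=10, maxRadius=150)
    if circles is None:
        return None                              # no ball found in this frame
    x, y, r = np.round(circles[0, 0]).astype(int)
    mask = np.zeros(gray.shape, dtype=np.uint8)
    cv2.circle(mask, (int(x), int(y)), int(r * 0.8), 255, -1)  # stay inside the ball
    b, g, r_mean = cv2.mean(frame_bgr, mask=mask)[:3]
    return (r_mean, g, b)                        # camera RGB of the reflected light
```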


Author(s):  
Abdullah Alfarrarjeh ◽  
Seon Ho Kim ◽  
Arvind Bright ◽  
Vinuta Hegde ◽  
Akshansh Akshansh ◽  
...  

IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 189436-189444 ◽  
Author(s):  
Yubin Qi ◽  
Jing Zhao ◽  
Yongan Shi ◽  
Guilai Zuo ◽  
Haonan Zhang ◽  
...  

2007 ◽  
Vol 07 (02) ◽  
pp. 211-225
Author(s):  
Xuelong Li ◽  
Jing Li ◽  
Dacheng Tao ◽  
Yuan Yuan

The similarity metric is a key component of query-by-example image search with visual features. After image visual features are extracted, the scheme used to compute their similarities can affect system performance dramatically: the search results are normally displayed to end users in decreasing order of similarity (equivalently, increasing order of distance) on the graphical interface. Unfortunately, conventional similarity metrics for image search with visual features usually encounter several difficulties, namely lighting, background, and viewpoint problems. From a signal processing point of view, this paper introduces a novel similarity metric that reduces these three problems to some extent. The effectiveness of the newly developed similarity metric is demonstrated by a set of experiments on a small ground-truth image set.
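As a concrete illustration of the ranking step described above (not the paper's proposed metric, which is not reproduced here), a query-by-example system scores the query feature vector against every database vector and sorts the results in decreasing order of similarity:

```python
# Generic ranking step for query-by-example search: score the query feature
# vector against all database vectors and sort by decreasing similarity.
# Cosine similarity is a stand-in here; it is not the paper's proposed metric.
import numpy as np

def rank_by_similarity(query_vec, database_vecs):
    q = query_vec / np.linalg.norm(query_vec)
    db = database_vecs / np.linalg.norm(database_vecs, axis=1, keepdims=True)
    similarities = db @ q                     # cosine similarity per database image
    order = np.argsort(-similarities)         # indices in decreasing similarity
    return order, similarities[order]
```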


2020 ◽  
Vol 138 ◽  
pp. 370-377 ◽  
Author(s):  
Juncheng Zhang ◽  
Qingmin Liao ◽  
Shaojun Liu ◽  
Haoyu Ma ◽  
Wenming Yang ◽  
...  

Author(s):  
Atif Nazir ◽  
Kashif Nazir

Due to the increase in the number of image archives, Content-Based Image Retrieval (CBIR) has gained attention from the computer vision research community. The visual contents of an image are represented in a feature space as numerical values, which constitute the image's feature vector. Images belonging to different classes may contain common visuals and shapes, which can bring the computed feature vectors of two images from separate classes close together. For this reason, feature extraction and image representation must be designed with appropriate features, as they directly affect the performance of an image retrieval system. The commonly used visual features are image spatial layout, color, texture, and shape. Feature spaces are combined to achieve a discriminating ability that cannot be obtained when the features are used separately. Therefore, in this paper, we explore low-level feature combinations based on color and shape features. We selected color moments and a color histogram to represent color, while shape is represented using invariant moments. We selected this combination because these features are reported to be intuitive, compact, and robust for image representation. We evaluated the proposed approach on the Corel, Coil, and Ground Truth (GT) image datasets, measuring precision, recall, and the time required for feature extraction. The precision, recall, and feature extraction times obtained from the proposed low-level feature fusion outperform existing CBIR research.
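A minimal sketch of the kind of low-level fusion described above, concatenating color moments, a color histogram, and Hu invariant moments into a single feature vector; the bin counts, color space, and normalisation are illustrative assumptions rather than the authors' exact settings.

```python
# Sketch of low-level feature fusion: colour moments + colour histogram + Hu
# invariant moments concatenated into one vector. Bin counts, colour space, and
# normalisation are illustrative choices, not the paper's settings.
import cv2
import numpy as np

def fused_features(image_bgr, bins=8):
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)

    # Colour moments: per-channel mean, standard deviation, and skewness.
    pixels = hsv.reshape(-1, 3).astype(np.float32)
    mean = pixels.mean(axis=0)
    std = pixels.std(axis=0)
    skew = np.cbrt(((pixels - mean) ** 3).mean(axis=0))
    moments = np.concatenate([mean, std, skew])

    # Colour histogram over H, S and V (OpenCV stores hue in [0, 180)).
    hist = cv2.calcHist([hsv], [0, 1, 2], None, [bins] * 3,
                        [0, 180, 0, 256, 0, 256]).flatten()
    hist /= hist.sum() + 1e-9

    # Hu invariant moments of the grayscale image capture global shape.
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    hu = cv2.HuMoments(cv2.moments(gray)).flatten()

    return np.concatenate([moments, hist, hu])
```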


Electronics ◽  
2021 ◽  
Vol 10 (24) ◽  
pp. 3183
Author(s):  
Cheng Li ◽  
Fei Miao ◽  
Gang Gao

Deep Neural Networks (DNNs) are commonly used methods in computational intelligence. Most prevalent DNN-based image classification methods seek to improve performance by designing complicated network architectures with large numbers of model parameters, and these large-scale models are applied to all images uniformly. However, since there are meaningful differences between images, it is difficult to classify all images accurately with a single network architecture. For example, a deeper network suits images that are difficult to distinguish, but may overfit on simple images. We should therefore selectively use different models for different images, much like the human cognition mechanism, in which different levels of neurons are activated according to the difficulty of object recognition. To this end, we propose a Hierarchical Convolutional Neural Network (HCNN) for image classification. An HCNN comprises multiple sub-networks, which can be viewed as different levels of neurons in humans, and these sub-networks classify the images progressively. Specifically, we first initialize the weight of each image and each image category, and these images and initial weights are used to train the first sub-network. Then, according to the predictions of the first sub-network, the weights of misclassified images are increased while the weights of correctly classified images are decreased, and the images with updated weights are used to train the next sub-network. Similar operations are performed for all sub-networks. In the test stage, each image passes through the sub-networks in turn; if the prediction confidence in a sub-network is higher than a given threshold, the result is output directly. Otherwise, deeper visual features are learned successively by the subsequent sub-networks until a reliable classification result is obtained or the last sub-network is reached. Experimental results show that HCNNs obtain better results than classical CNNs and existing ensemble-learning models: 2.68% higher accuracy than Residual Network 50 (Resnet50) on the ultrasonic image dataset, 1.19% higher than Resnet50 on the chimpanzee facial image dataset, and 10.86% higher than Adaboost-CNN on the CIFAR-10 dataset. Furthermore, the HCNN is extensible, since the types of sub-networks and their combinations can be adjusted dynamically.
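A minimal sketch of the cascade idea described above, using simple scikit-learn classifiers as stand-in "sub-networks" (the paper uses CNNs); the specific weight-update factors and confidence threshold are illustrative assumptions, not the published values.

```python
# Sketch of an HCNN-style cascade: boosting-like sample re-weighting between
# stages during training, and confidence-threshold routing at test time.
# The sub-network type, update factors, and threshold are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_cascade(X, y, n_stages=3, boost=2.0, damp=0.5):
    """Train a sequence of sub-models, re-weighting samples between stages."""
    weights = np.ones(len(y))                    # initial per-image weights
    stages = []
    for _ in range(n_stages):
        model = LogisticRegression(max_iter=1000)
        model.fit(X, y, sample_weight=weights)
        stages.append(model)
        pred = model.predict(X)
        weights[pred != y] *= boost              # emphasise misclassified images
        weights[pred == y] *= damp               # de-emphasise easy images
        weights /= weights.mean()                # keep weights numerically stable
    return stages

def predict_cascade(stages, x, threshold=0.9):
    """Route a sample through sub-models until one is confident enough."""
    for model in stages[:-1]:
        proba = model.predict_proba(x.reshape(1, -1))[0]
        if proba.max() >= threshold:             # confident enough: stop early
            return int(proba.argmax())
    return int(stages[-1].predict(x.reshape(1, -1))[0])  # fall back to last stage
```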

