Using Language to Learn Structured Appearance Models for Image Annotation

2010 ◽  
Vol 32 (1) ◽  
pp. 148-164 ◽  
Author(s):  
M. Jamieson ◽  
A. Fazly ◽  
S. Stevenson ◽  
S. Dickinson ◽  
S. Wachsmuth

2019 ◽  
Vol 2019 (1) ◽  
pp. 320-325 ◽  
Author(s):  
Wenyu Bao ◽  
Minchen Wei

Great efforts have been made to develop color appearance models that predict the color appearance of stimuli under various viewing conditions. CIECAM02, the most widely used color appearance model, and many other color appearance models were developed from corresponding-color datasets, including the LUTCHI data. Although the effect of adapting light level on color appearance, known as the "Hunt Effect", is well established, most corresponding-color datasets were collected within a limited range of light levels (i.e., below 700 cd/m2), much lower than daylight levels. A recent study investigating color preference for an artwork under light levels from 20 to 15,000 lx suggested that existing color appearance models may not accurately characterize the color appearance of stimuli under extremely high light levels, under the assumption that identical preference judgements imply identical color appearance. This article reports a psychophysical study designed to directly collect corresponding colors at two light levels, 100 and 3000 cd/m2 (i.e., ≈ 314 and 9420 lx). Human observers completed haploscopic color matching for four color stimuli (i.e., red, green, blue, and yellow) at the two light levels, at a CCT of 2700 or 6500 K. Although the results supported the Hunt Effect, CIECAM02 was found to have large errors under the extremely high light level, especially when the CCT was low.
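Luminance-dependent appearance shifts such as the Hunt Effect enter CIECAM02 through its luminance-level adaptation factor F_L. The sketch below evaluates that standard formula at the two luminance levels used in the study; the assumption that the adapting luminance L_A is ~20% of the stimulus luminance is a common gray-world convention, not a detail taken from this abstract.

```python
# CIECAM02 luminance-level adaptation factor F_L, the term through which
# the model captures luminance-dependent appearance effects (Hunt Effect).

def luminance_adaptation_factor(L_A: float) -> float:
    """F_L for an adapting luminance L_A in cd/m^2 (CIECAM02)."""
    k = 1.0 / (5.0 * L_A + 1.0)
    k4 = k ** 4
    return (0.2 * k4 * (5.0 * L_A)
            + 0.1 * (1.0 - k4) ** 2 * (5.0 * L_A) ** (1.0 / 3.0))

# The two luminance levels from the study (100 and 3000 cd/m^2), with
# L_A assumed to be 20% of each (gray-world assumption):
for L_v in (100.0, 3000.0):
    L_A = 0.2 * L_v
    print(f"L_v = {L_v:6.0f} cd/m^2  ->  F_L = {luminance_adaptation_factor(L_A):.3f}")
```

F_L roughly triples between the two conditions, which is how the model predicts higher colorfulness at the higher light level.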


2013 ◽  
Vol 39 (10) ◽  
pp. 1674
Author(s):  
Dong YANG ◽  
Xiu-Ling ZHOU ◽  
Ping GUO

2021 ◽  
Vol 11 (6) ◽  
pp. 522
Author(s):  
Feng-Yu Liu ◽  
Chih-Chi Chen ◽  
Chi-Tung Cheng ◽  
Cheng-Ta Wu ◽  
Chih-Po Hsu ◽  
...  

Automated detection of the region of interest (ROI) is a critical step in two-step classification systems in several medical image applications. However, key information such as model parameter selection, image annotation rules, and ROI confidence scores is essential but usually not reported. In this study, we propose a practical framework for ROI detection, analyzing hip joints seen on 7399 anteroposterior pelvic radiographs (PXRs) from three diverse sources. We present a deep-learning-based ROI detection framework utilizing a single-shot multi-box detector with a head structure customized to the characteristics of the obtained datasets. Our method achieved an average intersection over union (IoU) of 0.8115, average confidence of 0.9812, and average precision at IoU threshold 0.5 (AP50) of 0.9901 on the independent testing set, suggesting that the detected hip regions appropriately covered the main features of the hip joints. The proposed approach features flexible loose-fitting labeling, customized model design, and heterogeneous data testing. We demonstrated the feasibility of training a robust hip region detector for PXRs. This practical framework has promising potential for a wide range of medical image applications.
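The metrics reported above (average IoU = 0.8115, AP50 = AP at IoU threshold 0.5) are both built on the same box-overlap measure. A minimal sketch, assuming axis-aligned boxes given as (x_min, y_min, x_max, y_max) in pixels:

```python
# Intersection over union (IoU) for two axis-aligned bounding boxes.

def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (zero area if the boxes do not overlap).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union > 0 else 0.0

# A detection counts as correct at AP50 when IoU with ground truth >= 0.5:
pred, gt = (10, 10, 110, 110), (20, 20, 120, 120)
print(iou(pred, gt))  # ~0.68, so this detection would count at AP50
```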


2021 ◽  
Vol 11 (13) ◽  
pp. 5931
Author(s):  
Ji’an You ◽  
Zhaozheng Hu ◽  
Chao Peng ◽  
Zhiqiang Wang

Large amounts of high-quality image data are the basis and premise of high-accuracy object detection with convolutional neural networks (CNNs). However, collecting varied, high-quality ship image data in the marine environment is challenging. To address this, a novel CNN-based method is proposed to generate a large number of high-quality ship images. We obtained ship images with different perspectives and sizes by adjusting the ships' postures and sizes in three-dimensional (3D) simulation software, and then transformed the 3D ship data into 2D ship images according to the principle of pinhole imaging. We selected specific experimental scenes as background images, and the target ships from the 2D ship images were superimposed onto the background images to generate "Simulation–Real" ship images (named SRS images hereafter). Additionally, an image annotation method based on SRS images was designed. Finally, a CNN-based target detection algorithm was trained and tested on the generated SRS images. The proposed method quickly generates a large number of high-quality ship image samples and the corresponding annotation data, significantly improving the accuracy of ship detection. For labeling the SRS images, the proposed annotation method is superior to annotating images with the LabelMe and LabelImg software.
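The pinhole-imaging step, projecting a 3D point on the simulated ship into 2D image coordinates, can be sketched as follows. The focal lengths and principal point below are illustrative defaults, not values from the paper:

```python
# Pinhole projection of a camera-frame 3D point (X, Y, Z) to pixels (u, v).
# fx, fy: focal lengths in pixels; (cx, cy): principal point (assumed values).

def project_pinhole(point_3d, fx=1000.0, fy=1000.0, cx=640.0, cy=360.0):
    X, Y, Z = point_3d
    if Z <= 0:
        raise ValueError("point is behind the camera")
    u = fx * X / Z + cx
    v = fy * Y / Z + cy
    return u, v

# A point 50 m ahead and 5 m to the right lands right of image centre:
print(project_pinhole((5.0, 0.0, 50.0)))  # (740.0, 360.0)
```

Applying this projection to every visible vertex of the 3D ship model yields the 2D silhouette that is then composited onto the background scene.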


Author(s):  
Wenjia Cai ◽  
Jie Xu ◽  
Ke Wang ◽  
Xiaohong Liu ◽  
Wenqin Xu ◽  
...  

Anterior segment eye diseases account for a significant proportion of presentations to eye clinics worldwide, including diseases associated with corneal pathologies, anterior chamber abnormalities (e.g., blood or inflammation), and lens diseases. An automatic tool for segmenting anterior segment eye lesions would greatly improve the efficiency of clinical care. With research on artificial intelligence progressing in recent years, deep learning models have shown their superiority in image classification and segmentation. However, the training and evaluation of deep learning models require large amounts of expert-annotated data, which are relatively scarce in the domain of medicine. Here, the authors developed a new medical image annotation system, called EyeHealer. It is a large-scale anterior eye segment dataset with both eye structures and lesions annotated at the pixel level. Comprehensive experiments were conducted to verify its performance in disease classification and eye lesion segmentation. The results showed that semantic segmentation models outperformed medical segmentation models. This paper describes the establishment of the system for automated classification and segmentation tasks. The dataset will be made publicly available to encourage future research in this area.
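Pixel-level annotations of the kind described are typically scored with per-class overlap between a predicted and an annotated label mask. A minimal sketch, assuming masks are 2D arrays of integer class labels (0 = background; the class numbering is hypothetical):

```python
# Per-class IoU between a predicted and a ground-truth segmentation mask.
# Masks are lists of rows of integer class labels.

def per_class_iou(pred, truth, cls):
    inter = union = 0
    for row_p, row_t in zip(pred, truth):
        for p, t in zip(row_p, row_t):
            hit_p, hit_t = p == cls, t == cls
            inter += hit_p and hit_t   # pixel labeled cls in both masks
            union += hit_p or hit_t    # pixel labeled cls in either mask
    return inter / union if union else 0.0

pred  = [[0, 1, 1],
         [0, 1, 2]]
truth = [[0, 1, 1],
         [0, 2, 2]]
print(per_class_iou(pred, truth, 1))  # 2 of 3 class-1 pixels agree -> ~0.667
```

Averaging this score over all annotated structure and lesion classes gives a single mean-IoU figure for comparing segmentation models on such a dataset.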

