Context Dependent Automatic Textile Image Annotation Using Networked Knowledge

Author(s):
Yosuke Furukawa,
Yusuke Kamoi,
Tatsuya Sato,
Tomohiro Takagi

This paper presents a new automatic image annotation method that estimates keywords from an image. Typical automatic image annotation systems extract features from an image and recognize keywords from them. However, this approach has two problems. First, it treats features statically: features should change depending on which keywords are attached, so keywords should not all be treated equally. Second, it does not consider the level of the keywords. Visual keywords, such as color or texture, can be recognized easily from image features, while high-level semantics such as context are hard to recognize from those features. To solve these problems, our approach recognizes context using networked specialist knowledge and recognizes keywords by changing feature values dynamically depending on that context. To evaluate our system, we conducted two experiments applying it to textile images. The results show improved accuracy and confirm the effectiveness of using networked knowledge.
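The core idea of weighting features dynamically by context can be illustrated with a minimal sketch. This is not the authors' implementation; the contexts, feature names, and weight values below are illustrative assumptions only.

```python
# Hedged sketch: context-dependent keyword scoring, where the weight of
# each image feature changes with the detected context. All contexts,
# feature names, and weights are hypothetical, for illustration only.

CONTEXT_WEIGHTS = {
    # In a hypothetical "floral print" context, color matters more.
    "floral": {"color": 0.7, "texture": 0.3},
    # In a hypothetical "weave pattern" context, texture dominates.
    "weave": {"color": 0.2, "texture": 0.8},
}

def score_keyword(features, context):
    """Combine per-feature scores using context-specific weights."""
    weights = CONTEXT_WEIGHTS[context]
    return sum(weights[name] * value for name, value in features.items())

# The same feature values yield different keyword scores per context.
features = {"color": 0.9, "texture": 0.4}
print(round(score_keyword(features, "floral"), 2))  # 0.75
print(round(score_keyword(features, "weave"), 2))   # 0.5
```

The sketch shows why a static feature treatment fails: identical low-level evidence supports a keyword strongly in one context and weakly in another.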

2020, Vol 4 (3), pp. 815
Author(s):
Muhammad Aditya Rayhan,
Kemas Muslim Lhaksmana

Mass running events have gained popularity as recreational running has become more common, and they are often held annually by various organizers. Since image documentation plays a large part in showcasing an event, many thousands of images are generated during it. Among these thousands of images, a participant is unlikely to find photos of themselves. To solve this problem, image annotation can be performed to tag images with participant attributes such as the racing bib number (RBN). Manually annotating thousands of images would be time-consuming and labor-intensive. To tackle this problem, this paper proposes an automatic image annotation system using an RBN recognition method based on the YOLOv3 algorithm. Experiments on a running event dataset show 83.0% precision, 81.5% recall, and an 82.2% F1 score for the proposed method. The implemented method therefore improves efficiency by removing the need for manual annotation over thousands of running event images.
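The reported F1 score is consistent with the stated precision and recall, since F1 is their harmonic mean. A quick check:

```python
# Verify the reported F1 score from the abstract's precision and recall.
# F1 is the harmonic mean of precision and recall.
def f1_score(precision, recall):
    return 2 * precision * recall / (precision + recall)

f1 = f1_score(0.830, 0.815)
print(f"{f1:.1%}")  # 82.2%
```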
