Object Detection
Recently Published Documents

TOTAL DOCUMENTS: 11813 (FIVE YEARS: 6041)
H-INDEX: 120 (FIVE YEARS: 41)

2022, Vol 20 (4), pp. 677-685. Author(s): Rosa Gonzales-Martinez, Javier Machacuay, Pedro Rotta, Cesar Chinguel

2022, Vol 9 (2), pp. 87-93. Author(s): Muhammed Enes ATİK, Zaide DURAN, Roni ÖZGÜNLÜK

Author(s): Ali YAHYAOUY, Abdelouahed Sabri, Fadwa BENJELLOUN, Imane EL MANAA, Abdellah AARAB

2022, Vol 13 (1), pp. 1-11. Author(s): Shih-Chia Huang, Quoc-Viet Hoang, Da-Wei Jaw

Despite recent improvements in object detection techniques, many of them fail to detect objects in low-luminance images. The blurry, dimmed nature of low-luminance images leads to the extraction of vague features and, in turn, to missed detections. In addition, many existing object detection methods are trained on a mix of sufficient- and low-luminance images, which further degrades feature extraction and detection results. In this article, we propose a framework called Self-adaptive Feature Transformation Network (SFT-Net) to effectively detect objects under low-luminance conditions. The proposed SFT-Net consists of three modules: (1) a feature transformation module, (2) a self-adaptive module, and (3) an object detection module. The feature transformation module enhances the extracted features by learning a feature-domain projection in an unsupervised manner. The self-adaptive module acts as a probabilistic gate that selects the appropriate features from either the transformed or the original features, further boosting the performance and generalization ability of the framework. Finally, the object detection module accurately detects objects in both low- and sufficient-luminance images using the features produced by the self-adaptive module. Experimental results demonstrate that the proposed SFT-Net significantly outperforms state-of-the-art object detection techniques, achieving an average precision (AP) up to 6.35 and 11.89 points higher on the sufficient- and low-luminance domains, respectively.
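The abstract does not include an implementation; the sketch below is only a minimal illustration of the gating idea described above, in which a learned probability decides how much of the transformed versus the original features reaches the detector. The module names, the 1x1-convolution transform, and the soft blending formulation are assumptions for illustration, not the authors' SFT-Net code.

# Minimal, assumed sketch of the self-adaptive gating idea (not the authors' SFT-Net code).
import torch
import torch.nn as nn

class FeatureTransform(nn.Module):
    """Stand-in for the feature transformation module (assumed 1x1 conv projection)."""
    def __init__(self, channels):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.proj(x)

class SelfAdaptiveGate(nn.Module):
    """Predicts a per-image probability of relying on the transformed features
    and blends them with the original features accordingly."""
    def __init__(self, channels):
        super().__init__()
        self.score = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, original, transformed):
        p = self.score(original).view(-1, 1, 1, 1)  # probability in [0, 1]
        return p * transformed + (1.0 - p) * original

# Usage: blend backbone features before handing them to any detection head.
features = torch.randn(2, 256, 64, 64)         # dummy backbone output
transform = FeatureTransform(256)
gate = SelfAdaptiveGate(256)
adapted = gate(features, transform(features))  # would be fed to the detector
print(adapted.shape)                           # torch.Size([2, 256, 64, 64])

A soft blend is used here instead of a hard either/or choice so the sketch stays differentiable end to end; the paper's actual selection mechanism may differ.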


2022, Vol 73, pp. 102229. Author(s): Zhengxue Zhou, Leihui Li, Alexander Fürsterling, Hjalte Joshua Durocher, Jesper Mouridsen, ...

2022, Vol 246, pp. 110587. Author(s): Min-Chul Kong, Myung-Il Roh, Ki-Su Kim, Jeongyoul Lee, Jongoh Kim, ...

2022, Vol 122, pp. 108258. Author(s): Shuyu Miao, Shanshan Du, Rui Feng, Yuejie Zhang, Huayu Li, ...
Keyword(s):

Sensors, 2022, Vol 22 (2), pp. 650. Author(s): Minki Kim, Sunwon Kang, Byoung-Dai Lee

Recently, deep learning has been employed in medical image analysis for several clinical imaging modalities, such as X-ray, computed tomography, magnetic resonance imaging, and pathological tissue imaging, with excellent reported performance. Alongside these developments, deep learning technologies have rapidly spread to the hair-loss segment of the healthcare industry. Hair density measurement (HDM) detects the severity of hair loss by counting the hairs present in the occipital donor region used for transplantation. HDM is a typical object detection and classification problem that could benefit from deep learning. This study analyzed the accuracy of HDM when deep-learning-based object detection is applied and reports on the feasibility of automating HDM. The dataset for training and evaluation comprised 4492 enlarged RGB scalp images obtained from male hair-loss patients, together with annotation data containing the location of each hair follicle in the image and the follicle type according to the number of hairs it contains. EfficientDet, YOLOv4, and DetectoRS were used as object detection algorithms for performance comparison. The experimental results indicated that YOLOv4 performed best, with a mean average precision of 58.67.
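The study does not publish its pipeline; the sketch below only illustrates the counting step implied by the abstract: each detection is treated as a follicle whose class encodes how many hairs it holds, and density is the total hair count divided by the imaged scalp area. The class-to-hair-count convention, the confidence threshold, and the field-of-view area are assumptions for illustration, not values from the paper.

# Hedged sketch: turning generic follicle detections into a hair-density estimate.
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    box: tuple           # (x1, y1, x2, y2) in pixels
    follicle_class: int  # assumed convention: class k means a follicle with k hairs
    score: float         # detector confidence

def hair_density(detections: List[Detection],
                 field_area_cm2: float,
                 score_threshold: float = 0.5) -> float:
    """Total hairs per square centimetre in the imaged scalp region.

    Assumes the detector outputs one box per follicle and encodes the number
    of hairs in that follicle as its class label.
    """
    kept = [d for d in detections if d.score >= score_threshold]
    total_hairs = sum(d.follicle_class for d in kept)
    return total_hairs / field_area_cm2

# Example with dummy detections over a hypothetical 0.25 cm^2 field of view.
dets = [Detection((10, 10, 30, 40), 2, 0.91),
        Detection((55, 20, 80, 60), 1, 0.78),
        Detection((90, 15, 110, 50), 3, 0.42)]  # filtered out by the threshold
print(hair_density(dets, field_area_cm2=0.25))  # 12.0 hairs per cm^2

Any of the detectors compared in the study (EfficientDet, YOLOv4, DetectoRS) could feed this counting step, since all of them emit boxes with class labels and confidence scores.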

