Evaluation of Automated Measurement of Hair Density Using Deep Neural Networks

Sensors ◽  
2022 ◽  
Vol 22 (2) ◽  
pp. 650
Author(s):  
Minki Kim ◽  
Sunwon Kang ◽  
Byoung-Dai Lee

Recently, deep learning has been employed in medical image analysis for several clinical imaging methods, such as X-ray, computed tomography, magnetic resonance imaging, and pathological tissue imaging, and excellent performance has been reported. With the development of these methods, deep learning technologies have rapidly evolved in the healthcare industry related to hair loss. Hair density measurement (HDM) is a process used for detecting the severity of hair loss by counting the number of hairs present in the occipital donor region for transplantation. HDM is a typical object detection and classification problem that could benefit from deep learning. This study analyzed the accuracy of HDM achieved by deep learning-based object detection and reports on the feasibility of automating HDM. The dataset for training and evaluation comprised 4492 enlarged scalp RGB images obtained from male hair-loss patients, together with annotation data containing the locations of the hair follicles present in each image and the follicle type according to the number of hairs. EfficientDet, YOLOv4, and DetectoRS were compared as object detection algorithms. The experimental results indicated that YOLOv4 performed best, with a mean average precision of 58.67.
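The abstract reports a mean average precision score but includes no code. As a minimal, self-contained sketch of how detections are matched to ground-truth follicles and scored (the 0.5 IoU threshold and the rectangular-rule AP are common defaults, not details from the paper):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def average_precision(preds, gts, thr=0.5):
    """preds: list of (confidence, box); gts: list of ground-truth boxes.
    Greedily matches each prediction (highest confidence first) to an unused
    ground truth, then integrates the precision-recall curve."""
    preds = sorted(preds, key=lambda p: -p[0])
    matched, tp, fp = set(), 0, 0
    points = []
    for conf, box in preds:
        best, best_i = 0.0, None
        for i, gt in enumerate(gts):
            if i in matched:
                continue
            o = iou(box, gt)
            if o > best:
                best, best_i = o, i
        if best >= thr:
            matched.add(best_i)
            tp += 1
        else:
            fp += 1
        points.append((tp / (tp + fp), tp / len(gts)))
    ap, prev_r = 0.0, 0.0
    for precision, recall in points:
        ap += precision * (recall - prev_r)  # rectangular rule
        prev_r = recall
    return ap
```

Mean AP (mAP) would then average this quantity over the follicle-type classes.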

Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2611
Author(s):  
Andrew Shepley ◽  
Greg Falzon ◽  
Christopher Lawson ◽  
Paul Meek ◽  
Paul Kwan

Image data is one of the primary sources of ecological data used in biodiversity conservation and management worldwide. However, classifying and interpreting large numbers of images is time- and resource-intensive, particularly in the context of camera trapping. Deep learning models have been used to achieve this task but are often not suited to specific applications due to their inability to generalise to new environments and inconsistent performance. Models need to be developed for specific species cohorts and environments, but the technical skills required to achieve this are a key barrier to the accessibility of this technology to ecologists. Thus, there is a strong need to democratise access to deep learning technologies by providing an easy-to-use software application allowing non-technical users to train custom object detectors. U-Infuse addresses this issue by providing ecologists with the ability to train customised models using publicly available images and/or their own images without specific technical expertise. Auto-annotation and annotation editing functionalities minimise the burden of manually annotating and pre-processing large numbers of images. U-Infuse is a free and open-source software solution that supports both multiclass and single-class training and object detection, allowing ecologists to access deep learning technologies usually only available to computer scientists, on their own device, customised for their application, without sharing intellectual property or sensitive data. It provides ecological practitioners with the ability to (i) easily achieve object detection within a user-friendly GUI, generating a species distribution report and other useful statistics, (ii) custom train deep learning models using publicly available and custom training data, and (iii) achieve supervised auto-annotation of images for further training, with the benefit of editing annotations to ensure quality datasets.
Broad adoption of U-Infuse by ecological practitioners will improve ecological image analysis and processing by allowing significantly more image data to be processed with minimal expenditure of time and resources, particularly for camera trap images. Ease of training and use of transfer learning mean that domain-specific models can be trained rapidly and frequently updated without the need for computer science expertise or data sharing, protecting intellectual property and privacy.
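The auto-annotation workflow described above can be sketched as turning raw detector output into records a user then reviews and edits. The field names and confidence threshold below are hypothetical illustrations, not U-Infuse's actual schema:

```python
def auto_annotate(image_id, detections, min_conf=0.5):
    """Turn raw detector output into editable annotation records.
    detections: list of (label, confidence, (x1, y1, x2, y2)).
    Low-confidence detections are kept but flagged for human review."""
    records = []
    for label, conf, box in detections:
        records.append({
            "image_id": image_id,
            "label": label,
            "bbox": list(box),
            "needs_review": conf < min_conf,
        })
    return records
```

In a tool like this, flagged records would surface in the annotation editor so the user can correct or discard them before the dataset is used for further training.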


Author(s):  
Riichi Kudo ◽  
Kahoko Takahashi ◽  
Takeru Inoue ◽  
Kohei Mizuno

Various smart connected devices are emerging, such as automated driving cars, autonomous robots, and remote-controlled construction vehicles. These devices rely on vision systems to conduct their operations without collision. Thanks to great advances in deep learning, machine vision technology is increasingly able to perceive a device's own position and/or its surrounding environment. The accurate perception information of these smart connected devices makes it possible to predict wireless link quality (LQ). This paper proposes an LQ prediction scheme that applies machine learning to HD camera output to forecast the influence of surrounding mobile objects on LQ. The proposed scheme uses deep learning-based object detection and learns the relationship between detected object positions and the LQ. Outdoor experiments show that the proposed scheme can accurately predict throughput approximately 1 s into the future in a 5.6-GHz wireless LAN channel.
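The pipeline of detections in, predicted link quality out, can be caricatured with a single hand-crafted feature and ordinary least squares. The occupancy feature, toy data, and linear model below are illustrative assumptions; the paper learns richer models from real detections and throughput measurements:

```python
def occupancy_feature(boxes, frame_w, frame_h):
    """Fraction of the frame covered by detected mobile objects
    (overlap between boxes is ignored for simplicity)."""
    covered = sum((x2 - x1) * (y2 - y1) for x1, y1, x2, y2 in boxes)
    return covered / (frame_w * frame_h)

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Toy data: larger occupancy by mobile objects -> lower future throughput.
feats = [0.0, 0.1, 0.2, 0.3]
tputs = [100.0, 80.0, 60.0, 40.0]  # Mbit/s, measured ~1 s later
a, b = fit_line(feats, tputs)
```

Given a new frame's detections, `a * occupancy_feature(boxes, w, h) + b` would be the throughput forecast under this toy model.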


2020 ◽  
Vol 10 (14) ◽  
pp. 4744
Author(s):  
Hyukzae Lee ◽  
Jonghee Kim ◽  
Chanho Jung ◽  
Yongchan Park ◽  
Woong Park ◽  
...  

The arena fragmentation test (AFT) is one of the tests used to design an effective warhead. Conventionally, complex and expensive measuring equipment is used for testing a warhead and measuring important factors such as the size, velocity, and spatial distribution of fragments where the fragments penetrate steel target plates. In this paper, instead of using specific sensors and equipment, we proposed the use of a deep learning-based object detection algorithm to detect fragments in the AFT. To this end, we acquired many high-speed videos and built an AFT image dataset with bounding boxes of warhead fragments. Our method fine-tuned an existing object detection network, Faster R-CNN (region-based convolutional neural network), on this dataset, modifying the network's anchor boxes. We also employed a novel temporal filtering method, previously demonstrated as an effective non-fragment filtering scheme in our earlier image processing-based fragment detection approach, to capture only the first penetrating fragments among all detected fragments. The performance of the proposed method was comparable to that of a sensor-based system under the same experimental conditions, and a quantitative comparison showed that it significantly outperformed our earlier image processing-based method, producing outstanding results in finding exact fragment positions.
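One possible reading of the temporal filtering step is to suppress re-detections of spots that have already been penetrated, keeping only the first detection per region across frames. The grid-cell suppression scheme and 50-pixel cell size below are my own simplification, not the paper's exact method:

```python
def first_penetrations(frames, cell=50):
    """frames: list (in time order) of lists of (x, y) fragment centers.
    Returns (frame_idx, x, y) for the first detection in each spatial
    grid cell, discarding later re-detections of the same spot."""
    seen = set()
    firsts = []
    for t, detections in enumerate(frames):
        for x, y in detections:
            key = (int(x // cell), int(y // cell))
            if key not in seen:
                seen.add(key)
                firsts.append((t, x, y))
    return firsts
```

A detection at (12, 11) in frame 1 would be dropped if (10, 10) was already seen in frame 0, since both fall in the same 50-pixel cell.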


2019 ◽  
Vol 35 (4) ◽  
pp. 328-337
Author(s):  
Byeongseop Kim ◽  
Seunghyeok Son ◽  
Cheolho Ryu ◽  
Jong Gye Shin

Curved hull plate forming, the process of forming a flat plate into a curved surface that can fit into the outer shell of a ship’s hull, can be achieved through either cold or thermal forming processes, with the latter further subcategorizable into line or triangle heating. The appropriate forming process is determined from the plate shape and surface classification, which must be determined in advance to establish a precise production plan. In this study, an algorithm to extract two-dimensional features of constant size from three-dimensional design information was developed to enable the application of machine and deep learning technologies to hull plates with arbitrary polygonal shapes. Several candidate classifiers were implemented by applying learning algorithms to datasets comprising calculated features and labels corresponding to various hull plate types, with the performance of each classifier evaluated using cross-validation. A classifier applying a convolutional neural network as a deep learning technology was found to have the highest prediction accuracy, which exceeded the accuracies obtained in previous hull plate classification studies. The results of this study demonstrate that it is possible to automatically classify hull plates with high accuracy using deep learning technologies and that a perfect level of classification accuracy can be approached as further plate data are obtained.
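One simple way to obtain constant-size 2D features from plates with arbitrary polygonal outlines is to sample the designed surface height on a fixed grid over the plate's bounding box. This sampling scheme is a plausible illustration of the idea, not the paper's exact algorithm:

```python
def height_grid(surface_fn, bbox, n=8):
    """Sample a design surface z = f(x, y) on an n x n grid over the
    plate's bounding box, yielding a constant-size 2D feature map
    regardless of the plate's polygonal outline."""
    (x0, y0), (x1, y1) = bbox
    grid = []
    for i in range(n):
        y = y0 + (y1 - y0) * i / (n - 1)
        row = []
        for j in range(n):
            x = x0 + (x1 - x0) * j / (n - 1)
            row.append(surface_fn(x, y))
        grid.append(row)
    return grid
```

A fixed n x n map like this can be fed to a CNN classifier directly, which is what makes the convolutional approach applicable to plates of any shape.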


2020 ◽  
Vol 6 (3) ◽  
pp. 27-32
Author(s):  
Artur S. Ter-Levonian ◽  
Konstantin A. Koshechkin

Introduction: Nowadays, the increasing amount of information creates the need to replace and update data processing technologies. One of the tasks of clinical pharmacology is to create the right combination of drugs for the treatment of a particular disease. It takes months and even years to create a treatment regimen. Using machine learning (in silico) makes it possible to predict the right combination of drugs and skip experimental steps in a study that require substantial time and expense. Gradual preparation is needed for the deep learning of drug synergy, starting from creating a base of drugs, their characteristics and ways of interacting. Aim: Our review aims to draw attention to the prospect of introducing deep learning technology to predict possible combinations of drugs for the treatment of various diseases. Materials and methods: Literature review of articles based on PubMed and related bibliographic resources over the past 5 years (2015–2019). Results and discussion: In the analyzed articles, machine or deep learning completed the assigned tasks. It was able to determine the most appropriate combinations for the treatment of certain diseases and select the necessary regimen and doses. In addition, using this technology, new combinations have been identified that may be further involved in preclinical studies. Conclusions: From the analysis of the articles, we obtained evidence of the positive effects of deep learning in selecting “key” combinations for further stages of preclinical research.


2014 ◽  
Vol 2014 ◽  
pp. 1-8 ◽  
Author(s):  
Anja Vujovic ◽  
Véronique Del Marmol

Female pattern hair loss (FPHL) is the most common hair loss disorder in women. Initial signs may develop during the teenage years, leading to progressive hair loss with a characteristic pattern distribution. The condition is characterized by progressive replacement of terminal hair follicles over the frontal and vertex regions by miniaturized follicles, which leads progressively to a visible reduction in hair density. Women diagnosed with FPHL may undergo significant impairment of quality of life. FPHL diagnosis is mostly clinical. Depending on patient history and clinical evaluation, further diagnostic testing may be useful. The purpose of this paper is to review the current knowledge about the epidemiology, pathogenesis, clinical manifestations, and diagnosis of FPHL.


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Biyun Yang ◽  
Yong Xu

Deep learning is known as a promising multifunctional tool for processing images and other big data. By assimilating large amounts of heterogeneous data, deep-learning technology provides reliable prediction results for complex and uncertain phenomena. Recently, it has been increasingly used by horticultural researchers to make sense of the large datasets produced during planting and postharvest processes. In this paper, we provided a brief introduction to deep-learning approaches and reviewed 71 recent research works in which deep-learning technologies were applied in the horticultural domain for variety recognition, yield estimation, quality detection, stress phenotyping detection, growth monitoring, and other tasks. We described in detail the application scenarios reported in the relevant literature, along with the applied models and frameworks, the used data, and the overall performance results. Finally, we discussed the current challenges and future trends of deep learning in horticultural research. The aim of this review is to assist researchers and provide guidance for them to fully understand the strengths and possible weaknesses when applying deep learning in horticultural sectors. We also hope that this review will encourage researchers to explore some significant examples of deep learning in horticultural science and will promote the advancement of intelligent horticulture.


Author(s):  
Rajeshvaree Ravindra Karmarkar ◽  
Prof. V. N. Honmane

As object recognition technology has developed recently, various technologies have been applied to autonomous vehicles, robots, and industrial facilities. However, the benefits of these technologies are not reaching the visually impaired, who need them the most. This paper proposes an object detection system for the blind using deep learning technologies. Furthermore, a voice guidance technique is used to inform sight-impaired persons of the location of objects. The object recognition deep learning model utilizes the You Only Look Once (YOLO) algorithm, and a voice announcement is synthesized using text-to-speech (TTS) to make it easier for the blind to get information about objects. As a result, the system implements efficient object detection that helps the blind find objects in a specific space without help from others, and it is evaluated through experiments to verify its performance.
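A toy version of the "location to voice guidance" step might look like the following. The three-way left/ahead/right split and the phrasing are assumptions; the actual system would pass the resulting message to a TTS engine:

```python
def direction_phrase(label, box, frame_w):
    """Map a detected object's horizontal position in the frame to a
    spoken phrase. box is (x1, y1, x2, y2) in pixels; the returned
    string would be handed to a text-to-speech engine."""
    cx = (box[0] + box[2]) / 2
    if cx < frame_w / 3:
        side = "on your left"
    elif cx < 2 * frame_w / 3:
        side = "ahead of you"
    else:
        side = "on your right"
    return f"{label} {side}"
```

For example, a chair detected near the left edge of a 640-pixel-wide frame would produce the announcement "chair on your left".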


2018 ◽  
Vol 2 (3) ◽  
pp. 47 ◽  
Author(s):  
Mihalj Bakator ◽  
Dragica Radosav

In this review, the application of deep learning for medical diagnosis is addressed. A thorough analysis of various scientific articles in the domain of deep neural network applications in the medical field has been conducted. More than 300 research articles were obtained, and after several selection steps, 46 articles were presented in more detail. The results indicate that convolutional neural networks (CNN) are the most widely represented when it comes to deep learning and medical image analysis. Furthermore, based on the findings of this article, it can be noted that the application of deep learning technology is widespread, but the majority of applications are focused on bioinformatics, medical diagnosis and other similar fields.


Electronics ◽  
2020 ◽  
Vol 9 (9) ◽  
pp. 1387
Author(s):  
Ming-Fong Tsai ◽  
Pei-Ching Lin ◽  
Zi-Hao Huang ◽  
Cheng-Hsun Lin

Image identification, machine learning and deep learning technologies have been applied in various fields. However, current applications of image identification focus on detecting and identifying objects in a single momentary picture. This paper not only proposes multiple feature dependency detection to identify key parts of pets (mouth and tail) but also combines the meaning of the pet’s bark (growl and cry) to identify the pet’s mood and state. Changes in the pet’s coat and age must also be considered, so we add an automatic optimization identification subsystem that responds to such changes in real time. Each time the featured parts are successfully identified, our system captures images of those parts and stores them as effective samples for subsequent training, improving the identification ability of the system. When the identification result is transmitted to the owner, the owner receives the pet’s current mood and state in real time. According to the experimental results, our system uses a Faster R-CNN model to improve the accuracy of traditional image identification by 27.47%, 68.17% and 26.23% for the happy, angry and sad moods, respectively.
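The sample-harvesting idea behind the automatic optimization subsystem can be sketched as keeping only confident feature-part detections as new training samples. The confidence threshold and tuple layout are assumptions, not details from the paper:

```python
def harvest_samples(detections, min_conf=0.8):
    """Keep only confident feature-part detections (e.g. mouth, tail)
    as new training samples, so the model can adapt to changes in the
    pet's coat and age over time.
    detections: list of (part, confidence, crop) tuples."""
    return [(part, crop) for part, conf, crop in detections if conf >= min_conf]
```

Crops collected this way would be appended to the training set and used to periodically retrain the detector.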

