facial image
Recently Published Documents


TOTAL DOCUMENTS

754
(FIVE YEARS 212)

H-INDEX

28
(FIVE YEARS 6)

2022 ◽  
Author(s):  
David Moss

Convolutional neural networks (CNNs), inspired by biological visual cortex systems, are a powerful category of artificial neural networks that can extract the hierarchical features of raw data, greatly reducing network parametric complexity and enhancing predictive accuracy. They are of significant interest for machine learning tasks such as computer vision, speech recognition, playing board games and medical diagnosis [1-7]. Optical neural networks offer the promise of dramatically accelerating computing speed to overcome the inherent bandwidth bottleneck of electronics. Here, we demonstrate a universal optical vector convolutional accelerator operating beyond 10 tera-operations per second (TOPS), generating convolutions of 250,000-pixel images with 8-bit resolution for 10 kernels simultaneously, enough for facial image recognition. We then use the same hardware to sequentially form a deep optical CNN with ten output neurons, successfully recognizing all ten digits in 900-pixel handwritten digit images with 88% accuracy. Our results are based on simultaneously interleaving temporal, wavelength and spatial dimensions, enabled by an integrated microcomb source. We show that this approach is scalable and trainable to much more complex networks for demanding applications such as unmanned vehicles and real-time video recognition.
Keywords: optical neural networks, neuromorphic processor, microcomb, convolutional accelerator
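The operation the accelerator parallelizes across wavelength channels is an ordinary sliding-window convolution, one kernel per channel. A minimal NumPy sketch of the electronic equivalent follows; the function name and array shapes are illustrative, not from the paper:

```python
import numpy as np

def conv2d_multi_kernel(image, kernels):
    """Valid-mode 2D convolution of one image with a stack of kernels.

    image:   (H, W) array of pixel values.
    kernels: (K, kh, kw) array; the K kernels are applied "in parallel",
             as the optical accelerator does across wavelength channels.
    Returns (K, H-kh+1, W-kw+1) feature maps.
    """
    K, kh, kw = kernels.shape
    H, W = image.shape
    out = np.zeros((K, H - kh + 1, W - kw + 1))
    for k in range(K):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                # Elementwise multiply-accumulate over one kernel window.
                out[k, i, j] = np.sum(image[i:i + kh, j:j + kw] * kernels[k])
    return out
```

Each output value is a vector dot product between a kernel and an image patch, which is why interleaving many such products in time and wavelength yields the reported TOPS-scale throughput.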


Iproceedings ◽  
10.2196/35431 ◽  
2021 ◽  
Vol 6 (1) ◽  
pp. e35431
Author(s):  
Hyeon Ki Jeong ◽  
Christine Park ◽  
Ricardo Henao ◽  
Meenal Kheterpal

Background In the era of increasing tools for automatic image analysis in dermatology, new machine learning models require high-quality image data sets. Facial image data are needed for developing models to evaluate attributes such as redness (acne and rosacea models), texture (wrinkles and aging models), pigmentation (melasma, seborrheic keratoses, aging, and postinflammatory hyperpigmentation), and skin lesions. Deidentifying facial images is critical for protecting patient anonymity. Traditionally, journals have required facial feature concealment, typically covering the eyes, but these measures are largely insufficient to meet the ethical and legal requirements of the Health Insurance Portability and Accountability Act for patient privacy. Currently, facial feature deidentification is a challenging task given the lack of expert consensus and of testing infrastructure for adequate automatic and manual facial image detection. Objective This study aimed to review the current literature on automatic facial deidentification algorithms and to assess their utility in dermatology use cases, defined by preservation of skin attributes (redness, texture, pigmentation, and lesions) and data utility. Methods We conducted a systematic search using a combination of headings and keywords to encompass the concepts of facial deidentification and privacy preservation. The MEDLINE (via PubMed), Embase (via Elsevier), and Web of Science (via Clarivate) databases were queried from inception to May 1, 2021. Studies with ineligible designs or outcomes were excluded during the screening and review process. Results A total of 18 studies, largely focusing on generative adversarial networks (GANs), were included in the final review, reporting various methodologies of facial deidentification algorithms for still and video images. GAN-based studies were included owing to the algorithm's capacity to generate high-quality, realistic images. 
Study methods were rated individually for their utility for use cases in dermatology, pertaining to skin color or pigmentation and texture preservation, data utility, and human detection, by 3 human reviewers. We found that most studies notable in the literature address facial feature and expression preservation while sacrificing skin color, texture, and pigmentation, which are critical features for dermatology-related data utility. Conclusions Overall, facial deidentification algorithms have made notable advances, such as disentanglement and face-swapping techniques, while producing realistic faces for protecting privacy. However, they are sparse and currently not suitable for complete preservation of skin texture, color, and pigmentation quality in facial photographs. Using the current advances in artificial intelligence for facial deidentification summarized herein, a novel approach is needed to ensure greater patient anonymity while increasing data access for automated image analysis in dermatology. Conflicts of Interest None declared.
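The traditional concealment the review finds insufficient amounts to a crude local operation on a detected region. A hypothetical sketch of eye-region pixelation is below; in practice the bounding box would come from a face-landmark detector, which is omitted here, and this approach neither guarantees anonymity nor preserves the skin attributes the review cares about:

```python
import numpy as np

def pixelate_region(img, box, block=8):
    """Coarsely pixelate a rectangular region (e.g., the eye strip) of an
    RGB image by replacing each block x block tile with its mean color.

    img: (H, W, 3) uint8 array; box: (y0, y1, x0, x1). Returns a copy,
    leaving pixels outside the box untouched.
    """
    out = img.copy()
    y0, y1, x0, x1 = box
    region = out[y0:y1, x0:x1]
    h, w = region.shape[:2]
    for i in range(0, h, block):
        for j in range(0, w, block):
            tile = region[i:i + block, j:j + block]
            tile[:] = tile.mean(axis=(0, 1))  # cast back to uint8 on assign
    return out
```

GAN-based methods replace this kind of destructive masking with synthesized faces, which is why they can preserve realism, though, as the review notes, often at the cost of skin color, texture, and pigmentation fidelity.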



Sensors ◽  
2021 ◽  
Vol 21 (23) ◽  
pp. 7923
Author(s):  
Dae-Yeol Kim ◽  
Kwangkee Lee ◽  
Chae-Bong Sohn

In general, facial image-based remote photoplethysmography (rPPG) methods use color-based and patch-based region-of-interest (ROI) selection methods to estimate the blood volume pulse (BVP) and beats per minute (BPM). Anatomically, the thickness of the skin is not uniform across all areas of the face, so the same diffuse reflection information cannot be obtained from each area. In recent years, various studies have presented experimental results for their ROIs but did not provide a valid rationale for the proposed regions. In this paper, to examine the effect of skin thickness on the accuracy of the rPPG algorithm, we conducted an experiment on 39 anatomically divided facial regions. Experiments were performed with seven algorithms (CHROM, GREEN, ICA, PBV, POS, SSR, and LGI) using the UBFC-rPPG and LGI-PPGI datasets, considering 29 selected regions and two adjusted regions out of the 39 anatomically classified regions. We proposed a BVP similarity evaluation metric to find regions with high accuracy. We conducted additional experiments on the TOP-5 and BOT-5 regions and presented the validity of the proposed ROIs. The TOP-5 regions showed relatively high accuracy compared to the ROIs used by previous algorithms, suggesting that the anatomical characteristics of the ROI should be considered when developing a facial image-based rPPG algorithm.
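Of the seven algorithms compared, GREEN is the simplest: average the green channel over the ROI in each frame, detrend, and read the dominant frequency in the plausible heart-rate band. A minimal sketch under those assumptions (the function name and the 0.7-4 Hz band are illustrative choices, not taken from the paper):

```python
import numpy as np

def bpm_from_green(frames, fps):
    """Estimate beats per minute from an ROI video via the GREEN method:
    per-frame green-channel mean, mean removal, then the dominant FFT
    frequency within a plausible heart-rate band (0.7-4 Hz, i.e. 42-240 BPM).

    frames: (T, H, W, 3) array of RGB frames cropped to the ROI.
    """
    signal = frames[:, :, :, 1].mean(axis=(1, 2))  # green channel is index 1
    signal = signal - signal.mean()                # remove the DC component
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    power = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= 0.7) & (freqs <= 4.0)
    peak = freqs[band][np.argmax(power[band])]     # dominant pulse frequency
    return 60.0 * peak
```

The paper's point is that which pixels feed `frames` matters: skin thickness varies across the face, so the strength of the pulsatile diffuse-reflection signal differs by anatomical region.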


2021 ◽  
Vol 7 ◽  
pp. e760
Author(s):  
Shih-Kai Hung ◽  
John Q. Gan

Image data collection and labelling is costly or difficult in many real applications. Generating diverse and controllable images using conditional generative adversarial networks (GANs) for data augmentation from a small dataset is promising but challenging, as deep convolutional neural networks generally need a large training dataset to achieve reasonable performance. However, unlabeled and incomplete features (e.g., broken edges, simplified lines, hand-drawn sketches, discontinuous geometric shapes, etc.) can be conveniently obtained by pre-processing the training images and can be used for image data augmentation. This paper proposes a conditional GAN framework for facial image augmentation that uses a very small training dataset and incomplete or modified edge features as conditional input for diversity. The proposed method defines a new domain or space for refining interim images, preventing the overfitting caused by a very small training dataset and enhancing tolerance of the distortions caused by incomplete edge features, which effectively improves the quality and diversity of facial image augmentation. Experimental results show that the proposed method can generate high-quality images of good diversity when the GANs are trained using very sparse edges and a small number of training samples. Compared to state-of-the-art edge-to-image translation methods that directly convert sparse edges to images, when using a small training dataset the proposed conditional GAN framework generates facial images with desirable diversity and acceptable distortions for dataset augmentation, and it significantly outperforms existing methods in the quality of synthesised images, evaluated by Fréchet Inception Distance (FID) and Kernel Inception Distance (KID) scores.
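The sparse edge features used as conditional input can be produced by any edge detector followed by aggressive thinning. As a hypothetical stand-in for the paper's pre-processing (which would more typically use Canny or a learned detector such as HED), gradient-magnitude thresholding already yields a sparse binary map:

```python
import numpy as np

def sparse_edges(gray, keep=0.05):
    """Extract a sparse binary edge map from a grayscale image by keeping
    only the strongest gradient responses.

    gray: (H, W) array; keep: fraction of pixels to retain as edges.
    Returns a boolean (H, W) mask where True marks an edge pixel.
    """
    gy, gx = np.gradient(gray.astype(float))   # row- and column-wise gradients
    mag = np.hypot(gx, gy)                      # gradient magnitude
    thresh = np.quantile(mag, 1.0 - keep)       # cutoff for the top `keep` share
    return mag >= thresh
```

Lowering `keep` produces ever sparser, more "incomplete" conditions, which is precisely the regime where the paper reports its framework tolerating distortions better than direct edge-to-image translation.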


2021 ◽  
Author(s):  
Chien-Hung Lin ◽  
Yi-Lun Pan ◽  
Ja-Ling Wu

BMJ Open ◽  
2021 ◽  
Vol 11 (11) ◽  
pp. e047549
Author(s):  
Zhaohui Su ◽  
Bin Liang ◽  
Feng Shi ◽  
J Gelfond ◽  
Sabina Šegalo ◽  
...  

Introduction Deep learning techniques are gaining momentum in medical research. Evidence shows that deep learning has advantages over humans in image identification and classification, such as facial image analysis in detecting people's medical conditions. While positive findings are available, little is known about the state-of-the-art of deep learning-based facial image analysis in the medical context. For the consideration of patients' welfare and the development of the practice, a timely understanding of the challenges and opportunities faced by research on deep learning-based facial image analysis is needed. To address this gap, we aim to conduct a systematic review to identify the characteristics and effects of deep learning-based facial image analysis in medical research. Insights gained from this systematic review will provide a much-needed understanding of the characteristics, challenges, as well as opportunities in deep learning-based facial image analysis applied in the contexts of disease detection, diagnosis and prognosis. Methods Databases including PubMed, PsycINFO, CINAHL, IEEEXplore and Scopus will be searched for relevant studies published in English in September 2021. Titles, abstracts and full-text articles will be screened to identify eligible articles. A manual search of the reference lists of the included articles will also be conducted. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses framework was adopted to guide the systematic review process. Two reviewers will independently examine the citations and select studies for inclusion. Discrepancies will be resolved by group discussion until a consensus is reached. Data will be extracted based on the research objective and selection criteria adopted in this study. Ethics and dissemination As the study is a protocol for a systematic review, ethical approval is not required. 
The study findings will be disseminated via peer-reviewed publications and conference presentations. PROSPERO registration number: CRD42020196473.

