Mobile Remote Photoplethysmography: A Real-Time Realization Perspective

Author(s):  
Hooseok Lee ◽  
Hoon Ko ◽  
Heewon Chung ◽  
Yunyoung Nam ◽  
Sangjin Hong ◽  
...  

Abstract Remote photoplethysmography (rPPG) sensors have attracted significant attention because they enable the remote monitoring of instantaneous heart rates (HRs) and thus do not require any additional devices to be worn on the fingers or wrists. In this study, we mounted rPPG sensors on a robot for active and autonomous instantaneous HR (R-AAIH) estimation. We then proposed an algorithm that provides accurate instantaneous HRs and can run in real time alongside the vision and robot manipulation algorithms. By simplifying the extraction of facial skin images using saturation (S) values in the HSV color space and selecting pixels based on the most frequent S value in the face image, we achieved reliable HR assessment. The results of the proposed algorithm using the R-AAIH were evaluated through rigorous comparison with existing algorithms on the UBFC-RPPG dataset (n = 42). Our algorithm exhibited an average absolute error (AAE) of 0.71 beats per minute (bpm). The developed algorithm is simple, and its processing time is less than 1 s (275 ms for an 8-s window). The algorithm was further validated on our own dataset (BAMI-RPPG dataset [n = 14]) with an AAE of 0.82 bpm.
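As a rough illustration of the saturation-based skin selection described above, the following Python/OpenCV sketch keeps face pixels whose S value lies near the most frequent S value of the face crop and averages the green channel over them as one raw rPPG sample; the tolerance band and the choice of the green channel are assumptions, not the authors' exact implementation.

```python
import cv2
import numpy as np

def skin_mask_from_saturation(face_bgr, band=20):
    """Select facial-skin pixels whose saturation lies near the most
    frequent S value of the face crop (illustrative sketch only)."""
    hsv = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2HSV)
    s = hsv[:, :, 1]
    hist = cv2.calcHist([s], [0], None, [256], [0, 256]).ravel()
    s_mode = int(np.argmax(hist))                  # most frequent S value
    lo, hi = max(0, s_mode - band), min(255, s_mode + band)
    return (((s >= lo) & (s <= hi)) * 255).astype(np.uint8)

def rppg_sample(face_bgr):
    """One raw rPPG sample: mean green intensity over the skin mask."""
    mask = skin_mask_from_saturation(face_bgr)
    return cv2.mean(face_bgr, mask=mask)[1]        # index 1 = green channel (BGR)
```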

2021 ◽  
Vol 22 (1) ◽  
pp. 68-77
Author(s):  
Izzati Muhimmah ◽  
Nurul Fatikah Muchlis ◽  
Arrie Kurniawardhani

One common facial skin problem is redness. On-site examination currently relies on direct observation by doctors together with the patient's medical history. However, some patients are reluctant to consult a doctor because of embarrassment or prohibitive costs. This study attempts to use digital image processing algorithms to analyze a patient's facial skin condition automatically, in particular to detect redness in face images. The method used to detect red objects on facial skin in this research is the Redness method. The output of the Redness method is optimized by feature selection based on area, mean intensity in the RGB color space, and mean intensity of the Hue channel. The dataset used in this research consists of 35 facial images. Sensitivity, specificity, and accuracy are used to measure detection performance. According to dermatologists, the performance reached 54%, 99.1%, and 96.2% for sensitivity, specificity, and accuracy, respectively. Meanwhile, according to PT. AVO personnel, the performance reached 67.4%, 99.1%, and 97.7%, respectively. Based on these results, the system is good enough to detect redness in facial images.
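The redness detection and feature-selection step might be sketched as follows; the redness measure, threshold, area limit, and hue range below are illustrative assumptions rather than the paper's actual values.

```python
import cv2
import numpy as np

def detect_redness(face_bgr, redness_thresh=30, min_area=50):
    """Illustrative redness detection: a simple redness map is thresholded,
    then candidate regions are filtered by area and mean hue (assumed values)."""
    b, g, r = cv2.split(face_bgr.astype(np.int16))
    redness = np.clip(2 * r - g - b, 0, 255).astype(np.uint8)   # assumed redness measure
    _, mask = cv2.threshold(redness, redness_thresh, 255, cv2.THRESH_BINARY)
    hue = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2HSV)[:, :, 0]

    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    keep = np.zeros_like(mask)
    for i in range(1, n):                          # label 0 is the background
        area = stats[i, cv2.CC_STAT_AREA]
        mean_hue = hue[labels == i].mean()
        # keep regions that are large enough and reddish in hue (OpenCV hue is 0-179)
        if area >= min_area and (mean_hue < 15 or mean_hue > 165):
            keep[labels == i] = 255
    return keep
```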


Symmetry ◽  
2020 ◽  
Vol 12 (2) ◽  
pp. 190
Author(s):  
Zuodong Niu ◽  
Handong Li ◽  
Yao Li ◽  
Yingjie Mei ◽  
Jing Yang

Face image inpainting technology is an important research direction in image restoration. When current image restoration methods repair damaged areas of face images with weak texture, problems arise such as low accuracy of face image decomposition, unreasonable restoration structure, and degradation of image quality after inpainting. Therefore, this paper proposes an adaptive face image inpainting algorithm based on feature symmetry. Firstly, we locate the feature points of the face and segment the face into four feature parts based on the feature point distribution to define the feature search range. Then, we construct a new mathematical model and introduce feature symmetry to improve the priority calculation and increase its reliability. After that, in the process of searching for matching blocks, we accurately locate similar feature blocks according to the relative position and symmetry criteria of the target block and the various feature parts of the face. Finally, we introduce the HSV (Hue, Saturation, Value) color space to determine the best matching block according to the chroma and brightness of the sample, reduce the repair error, and complete the face image inpainting. During the experiments, we first performed visual evaluation and texture analysis on the inpainted face images, and the results show that face images inpainted by our algorithm maintain the consistency of the face structure and are visually closer to real face features. Then, we used the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) as objective evaluation indicators; among the five sample face image inpainting results given in this paper, our method was better than the reference methods, and the average PSNR value improved by 2.881–5.776 dB with our method when inpainting 100 face images. Additionally, we used the time required to inpaint a unit pixel to evaluate the inpainting efficiency, which improved by 12%–49% with our method when inpainting 100 face images. Finally, by comparing face image inpainting experiments with a generative adversarial network (GAN) algorithm, we discuss some limitations of the graphics-based method in this paper when repairing face images with large areas of missing features.
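A simplified stand-in for the HSV-based best-matching-block selection might look like the sketch below, which ranks candidate patches by their chroma (S) and brightness (V) distance to the target patch; the symmetry-aware search and priority terms of the paper are not reproduced here.

```python
import cv2
import numpy as np

def best_match_hsv(target_patch_bgr, candidates_bgr):
    """Pick the candidate patch closest to the target in saturation (chroma)
    and value (brightness) of the HSV space -- a simplified stand-in for the
    paper's best-matching-block selection."""
    t = cv2.cvtColor(target_patch_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    best_idx, best_cost = -1, np.inf
    for i, cand in enumerate(candidates_bgr):
        c = cv2.cvtColor(cand, cv2.COLOR_BGR2HSV).astype(np.float32)
        # compare only the S (chroma) and V (brightness) channels
        cost = np.mean((t[:, :, 1:] - c[:, :, 1:]) ** 2)
        if cost < best_cost:
            best_idx, best_cost = i, cost
    return best_idx
```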


2019 ◽  
pp. 618-1626
Author(s):  
Alya'a R. Ali ◽  
Ban N. Dhannoon

Face blurring is an important and complex process in the field of advanced computer vision. The face blurring process generally has two main steps. The first step detects the faces that appear in the frames, while the second step tracks the detected faces based on the information extracted during the detection step. In the proposed method, an image is captured by the camera in real time, and the Viola-Jones algorithm is used to detect multiple faces in the captured image. To reduce the time consumed in handling the entire captured image, the image background is removed and only the motion areas are processed. After detecting the faces, a color-space algorithm is used to track the detected faces based on the color of the face, and a template matching algorithm is used to check the differences between faces and reduce processing time. Finally, the detected faces, as well as the faces tracked by their color, are obscured using a Gaussian filter. The achieved accuracy for a single face and for a dynamic background is about 82.8% and 76.3%, respectively.
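A minimal detect-then-blur sketch in the spirit of this pipeline, using OpenCV's Haar-cascade (Viola-Jones) detector and a Gaussian filter, is given below; the background removal, color-based tracking, and template matching steps are omitted, and the blur kernel size is an assumption.

```python
import cv2

# Haar-cascade (Viola-Jones) face detection followed by Gaussian blurring of
# each detected face -- a minimal sketch of the detect-then-blur idea only.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def blur_faces(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        roi = frame_bgr[y:y + h, x:x + w]
        frame_bgr[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
    return frame_bgr
```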


Author(s):  
Samir Bandyopadhyay ◽  
Payel Bose

The human face and facial parts are significant because they reveal a person's identity. They play an important role in various biometric applications such as crowd analysis, human tracking, photography, and cosmetic surgery. Many techniques are available to detect a face in an image; among them, skin detection is the most popular. The aim of this paper is first to detect a person's identity from a facial image and then to check for any spots present on the detected face. The first step is to detect the maximum skin region using a combination of the RGB and HSV color space models. Next, the skin areas are verified as human skin through a machine learning approach. The Aggregated Channel Features (ACF) detector is used to identify the different facial parts, such as eye pairs, nose, and mouth. A bootstrap-aggregation (bagging) decision tree classifier is applied to classify the person's identity based on Histogram of Oriented Gradients (HOG) feature values. The experimental results show that the proposed method gives an average accuracy of 97%.
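The combined RGB and HSV skin-masking step might be sketched as below; the thresholds are widely used heuristics, not the values reported by the authors.

```python
import cv2
import numpy as np

def skin_mask_rgb_hsv(img_bgr):
    """Combine a simple RGB rule with an HSV range to mask skin pixels.
    The thresholds below are common heuristics, not the paper's exact values."""
    b, g, r = cv2.split(img_bgr.astype(np.int16))
    # RGB rule: skin tends to be red-dominant and reasonably bright
    rgb_rule = (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b) & ((r - g) > 15)
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    hsv_rule = cv2.inRange(hsv, np.array([0, 40, 60]), np.array([25, 255, 255])) > 0
    return ((rgb_rule & hsv_rule) * 255).astype(np.uint8)
```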


Signals ◽  
2021 ◽  
Vol 2 (3) ◽  
pp. 540-558
Author(s):  
Keiichiro Shirai ◽  
Tatsuya Baba ◽  
Shunsuke Ono ◽  
Masahiro Okuda ◽  
Yusuke Tatesumi ◽  
...  

This paper proposes an automatic image correction method for portrait photographs that promotes consistency of facial skin color by suppressing skin color changes caused by background colors. In portrait photographs, skin color is often distorted by the lighting environment (e.g., light reflected from a colored background wall or over-exposure from a camera strobe). This color distortion is emphasized when the portrait is artificially composited onto another background color, and the appearance becomes unnatural. In our framework, we first roughly extract the face region and rectify the skin color distribution in a color space. Then, we perform color and brightness correction around the face in the original image to achieve a proper color balance of the facial image that is not affected by luminance and background colors. Our color correction process attains natural results by using a guide image, unlike conventional algorithms. In particular, our guided image filtering for the color correction does not require the perfectly aligned guide image needed by the original guided image filtering method proposed by He et al. Experimental results show that our method generates more natural results than conventional methods, not only on headshot photographs but also on natural scene photographs. We also show automatic yearbook-style photo generation as another application.
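For reference, a plain single-channel guided filter in the sense of He et al. can be written as follows; this is the standard formulation, not the authors' relaxed variant that tolerates an imperfectly aligned guide, and the radius and regularization values are placeholders.

```python
import cv2
import numpy as np

def guided_filter_gray(guide, src, radius=8, eps=1e-3):
    """Standard single-channel guided filter (He et al.), shown for reference."""
    guide = guide.astype(np.float32) / 255.0
    src = src.astype(np.float32) / 255.0
    ksize = (2 * radius + 1, 2 * radius + 1)
    mean = lambda x: cv2.boxFilter(x, -1, ksize)   # local window means
    mean_g, mean_s = mean(guide), mean(src)
    cov_gs = mean(guide * src) - mean_g * mean_s
    var_g = mean(guide * guide) - mean_g * mean_g
    a = cov_gs / (var_g + eps)                     # per-window linear coefficients
    b = mean_s - a * mean_g
    out = mean(a) * guide + mean(b)
    return (np.clip(out, 0, 1) * 255).astype(np.uint8)
```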


Author(s):  
PEICHUNG SHIH ◽  
CHENGJUN LIU

Content-based face image retrieval is concerned with the computer retrieval of face images (of a given subject) based on geometric or statistical features automatically derived from those images. It is well known that color spaces provide powerful information for image indexing and retrieval by means of color invariants, color histograms, color texture, etc. This paper comparatively assesses the performance of content-based face image retrieval in different color spaces using a standard algorithm, Principal Component Analysis (PCA), which has become a popular algorithm in the face recognition community. In particular, we comparatively assess 12 color spaces (RGB, HSV, YUV, YCbCr, XYZ, YIQ, L*a*b*, U*V*W*, L*u*v*, I1I2I3, HSI, and rgb) by evaluating seven color configurations for every single color space. A color configuration is defined by an individual color component image or a combination of them; taking the RGB color space as an example, the possible color configurations are R, G, B, RG, RB, GB, and RGB. Experimental results using 600 FERET color images corresponding to 200 subjects and 456 FRGC (Face Recognition Grand Challenge) color images of 152 subjects show that some color configurations, such as YV in the YUV color space and YI in the YIQ color space, help improve face retrieval performance.
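A hedged sketch of color-configuration retrieval with PCA is shown below: selected channels are flattened and concatenated into a configuration vector, projected with PCA, and gallery images are ranked by Euclidean distance. The channel indices and component count are placeholders, and the images are assumed to share one size.

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_retrieval(gallery_imgs, query_img, channels=(1, 2), n_components=50):
    """Rank gallery images for a query using a PCA subspace built on a chosen
    color configuration (e.g. two channels of a converted color image)."""
    def config_vector(img):
        # concatenate the selected color component images into one vector
        return np.concatenate([img[:, :, c].ravel() for c in channels]).astype(np.float32)

    X = np.stack([config_vector(im) for im in gallery_imgs])
    pca = PCA(n_components=n_components).fit(X)
    gallery = pca.transform(X)
    q = pca.transform(config_vector(query_img)[None, :])
    dists = np.linalg.norm(gallery - q, axis=1)
    return np.argsort(dists)          # gallery indices, best match first
```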


Author(s):  
Sheikh Summerah

Abstract: This study presents a strategy to automate the process of recognizing and tracking objects using color and motion. Video tracking is the approach of detecting a moving item with a camera over a long distance. The basic goal of video tracking is to link target objects across successive video frames. This association can be particularly difficult when objects move quickly relative to the frame rate. This work develops a method to follow moving objects in real time across distinct video frames using HSV color space values and OpenCV. We start by deriving the HSV value of the object to be tracked and then, in the testing stage, we track the object. The objects were tracked with 90% accuracy. Keywords: HSV, OpenCV, object tracking
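A minimal HSV-range tracker of the kind described can be sketched with OpenCV as follows; the HSV bounds are placeholders that would be derived from the target object beforehand, and the camera index is assumed.

```python
import cv2
import numpy as np

# Assumed HSV bounds of the object to be tracked (derived beforehand).
lower = np.array([100, 120, 70])
upper = np.array([130, 255, 255])

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower, upper)              # pixels within the HSV range
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        # draw a box around the largest matching region
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```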


2014 ◽  
Vol 19 (2-3) ◽  
pp. 45-49 ◽  
Author(s):  
Piotr Pawlik ◽  
Zbigniew Bubliński ◽  
Andrzej Głowacz

Abstract The aim of this work was to develop an algorithm for estimating the waiting time of cars stopped before an intersection in a traffic flow measurement system (based on optical flow) that does not require background generation and allows calculation in real time. The proposed method performs analysis in the HSV color space: a mask generated from the S component is applied to the H component. In this way the background (the asphalt and horizontal white road markings) is eliminated. The result of this operation is combined with optical flow data to detect the vehicles that should be tracked.
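The saturation-mask step could be sketched as below: the S channel is thresholded to suppress low-saturation background (asphalt and white road markings) and the resulting mask is applied to the H component; the threshold value is an assumption.

```python
import cv2
import numpy as np

def vehicle_hue_mask(frame_bgr, s_thresh=40):
    """Threshold the S channel to build a mask and apply it to the H channel,
    so that low-saturation background (asphalt, white markings) drops out."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    h, s = hsv[:, :, 0], hsv[:, :, 1]
    mask = (s > s_thresh).astype(np.uint8) * 255    # colored (non-asphalt) pixels
    return cv2.bitwise_and(h, h, mask=mask)         # H component with background removed
```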

