appearance changes
Recently Published Documents

Total documents: 97 (35 in the last five years)
H-index: 11 (3 in the last five years)

2021 ◽ Vol 13 (22) ◽ pp. 4618
Author(s): Xupei Zhang, Zhanzhuang He, Zhong Ma, Zhongxi Wang, Li Wang

Local feature extraction is a crucial technology for image-matching navigation of an unmanned aerial vehicle (UAV), where the aim is to accurately and robustly match a real-time image to a geo-referenced image in order to obtain position updates for the UAV. However, this is a challenging task because inconsistent image-capture conditions lead to extreme appearance changes, especially when the real-time infrared image and the RGB reference image are formed by different imaging principles. In addition, the sparsity and labeling complexity of existing public datasets hinder the development of learning-based methods in this research area. This paper proposes a novel learning-based local feature extraction method that uses features extracted by a deep neural network to find correspondences between the satellite RGB reference image and the real-time infrared image. First, we propose a single convolutional neural network that simultaneously extracts dense local features and their corresponding descriptors. This network combines the advantages of a highly repeatable local feature detector and highly reliable local feature descriptors to match the reference image and the real-time image under extreme appearance changes. Second, to make full use of the sparse dataset, an iterative training scheme is proposed that automatically generates high-quality corresponding features for training: dense correspondences are extracted automatically, and geometric constraints are added to continuously improve their quality. With these improvements, the proposed method achieves state-of-the-art performance on matching infrared aerial (UAV-captured) images to satellite reference images, improving precision, recall, and F1-score by 4–6% compared with other methods. Moreover, applied experiments show its potential and effectiveness for UAV localization in navigation and trajectory-reconstruction applications.
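The correspondence-search step described above (dense descriptors from both images, followed by mutual matching and geometric filtering) can be illustrated with a minimal sketch. The code below is not the authors' network: it only shows mutual-nearest-neighbour matching between two hypothetical descriptor sets; `ir_descriptors` and `ref_descriptors` are placeholder names, and a geometric check such as RANSAC would follow.

```python
import numpy as np

def mutual_nearest_neighbors(desc_a, desc_b):
    """Match two sets of L2-normalised descriptors (N_a x D and N_b x D)
    by mutual nearest neighbour, a common rule for pairing dense local features."""
    sim = desc_a @ desc_b.T                  # cosine similarity matrix
    nn_ab = sim.argmax(axis=1)               # best candidate in B for each feature in A
    nn_ba = sim.argmax(axis=0)               # best candidate in A for each feature in B
    idx_a = np.arange(desc_a.shape[0])
    mutual = nn_ba[nn_ab] == idx_a           # keep only mutually best pairs
    return np.stack([idx_a[mutual], nn_ab[mutual]], axis=1)

# Hypothetical usage (placeholder names): descriptors extracted from the
# real-time infrared image and the satellite RGB reference image.
# matches = mutual_nearest_neighbors(ir_descriptors, ref_descriptors)
# A geometric check (e.g. RANSAC on a homography) would then filter `matches`.
```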


2021
Author(s): Youngjoo Chae

Abstract Texture is an important synesthetic design element used in textile products. The three-dimensional surface of a texture changes the amount and angle of reflected light, causing the color appearance to shift from the original color. In this work, it was quantitatively analyzed, for a wide range of colors, how color appearance changes with different textures and illumination, using CIE standard illuminants A, F11, F2, and D65. It was found that strongly textured fabrics (surface roughness Ra of 0.46 mm) showed larger illuminant-induced hue appearance changes, and consequently larger overall color appearance changes from their true colors, than non-textured papers (Ra of 0.03 mm). Between the two types of fabric with different textures (Ra of 0.25 and 0.46 mm), however, there was no significant difference in the magnitude of the color appearance changes, indicating that a difference in surface roughness of about 0.43 mm or more can produce significant differences in illumination-induced color appearance changes. It was also found that the magnitude and direction of color appearance changes under different CIE illuminants differed significantly according to the physical chroma and hue of the surface.
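As a rough illustration of how such illuminant-dependent colour shifts are quantified, the sketch below converts tristimulus values to CIELAB against an illuminant's white point and reports a CIE76 colour difference. It is a minimal sketch, not the analysis used in the study; the XYZ values are placeholders, and only the D65 and A white points are filled in (F2 and F11 would be added the same way).

```python
import numpy as np

# Approximate CIE 1931 2-degree white points (Y normalised to 100).
WHITE_POINTS = {
    "D65": np.array([95.047, 100.0, 108.883]),
    "A":   np.array([109.850, 100.0, 35.585]),
    # F2 and F11 white points would be added here in the same way.
}

def xyz_to_lab(xyz, white):
    """Convert CIE XYZ tristimulus values to CIELAB for a given white point."""
    t = np.asarray(xyz, dtype=float) / white
    f = np.where(t > (6 / 29) ** 3, np.cbrt(t), t / (3 * (6 / 29) ** 2) + 4 / 29)
    return np.array([116 * f[1] - 16,          # L*
                     500 * (f[0] - f[1]),      # a*
                     200 * (f[1] - f[2])])     # b*

def delta_e76(lab1, lab2):
    """CIE76 colour difference between two CIELAB coordinates."""
    return float(np.linalg.norm(lab1 - lab2))

# Placeholder measurements of the same surface under two illuminants.
sample_xyz_d65 = np.array([41.2, 35.8, 18.0])
sample_xyz_a = np.array([47.5, 36.9, 6.1])
lab_d65 = xyz_to_lab(sample_xyz_d65, WHITE_POINTS["D65"])
lab_a = xyz_to_lab(sample_xyz_a, WHITE_POINTS["A"])
print(f"colour appearance change (dE76): {delta_e76(lab_d65, lab_a):.2f}")
```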


2021 ◽ Vol 16 (1)
Author(s): Wanying Chen, Xiaoyu Zhang, Yingying Xu, Zemin Xu, Haiyan Qin, ...

Abstract Objectives Our study aimed to explore the clinical therapeutic effects of ultrasound-guided five-point injection of botulinum toxin type A in patients with trapezius hypertrophy. Methods Twenty female patients diagnosed with trapezius hypertrophy were enrolled in this study. Trapezius muscle thickness was measured with an ultrasound scanner to locate the thickest point of the trapezius, and four additional points were then marked around this first point. Botulinum toxin type A was injected bilaterally (50 IU/side, 5 points/side) into the trapezius muscle of these patients. Treatment effects were evaluated by trapezius muscle thickness, intramuscular needle electromyographic and electroneurographic examinations, appearance changes, and patient satisfaction. Results Statistically significant differences in trapezius muscle thickness were observed at 4 weeks (p < 0.001), 12 weeks (p < 0.001), 20 weeks (p < 0.001), 28 weeks (p = 0.011), 36 weeks (p = 0.022), and 44 weeks (p = 0.032) after treatment. Trapezius muscle latencies were longer at 12 weeks after treatment (left: 2.40 ms vs. 1.75 ms; right: 2.53 ms vs. 2.00 ms). Electroneurographic results showed reduced amplitudes of compound muscle action potentials (CMAPs) at 12 weeks after treatment (left: 1.91 µV vs. 15.00 µV; right: 3.10 µV vs. 15.40 µV). Obvious appearance changes were observed at 12 weeks after treatment. Overall, 80% of patients were very satisfied, 15% were relatively satisfied, and 5% were not satisfied with the treatment. Conclusion Ultrasound-guided five-point injection of botulinum toxin type A might be effective for patients with trapezius hypertrophy.


2021 ◽ Vol 11 (20) ◽ pp. 9540
Author(s): Baifan Chen, Xiaoting Song, Hongyu Shen, Tao Lu

A major challenge in place recognition is robustness to viewpoint changes and to appearance changes caused by variations in the platform itself and in the environment. Humans achieve this by recognizing objects and their relationships in the scene under different conditions. Inspired by this, we propose a hierarchical visual place recognition pipeline based on semantic aggregation and scene understanding. The pipeline consists of coarse matching and fine matching: in coarse matching, visual and semantic information are combined by residual aggregation; in fine matching, semantic edges are associated across images. Together, these two stages yield a robust coarse-to-fine visual place recognition pipeline across viewpoint and condition variations. Experimental results on benchmark datasets show that our method performs better than several state-of-the-art methods, improving robustness against severe viewpoint and appearance changes while maintaining good matching-time performance. Moreover, we demonstrate that a computer can perform place recognition based on scene understanding.
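A generic coarse-to-fine retrieval skeleton of the kind this pipeline describes is sketched below. It is not the authors' semantic-aggregation method: the global descriptors, fine features, and the `fine_score` callable are all placeholders standing in for the aggregated visual-semantic and semantic-edge representations.

```python
import numpy as np

def coarse_to_fine_match(query_global, db_globals, query_fine, db_fines,
                         fine_score, top_k=10):
    """Generic coarse-to-fine place recognition: shortlist database places by
    global-descriptor similarity, then re-rank the shortlist with a finer
    (e.g. semantic/local-feature) scoring function."""
    # Coarse stage: cosine similarity of aggregated global descriptors.
    sims = db_globals @ query_global / (
        np.linalg.norm(db_globals, axis=1) * np.linalg.norm(query_global) + 1e-12)
    shortlist = np.argsort(-sims)[:top_k]
    # Fine stage: re-rank the candidates with the detailed matcher.
    fine_scores = [fine_score(query_fine, db_fines[i]) for i in shortlist]
    return int(shortlist[int(np.argmax(fine_scores))])   # index of the best place
```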


Electronics ◽ 2021 ◽ Vol 10 (19) ◽ pp. 2406
Author(s): Yesul Park, L. Minh Dang, Sujin Lee, Dongil Han, Hyeonjoon Moon

Object tracking is a fundamental computer vision problem that refers to a set of methods proposed to precisely track the motion trajectory of an object in a video. Multiple Object Tracking (MOT) is a subclass of object tracking that has received growing interest due to its academic and commercial potential. Although numerous methods have been introduced to cope with this problem, many challenges remain to be solved, such as severe object occlusion and abrupt appearance changes. This paper gives a thorough review of the evolution of MOT in recent decades, investigates recent advances in MOT, and points out potential directions for future work. The primary contributions include: (1) a detailed description of MOT's main problems and solutions, (2) a categorization of previous MOT algorithms into 12 approaches and a discussion of the main procedures for each category, (3) a review of the benchmark datasets and standard evaluation methods for MOT, (4) a discussion of various MOT challenges and solutions by analyzing the related references, and (5) a summary of the latest MOT technologies and recent MOT trends using the mentioned MOT categories.
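As background for the tracking-by-detection paradigm shared by many of the surveyed MOT methods, the sketch below shows the core data-association step: assigning current-frame detections to existing tracks by Hungarian matching on IoU. It is a generic illustration in the spirit of SORT-style trackers, not an algorithm taken from this review.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-12)

def associate(tracks, detections, iou_threshold=0.3):
    """Assign detections to tracks by maximising total IoU (Hungarian matching),
    the data-association core of tracking-by-detection MOT."""
    if not tracks or not detections:
        return [], list(range(len(tracks))), list(range(len(detections)))
    cost = np.array([[1.0 - iou(t, d) for d in detections] for t in tracks])
    rows, cols = linear_sum_assignment(cost)
    matches = [(r, c) for r, c in zip(rows, cols) if 1.0 - cost[r, c] >= iou_threshold]
    matched_t = {m[0] for m in matches}
    matched_d = {m[1] for m in matches}
    unmatched_tracks = [r for r in range(len(tracks)) if r not in matched_t]
    unmatched_dets = [c for c in range(len(detections)) if c not in matched_d]
    return matches, unmatched_tracks, unmatched_dets
```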


2021 ◽ pp. 17-30
Author(s): Richard P. McQuellon

The main theme of this dialogue is Nell’s slow movement toward death and her frustration at the delay. Her physical status is declining and she is visibly deteriorating. Since it is difficult for her to travel by car, we decided to meet in her home. She is frightened by her appearance changes and declares she looks like a beetle, with a bloated body and sticklike appendages. She longs for a witness to her bodily changes and yet is reluctant to ask her spouse Al to look at her. She is disappointed there is nowhere she can find the comfort of someone’s witness to her physical changes. She has met with her medical oncologist and come away frustrated because he has said death is not imminent and yet she is ready. Even so, Nell’s sense of humor is intact and she laughs about completing her income taxes: “Now I can die!” She has had an initial negative encounter with hospice and expresses her concern about their competence. She finds comfort in guided imagery introduced to her by her dear friend Mary, geographically distant but regularly present via phone call.


Author(s): Chirawat Wattanapanich, Hong Wei, Wijittra Petchkit

A gait recognition framework is proposed to tackle the challenges of unknown camera view angles and appearance changes in gait recognition. In the framework, the camera view angle is first identified before gait recognition. Two compact images, the gait energy image (GEI) and the gait modified Gaussian image (GMGI), are used as the base gait feature images. The histogram of oriented gradients (HOG) is applied to the base gait feature images to generate feature descriptors, and the final feature maps obtained by applying principal component analysis (PCA) to these descriptors are used to train support vector machine (SVM) models for individuals. A set of experiments is conducted on CASIA gait database B to investigate how appearance changes and unknown view angles affect gait recognition accuracy under the proposed framework. The experimental results show that the framework is robust to unknown camera view angles as well as to appearance changes in gait recognition. In unknown-view-angle testing, the recognition accuracy matches that of identical-view-angle testing. The proposed framework is specifically applicable to gait-based personal identification in a small company or organization, where unobtrusive personal identification is needed.
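The GEI, HOG, PCA, and SVM chain described above can be sketched with standard scientific Python tooling. The snippet below is a minimal, assumed pipeline rather than the authors' exact configuration; `sequences` and `labels` in the commented usage are hypothetical placeholders, and the GMGI branch and view-angle identification stage are omitted.

```python
import numpy as np
from skimage.feature import hog
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def gait_energy_image(silhouettes):
    """GEI: pixel-wise mean of aligned binary silhouettes over a gait cycle."""
    return np.mean(np.asarray(silhouettes, dtype=float), axis=0)

def hog_descriptor(gei):
    """HOG feature descriptor computed on a compact gait image (GEI or GMGI)."""
    return hog(gei, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

# Hypothetical usage with placeholder data: `sequences` is a list of silhouette
# stacks (one stack per gait cycle) and `labels` holds the subject identities.
# features = np.array([hog_descriptor(gait_energy_image(seq)) for seq in sequences])
# model = make_pipeline(PCA(n_components=0.95), SVC(kernel="linear"))
# model.fit(features, labels)
```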


2021 ◽ Vol 11 (18) ◽ pp. 8427
Author(s): Peiting Gu, Peizhong Liu, Jianhua Deng, Zhi Chen

Discriminative correlation filter (DCF) based tracking algorithms offer notable speed and accuracy, and have therefore attracted extensive attention and research. However, some deficiencies remain. For example, the circular-shift sampling process imposes a periodic assumption that causes boundary effects, which degrade the tracker's discriminative performance, and the target is difficult to locate under complex appearance changes. In this paper, a spatial-temporal regularization module based on the BACF (background-aware correlation filter) framework is proposed, in which a temporal regularization term is introduced to deal effectively with boundary effects while improving the accuracy of target recognition. The model can be optimized efficiently with the alternating direction method of multipliers (ADMM), and each sub-problem has a closed-form solution. In addition, in terms of feature representation, we linearly combine traditional hand-crafted features with deep convolutional features to enhance the discriminative performance of the filter. Extensive experiments on multiple well-known benchmarks show that the proposed algorithm performs favorably against many state-of-the-art trackers and achieves an AUC score of 64.4% on OTB-100.
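For readers unfamiliar with DCF trackers, the sketch below shows the basic correlation-filter idea that BACF-style methods build on: a closed-form, Fourier-domain ridge regression (MOSSE-style) mapping training patches to a desired Gaussian response. It is deliberately simplified and does not include the paper's background-aware sampling, spatial-temporal regularization, or ADMM solver.

```python
import numpy as np

def train_correlation_filter(patches, target_response, reg=1e-2):
    """MOSSE-style correlation filter: closed-form ridge regression in the
    Fourier domain mapping grayscale patches to a desired response map."""
    G = np.fft.fft2(target_response)
    A = np.zeros_like(G)
    B = np.zeros_like(G)
    for p in patches:                       # patches share the response's size
        F = np.fft.fft2(p)
        A += G * np.conj(F)
        B += F * np.conj(F)
    return A / (B + reg)                    # filter (conjugate) in the Fourier domain

def detect(filter_hat, patch):
    """Correlate a search patch with the learned filter; the response peak
    gives the estimated target location."""
    response = np.real(np.fft.ifft2(filter_hat * np.fft.fft2(patch)))
    return np.unravel_index(np.argmax(response), response.shape)
```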


2021 ◽ pp. 004051752110395
Author(s): Youngjoo Chae

Color has been strategically used to attract consumers in the textile and clothing industry, and yarn color mixing is one of the most typical methods of imparting color to textile products. However, the fact that color appears different depending on the illumination has concerned textile designers and sellers at the points of color communication and sale. This study quantitatively analyzed how the color appearance of woven fabrics composed of single and multiple colors of yarns changes under a broad spectrum of illumination conditions. The lightness, chroma, and hue appearance values of 36 chromatic fabrics, in which red, yellow, green, and blue yarns were woven together, were calculated under 16 different illumination conditions. The illumination conditions combined correlated color temperatures (CCTs) of 2700, 4000, 5000, and 6500 K with luminances of 100, 1000, 4000, and 8000 cd/m². The color appearance values of the fabrics under the 16 light sources were compared with each other and with the fabrics' true physical colors. The lightness, chroma, and hue appearances of the fabrics varied over ranges of up to 8.49, 16.24, and 27.04, respectively, indicating the large effect of illumination on color appearance. In particular, light sources with lower CCTs induced larger lightness appearance changes of the fabrics from their actual physical colors. It was also found that the magnitude of the color appearance changes induced by the light sources differed significantly according to the number of yarn colors and the overall colorimetric properties of the fabrics.
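A standard building block for comparing colors across light sources of different CCTs is a von Kries style chromatic adaptation. The sketch below uses the linearised Bradford transform to map tristimulus values from one white point to another; it is only an illustrative piece of the machinery, not the color appearance model applied in this study, and the fabric XYZ values shown are placeholders.

```python
import numpy as np

# Linearised Bradford cone-response matrix (as used in ICC-style adaptation).
BRADFORD = np.array([[ 0.8951,  0.2664, -0.1614],
                     [-0.7502,  1.7135,  0.0367],
                     [ 0.0389, -0.0685,  1.0296]])

def adapt_xyz(xyz, white_src, white_dst):
    """Von Kries style chromatic adaptation: map tristimulus values measured
    under one illuminant to the corresponding color under another."""
    lms = BRADFORD @ np.asarray(xyz, dtype=float)
    gain = (BRADFORD @ white_dst) / (BRADFORD @ white_src)   # per-channel scaling
    return np.linalg.inv(BRADFORD) @ (gain * lms)

# Hypothetical example: a fabric color measured under D65 mapped to an
# illuminant-A viewing condition (CIE 2-degree white points).
d65 = np.array([95.047, 100.0, 108.883])
illum_a = np.array([109.850, 100.0, 35.585])
print(adapt_xyz([20.0, 12.0, 5.0], d65, illum_a))
```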


Author(s): Ruijing Yang, Ziyu Guan, Zitong Yu, Xiaoyi Feng, Jinye Peng, ...

Automatic pain recognition is paramount for medical diagnosis and treatment. Existing works fall into three categories: assessing facial appearance changes, exploiting physiological cues, or fusing the two in a multi-modal manner. However, (1) appearance changes are easily affected by subjective factors, which impedes objective pain recognition; moreover, appearance-based approaches ignore long-range spatial-temporal dependencies that are important for modeling expressions over time; and (2) physiological cues are obtained by attaching sensors to the human body, which is inconvenient and uncomfortable. In this paper, we present a novel multi-task learning framework that encodes both appearance changes and physiological cues in a non-contact manner for pain recognition. The framework captures both local and long-range dependencies via the proposed attention mechanism for the learned appearance representations, which are further enriched by temporally attended physiological cues (remote photoplethysmography, rPPG) recovered from videos in the auxiliary task. The framework is dubbed the rPPG-enriched Spatio-Temporal Attention Network (rSTAN) and establishes state-of-the-art performance for non-contact pain recognition on publicly available pain databases. This demonstrates that rPPG prediction can serve as an auxiliary task to facilitate non-contact automatic pain recognition.
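A generic sketch of the kind of temporal attention with an auxiliary rPPG head described above is given below in PyTorch. The module, its dimensions, and the commented usage are assumptions for illustration, not the rSTAN architecture; the backbone that produces per-frame appearance features is omitted.

```python
import torch
import torch.nn as nn

class TemporalAttentionFusion(nn.Module):
    """Generic sketch: self-attention over per-frame appearance embeddings,
    with an auxiliary rPPG regression head and a main pain-classification head."""

    def __init__(self, feat_dim=256, num_heads=4, num_classes=2):
        super().__init__()
        self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        self.rppg_head = nn.Linear(feat_dim, 1)            # auxiliary: per-frame rPPG value
        self.pain_head = nn.Linear(feat_dim, num_classes)  # main: pain class logits

    def forward(self, frame_feats):                        # (batch, time, feat_dim)
        attended, _ = self.attn(frame_feats, frame_feats, frame_feats)
        rppg = self.rppg_head(attended).squeeze(-1)        # (batch, time)
        pain_logits = self.pain_head(attended.mean(dim=1))  # pooled over time
        return pain_logits, rppg

# Hypothetical usage with placeholder features from a CNN backbone:
# model = TemporalAttentionFusion()
# logits, rppg = model(torch.randn(8, 64, 256))
```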

