Visual Artifacts
Recently Published Documents

TOTAL DOCUMENTS: 108 (five years: 57)
H-INDEX: 10 (five years: 3)

Sensors ◽  
2021 ◽  
Vol 22 (1) ◽  
pp. 73
Author(s):  
Marjan Stoimchev ◽  
Marija Ivanovska ◽  
Vitomir Štruc

In the past few years, there has been a leap from traditional palmprint recognition methodologies, which use handcrafted features, to deep-learning approaches that are able to automatically learn feature representations from the input data. However, the information extracted from such deep-learning models typically corresponds to the global image appearance, where only the most discriminative cues from the input image are considered. This characteristic is especially problematic when data is acquired in unconstrained settings, as in the case of contactless palmprint recognition systems, where visual artifacts caused by elastic deformations of the palmar surface are typically present in spatially local parts of the captured images. In this study, we address the problem of elastic deformations by introducing a new approach to contactless palmprint recognition based on a novel CNN model, designed as a two-path architecture: one path processes the input in a holistic manner, while the second path extracts local information from smaller image patches sampled from the input image. As elastic deformations can be assumed to most significantly affect the global appearance, while having a lesser impact on spatially local image areas, the local processing path addresses the issues related to elastic deformations, thereby supplementing the information from the global processing path. The model is trained with a learning objective that combines the Additive Angular Margin (ArcFace) loss and the well-known center loss. With the proposed model design, the discriminative power of the learned image representation is significantly enhanced compared to standard holistic models, which, as we show in the experimental section, leads to state-of-the-art performance for contactless palmprint recognition.
Our approach is tested on two publicly available contactless palmprint datasets—namely, IITD and CASIA—and is demonstrated to perform favorably against state-of-the-art methods from the literature. The source code for the proposed model is made publicly available.
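The combined training objective described above can be sketched as follows. This is an illustrative NumPy reconstruction of the two loss components (the ArcFace logit adjustment and the center loss), not the authors' code; the scale `s` and margin `m` are common defaults, not values taken from the paper.

```python
import numpy as np

def arcface_logits(embeddings, weights, labels, s=30.0, m=0.5):
    """Sketch of the Additive Angular Margin (ArcFace) logit adjustment:
    add margin m to the angle of the ground-truth class, then scale by s."""
    # L2-normalize embeddings and class weight vectors
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    cos = e @ w.T                                   # cosine similarity to each class
    theta = np.arccos(np.clip(cos, -1 + 1e-7, 1 - 1e-7))
    target = np.zeros_like(cos, dtype=bool)
    target[np.arange(len(labels)), labels] = True
    # apply the angular margin only on the ground-truth class
    cos_m = np.where(target, np.cos(theta + m), cos)
    return s * cos_m

def center_loss(embeddings, centers, labels):
    """Center loss: mean squared distance between each embedding
    and the learned center of its class."""
    diff = embeddings - centers[labels]
    return 0.5 * np.mean(np.sum(diff ** 2, axis=1))
```

In training, the two terms would be combined as a weighted sum, with the cross-entropy over the ArcFace logits pulling classes apart angularly and the center loss compacting each class around its center.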


2021 ◽  
pp. 162-170
Author(s):  
Tran Dang Khoa Phan

In this paper, we present an image denoising algorithm comprising three stages. In the first stage, Principal Component Analysis (PCA) is used to suppress the noise; PCA is applied to image blocks to characterize localized features and rare image patches. In the second stage, we use the Gaussian curvature to develop an adaptive total-variation (TV) denoising model that effectively removes visual artifacts and the residual noise left by the first stage. Finally, the denoised image is sharpened to enhance the contrast of the result. Experimental results on natural images and computed tomography (CT) images demonstrate that the proposed algorithm outperforms competing algorithms both qualitatively and quantitatively.
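As a rough illustration of the first stage, the sketch below denoises a stack of flattened image blocks by projecting them onto their leading principal components and discarding the rest. The `keep` parameter and the hard truncation are simplifying assumptions for illustration, not the paper's exact shrinkage rule.

```python
import numpy as np

def pca_denoise_blocks(blocks, keep=4):
    """Illustrative PCA block denoising: project flattened image blocks
    (one block per row) onto the `keep` leading principal components;
    the discarded trailing components mostly carry noise."""
    mean = blocks.mean(axis=0)
    centered = blocks - mean
    # eigen-decomposition of the block covariance matrix
    cov = centered.T @ centered / len(blocks)
    vals, vecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    basis = vecs[:, -keep:]                 # leading principal components
    coeffs = centered @ basis               # project onto the retained subspace
    return coeffs @ basis.T + mean          # reconstruct the denoised blocks
```

In a full pipeline the denoised blocks would be reassembled into an image (averaging overlaps) before the TV stage refines the result.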


Author(s):  
Connie Blomgren

The examination of teacher educators’ own practices through self-study research is well established, and self-study aligns with the growing interest in open educational resources (OER) and open pedagogy. This research used a self-study method of a Science, Technology, Engineering, Art, and Mathematics (STEAM) OER project, Form and Function(s): Sustainable Design meets Computational Thinking. Two research questions were pursued: How do open pedagogy attributes contribute to a transdisciplinary STEAM OER pedagogical stance? And how can one apply visual artifact self-study as intentional critical friends to examine professional value and to enhance pedagogical self-understanding? The researcher analyzed visual artifacts of created and documented images that supported the process of her interrogations of transdisciplinary curriculum development and open pedagogy. The sites and modalities of the artifacts were questioned and answers recorded using a critical visual methodology. Klein’s (2008, 2018) transdisciplinary thinking and the eight attributes of Hegarty’s (2015) open pedagogy frame the interrogation of the images and the connections made to curriculum theorizing. The self-study draws conclusions about the role of visual artifacts in conceptualizing the gestalt of complex ideas and relations. It also provides warranted assertions for open educators and researchers interested in the practices of transdisciplinary, open curricular and pedagogical processes alongside the eight attributes of open pedagogy, and the role of critical self-reflection.


2021 ◽  
Author(s):  
Alice Comi ◽  
Eero Vaara

Previous research on knowledge work has started to explore how organizational actors deal with pragmatic boundaries that arise from their different interests, priorities, and viewpoints. Material objects, such as visual artifacts, can be used to shape and manipulate pragmatic boundaries, but our understanding of these dynamics is only partial. In this paper, we maintain that focusing on the uses of visual artifacts offers an opportunity to deepen our understanding of the political aspects of knowledge work. To this end, we conducted a practice-based study of an architectural project in which the building design became contested. Our empirical analysis reveals four practices in which visual artifacts are used to deal with pragmatic boundaries: surfacing, bridging, preventing, and minimizing. Through these practices, organizational actors can make boundaries more or less visible, with important implications for their power relations and for the project at hand. The main contribution of our study is to advance understanding of the political dynamics in knowledge work by revealing how visual artifacts can be used to manipulate pragmatic boundaries. By so doing, our analysis also helps to move the conversation on visual artifacts beyond their role as epistemic objects that sustain (or hinder) knowledge work.


2021 ◽  
pp. 17-30
Author(s):  
Tuuli Lähdesmäki ◽  
Jūratė Baranova ◽  
Susanne C. Ylönen ◽  
Aino-Kaisa Koistinen ◽  
Katja Mäkinen ◽  
...  

Abstract: This chapter locates the book within the research on children’s art. It explores interpretations of children’s visual creations throughout the twentieth century and situates the approach of the book within the research landscape. The authors take developmental psychological, educational, and aesthetic approaches to form a sociocultural view of children’s art, challenging many previous research assumptions. Through adopting the paradigm of the sociocultural approach, the authors embrace its view of children as competent cultural actors and active participants in cultural production. Thus, the discussion focuses on meaning-making: the authors analyze visual artifacts made by students to understand how they engage with the idea of difference.


2021 ◽  
Vol 2021 (29) ◽  
pp. 7-12
Author(s):  
Hoang Le ◽  
Taehong Jeong ◽  
Abdelrahman Abdelhamed ◽  
Hyun Joon Shin ◽  
Michael S. Brown

Most cameras still encode images in the small-gamut sRGB color space. The reliance on sRGB is disappointing, as modern display hardware and image-editing software are capable of using wider-gamut color spaces. Converting a small-gamut image to a wider gamut is a challenging problem. Many devices and software use colorimetric strategies that map colors from the small gamut to their equivalent colors in the wider gamut. This colorimetric approach avoids visual changes in the image but leaves much of the target wide-gamut space unused. Noncolorimetric approaches stretch or expand the small-gamut colors to enhance image colors while risking color distortions. We take a unique approach to gamut expansion by treating it as a restoration problem. A key insight used in our approach is that cameras internally encode images in a wide-gamut color space (i.e., ProPhoto) before compressing and clipping the colors to sRGB's smaller gamut. Based on this insight, we use a software-based camera ISP to generate a dataset of 5,000 pairs of images encoded in both sRGB and ProPhoto. This dataset enables us to train a neural network to perform wide-gamut color restoration. Our deep-learning strategy achieves significant improvements over existing solutions and produces color-rich images with few to no visual artifacts.
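The data-generation idea above (wide-gamut source, clipped sRGB input) can be sketched as follows. The 3x3 conversion matrix here is hypothetical, chosen only so that neutral grays are preserved; the real ProPhoto-to-sRGB transform and the ISP steps from the paper are not reproduced.

```python
import numpy as np

# Hypothetical wide-gamut -> sRGB linear transform (rows sum to 1 so that
# grays are preserved); NOT the actual ProPhoto-to-sRGB matrix.
WIDE_TO_SRGB = np.array([[ 1.3, -0.2, -0.1],
                         [-0.1,  1.2, -0.1],
                         [ 0.0, -0.1,  1.1]])

def make_training_pair(wide_rgb):
    """Sketch of the paper's data-generation idea: convert a wide-gamut
    image to sRGB and clip out-of-gamut values, yielding an (input, target)
    pair for learning wide-gamut color restoration."""
    srgb = wide_rgb @ WIDE_TO_SRGB.T      # color-space conversion
    srgb = np.clip(srgb, 0.0, 1.0)        # gamut clipping: the lossy step
    return srgb, wide_rgb                 # network input, restoration target
```

The clipping is what makes the mapping non-invertible for saturated colors, which is exactly why restoring the wide-gamut image requires a learned prior rather than a fixed inverse matrix.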


Diagnostics ◽  
2021 ◽  
Vol 11 (10) ◽  
pp. 1824
Author(s):  
Pedro Albuquerque ◽  
João Pedro Machado ◽  
Tanmay Tulsidas Verlekar ◽  
Paulo Lobato Correia ◽  
Luís Ducla Soares

Several pathologies can alter the way people walk, i.e., their gait. Gait analysis can be used to detect such alterations and, therefore, help diagnose certain pathologies or assess people’s health and recovery. Simple vision-based systems have considerable potential in this area, as they allow the capture of gait in unconstrained environments, such as at home or in a clinic, while the required computations can be done remotely. State-of-the-art vision-based systems for gait analysis use deep learning strategies, thus requiring a large amount of data for training. However, to the best of our knowledge, the largest publicly available pathological gait dataset contains only 10 subjects, simulating five types of gait. This paper presents a new dataset, GAIT-IT, captured from 21 subjects simulating five types of gait, at two severity levels. The dataset is recorded in a professional studio, making the sequences free of background camouflage, variations in illumination, and other visual artifacts. The dataset is used to train a novel automatic gait analysis system. Compared to the state-of-the-art, the proposed system achieves a drastic reduction in the number of trainable parameters, memory requirements, and execution times, while its classification accuracy is on par with the state-of-the-art. Recognizing the importance of remote healthcare, the proposed automatic gait analysis system is integrated with a prototype web application. This prototype is presently hosted in a private network; after further tests and development, it will allow people to upload a video of themselves walking and execute a web service that classifies their gait. The web application has a user-friendly interface usable by healthcare professionals or by laypersons. The application also makes an association between the identified type of gait and potential gait pathologies that exhibit the identified characteristics.


Communication ◽  
2021 ◽  

Visual rhetoric is a relatively new area of study that emerged in the late twentieth century, when rhetoric scholars recognized the increasing centrality of the visual in contemporary culture. There is no consensus on the definition of visual rhetoric; different scholars use the term in different ways. Broadly, it refers to the analysis of the communicative and persuasive power of visual artifacts. These artifacts range from two-dimensional images such as photographs, political cartoons, and maps to moving images in film or television. They also include three-dimensional objects like murals, as well as places, spaces, and bodies. Although much scholarship on visual rhetoric focuses on the communicative aspects of visuals, there are also a number of studies that examine cultural practices of looking and interpreting. While visual rhetoric borrows from various methods and disciplines that also concern themselves with the visual, such as semiotics, aesthetics, and cultural studies, this bibliography focuses narrowly on the branch of study that emerged from US rhetorical studies within the discipline of communication in the 1970s. This bibliography begins with pieces that hail from other disciplines in order to recognize their influence in thinking about the rhetorical dimensions of visuals. From there, it moves to suggest general overviews and anthologies of this area of study, as well as some methods to evaluate images. Finally, the bibliography focuses on different forms of visual rhetoric that range from photographs to bodies.


2021 ◽  
Vol 11 (13) ◽  
pp. 5813
Author(s):  
Helard Becerra Martinez ◽  
Andrew Hines ◽  
Mylène C. Q. Farias

Audio-visual quality assessment remains a complex research field. A great effort is being made to understand how the visual and auditory domains are integrated and processed by humans. In this work, we analyzed and compared the results of three psychophysical experiments that collected quality and content scores given by a pool of subjects. The experiments include audio-visual material with diverse content (e.g., sports, TV commercials, interviews, music, documentaries, and cartoons), impaired with several visual (bitrate compression, packet loss, and frame freezing) and auditory (background noise, echo, clip, chop) distortions. Each experiment explores a particular domain. In Experiment 1, the video component was degraded with visual artifacts, while the audio component did not suffer any type of degradation. In Experiment 2, the audio component was degraded while the video component remained untouched. Finally, in Experiment 3, both the audio and video components were degraded. As expected, the results confirmed a dominance of the visual component in the overall audio-visual quality. However, a detailed analysis showed that, for certain types of audio distortions, the audio component played a more important role in the construction of the overall perceived quality.


2021 ◽  
Vol 23 (06) ◽  
pp. 1025-1032
Author(s):  
Karthik Karthik ◽  
Vinay Varma B ◽  
Akshay Narayan Pai ◽  
...  

Interlacing is a commonly used technique for doubling the perceived frame rate without adding bandwidth in television broadcasting and video recording. During playback, however, it exhibits disturbing visual artifacts such as flickering and combing. As a result, modern display devices use video deinterlacing, in which the interlaced video format is converted to progressive-scan format to overcome the limitations of interlaced video. This conversion is achieved by interpolating the interlaced video. Current deinterlacing approaches either neglect temporal information, achieving real-time performance at the cost of visual quality, or estimate motion for better deinterlacing at a higher computational cost. This paper surveys deinterlacing algorithms that apply both spatial and temporal methods, covering motion-adaptive and non-motion-adaptive approaches as well as the time complexity of their implementations.
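The spatial (intra-field) end of the spectrum described above can be illustrated with a minimal line-averaging sketch: a progressive frame is rebuilt from a single field by interpolating the missing scan lines from their vertical neighbours. This is an illustrative baseline only; motion-adaptive methods would additionally weave in lines from adjacent fields where no motion is detected.

```python
import numpy as np

def deinterlace_linear(field):
    """Reconstruct a full progressive frame from a top field (h x w) by
    linear interpolation of the missing scan lines (simple spatial
    deinterlacing, no temporal information used)."""
    h, w = field.shape
    frame = np.empty((2 * h, w), dtype=float)
    frame[0::2] = field                           # copy the known field lines
    # interior missing lines: average of the lines above and below
    frame[1:-1:2] = 0.5 * (field[:-1] + field[1:])
    frame[-1] = field[-1]                         # bottom line: replicate
    return frame
```

Because each missing line depends only on the current field, this runs in real time but blurs fine vertical detail, which is precisely the quality/cost trade-off the surveyed motion-adaptive methods try to improve on.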

