INTERPRETATION OF HISTORIC STRUCTURE FOR NON-INVASIVE ASSESSMENT USING EYE TRACKING

Author(s):  
M. R. Saleem ◽  
A. Straus ◽  
R. Napolitano

Abstract. With the aims of ensuring safety and decreasing maintenance costs, previous studies in bridge inspection research have worked to elucidate damage indicators and understand their correspondence to structural deficiency. During this process, understanding how an inspector looks at a structure comprehensively, as well as how they localize on damage, is vital to examining diagnostic bias and the role it can play in the preservation and maintenance process. Eye tracking can be useful for understanding human perception and assessing the human-infrastructure interaction during the feature extraction process. Eye tracking data can accurately map where a human is looking and what they are focusing on based on metrics such as fixation, saccade, pupil dilation, and scan path. The present research highlights the use of eye tracking metrics for recognizing and inferring implicit human attention and intention while performing a structural inspection. These metrics will be used to learn the behavior of human eyes and how detection tasks can change a person’s overall behavior. A preliminary damage detection study has been carried out to analyze key features that are important for understanding human-infrastructure interaction during damage assessment. These eye tracking features will lay the foundation for predicting human intent and for characterizing how an inspector examines historic structures for existing types of damage. In the future, the results of this work will be used to train a machine learning agent for autonomous and reactive decision making.

2021 ◽  
Vol 2120 (1) ◽  
pp. 012030
Author(s):  
J K Tan ◽  
W J Chew ◽  
S K Phang

Abstract The field of Human-Computer Interaction (HCI) has developed tremendously over the past decade. Smartphones and modern computers, which use touch, voice, and typing as means of input, are already the norm in society. To further increase the variety of interaction, the human eyes are a good candidate for another form of HCI. The information that the human eyes convey is extremely useful; hence, various methods and algorithms for eye gaze tracking have been implemented in multiple sectors. However, some eye-tracking methods require infrared rays to be projected into the eye of the user, which could potentially cause enzyme denaturation when the eye is subjected to those rays under extreme exposure. Therefore, to avoid the potential harm of infrared-based eye tracking, this paper proposes an image-based eye tracking system using the Viola-Jones algorithm and the Circular Hough Transform (CHT) algorithm. The proposed method uses visible light instead of infrared rays to control the mouse pointer using the eye gaze of the user. This research aims to implement the proposed algorithm so that people with hand disabilities can interact with computers using their eye gaze.
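The abstract above names the Circular Hough Transform as the circle-detection stage. As a rough illustration of the idea (not the paper's implementation, which also involves Viola-Jones face/eye localization), the following sketch runs a fixed-radius CHT over synthetic edge points: each edge point votes for every candidate center at the given radius, and the accumulator peak is taken as the pupil center. The image size, radius, and edge points are all made up for the example.

```python
# Illustrative fixed-radius Circular Hough Transform for pupil detection.
# A real pipeline would first localize the eye region (e.g. Viola-Jones)
# and derive edge points from an edge detector; here they are synthetic.

import math

def circular_hough(edge_points, radius, width, height):
    """Vote for circle centers given edge points and a known radius."""
    acc = [[0] * width for _ in range(height)]
    for (x, y) in edge_points:
        # Each edge point votes for all centers lying `radius` away from it.
        for t in range(360):
            a = int(round(x - radius * math.cos(math.radians(t))))
            b = int(round(y - radius * math.sin(math.radians(t))))
            if 0 <= a < width and 0 <= b < height:
                acc[b][a] += 1
    # The most-voted accumulator cell is the estimated circle center.
    best = max((acc[b][a], a, b) for b in range(height) for a in range(width))
    return best[1], best[2]

# Synthetic "pupil": edge points on a circle of radius 10 centered at (30, 25).
edges = [(30 + round(10 * math.cos(math.radians(t))),
          25 + round(10 * math.sin(math.radians(t)))) for t in range(0, 360, 5)]
center = circular_hough(edges, 10, 64, 48)
print(center)  # expected within a pixel of (30, 25)
```

In practice the radius is not known in advance, so the accumulator gains a third dimension over a range of plausible pupil radii; the fixed-radius version above keeps the voting logic visible.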


2017 ◽  
Vol 6 (2) ◽  
pp. 166-173
Author(s):  
Arsy Febrina Dewi ◽  
Fitri Arnia ◽  
Rusdha Muharar

Clothing is what humans use to cover the body; it includes dresses, pants, skirts, and other garments, and usually consists of one color or a combination of several colors. Color is one of the most important references humans use when determining or searching for clothing according to their wishes, and it is a feature well suited to human vision. Content-Based Image Retrieval (CBIR) is an image retrieval technique that indexes an image based on characteristics contained in the image, such as color, shape, and texture. CBIR can make searching easier because it helps group images based on their characteristics. In this case, CBIR is used to search for Muslim fashion based on color features. The colors used in this research are the MPEG-7 color descriptors: the Scalable Color Descriptor (SCD) and the Dominant Color Descriptor (DCD). The SCD captures the overall color proportions of the image, while the DCD captures the most dominant color in the image. For each image of Muslim women's clothing, the extraction process uses the SCD and the DCD. This study used 150 images of Muslim women's clothing as a dataset, consisting of red, blue, yellow, green, and brown classes, with 30 images per class. The similarity between image features is measured using the Euclidean distance. The study used human perception of clothing color: effectiveness was calculated for the SCD and DCD features against human subjective similarity judgments. Based on the effectiveness simulation, the DCD gives the system a higher value than the SCD.
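The retrieval step described above (descriptor extraction, then Euclidean-distance ranking) can be sketched in a few lines. The code below uses a coarse RGB histogram as a stand-in for the MPEG-7 SCD and the single most frequent quantized color as a stand-in for the DCD; both simplifications and all pixel data are assumptions for illustration, not the MPEG-7 definitions or the paper's dataset.

```python
# Illustrative CBIR ranking: histogram descriptor + Euclidean distance.
# The descriptors are simplified stand-ins for MPEG-7 SCD and DCD.

import math
from collections import Counter

def quantize(pixel, levels=4):
    """Map an 8-bit RGB pixel onto a coarse levels^3 color grid."""
    return tuple(c * levels // 256 for c in pixel)

def scd_like(pixels, levels=4):
    """Normalized histogram over the coarse RGB grid (SCD stand-in)."""
    hist = Counter(quantize(p, levels) for p in pixels)
    n = len(pixels)
    return {b: count / n for b, count in hist.items()}

def dcd_like(pixels, levels=4):
    """Most frequent quantized color (DCD stand-in)."""
    return Counter(quantize(p, levels) for p in pixels).most_common(1)[0][0]

def euclidean(h1, h2):
    """Euclidean distance between two sparse histograms."""
    bins = set(h1) | set(h2)
    return math.sqrt(sum((h1.get(b, 0) - h2.get(b, 0)) ** 2 for b in bins))

# Made-up pixel lists: a mostly-red query "garment" and two candidates.
query = [(220, 30, 30)] * 90 + [(200, 200, 200)] * 10
red_img = [(240, 20, 20)] * 80 + [(180, 180, 180)] * 20
blue_img = [(30, 30, 220)] * 100

dq = euclidean(scd_like(query), scd_like(red_img))
db = euclidean(scd_like(query), scd_like(blue_img))
print(dq < db)          # True: the red image ranks closer to the red query
print(dcd_like(query))  # dominant quantized color of the query
```

Ranking the whole dataset is then just sorting candidate images by their distance to the query descriptor.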


Heritage ◽  
2019 ◽  
Vol 2 (2) ◽  
pp. 1160-1165 ◽  
Author(s):  
Alexey Tikhonov

One of the most important aspects of a long-term digital-image preservation strategy is maintaining data fixity, i.e., assuring the integrity and authenticity of original data. This article aims to highlight the limitations of the approaches used to maintain the fixity of digital images in the digital preservation process, to offer perceptual hashing as a way to alleviate some of those limitations, and to discuss some non-technical implications of the described problems. This paper is exploratory, and while it includes a simple experiment description, it only outlines the problem and a testing environment for a possible solution that could be elaborated on in further research. The most commonly used fixity-maintaining techniques are immutability of data and file checksums/cryptographic hashes. On the other hand, planning for long-term preservation necessitates migrating data into new formats to maintain availability and sustainability, and the concept of the file itself should not be assumed to remain forever, which calls for other tools to ascertain the fixity of digital images. The problem goes beyond one that is exclusively technical: bitstream content is not ready for human perception, and the digital preservation strategy should include all the necessary technical steps to assure the availability of stored images to human eyes. This shifts the perspective on what should be considered the digital image in digital preservation. It is not the file, but a perceptible object, or, more specifically, instructions to create one. Therefore, it calls for additional tools to maintain fixity, such as perceptual hashing, transformation logging, and others.
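The contrast drawn above can be made concrete: a cryptographic digest changes completely after any byte-level change (such as format migration), while a perceptual hash of the rendered image stays the same. The sketch below uses a toy 8×8 grayscale "image" and a simple average hash; both are illustrative assumptions, not the tooling of any specific preservation system.

```python
# Checksum vs. perceptual hash under a minimal "migration" change.
# The 8x8 grayscale matrices and the average-hash scheme are toy examples.

import hashlib

def average_hash(gray):
    """One bit per pixel: is the pixel above or below the mean intensity?"""
    flat = [v for row in gray for v in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if v >= mean else 0 for v in flat)

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

# The "same" picture stored twice, with one value nudged by re-encoding.
original = [[r * c for c in range(8)] for r in range(8)]
migrated = [row[:] for row in original]
migrated[0][0] += 1  # a single-value change, invisible to a viewer

# The cryptographic digest changes completely...
d1 = hashlib.sha256(bytes(v for row in original for v in row)).hexdigest()
d2 = hashlib.sha256(bytes(v for row in migrated for v in row)).hexdigest()
print(d1 == d2)  # False

# ...while the perceptual hash of the rendered pixels is unchanged.
print(hamming(average_hash(original), average_hash(migrated)))  # 0
```

This is exactly why the article argues the checksum certifies the file, not the perceptible object: after a lossless format migration the bytes (and hence the digest) differ, but a perceptual hash computed on the decoded pixels can still attest that the image a human sees is the same.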


Author(s):  
Dominik Szajerman ◽  
Piotr Napieralski ◽  
Jean-Philippe Lecointe

Purpose Technological innovation has made it possible to examine how a film cues particular reactions on the part of the viewers. The purpose of this paper is to capture and interpret visual perception and attention by the simultaneous use of eye tracking and electroencephalography (EEG) technologies. Design/methodology/approach The authors have developed a method for the joint analysis of EEG and eye tracking. To achieve this goal, an algorithm was implemented to capture and interpret visual perception and attention using both technologies simultaneously. All parameters were measured as a function of the relationship between the tested signals, which in turn allowed a more accurate validation of hypotheses through appropriately selected calculations. Findings The results of this study revealed a coherence between EEG and eye tracking that is of particular relevance to human perception. Practical implications Eye tracking provides a powerful real-time measure of the viewer's region of interest, while EEG provides data on the viewer's emotional state while watching the movie. Originality/value The approach in this paper is distinct from similar studies because it integrates the eye tracking and EEG technologies, providing a method for building a fully functional video introspection system.
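The abstract does not spell out the joint-analysis calculation, so as a generic illustration only, the sketch below computes a Pearson correlation between a per-window fixation-duration series and an EEG band-power series sampled over the same time windows; the signal values and the choice of correlation as the relationship measure are assumptions, not the authors' method.

```python
# Generic illustration: relating an eye-tracking signal to an EEG signal
# over aligned time windows via Pearson correlation. All values are made up.

import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-window measurements while a viewer watches a film clip.
fixation_ms = [220, 310, 180, 400, 260, 350, 240, 390]   # mean fixation time
eeg_alpha = [0.42, 0.55, 0.38, 0.71, 0.47, 0.63, 0.45, 0.69]  # band power

r = pearson(fixation_ms, eeg_alpha)
print(round(r, 3))  # strongly positive for these made-up series
```

Any such relationship measure presupposes the two recordings are synchronized to a common clock and segmented into the same windows, which is the practical core of combining the two devices.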


Author(s):  
Burton B. Silver

Tissue from a non-functional kidney affected with chronic membranous glomerulosclerosis was removed at the time of transplantation. Recipient kidney tissue and donor kidney tissue were fixed simultaneously for electron microscopy. Primary fixation was in phosphate-buffered glutaraldehyde, followed by infiltration in 20% and then 40% glycerol. The tissues were frozen in liquid Freon and finally in liquid nitrogen. Fracturing and replication of the etched surface were carried out in a Denton freeze-etch device. The etched surface was coated with platinum followed by carbon. The replicas were cleaned in a 50% solution of sodium hypochlorite and mounted on 400-mesh copper grids. They were examined in a Siemens Elmiskop IA. The pictures suggested that the diseased kidney had heavy deposits of an unknown substance, which might account for its inoperative state at the time of surgery. Such deposits were not as apparent in light microscopy or with the standard fixation methods used for EM. This might have been due to some extraction process that removed such granular material in the dehydration steps.


2020 ◽  
Vol 63 (7) ◽  
pp. 2245-2254 ◽  
Author(s):  
Jianrong Wang ◽  
Yumeng Zhu ◽  
Yu Chen ◽  
Abdilbar Mamat ◽  
Mei Yu ◽  
...  

Purpose The primary purpose of this study was to explore the audiovisual speech perception strategies adopted by normal-hearing and deaf people in processing familiar and unfamiliar languages. Our primary hypothesis was that they would adopt different perception strategies due to different sensory experiences at an early age, limitations of the physical device, the developmental gap in language, and other factors. Method Thirty normal-hearing adults and 33 prelingually deaf adults participated in the study. They were asked to perform judgment and listening tasks while watching videos of a Uygur–Mandarin bilingual speaker in a familiar language (Standard Chinese) or an unfamiliar language (Modern Uygur) while their eye movements were recorded by eye-tracking technology. Results Task had a slight influence on the distribution of selective attention, whereas subject and language had significant influences. Specifically, the normal-hearing and the deaf participants mainly gazed at the speaker's eyes and mouth, respectively; moreover, while the normal-hearing participants had to stare longer at the speaker's mouth when confronted with the unfamiliar language, Modern Uygur, the deaf participants did not change their attention allocation pattern when perceiving the two languages. Conclusions Normal-hearing and deaf adults adopt different audiovisual speech perception strategies: normal-hearing adults mainly look at the eyes, and deaf adults mainly look at the mouth. Additionally, language and task can also modulate the speech perception strategy.


Author(s):  
Pirita Pyykkönen ◽  
Juhani Järvikivi

A visual world eye-tracking study investigated the activation and persistence of implicit causality information in spoken language comprehension. We showed that people infer the implicit causality of verbs as soon as they encounter such verbs in discourse, as predicted by proponents of the immediate focusing account ( Greene & McKoon, 1995 ; Koornneef & Van Berkum, 2006 ; Van Berkum, Koornneef, Otten, & Nieuwland, 2007 ). Interestingly, we observed activation of implicit causality information even before people encountered the causal conjunction. However, while implicit causality information was persistent as the discourse unfolded, it did not have a privileged role as a focusing cue immediately at the ambiguous pronoun when people were resolving its antecedent. Instead, our study indicated that implicit causality does not affect all referents to the same extent; rather, it interacts with other cues in the discourse, especially when one of the referents is already prominently in focus.


2017 ◽  
Vol 131 (1) ◽  
pp. 19-29 ◽  
Author(s):  
Marianne T. E. Heberlein ◽  
Dennis C. Turner ◽  
Marta B. Manser
