image frame
Recently Published Documents


TOTAL DOCUMENTS: 66 (last five years: 29)
H-INDEX: 6 (last five years: 2)

2021 ◽  
Vol 1 (2) ◽  
pp. 235-243
Author(s):  
Jasmine Ulmer

In the process of bringing about the Anthropocene, humanity has become accustomed to taking up a considerable amount of space. This tendency can spill over into how we as humans take up space within our own photographs, too (such as selfies that fill the entirety of the image frame). By contrast, this minimalist photo essay offers alternative visual perspectives through posthuman photography. Alongside earth-tone photographs taken by the author in Belgium and the Netherlands, captions illustrate how we can refocus, rescale, and reframe everyday photographs that position (post)humanity within the contexts of our planet and the epoch in which we live. In photo essay form, text and images show us how we can decelerate and refocus the Anthropocenic gaze.


2021 ◽  
Vol 14 (1) ◽  
pp. 21
Author(s):  
Anna I. Christie ◽  
Andrew P. Colefax ◽  
Daniele Cagnazzi

Analysis of animal morphometrics can provide vital information regarding the population dynamics, structure, and body condition of cetaceans. Unmanned aerial vehicles (UAVs) have become the primary tool for collecting morphometric measurements on whales, but they have not yet been applied to free-ranging small dolphins. This study assesses the feasibility of obtaining reliable body morphometrics from Australian snubfin (Orcaella heinsohni) and humpback dolphins (Sousa sahulensis) using images collected from UAVs. Specifically, using a dolphin replica of known size, we tested the effect of UAV altitude and of the position of the animal within the image frame on the accuracy of length estimates. Using linear mixed models, we further assessed the precision of total length estimates for humpback and snubfin dolphins. The precision of length estimates on the replica increased by ~2% when images were sampled at 45–60 m rather than 15–30 m. The precision of total length estimates on dolphins, however, was significantly influenced only by the degree of arch and edge certainty. Overall, we obtained total length estimates with a precision of ~3%, consistent with published data. This study demonstrates the reliability of UAV-based images for obtaining morphometrics of small dolphin species such as snubfin and humpback dolphins.
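The conversion from a pixel measurement to a body length in such UAV surveys follows from the ground sampling distance set by flight altitude and camera geometry. A minimal sketch, assuming illustrative camera parameters (a typical 1-inch UAV sensor; not the study's actual equipment):

```python
def ground_sampling_distance(altitude_m, focal_length_mm, sensor_width_mm, image_width_px):
    """Metres at the sea surface covered by one image pixel."""
    return (altitude_m * sensor_width_mm) / (focal_length_mm * image_width_px)

def body_length_m(length_px, altitude_m, focal_length_mm=8.8,
                  sensor_width_mm=13.2, image_width_px=5472):
    """Convert a pixel measurement of a dolphin at the surface into metres.
    The default camera parameters are illustrative assumptions (a common
    1-inch-sensor UAV camera), not the configuration used in the study."""
    gsd = ground_sampling_distance(altitude_m, focal_length_mm,
                                   sensor_width_mm, image_width_px)
    return length_px * gsd

# Example: a 200-pixel snout-to-fluke measurement taken from 45 m altitude
print(round(body_length_m(200, altitude_m=45), 2), "m")
```

Because the ground sampling distance scales linearly with altitude, any error in the recorded altitude propagates proportionally into the length estimate.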


2021 ◽  
Author(s):  
Eduardo A. Gonzalez ◽  
Fabrizio Assis ◽  
Jonathan Chrispin ◽  
Muyinatu A. Lediju Bell

Author(s):  
Pisit NAKJAI ◽  
Tatpong KATANYUKUL

This article explores the transcription of a video recording of Thai Finger Spelling (TFS), a specific signing mode used in Thai sign language, into a corresponding Thai word. TFS handles 42 Thai alphabets and 20 vowels through multiple, complex signing schemes, which leads to many technical challenges uncommon in the spelling schemes of other sign languages. Our proposed system, Automatic Thai Finger Spelling Transcription (ATFS), processes a signing video in three stages: ALS marks video frames so that non-signing frames can easily be removed and frames associated with the same alphabet can be grouped; SR classifies a signing image frame to a sign label (or its equivalent); and SSR transcribes a series of signs into alphabets. ALS exploits the TFS practice of signing different alphabets at different locations, while SR and SSR employ well-adopted spatial and sequential models. ATFS achieves an Alphabet Error Rate (AER) of 0.256 (cf. 0.63 for the baseline method). Beyond ATFS itself, our findings disclose a benefit of coupling the image classification and sequence modeling stages by using a feature (penultimate) vector for label representation rather than a definitive label or one-hot coding. Our results also assert the necessity of a smoothing mechanism in ALS and reveal a benefit of our proposed WFS, which could lead to an improvement of over 15.88%. For TFS transcription, our work emphasizes the use of signing location to identify different alphabets, contrary to the common belief in exploiting signing time duration, which our data show to be ineffective.
HIGHLIGHTS
Prototype of Thai finger spelling transcription (transcribing a signing video to alphabets)
Utilization of signing location as a cue for identifying different alphabets
Disclosure of a benefit of coupling image classification and sequence modeling in signing transcription
Examination of various frame smoothing techniques and their contributions to the overall transcription performance
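The three-stage ALS/SR/SSR pipeline described above can be sketched structurally as follows. The models here are deliberately crude placeholders (a motion heuristic, a random projection, nearest-prototype decoding), intended only to show the data flow, including passing a penultimate feature vector from SR to SSR rather than a hard label; none of this reproduces the paper's actual networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def als_mark_frames(frames, motion_threshold=30.0):
    """ALS stand-in: discard frames with large inter-frame motion (transitions
    between signs) and group consecutive still frames as one candidate sign."""
    groups, current, prev = [], [], None
    for frame in frames:
        moving = prev is not None and np.abs(frame - prev).mean() > motion_threshold
        if moving:
            if current:
                groups.append(current)
                current = []
        else:
            current.append(frame)
        prev = frame
    if current:
        groups.append(current)
    return groups

def sr_feature(group, weights):
    """SR stand-in: return a penultimate feature vector for a frame group
    instead of a hard one-hot label, as the abstract recommends."""
    pooled = np.mean([g.mean(axis=0) for g in group], axis=0)  # crude pooling
    return weights @ pooled

def ssr_decode(features, prototypes, alphabet):
    """SSR stand-in: nearest-prototype decoding of the feature sequence into
    letters (a placeholder for a learned sequence model)."""
    out = []
    for f in features:
        out.append(alphabet[int(np.argmin(np.linalg.norm(prototypes - f, axis=1)))])
    return "".join(out)

# Toy run on random "frames" (64x64 grayscale), purely to show the data flow.
frames = [rng.random((64, 64)) * 255 for _ in range(30)]
weights = rng.standard_normal((16, 64))
alphabet = list("กขคง")
prototypes = rng.standard_normal((len(alphabet), 16))
groups = als_mark_frames(frames)
features = [sr_feature(g, weights) for g in groups]
print(ssr_decode(features, prototypes, alphabet))
```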


2021 ◽  
Author(s):  
Yijun Yao ◽  
Shaowu Liu ◽  
Marie-Pierre Planche ◽  
Sihao Deng ◽  
Hanlin Liao

In thermal spray processes, the characteristics of in-flight particles (velocity and temperature) have a significant effect on coating performance. Although many imaging systems and algorithms have been developed for identifying and tracking in-flight particles, most are limited in accuracy. One key to solving the tracking problem is an algorithm that can distinguish different particles in each image frame. As the study shows, once noise and interference are treated, particles are more readily distinguished from the background, leading to more accurate size and position measurements over time. This approach is demonstrated and the results are discussed.
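A common baseline for making particles stand out in each image frame is to denoise, threshold, and label connected bright regions, then record each region's centroid and area for the tracking stage. A minimal OpenCV sketch, assuming bright particles on a dark background; this is not the paper's algorithm:

```python
import cv2
import numpy as np

def detect_particles(frame_gray, min_area=3):
    """Return (centroid_x, centroid_y, area) for each bright blob in a frame.
    A median blur suppresses sensor noise before Otsu thresholding."""
    denoised = cv2.medianBlur(frame_gray, 3)
    _, binary = cv2.threshold(denoised, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
    particles = []
    for i in range(1, n):                        # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            x, y = centroids[i]
            particles.append((x, y, int(stats[i, cv2.CC_STAT_AREA])))
    return particles

# Synthetic example frame: dark background with two bright "particles".
frame = np.zeros((120, 160), dtype=np.uint8)
cv2.circle(frame, (40, 60), 2, 255, -1)
cv2.circle(frame, (100, 30), 3, 255, -1)
print(detect_particles(frame))
```

Repeating this per frame and matching detections across frames by proximity is what turns the per-frame detections into particle tracks.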


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Chunxia Duan

To build a multitarget motion tracking and detection application that works across the various specific scenes found in sports videos, the proposed method is tested in each of those scenes. In this paper, deep neural networks are applied to shadow suppression and accurate multitarget tracking in sports video to improve tracking performance. After the target frame selection is determined, the tracker uses an optical flow method to estimate the extent of the target's motion between frames. The detector scans each sports video image frame one by one, examining previously discovered and learned image subregions until it finds, at the current moment, the region most similar to the target being tracked. The preprocessed images are converted to grayscale, their histograms are normalized, and an appropriate height threshold is selected in combination with a region-growing function to reject motion shadows and establish the multitarget network model. The distance and direction of the target displacement are determined from frequency-domain and spatial-domain vectors, and a target action judgment mechanism is formed by decision learning. Finally, compared with other shadow rejection and precision tracking algorithms, the proposed algorithm achieves clear advantages in both accuracy and time consumption.
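The inter-frame motion estimate mentioned above is the kind of step commonly implemented with optical flow. A minimal sketch using OpenCV's pyramidal Lucas-Kanade tracker on a synthetic frame pair, shown only to illustrate the idea rather than the paper's deep-network pipeline:

```python
import cv2
import numpy as np

def track_points(prev_gray, next_gray, max_corners=200):
    """Estimate where strong corner points in the previous frame moved to in
    the next frame using pyramidal Lucas-Kanade optical flow."""
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=max_corners,
                                 qualityLevel=0.01, minDistance=7)
    if p0 is None:
        return np.empty((0, 2)), np.empty((0, 2))
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, p0, None)
    good = status.ravel() == 1
    return p0[good].reshape(-1, 2), p1[good].reshape(-1, 2)

# Synthetic frame pair: a bright square shifted by 5 px in x and 3 px in y.
prev_frame = np.zeros((100, 100), dtype=np.uint8)
prev_frame[20:40, 20:40] = 255
next_frame = np.roll(prev_frame, shift=(3, 5), axis=(0, 1))
old_pts, new_pts = track_points(prev_frame, next_frame)
print("mean displacement (x, y):", (new_pts - old_pts).mean(axis=0))
```

The mean displacement of the tracked points gives the distance and direction of the target's motion between the two frames.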


2021 ◽  
Vol 45 (3) ◽  
pp. 418-426
Author(s):  
M. Toscani ◽  
S. Martínez

The SUPPOSe enhanced deconvolution algorithm relies on the assumption that the image source can be described by an incoherent superposition of virtual point sources of equal intensity, and on finding the number and positions of those virtual sources. In this work we describe recent advances in the implementation of the method that gain resolution and remove artifacts caused by fluorescent molecules close to the image frame boundary. The method was modified by removing the previously used invariant, the product of the flux of each virtual source and the number of virtual sources, and replacing it with a new invariant, the total flux within the frame. This allows virtual sources located outside the frame to contribute to the signal inside it.
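A small numerical sketch of the forward model and the new flux constraint may help: the image is modelled as a sum of equal-intensity virtual sources blurred by the point spread function (a Gaussian PSF is assumed here), and the per-source flux is fixed so that the total flux inside the frame matches a target value, which lets sources sitting outside the frame still contribute. The optimizer that actually fits the source positions is not shown.

```python
import numpy as np

def virtual_source_image(positions, flux_per_source, shape, sigma=1.5):
    """Forward model: incoherent superposition of equal-intensity virtual
    point sources, each blurred by a Gaussian PSF of width sigma (pixels)."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    img = np.zeros(shape)
    for (x, y) in positions:
        img += flux_per_source * np.exp(-((xx - x) ** 2 + (yy - y) ** 2)
                                        / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)
    return img

def flux_from_frame_total(positions, target_frame_flux, shape, sigma=1.5):
    """New invariant described above: fix the total flux falling inside the
    frame rather than (number of sources) x (flux per source)."""
    unit = virtual_source_image(positions, 1.0, shape, sigma)
    return target_frame_flux / unit.sum()

# Two virtual sources, one outside the 32x32 frame but leaking flux into it.
positions = [(10.0, 12.0), (-2.0, 5.0)]
shape = (32, 32)
flux = flux_from_frame_total(positions, target_frame_flux=1000.0, shape=shape)
model = virtual_source_image(positions, flux, shape)
print(round(model.sum(), 1))   # total flux inside the frame equals 1000.0
```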


2021 ◽  
Vol 13 (11) ◽  
pp. 2158
Author(s):  
Dong-Seok Lee ◽  
Jae-Hun Jo

The mean radiant temperature (MRT) is an indicator for evaluating the radiant heat environment near occupants and is determined by the radiant heat exchange between the occupants and their surroundings. To control various heating and cooling systems according to the occupants' thermal comfort, it is essential to consider MRTs in the real-time evaluation of the thermal environment. This study proposes a pan-tilt infrared (IR) scanning method to estimate the MRTs at multiple occupant locations in real buildings. The angle factor was calculated by defining specific classification criteria for dividing the entire indoor surface into sub-surfaces. An IR camera coupled to a pan-tilt motor stores data pairs of IR thermal image frames (IR image frames) and pan-tilt angles, so that each surface area captured by the IR camera carries its direction information. A measurement method for the mean surface temperature using the pan-tilt IR system is presented, and the pan-tilt IR system hardware and MRT monitoring software were developed. An experiment was performed to verify the applicability of the proposed pan-tilt IR scanning method. A comparison of surface temperatures measured with a contact thermometer and with the proposed IR system showed that the contact thermometer can yield inaccurate measurements on surfaces with a non-uniform temperature distribution: the difference in surface temperatures reached up to 15 °C and, accordingly, the MRT distributions differed by up to 6 °C within the same space. The proposed IR scanning method showed good applicability in various respects. This paper reports that the MRT has a significant effect on the occupants' thermal comfort and suggests considering MRTs in the real-time evaluation of the thermal environment to control heating and cooling systems appropriately.
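The MRT at an occupant location follows from the angle factors between the occupant and each sub-surface and the measured mean surface temperatures, via MRT^4 = Σ F_i T_i^4 with temperatures in kelvin. A minimal sketch with made-up angle factors and surface temperatures (not values from the study):

```python
import numpy as np

def mean_radiant_temperature(angle_factors, surface_temps_c):
    """MRT in deg C from angle factors and mean surface temperatures in deg C,
    using MRT^4 = sum_i F_i * T_i^4 with temperatures in kelvin.
    The angle factors from the occupant to all sub-surfaces should sum to 1."""
    F = np.asarray(angle_factors, dtype=float)
    T = np.asarray(surface_temps_c, dtype=float) + 273.15
    mrt_k = (np.sum(F * T ** 4) / F.sum()) ** 0.25   # normalise if F sums to ~1
    return mrt_k - 273.15

# Illustrative sub-surfaces: a warm window wall, three walls, floor, ceiling.
angle_factors   = [0.18, 0.20, 0.20, 0.12, 0.15, 0.15]
surface_temps_c = [31.0, 22.5, 22.0, 21.5, 23.0, 24.0]
print(round(mean_radiant_temperature(angle_factors, surface_temps_c), 2), "degC")
```

A 6 °C spread in MRT within one space, as reported above, comes directly from such differences in the surface temperatures weighted by their angle factors.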


2021 ◽  
Vol 11 (2) ◽  
pp. 6986-6992
Author(s):  
L. Poomhiran ◽  
P. Meesad ◽  
S. Nuanmeesri

This paper proposes a lip reading method based on convolutional neural networks applied to a Concatenated Three Sequence Keyframe Image (C3-SKI), consisting of (a) the Start-Lip Image (SLI), (b) the Middle-Lip Image (MLI), and (c) the End-Lip Image (ELI), which marks the end of the pronunciation of the syllable. The lip area of each image frame was reduced to 32×32 pixels, and three keyframes concatenated together, with a dimension of 96×32 pixels, were used to represent one syllable for visual speech recognition. The three concatenated keyframes representing a syllable are selected based on the relative maxima and minima of the open lip's width and height. The evaluation of the model's effectiveness showed accuracy, validation accuracy, loss, and validation loss values of 95.06%, 86.03%, 4.61%, and 9.04%, respectively, on the THDigits dataset. The C3-SKI technique was also applied to the AVDigits dataset, reaching 85.62% accuracy. In conclusion, the C3-SKI technique can be applied to perform lip reading recognition.
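The data preparation step, building the 96×32 C3-SKI from three 32×32 keyframes, can be sketched as below. The keyframe selection here uses a simple maximum of mouth opening as a stand-in for the authors' exact relative-extrema criteria:

```python
import numpy as np
import cv2

def c3_ski(lip_frames, mouth_openness):
    """Build a Concatenated Three Sequence Keyframe Image (96x32) from the
    start, middle, and end keyframes of one syllable.
    lip_frames: list of grayscale lip crops; mouth_openness: per-frame lip
    opening (width or height) used to pick the middle keyframe."""
    start = 0
    end = len(lip_frames) - 1
    middle = int(np.argmax(mouth_openness))      # widest opening as middle keyframe
    keys = [lip_frames[i] for i in (start, middle, end)]
    resized = [cv2.resize(k, (32, 32)) for k in keys]
    return np.hstack(resized)                    # numpy shape (32, 96): 96 wide, 32 tall

# Toy syllable: 12 random lip crops with a peak in mouth opening at frame 6.
frames = [np.random.randint(0, 256, (48, 64), dtype=np.uint8) for _ in range(12)]
openness = [2, 3, 5, 8, 11, 14, 16, 13, 9, 6, 4, 2]
print(c3_ski(frames, openness).shape)            # (32, 96)
```

The resulting 96×32 image is what the convolutional network classifies as one syllable.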


Author(s):  
Igor Abrahão Arantes ◽  
Bernardo Giorni Abijaude Bracarense ◽  
Samuel Marinho Leidner ◽  
Humberto de Campos Rezende ◽  
João Victor Boechat Gomide

This paper presents the workflow for developing an animated short film, Fio, using real-time rendering. Real-time rendering is a recently commercially available alternative that has drastically reduced rendering time.

Rendering means graphically processing an image, or layers of images, so that it can be displayed immediately after processing. It is a critical step in computer graphics and determines the quality of the final image and the time required for it to be available for display. Previously, rendering each image frame required many minutes on multiple, costly processors. Currently, one render per frame is achieved in less than a second, with an equivalent look and a smaller amount of processing.

To render in real time, the graphics engine Unreal Engine, a pioneer in this type of solution, was used; for this type of use, rendering scenes, the software is free. Software of relevance in the creative industry was chosen as well: Autodesk Maya for animation and modeling, ZBrush for sculpting, Substance Painter and Photoshop for composing textures, Audition for soundtracks, and Premiere for editing. The engine itself provides tools for lighting and composition. Other experimental alternatives were tested in the workflow, such as hand-drawn textures made with pastel chalk.

This paper introduces each step of the workflow for producing an animated short film, ending with the preparation of the scene adjustments to be imported by the Unreal engine. The first minute of the animation is available at https://vimeo.com/503536788.

