Image Transformation: Inductive Transfer between Multiple Tasks Having Multiple Outputs

Author(s):  
Daniel L. Silver ◽  
Liangliang Tu
2021 ◽  
pp. 174702182110087
Author(s):  
Lauren Aulet ◽  
Sami R Yousif ◽  
Stella Lourenco

Multiple tasks have been used to demonstrate the relation between numbers and space. The classic interpretation of these directional spatial-numerical associations (d-SNAs) is that they are the product of a mental number line (MNL), in which numerical magnitude is intrinsically associated with spatial position. The alternative account is that d-SNAs reflect task demands, such as explicit numerical judgments and/or categorical responses. In the novel ‘Where was The Number?’ task, no explicit numerical judgments were made. Participants were simply required to reproduce the location of a numeral within a rectangular space. Using a between-subjects design, we found that numbers, but not letters, biased participants’ responses along the horizontal dimension, such that larger numbers were placed more rightward than smaller numbers, even when participants completed a concurrent verbal working memory task. These findings are consistent with the MNL account, suggesting that numbers specifically are inherently left-to-right oriented in Western participants.
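A minimal sketch of how such a horizontal placement bias could be quantified (not the authors' analysis code): regress the placement error of each reproduced position on numeral magnitude; a reliably positive slope means larger numbers drift rightward. All variable names and data below are hypothetical.

```python
# Hypothetical illustration: test for a rightward placement bias that grows with numeral magnitude.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)

numerals = np.repeat(np.arange(1, 10), 20)            # presented digits 1-9, 20 trials each
true_x = rng.uniform(0.1, 0.9, numerals.size)         # actual horizontal positions (0 = left edge)
# Simulated reproductions: true position plus noise plus a small magnitude-driven drift.
reproduced_x = true_x + 0.01 * (numerals - 5) + rng.normal(0, 0.03, numerals.size)

# Regress the placement error on numeral magnitude; a positive slope is a d-SNA-like bias.
error = reproduced_x - true_x
fit = linregress(numerals, error)
print(f"slope = {fit.slope:.4f}, p = {fit.pvalue:.3g}")
```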


Symmetry ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 717
Author(s):  
Mariia Nazarkevych ◽  
Natalia Kryvinska ◽  
Yaroslav Voznyi

This article presents a new method of image filtering based on a new kind of image processing transformation, the wavelet-Ateb–Gabor transformation, which provides a wider basis than the Gabor functions. Ateb functions are symmetric functions. The developed type of filtering makes it possible to perform image transformation and to obtain better biometric image recognition results than traditional filters allow; these results are possible because the curves of the developed functions can be constructed in various forms and sizes. Further, the wavelet transformation of Gabor filtering is investigated, and the time spent by the system on the operation is evaluated. The filtering is applied to images taken from NIST Special Database 302, which is publicly available. The reliability of the proposed wavelet-Ateb–Gabor filtering is demonstrated by calculating and comparing the peak signal-to-noise ratio (PSNR) and mean square error (MSE) between two biometric images, one filtered by the developed method and the other by the Gabor filter. The time characteristics of this filtering process are studied as well.
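As an illustration of the evaluation described above, the sketch below compares a Gabor-filtered image against a reference using PSNR and MSE with scikit-image. It uses only the standard Gabor filter, since the Ateb–Gabor variant is not available in common libraries; the file name and filter parameters are placeholder assumptions.

```python
# Illustrative PSNR/MSE comparison of a Gabor-filtered fingerprint image (standard Gabor only;
# the wavelet-Ateb-Gabor filter from the article is not part of scikit-image).
import numpy as np
from skimage import io, img_as_float
from skimage.filters import gabor
from skimage.metrics import peak_signal_noise_ratio, mean_squared_error

original = img_as_float(io.imread("fingerprint.png", as_gray=True))  # placeholder file name

# Real part of the Gabor response; frequency/orientation are example values, not tuned.
filtered_real, _ = gabor(original, frequency=0.1, theta=0.0)

# Rescale the response to [0, 1] so the metrics are computed on comparable ranges.
filtered = (filtered_real - filtered_real.min()) / (np.ptp(filtered_real) + 1e-12)

print("MSE :", mean_squared_error(original, filtered))
print("PSNR:", peak_signal_noise_ratio(original, filtered, data_range=1.0))
```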


2021 ◽  
Vol 11 (1) ◽  
pp. 363
Author(s):  
Juan Jesús Roldán-Gómez ◽  
Eduardo González-Gironda ◽  
Antonio Barrientos

Forest firefighting missions encompass multiple tasks related to prevention, surveillance, and extinguishing. This work presents a complete survey of firefighters on the current problems in their work and the potential technological solutions. Additionally, it reviews the efforts made by academia and industry to apply different types of robots to firefighting missions. Finally, all this information is used to propose a concept of operation for the comprehensive application of drone swarms in firefighting. The proposed system is a fleet of quadcopters that individually can only visit waypoints and operate payloads, but collectively can perform surveillance, mapping, monitoring, and other tasks. Three operator roles are defined, each with different access to information and functions in the mission: mission commander, team leaders, and team members. These operators use virtual and augmented reality interfaces to intuitively obtain information about the scenario and, in the case of the mission commander, to control the drone swarm.
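As a rough illustration of this concept of operation (not the authors' implementation), the sketch below models quadcopters that individually only accept waypoint commands, while a mission-commander-level planner decomposes a surveillance area into per-drone waypoint routes; the names, round-robin allocation, and grid are hypothetical.

```python
# Hypothetical sketch: a survey area decomposed into waypoints and assigned round-robin
# to quadcopters that, individually, only know how to fly to waypoints.
from dataclasses import dataclass, field
from typing import List, Tuple

Waypoint = Tuple[float, float, float]  # x, y, altitude in metres

@dataclass
class Quadcopter:
    drone_id: str
    route: List[Waypoint] = field(default_factory=list)

    def assign(self, wp: Waypoint) -> None:
        self.route.append(wp)

def plan_surveillance(drones: List[Quadcopter], area: List[Waypoint]) -> None:
    """Mission-commander role: split the area's waypoints round-robin across the fleet."""
    for i, wp in enumerate(area):
        drones[i % len(drones)].assign(wp)

fleet = [Quadcopter(f"uav-{k}") for k in range(3)]
grid = [(x * 50.0, y * 50.0, 30.0) for x in range(4) for y in range(4)]  # 4x4 survey grid
plan_surveillance(fleet, grid)
for d in fleet:
    print(d.drone_id, len(d.route), "waypoints")
```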


2021 ◽  
pp. 147715352110026
Author(s):  
Y Mao ◽  
S Fotios

Obstacle detection and facial emotion recognition are two critical visual tasks for pedestrians. In previous studies, the effect of changes in lighting was tested on these as individual tasks, with the task to be performed next in a sequence known in advance. In natural situations, a pedestrian must attend to multiple tasks, perhaps simultaneously, or at least does not know which of several possible tasks will next require their attention. This multi-tasking might impair performance on any one task and affect the evaluation of optimal lighting conditions. In two experiments, obstacle detection and facial emotion recognition tasks were performed in parallel under different illuminances. Comparison of these results with previous studies, in which the same tasks were performed individually, suggests that multi-tasking impaired performance on the peripheral detection task but not on the on-axis facial emotion recognition task.


2021 ◽  
Vol 10 (1) ◽  
Author(s):  
Xinyang Li ◽  
Guoxun Zhang ◽  
Hui Qiao ◽  
Feng Bao ◽  
Yue Deng ◽  
...  

The development of deep learning and open access to a substantial collection of imaging data together provide a potential solution for computational image transformation, which is gradually changing the landscape of optical imaging and biomedical research. However, current implementations of deep learning usually operate in a supervised manner, and their reliance on laborious and error-prone data annotation procedures remains a barrier to more general applicability. Here, we propose an unsupervised image transformation to facilitate the utilization of deep learning for optical microscopy, even in some cases in which supervised models cannot be applied. Through the introduction of a saliency constraint, the unsupervised model, named Unsupervised content-preserving Transformation for Optical Microscopy (UTOM), can learn the mapping between two image domains without requiring paired training data while avoiding distortions of the image content. UTOM shows promising performance in a wide range of biomedical image transformation tasks, including in silico histological staining, fluorescence image restoration, and virtual fluorescence labeling. Quantitative evaluations reveal that UTOM achieves stable and high-fidelity image transformations across different imaging conditions and modalities. We anticipate that our framework will encourage a paradigm shift in training neural networks and enable more applications of artificial intelligence in biomedical imaging.
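To make the saliency-constraint idea concrete, here is a minimal PyTorch-style sketch (not the released UTOM code): a CycleGAN-like generator loss augmented with a term that keeps a soft-thresholded saliency mask unchanged between the input and the translated image. The threshold, sharpness, and loss weights are illustrative assumptions.

```python
# Minimal sketch of a saliency-preserving term for an unsupervised (CycleGAN-style) image
# translator; illustrative only, not the released UTOM implementation.
import torch
import torch.nn.functional as F

def soft_saliency_mask(img: torch.Tensor, threshold: float = 0.5, sharpness: float = 50.0) -> torch.Tensor:
    """Differentiable surrogate for a binary, intensity-based saliency mask."""
    return torch.sigmoid(sharpness * (img - threshold))

def saliency_constraint_loss(x: torch.Tensor, y_fake: torch.Tensor) -> torch.Tensor:
    """Penalise changes in the saliency mask between the input and its translation."""
    return F.l1_loss(soft_saliency_mask(x), soft_saliency_mask(y_fake))

def generator_loss(x, y_fake, x_rec, disc_score, lambda_cyc=10.0, lambda_sal=5.0):
    adv = F.binary_cross_entropy_with_logits(disc_score, torch.ones_like(disc_score))
    cyc = F.l1_loss(x_rec, x)                      # cycle consistency: F(G(x)) ~ x
    sal = saliency_constraint_loss(x, y_fake)      # content-preserving saliency term
    return adv + lambda_cyc * cyc + lambda_sal * sal

# Toy usage with random tensors standing in for real generator/discriminator outputs.
x = torch.rand(1, 1, 64, 64)
y_fake, x_rec = torch.rand_like(x), torch.rand_like(x)
disc_score = torch.randn(1, 1)
print(generator_loss(x, y_fake, x_rec, disc_score).item())
```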

