Sartre's Nature: Animal Images in La Nausée

1977 ◽  
Vol 31 (2) ◽  
pp. 107-125
Author(s):  
Catharine Savage Brosman

2019 ◽  
Author(s):  
Sushrut Thorat

A mediolateral gradation in neural responses to images spanning animals to artificial objects is observed in the ventral temporal cortex (VTC). Which information streams drive this organisation is an ongoing debate. Recently, Proklova et al. (2016) dissociated the visual shape and category ("animacy") dimensions in a set of stimuli using a behavioural measure of visual feature information. fMRI responses revealed a neural cluster (the extra-visual animacy cluster, xVAC) that encoded category information unexplained by visual feature information, suggesting extra-visual contributions to the organisation in the ventral visual stream. We reassess these findings using Convolutional Neural Networks (CNNs) as models of the ventral visual stream. The visual features developed in the CNN layers can categorise the shape-matched stimuli from Proklova et al. (2016), unlike the behavioural measures used in that study. The category organisations in xVAC and VTC are explained to a large degree by the CNN visual feature differences, casting doubt on the suggestion that visual feature differences cannot account for the animacy organisation. To inform the debate further, we designed a set of animal-image stimuli that dissociates the animacy organisation driven by CNN visual features from the degree of familiarity and agency (thoughtfulness and feelings). Preliminary results from a new fMRI experiment designed to assess the contribution of these non-visual features are presented.
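As an illustration of this kind of analysis, the sketch below shows how animacy decoding from CNN visual features might be set up: activations from a pretrained VGG-16 convolutional stack are extracted for each stimulus, and a cross-validated linear classifier reads out the animate/inanimate label. The choice of network, layer, and stimulus folder layout are assumptions for illustration, not the authors' exact pipeline.

```python
# Minimal sketch (not the authors' code): test whether features from a
# pretrained CNN layer can linearly separate animate vs. inanimate stimuli.
# The folder layout and the choice of VGG-16 conv features are assumptions.
from pathlib import Path

import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

preprocess = T.Compose([
    T.Resize(224), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()
feature_layer = vgg.features  # convolutional stack; other layers are possible

def layer_features(img_path):
    """Return flattened activations of the chosen layer for one image."""
    img = preprocess(Image.open(img_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return feature_layer(img).flatten().numpy()

# Hypothetical layout: stimuli/animate/*.png and stimuli/inanimate/*.png
X, y = [], []
for label, folder in enumerate(["inanimate", "animate"]):
    for f in sorted(Path("stimuli", folder).glob("*.png")):
        X.append(layer_features(f))
        y.append(label)
X, y = np.stack(X), np.array(y)

# Cross-validated linear readout of animacy from the CNN visual features.
scores = cross_val_score(LinearSVC(max_iter=10000), X, y, cv=5)
print(f"animacy decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```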


Computation ◽  
2021 ◽  
Vol 9 (3) ◽  
pp. 35
Author(s):  
Hind R. Mohammed ◽  
Zahir M. Hussain

Accurate, fast, and automatic detection and classification of animal images is challenging but much needed for many real-life applications. This paper presents a hybrid model of Mamdani type-2 fuzzy rules and convolutional neural networks (CNNs) applied to identify and distinguish various animals using different datasets consisting of about 27,307 images. The proposed system uses fuzzy rules to detect objects in an image and then applies the CNN model to predict the object's category. The CNN model was trained and tested on more than 21,846 animal images. The experimental results show that the proposed method offers high speed and efficiency, which could be a prominent aspect in designing image-processing systems based on type-2 fuzzy-rule characterization for identifying fixed and moving images. The proposed fuzzy method obtained an accuracy of 98% for identifying and recognizing moving objects, with a mean square error of 0.1183464, lower than in other studies. It also achieved a very high rate of correctly predicting malicious objects, with a recall of 0.98121 and a precision of 1. The test accuracy was evaluated using the F1 score, which reached a high value of 0.99052.
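The sketch below illustrates the hybrid idea in simplified form: an interval type-2 fuzzy rule scores whether an image likely contains an object, and a small CNN then classifies it. The membership functions, the edge-density feature, the acceptance threshold, and the CNN architecture are illustrative assumptions rather than the paper's exact design.

```python
# Minimal sketch of a fuzzy-detection + CNN-classification hybrid.
import numpy as np
import torch
import torch.nn as nn

def triangular(x, a, b, c):
    """Type-1 triangular membership value of x with parameters (a, b, c)."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9),
                                 (c - x) / (c - b + 1e-9)), 0.0)

def it2_membership(x, a, b, c, blur=0.1):
    """Interval type-2 membership: lower/upper bounds from a blurred footprint."""
    upper = triangular(x, a - blur, b, c + blur)
    lower = triangular(x, a + blur, b, c - blur)
    return lower, upper

def fuzzy_object_score(gray_img):
    """Rule: IF edge density is 'high' THEN the image contains an object.
    Defuzzified here as the mean of the lower and upper firing strengths."""
    gy, gx = np.gradient(gray_img.astype(float))
    edge_density = np.mean(np.hypot(gx, gy)) / 255.0  # rough feature in [0, ~1]
    lower, upper = it2_membership(edge_density, 0.05, 0.25, 0.6)
    return 0.5 * (lower + upper)

class SmallAnimalCNN(nn.Module):
    """Toy CNN classifier applied only to images the fuzzy rule accepts."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(32 * 16 * 16, n_classes),
        )

    def forward(self, x):
        return self.net(x)

# Usage sketch: detect with the fuzzy rule, then classify with the CNN.
rgb = np.random.rand(64, 64, 3)                       # stand-in for a real image
if fuzzy_object_score(rgb.mean(axis=2) * 255) > 0.5:  # detection stage
    model = SmallAnimalCNN()
    x = torch.from_numpy(rgb).permute(2, 0, 1).unsqueeze(0).float()
    print(model(x).argmax(dim=1))                     # predicted class index
```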


Author(s):  
Weina Zhu ◽  
Jan Drewes ◽  
Nicholas A. Peatfield ◽  
David Melcher

2020 ◽  
Vol 4 (6) ◽  
pp. 119-126
Author(s):  
Omongul Kenjabayevna Khalibekova ◽  

Background. The relevance of this study stems from the insufficient development of many issues related to the human factor in language. Developing this line of inquiry appears promising for identifying the national and cultural characteristics of English and Uzbek phraseological units, which allows us to expand our vocabulary and thus enrich our speech. This article examines the semantic-pragmatic and connotative-evaluative relations of phraseological units based on animal images in the English and Uzbek languages. Zoonyms imply textual roles within a specific discourse and reflect differences in values, stereotypes, and behaviour patterns across national cultures. Methods. In studying nominations of a human with a zoonym component, we used descriptive-analytical, comparative, and linguocultural methods and techniques. We used the contrastive method to identify phraseological units based on animal images in the English and Uzbek languages.


2016 ◽  
Vol 3 (1) ◽  
pp. 12-26 ◽  
Author(s):  
Malgorzata Z. Pajak ◽  
David Volgyes ◽  
Sally L. Pimlott ◽  
Carlos C. Salvador ◽  
Antonio S. Asensi ◽  
...  

Goals: This paper presents a performance review of the dual-ring Positron Emission Tomography (PET) scanner that forms part of the Bruker Albira, a multi-modal small-animal imaging platform. Each ring of the Albira PET contains eight detectors arranged in an octagon, and each detector is built from a single continuous lutetium-yttrium oxyorthosilicate crystal and a multi-anode photomultiplier tube. In the two-ring configuration, the scanner covers 94.4 mm in the axial direction and 80 × 80 mm in the trans-axial direction, which is sufficient to acquire images of small animals (e.g. mice) without the need to move the animal bed during the scan. Methods: All measurements and the majority of the data processing were performed according to the NEMA NU 4-2008 standard, with one exception: owing to the scanner geometry, the spatial resolution test was reconstructed using an iterative algorithm instead of the analytical one. The main performance characteristics were compared with those of the PET sub-systems of other tri-modal small-animal scanners. Results: The measured spatial resolution at the centre of the axial field of view in the radial, tangential, and axial directions was 1.72, 1.70, and 2.45 mm, respectively. The scatter fraction was 9.8% for the mouse-like phantom and 21.8% for the rat-like phantom. The maximum absolute sensitivity was 5.30%. Finally, the recovery coefficients for the 5, 4, 3, 2, and 1 mm diameter rods in the image-quality phantom were 0.90, 0.77, 0.66, 0.30, and 0.05, respectively. Conclusion: The Bruker Albira is a versatile multi-modal small-animal device that can be used for a variety of studies. Overall, the PET sub-system provides good spatial resolution coupled with better-than-average sensitivity and the ability to produce good-quality animal images when administering low activities.
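For readers unfamiliar with the reported figures, the sketch below shows how two of the NEMA NU 4-2008 summary quantities are commonly defined: the recovery coefficient as the ratio of the measured rod activity concentration to the uniform-region concentration, and the scatter fraction as the proportion of scattered coincidences. The helper functions are illustrative; only the printed recovery coefficients and the 9.8% scatter fraction come from the abstract.

```python
# Minimal sketch (not the NEMA analysis code): how recovery coefficients and
# the scatter fraction are typically summarised.
def recovery_coefficient(rod_concentration, uniform_region_concentration):
    """RC = mean activity concentration in a hot rod divided by the
    concentration in the uniform region of the image-quality phantom
    (ideal value: 1.0)."""
    return rod_concentration / uniform_region_concentration

def scatter_fraction(scattered_counts, true_counts):
    """SF = scattered / (scattered + true) coincidences."""
    return scattered_counts / (scattered_counts + true_counts)

# Recovery coefficients per rod diameter (mm) as reported in the abstract.
reported_rc = {5: 0.90, 4: 0.77, 3: 0.66, 2: 0.30, 1: 0.05}
for diameter, rc in reported_rc.items():
    print(f"{diameter} mm rod: RC = {rc:.2f}")

# Example: a 9.8% scatter fraction for the mouse-like phantom corresponds to
# scattered/(scattered + true) = 0.098, e.g. 98 scattered per 902 true counts.
print(f"SF = {scatter_fraction(98, 902):.3f}")
```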

