inconsistent objects
Recently Published Documents


TOTAL DOCUMENTS: 19 (FIVE YEARS: 8)

H-INDEX: 4 (FIVE YEARS: 1)

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Tim Lauer ◽  
Filipp Schmidt ◽  
Melissa L.-H. Võ

While scene context is known to facilitate object recognition, little is known about which contextual “ingredients” are at the heart of this phenomenon. Here, we address the question of whether the materials that frequently occur in scenes (e.g., tiles in a bathroom) associated with specific objects (e.g., a perfume) are relevant for the processing of that object. To this end, we presented photographs of consistent and inconsistent objects (e.g., perfume vs. pinecone) superimposed on scenes (e.g., a bathroom) and close-ups of materials (e.g., tiles). In Experiment 1, consistent objects on scenes were named more accurately than inconsistent ones, while there was only a marginal consistency effect for objects on materials. Also, we did not find any consistency effect for scrambled materials that served as a color control condition. In Experiment 2, we recorded event-related potentials and found N300/N400 responses—markers of semantic violations—for objects on inconsistent relative to consistent scenes. Critically, objects on materials triggered N300/N400 responses of similar magnitudes. Our findings show that contextual materials indeed affect object processing—even in the absence of spatial scene structure and object content—suggesting that material is one of the contextual “ingredients” driving scene context effects.
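
The N300/N400 comparison described above reduces, at the analysis stage, to averaging epochs per consistency condition and contrasting the resulting ERPs in a mid-latency window. A minimal sketch using MNE-Python is shown below; the epochs file name, condition labels, and the 250-500 ms window are illustrative assumptions, not the authors' actual pipeline.

```python
# Sketch: condition-wise ERP contrast for a scene-object consistency design.
# Assumes preprocessed epochs with "consistent"/"inconsistent" event labels
# stored in a hypothetical file; values and window are placeholders.
import mne

epochs = mne.read_epochs("object_context-epo.fif")  # hypothetical preprocessed data

# One ERP per condition, averaged over trials.
evoked_consistent = epochs["consistent"].average()
evoked_inconsistent = epochs["inconsistent"].average()

# Difference wave (inconsistent minus consistent); N300/N400 effects are
# typically quantified in roughly the 250-500 ms post-stimulus window.
difference = mne.combine_evoked([evoked_inconsistent, evoked_consistent],
                                weights=[1, -1])
mean_amp = difference.copy().crop(tmin=0.25, tmax=0.50).data.mean()
print(f"Mean difference amplitude (250-500 ms): {mean_amp:.2e} V")
```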


2021 ◽  
pp. 1-18
Author(s):  
Massimiliano Carrara ◽  
Filippo Mancini ◽  
Jeroen Smid

Graham Priest has recently proposed a solution to the problem of the One and the Many which involves inconsistent objects and a non-transitive identity relation. We show that his solution entails either that the object everything is identical with the object nothing or that they are mutual parts, depending on whether Priest goes for an extensional or a non-extensional mereology.


2021 ◽  
Author(s):  
Lixiang Chen ◽  
Radoslaw Martin Cichy ◽  
Daniel Kaiser

During natural vision, objects rarely appear in isolation, but often within a semantically related scene context. Previous studies reported that semantic consistency between objects and scenes facilitates object perception, and that scene-object consistency is reflected in changes in the N300 and N400 components in EEG recordings. Here, we investigate whether these N300/N400 differences are indicative of changes in the cortical representation of objects. In two experiments, we recorded EEG signals while participants viewed semantically consistent or inconsistent objects within a scene; in Experiment 1, these objects were task-irrelevant, while in Experiment 2, they were directly relevant for behavior. In both experiments, we found reliable and comparable N300/N400 differences between consistent and inconsistent scene-object combinations. To probe the quality of object representations, we performed multivariate classification analyses, in which we decoded the category of the objects contained in the scene. In Experiment 1, in which the objects were not task-relevant, object category could be decoded from around 100 ms after object presentation, but no difference in decoding performance was found between consistent and inconsistent objects. By contrast, when the objects were task-relevant in Experiment 2, we found enhanced decoding of semantically consistent, compared to semantically inconsistent, objects. These results show that differences in N300/N400 components related to scene-object consistency do not index changes in cortical object representations, but rather reflect a generic marker of semantic violations. Further, our findings suggest that facilitatory effects between objects and scenes are task-dependent rather than automatic.
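
The time-resolved multivariate classification mentioned above (decoding object category from the EEG signal) is commonly set up as one cross-validated classifier per timepoint. The sketch below uses scikit-learn on a placeholder data array of shape (trials, channels, timepoints); the array shapes, labels, and the shrinkage-LDA choice are assumptions for illustration, not the authors' exact analysis.

```python
# Sketch: time-resolved decoding of object category from epoched EEG.
# X (trials x channels x timepoints) and y (category labels) would normally
# come from preprocessing; random placeholders are used here.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 64, 120))   # 200 trials, 64 channels, 120 samples
y = rng.integers(0, 2, size=200)          # placeholder binary category labels

# Shrinkage LDA is a common, robust choice for EEG decoding.
clf = make_pipeline(StandardScaler(),
                    LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto"))

# Cross-validate a separate classifier at every timepoint.
accuracy = np.array([cross_val_score(clf, X[:, :, t], y, cv=5).mean()
                     for t in range(X.shape[2])])
print("Peak decoding accuracy:", accuracy.max())
```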


2021 ◽  
Author(s):  
Tim Lauer ◽  
Filipp Schmidt ◽  
Melissa L.-H. Võ

While scene context is known to facilitate object recognition, little is known about which contextual “ingredients” are at the heart of this phenomenon. Here, we address the question of whether the materials that frequently occur in scenes (e.g., tiles in a bathroom) associated with specific objects (e.g., a perfume) are relevant for the processing of that object. To this end, we presented photographs of consistent and inconsistent objects (e.g., perfume vs. pinecone) superimposed on scenes (e.g., a bathroom) and close-ups of materials (e.g., tiles). In Experiment 1, consistent objects on scenes were named more accurately than inconsistent ones, while there was only a marginal consistency effect for objects on materials. Also, we did not find any consistency effect for scrambled materials that served as a color control condition. In Experiment 2, we recorded event-related potentials (ERPs) and found N300/N400 responses – markers of semantic violations – for objects on inconsistent relative to consistent scenes. Critically, objects on materials triggered N300/N400 responses of similar magnitudes. Our findings show that contextual materials indeed affect object processing – even in the absence of spatial scene structure and object content – suggesting that material is one of the contextual “ingredients” driving scene context effects.


2021 ◽  
Vol 19 (2) ◽  
pp. 1720-1744
Author(s):  
Xiao Jian Tan ◽  
Nazahah Mustafa ◽  
Mohd Yusoff Mashor ◽  
Khairul Shakir Ab Rahman ◽  
...  

Based on the Nottingham Histopathology Grading (NHG) system, mitosis cell detection is one of the important criteria for determining the grade of breast carcinoma. Mitosis cell detection is a challenging task due to the heterogeneous microenvironment of breast histopathology images. Recognition of complex and inconsistent objects in medical images can be achieved by incorporating domain knowledge from the field of interest. In this study, the strategies of the histopathologist and a domain knowledge approach were used to guide the development of an image processing framework for automated mitosis cell detection in breast histopathology images. The detection framework starts with color normalization and hyperchromatic nucleus segmentation. Then, a knowledge-assisted false positive reduction method is proposed to eliminate false positives (i.e., non-mitosis cells); this stage aims to minimize the proportion of false positives and thus increase the F1-score. Next, feature extraction was performed, and the mitosis candidates were classified using a Support Vector Machine (SVM) classifier. For evaluation purposes, the knowledge-assisted detection framework was tested on two datasets: a custom dataset and a publicly available dataset (the MITOS dataset). The proposed knowledge-assisted false positive reduction method proved promising, eliminating at least 87.1% of false positives in both datasets. Experimental results demonstrate that the knowledge-assisted detection framework achieves promising F1-scores (custom dataset: 89.1%; MITOS dataset: 88.9%) and outperforms recent works.
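
The final stage of the framework described above (hand-crafted features per segmented nucleus, classified as mitosis or non-mitosis by a Support Vector Machine and scored with the F1 metric) can be sketched with scikit-learn. The feature matrix, labels, and SVM settings below are placeholder assumptions, not the authors' actual feature set or parameters.

```python
# Sketch: SVM classification of mitosis candidates from per-nucleus features.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

# Placeholder features (e.g., morphology, texture, intensity statistics).
rng = np.random.default_rng(42)
features = rng.standard_normal((500, 12))
labels = rng.integers(0, 2, size=500)     # 1 = mitosis, 0 = non-mitosis

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.3, stratify=labels, random_state=0)

# Feature scaling plus an RBF-kernel SVM is a common default for this task.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
model.fit(X_train, y_train)

# The abstract reports performance as F1-score (harmonic mean of precision and recall).
print("F1-score:", f1_score(y_test, model.predict(X_test)))
```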


2020 ◽  
Author(s):  
Wilma A. Bainbridge ◽  
Wan Y. Kwok ◽  
Chris I. Baker

Humans are highly sensitive to the statistical relationships between features and objects within visual scenes. Inconsistent objects within scenes (e.g., a mailbox in a bedroom) instantly jump out to us and are known to capture our attention. However, it is debated whether such semantic inconsistencies result in boosted memory for the scene, impaired memory, or have no influence on memory. Here, we examined the effect of scene-object consistency on memory representations measured through drawings made during recall. Participants (N=30) were eye-tracked while studying 12 real-world scene images with an added object that was either semantically consistent or inconsistent. After a 6-minute distractor task, they drew the scenes from memory while pen movements were tracked electronically. Online scorers (N=1,725) rated each drawing for diagnosticity, object detail, spatial detail, and memory errors. Inconsistent scenes were recalled more frequently, but contained less object detail. Further, inconsistent objects elicited more errors reflecting looser memory binding (e.g., migration across images). These results point to a dual effect in memory of boosted global (scene) but diminished local (object) information. Finally, we replicated prior effects showing that inconsistent objects captured eye fixations, but found that fixations during study were not correlated with recall performance, time, or drawing order. In sum, these results show a nuanced effect of scene inconsistencies on memory detail during recall.


2020 ◽  
Vol 32 (4) ◽  
pp. 571-589 ◽  
Author(s):  
Moreno I. Coco ◽  
Antje Nuthmann ◽  
Olaf Dimigen

In vision science, a particularly controversial topic is whether and how quickly the semantic information about objects is available outside foveal vision. Here, we aimed at contributing to this debate by coregistering eye movements and EEG while participants viewed photographs of indoor scenes that contained a semantically consistent or inconsistent target object. Linear deconvolution modeling was used to analyze the ERPs evoked by scene onset as well as the fixation-related potentials (FRPs) elicited by the fixation on the target object (t) and by the preceding fixation (t − 1). Object–scene consistency did not influence the probability of immediate target fixation or the ERP evoked by scene onset, which suggests that object–scene semantics was not accessed immediately. However, during the subsequent scene exploration, inconsistent objects were prioritized over consistent objects in extrafoveal vision (i.e., looked at earlier) and were more effortful to process in foveal vision (i.e., looked at longer). In FRPs, we demonstrate a fixation-related N300/N400 effect, whereby inconsistent objects elicit a larger frontocentral negativity than consistent objects. In line with the behavioral findings, this effect was already seen in FRPs aligned to the pretarget fixation t − 1 and persisted throughout fixation t, indicating that the extraction of object semantics can already begin in extrafoveal vision. Taken together, the results emphasize the usefulness of combined EEG/eye movement recordings for understanding the mechanisms of object–scene integration during natural viewing.
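
The linear deconvolution modeling referred to above is usually implemented as a time-expanded regression: every event type (scene onset, fixation) contributes a set of time-lagged predictors, and solving the resulting least-squares problem yields overlap-corrected ERP/FRP estimates. The numpy sketch below illustrates the idea for a single channel with placeholder data; event latencies, window length, and the simple FIR basis are assumptions for illustration only.

```python
# Sketch: regression-based deconvolution of overlapping ERPs/FRPs (one channel).
import numpy as np

def build_design_matrix(event_latencies, n_samples, window):
    """One column per post-event lag (FIR basis); rows are EEG samples."""
    X = np.zeros((n_samples, window))
    for onset in event_latencies:
        for lag in range(window):
            if onset + lag < n_samples:
                X[onset + lag, lag] = 1.0
    return X

rng = np.random.default_rng(1)
n_samples = 10_000                                  # continuous EEG samples (placeholder)
scene_onsets = rng.choice(n_samples - 300, size=40, replace=False)
fixation_onsets = rng.choice(n_samples - 300, size=200, replace=False)
eeg = rng.standard_normal(n_samples)                # placeholder single-channel signal

# Stack time-expanded predictors for both event types into one design matrix.
X = np.hstack([build_design_matrix(scene_onsets, n_samples, window=300),
               build_design_matrix(fixation_onsets, n_samples, window=300)])

# Ordinary least squares recovers overlap-corrected responses (betas).
betas, *_ = np.linalg.lstsq(X, eeg, rcond=None)
scene_erp, fixation_frp = betas[:300], betas[300:]
```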


2019 ◽  
Author(s):  
Mahsa Barzy ◽  
Heather Jane Ferguson ◽  
David Williams ◽  
Jo Black

Typically developing (TD) individuals rapidly integrate information about a speaker and their intended meaning while processing sentences online. We examined whether the same processes are activated in autistic adults, and tested their timecourse in two pre-registered experiments. Experiment 1 employed the visual world paradigm. Participants listened to sentences where the speaker’s voice and message were either consistent or inconsistent (e.g. “When we go shopping, I usually look for my favourite wine”, spoken by an adult or a child), and concurrently viewed visual scenes including consistent and inconsistent objects (e.g. wine and sweets). All participants were slower to select the mentioned object in the inconsistent condition. Importantly, eye movements showed a visual bias towards the voice-consistent object, well before hearing the disambiguating word, showing that autistic adults rapidly use the speaker’s voice to anticipate the intended meaning. However, this target bias emerged earlier in the TD group compared to the autism group (2240ms vs 1800ms before disambiguation). Experiment 2 recorded ERPs to explore speaker-meaning integration processes. Participants listened to sentences as described above, and ERPs were time-locked to the onset of the target word. A control condition included a semantic anomaly. Results revealed an enhanced N400 for inconsistent speaker-meaning sentences that was comparable to that elicited by anomalous sentences, in both groups. Overall, contrary to research that has characterised autism in terms of a local processing bias and pragmatic dysfunction, autistic people were unimpaired at integrating multiple modalities of linguistic information, and were comparably sensitive to speaker-meaning inconsistency effects.


2018 ◽  
Vol 22 (1) ◽  
pp. 07-34 ◽  
Author(s):  
Henrique Antunes

In this paper I sketch some lines of response to Mark Colyvan’s (2008) indispensability arguments for the existence of inconsistent objects, being mainly concerned with the indispensability of inconsistent mathematical entities. My response will draw heavily on Jody Azzouni’s (2004) deflationary nominalism.


2018 ◽  
Author(s):  
Moreno I. Coco ◽  
Antje Nuthmann ◽  
Olaf Dimigen

In vision science, a topic of great interest and considerable controversy is the processing of objects that are (in)consistent with the overall meaning of the scene in which they occur. How quickly can we access the semantic properties of objects, and does this happen before the object is directly looked at? Here we bring novel evidence to this debate by co-registering eye-movements and EEG while participants freely explored photographs of indoor scenes. Each scene contained a target object that was either consistent or inconsistent with the scene context (e.g., toothpaste vs. flashlight in a bathroom). Eye-movement behaviour showed that inconsistent objects were not only more effortful to process (i.e., looked at longer), but also prioritized over consistent objects while they were still in extrafoveal vision (i.e., looked at earlier after scene onset). In fixation-related brain potentials (FRPs), we used a linear deconvolution technique to compare the activity elicited by semantically consistent versus inconsistent objects at scene onset (i.e., in the stimulus-locked ERP), during the target fixation (t), and during the preceding fixation (t-1). We demonstrate a fixation-related N300/N400 effect, whereby inconsistent objects elicited a larger negative shift of FRP activity than consistent objects, already at the pre-target fixation t-1 and throughout fixation t. Taken together, our results suggest that object meaning can be, at least partly, extracted in extrafoveal vision, and highlight how attentional and neural mechanisms can be simultaneously investigated to uncover the mechanisms of object-scene integration during natural viewing.

