New Caledonian crows infer the weight of objects from observing their movements in a breeze

2019 ◽  
Vol 286 (1894) ◽  
pp. 20182332 ◽  
Author(s):  
Sarah A. Jelbert ◽  
Rachael Miller ◽  
Martina Schiestl ◽  
Markus Boeckle ◽  
Lucy G. Cheke ◽  
...  

Humans use a variety of cues to infer an object's weight, including how easily objects can be moved. For example, if we observe an object being blown down the street by the wind, we can infer that it is light. Here, we tested whether New Caledonian crows make this type of inference. After being trained that only one type of object (either light or heavy) was rewarded when dropped into a food dispenser, birds observed pairs of novel objects (one light and one heavy) suspended from strings in front of an electric fan. The fan was either on—creating a breeze which buffeted the light, but not the heavy, object—or off, leaving both objects stationary. In subsequent test trials, birds could drop one, or both, of the novel objects into the food dispenser. Despite having no opportunity to handle these objects prior to testing, birds touched the correct object (light or heavy) first in 73% of experimental trials, and were at chance in control trials. Our results suggest that birds used pre-existing knowledge about the behaviour of differently weighted objects in the wind to infer their weight, using this information to guide their choices.

2017 ◽  
Vol 45 (3) ◽  
pp. 581-609 ◽  
Author(s):  
Sarah J. OWENS ◽  
Justine M. THACKER ◽  
Susan A. GRAHAM

Speech disfluencies can guide the ways in which listeners interpret spoken language. Here, we examined whether three-year-olds, five-year-olds, and adults use filled pauses to anticipate that a speaker is likely to refer to a novel object. Across three experiments, participants were presented with pairs of novel and familiar objects and heard a speaker refer to one of the objects using a fluent (“Look at the ball/lep!”) or disfluent (“Look at thee uh ball/lep!”) expression. The salience of the speaker's unfamiliarity with the novel referents, and the way in which the speaker referred to the novel referents (i.e., a noun vs. a description), varied across experiments. Three- and five-year-olds successfully identified familiar and novel targets, but only adults’ looking patterns reflected increased looks to novel objects in the presence of a disfluency. Together, these findings demonstrate that adults, but not young children, use filled pauses to anticipate reference to novel objects.


2020 ◽  
Vol 34 (07) ◽  
pp. 10494-10501
Author(s):  
Tingjia Cao ◽  
Ke Han ◽  
Xiaomei Wang ◽  
Lin Ma ◽  
Yanwei Fu ◽  
...  

This paper studies the task of image captioning with novel objects, which exist only in the testing images. Intrinsically, this task reflects a model's ability to generalize in understanding and captioning the semantic meanings of visual concepts and objects unseen in the training set, sharing similarities with one/zero-shot learning. The critical difficulty is that no paired images and sentences of the novel objects are available to train the captioning model. Inspired by recent work (Chen et al. 2019b) that boosts one-shot learning by learning to generate various image deformations, we propose learning meta-networks that deform features for novel object captioning. To this end, we introduce the feature deformation meta-network (FDM-net), which is trained on source data and learns to adapt to the novel object features detected by an auxiliary detection model. FDM-net includes two sub-nets: feature deformation and scene graph sentence reconstruction, which produce the augmented image features and corresponding sentences, respectively. Thus, rather than directly deforming images, FDM-net can efficiently and dynamically enlarge the set of paired images and texts by learning to deform image features. Extensive experiments are conducted on the widely used novel object captioning dataset, and the results show the effectiveness of our FDM-net. An ablation study and qualitative visualizations give further insight into our model.
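As a rough illustration of the feature-deformation idea described above (a hypothetical sketch, not the authors' FDM-net code, which learns its deformations rather than sampling them), augmenting a novel object's feature vector in feature space instead of deforming pixels might look like:

```python
import random

random.seed(0)

def deform_features(feat, n_aug=4, scale_std=0.1, shift_std=0.05):
    """Toy stand-in for the feature-deformation sub-net: generate augmented
    copies of a detected novel object's feature vector by applying random
    per-dimension scale and shift, enlarging the pool of training features."""
    augmented = []
    for _ in range(n_aug):
        augmented.append([
            x * (1.0 + random.gauss(0.0, scale_std)) + random.gauss(0.0, shift_std)
            for x in feat
        ])
    return augmented

# e.g. a pooled feature from the auxiliary detection model (dimension assumed)
novel_feat = [random.gauss(0.0, 1.0) for _ in range(2048)]
augmented = deform_features(novel_feat)
```

Each augmented copy would then be paired with a reconstructed sentence by the second sub-net, which is the step that actually creates new image-text training pairs.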


2002 ◽  
Vol 14 (6) ◽  
pp. 875-886 ◽  
Author(s):  
Patrik Vuilleumier ◽  
Sophie Schwartz ◽  
Karen Clarke ◽  
Masud Husain ◽  
Jon Driver

Visual extinction after right parietal damage involves a loss of awareness for stimuli in the contralesional field when presented concurrently with ipsilesional stimuli, although contralesional stimuli are still perceived if presented alone. However, extinguished stimuli can still receive some residual on-line processing, without awareness. Here we examined whether such residual processing of extinguished stimuli can produce implicit and/or explicit memory traces lasting many minutes. We tested four patients with right parietal damage and left extinction on two sessions, each including distinct study and subsequent test phases. At study, pictures of objects were shown briefly in the right, left, or both fields. Patients were asked to name them without memory instructions (Session 1) or to make an indoor/outdoor categorization and memorize them (Session 2). They extinguished most left stimuli on bilateral presentation. During the test (up to 48 min later), fragmented pictures of the previously exposed objects (or novel objects) were presented alone in either field. Patients had to identify each object and then judge whether it had previously been exposed. Identification of fragmented pictures was better for previously exposed objects that had been consciously seen and critically also for objects that had been extinguished (as compared with novel objects), with no influence of the depth of processing during study. By contrast, explicit recollection occurred only for stimuli that were consciously seen at study and increased with depth of processing. These results suggest implicit but not explicit memory for extinguished visual stimuli in parietal patients.


2010 ◽  
Vol 38 (2) ◽  
pp. 273-296 ◽  
Author(s):  
CARMEN MARTÍNEZ-SUSSMANN ◽  
NAMEERA AKHTAR ◽  
GIL DIESENDRUCK ◽  
LORI MARKSON

Children as young as two years of age are able to learn novel object labels through overhearing, even when distracted by an attractive toy (Akhtar, 2005). The present studies varied the information provided about novel objects and examined which elements (i.e., novel versus neutral information, and labels versus facts) toddlers chose to monitor, and what type of information they were more likely to learn. In Study 1, participants learned only the novel label and the novel fact containing a novel label. In Study 2, only girls learned the novel label. Neither girls nor boys learned the novel fact. In both studies, analyses of children's gaze patterns suggest that children who learned the new information strategically oriented to the third-party conversation.


2020 ◽  
Vol 34 (07) ◽  
pp. 11709-11716
Author(s):  
Ruotian Luo ◽  
Ning Zhang ◽  
Bohyung Han ◽  
Linjie Yang

We present a novel problem setting in zero-shot learning: zero-shot object recognition and detection in context. Contrary to traditional zero-shot learning methods, which simply infer unseen categories by transferring knowledge from objects belonging to semantically similar seen categories, we aim to understand the identity of a novel object in an image surrounded by known objects using an inter-object relation prior. Specifically, we leverage the visual context and the geometric relationships between all pairs of objects in a single image, and capture the information useful for inferring unseen categories. We seamlessly integrate our context-aware zero-shot learning framework into traditional zero-shot learning techniques using a Conditional Random Field (CRF). The proposed algorithm is evaluated on both zero-shot region classification and zero-shot detection tasks. The results on the Visual Genome (VG) dataset show that our model significantly boosts performance with the additional visual context compared to traditional methods.
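The intuition behind the context term can be sketched with a single message-passing update (a hypothetical simplification; the paper performs full CRF inference, and the affinity and co-occurrence matrices here are invented for illustration): each region's zero-shot class scores are adjusted by what its neighbours suggest should co-occur.

```python
import math

def context_aware_scores(unary, pairwise, co_occur):
    """One crude context update: unary[i][k] is region i's zero-shot score
    for class k, pairwise[i][j] is the spatial affinity of regions i and j,
    and co_occur[l][k] is a prior that class l appears alongside class k."""
    n, c = len(unary), len(unary[0])
    # softmax each region's unary scores into a belief over classes
    beliefs = []
    for row in unary:
        m = max(row)
        exps = [math.exp(s - m) for s in row]
        z = sum(exps)
        beliefs.append([e / z for e in exps])
    # add a context term: neighbour affinity x neighbour belief x co-occurrence
    scores = []
    for i in range(n):
        row = []
        for k in range(c):
            ctx = sum(
                pairwise[i][j] * beliefs[j][l] * co_occur[l][k]
                for j in range(n) if j != i for l in range(c)
            )
            row.append(unary[i][k] + ctx)
        scores.append(row)
    return scores

unary = [[2.0, 1.9], [3.0, 0.0]]      # region 0 ambiguous, region 1 clearly class 0
pairwise = [[0.0, 1.0], [1.0, 0.0]]   # the two regions are adjacent
co_occur = [[0.0, 1.0], [1.0, 0.0]]   # the two classes tend to appear together
scores = context_aware_scores(unary, pairwise, co_occur)
```

In this toy case, the confident neighbour nudges the ambiguous region toward the class that co-occurs with it, which is the effect the visual-context prior is meant to have.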


Author(s):  
Shinji Kawakura ◽  
Ryosuke Shibasaki

In this study, we attempt to develop a deep learning-based self-driving car system to deliver items (e.g., harvested onions, agri-tools, PET bottles) to agricultural (agri-) workers at an agri-workplace. The system is based around a car-shaped robot, JetBot, with an NVIDIA artificial intelligence (AI) oriented board. JetBot can detect diverse objects and avoid them. We implemented experimental trials at a real warehouse where various items (glove, boot, sickle (falx), scissors, and hoe), called obstacles, were scattered. The assumed agri-worker was a man suspending dried onions on a beam. Specifically, we developed a system focusing on the functions of precisely detecting obstacles with deep learning-based techniques (techs), self-avoidance, and automatic delivery of small items for manual agri-workers and managers. Both the car-shaped figure and the deep learning-based obstacle-avoidance function differ from existing mobile agri-machine techs and products with respect to their main aims and structural features. Their advantage is their low cost in comparison with similar mechanical systems found in the literature and similar commercial goods. The robot is extremely agile and easily identifies and learns obstacles. Additionally, the JetBot kit is a minimal product and includes a feature allowing users to arbitrarily expand and change functions and mechanical settings. This study consists of six phases: (1) designing and confirming the validity of the entire system, (2) constructing and tuning various minor system settings (e.g., programs and JetBot specifications), (3) accumulating obstacle picture data, (4) executing deep learning, (5) conducting experiments in an indoor warehouse to simulate a real agri-working situation, and (6) assessing and discussing the trial data quantitatively (presenting the success and error rates of the trials) and qualitatively.
We consider that, based on these limited trials, the system can be judged as valid to some extent in certain situations. However, we were unable to perform broader or more generalizable experiments (e.g., execution on muddy farmland or running JetBot on non-flat floors). We present experimental ranges for the success ratio of these trials, particularly noting the types of obstacles crashed into and other error types. We were also able to observe features of the system's practical operation. The novel achievement of this study lies in its fusion of recent deep learning techniques with agricultural informatics. In the future, agri-workers and their managers could use the proposed system in real agri-places as a common automatic delivery system. Furthermore, we believe that, by combining this application with other existing systems, future agri-fields and other workplaces could become more comfortable and secure (e.g., delivering water bottles could help avoid heat-stress disorders).
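The detect-avoid-deliver behaviour described in phases (4)-(5) can be sketched as a simple control loop (a hypothetical illustration, not the authors' code; `classify_frame` stands in for the CNN trained on the obstacle picture data, and the action names are invented):

```python
import random

random.seed(1)

def classify_frame(frame):
    """Placeholder for the trained obstacle classifier: in the real system
    this would return the probability that the camera frame ahead shows a
    blocking obstacle (glove, boot, sickle, scissors, hoe)."""
    return random.random()

def control_step(frame, threshold=0.5):
    """One step of a JetBot-style avoid-and-deliver loop: steer away when
    an obstacle is likely, otherwise continue toward the delivery point."""
    if classify_frame(frame) > threshold:
        return "turn_left"   # steer around the detected obstacle
    return "forward"         # continue toward the agri-worker

# drive for a few steps (frames would come from the onboard camera)
actions = [control_step(frame=None) for _ in range(5)]
```

Tuning the `threshold` trades missed obstacles against unnecessary detours, which is one way to interpret the success and error rates reported for the trials.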


PeerJ ◽  
2018 ◽  
Vol 6 ◽  
pp. e4454 ◽  
Author(s):  
Belinda A. Hall ◽  
Vicky Melfi ◽  
Alicia Burns ◽  
David M. McGill ◽  
Rebecca E. Doyle

The personality trait of curiosity has been shown to increase welfare in humans. If this positive welfare effect is also true for non-humans, animals with high levels of curiosity may be able to cope better with stressful situations than their conspecifics. Before discoveries can be made regarding the effect of curiosity on an animal’s ability to cope in its environment, a way of measuring curiosity across species in different environments must be created to standardise testing. To determine the suitability of novel objects in testing curiosity, species from different evolutionary backgrounds with sufficient sample sizes were chosen. Barbary sheep (Ammotragus lervia) n = 12, little penguins (Eudyptula minor) n = 10, ringtail lemurs (Lemur catta) n = 8, red-tailed black cockatoos (Calyptorhynchus banksii) n = 7, Indian star tortoises (Geochelone elegans) n = 5 and red kangaroos (Macropus rufus) n = 5 were presented with a stationary object, a moving object and a mirror. Having objects with different characteristics increased the likelihood that individuals would find at least one motivating. Conspecifics were all assessed simultaneously for time to first orientate towards the object (s), latency to make contact (s), frequency of interactions, and total duration of interaction (s). Differences in curiosity were recorded in four of the six species; the Barbary sheep and red-tailed black cockatoos did not interact with the novel objects, suggesting either a low level of curiosity or that the objects were not motivating for these animals. Variation in curiosity was seen between and within species in terms of which objects they interacted with and how long they spent with the objects, as determined by the speed with which they interacted and the duration of interest. By using this measure of curiosity towards novel objects with varying characteristics across a range of zoo species, we can see evidence of evolutionary, husbandry and individual influences on their responses.
Further work to obtain data on multiple captive populations of a single species using a standardised method could uncover factors that nurture the development of curiosity. In doing so, it would be possible to isolate and modify sub-optimal husbandry practices to improve welfare in the zoo environment.
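The four measures listed above could be computed from a per-animal event log along these lines (an illustrative sketch only; the event format and field names are assumptions, not taken from the paper):

```python
def curiosity_measures(events, presented_at=0.0):
    """Compute the four standardised measures from a log of
    (timestamp_s, kind, duration_s) events recorded for one animal,
    where kind is "orient" or "contact"."""
    orients = [t for t, kind, d in events if kind == "orient"]
    contacts = [(t, d) for t, kind, d in events if kind == "contact"]
    return {
        "first_orient_s": min(orients) - presented_at if orients else None,
        "contact_latency_s": min(t for t, d in contacts) - presented_at if contacts else None,
        "interaction_count": len(contacts),
        "total_interaction_s": sum(d for t, d in contacts),
    }

# e.g. orients at 2 s, then two contacts lasting 3.0 s and 1.5 s
log = [(2.0, "orient", 0.0), (5.0, "contact", 3.0), (10.0, "contact", 1.5)]
m = curiosity_measures(log)
```

Animals that never interacted, like the Barbary sheep and cockatoos here, would simply yield `None` latencies and zero counts, which keeps non-responders comparable across species.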


1999 ◽  
Vol 1999 ◽  
pp. 178-178 ◽  
Author(s):  
R. M. Forde ◽  
J. N. Marchant ◽  
H. A. M. Spoolder

The ‘standard’ human approach test has been used extensively since the early 1980s to assess fear responses in most farmed species. However, in recent years, there has been considerable debate questioning its efficacy, given the short duration of the familiarisation period and the suitability of the location of the testing environment, i.e. the home pen versus a novel arena (Pedersen, 1997). It is possible that the test simply reflects an animal's level of motivation to explore the novel arena and any novel objects therein, rather than a specific response to the presence of a human. This work addresses both the length of the acclimatisation period and the location of the test arena.


Animals ◽  
2020 ◽  
Vol 10 (1) ◽  
pp. 164 ◽  
Author(s):  
Anne Schrimpf ◽  
Marie-Sophie Single ◽  
Christian Nawroth

Dogs and cats use human emotional information directed at an unfamiliar situation to guide their behavior, a phenomenon known as social referencing. It is not clear whether other domestic species show similar socio-cognitive abilities when interacting with humans. We investigated whether horses (n = 46) use human emotional information to adjust their behavior towards a novel object, and whether this behavior differs by breed type. Horses were randomly assigned to one of two groups: an experimenter positioned in the middle of a test arena directed gaze and voice towards the novel object with either (a) a positive or (b) a negative emotional expression. The time subjects spent positioned relative to the experimenter and the object in the arena, the frequency of gazing behavior, and physical interactions (with either the object or the experimenter) were analyzed. Horses in the positive condition spent more time between the experimenter and the object compared to horses in the negative condition, indicating less avoidance behavior towards the object. Horses in the negative condition gazed more often towards the object than horses in the positive condition, indicating increased vigilance. Breed types differed in their behavior: thoroughbreds showed less human-directed behavior than warmbloods and ponies. Our results provide evidence that horses use emotional cues from humans to guide their behavior towards novel objects.


2020 ◽  
Vol 40 (3) ◽  
pp. 251-274
Author(s):  
Jenna L. Wall ◽  
William E. Merriman

When taught a label for an object, and later asked whether that object or a novel object is the referent of a novel label, preschoolers favor the novel object. This article examines whether this so-called disambiguation effect may be undermined by an expectation to communicate about a discovery. This expectation may explain why 4-year-olds do not show the disambiguation effect if a sense modality shift occurs between training and test. In Study 1, 3- and 4-year-olds learned a label for a visible object, then examined two hidden objects manually and predicted which one they would be asked about. Only the older group predicted that they would be asked about the object that matched the visible object. Study 1 also included a test of the standard disambiguation effect, where both the training and test objects were visible. Both 3- and 4-year-olds showed a weaker disambiguation effect in this test when the matching object was unexpected rather than expected. In Study 2, both age groups predicted they would be asked about this object when it was unexpected. In Study 3, both age groups showed a stronger disambiguation effect when allowed to communicate about this object before deciding which object was the referent of a novel label. Metacognitive ability predicted the strength of this disambiguation effect even after controlling for age and inhibitory control. The article discusses various explanations for why only 4-year-olds abided by the pragmatics of discovery in the test of the cross-modal disambiguation effect, but both 3- and 4-year-olds abided by it in the test of the standard disambiguation effect.

