Jays and Oaks: an Eco-Ethological Study of a Symbiosis

Behaviour ◽  
1979 ◽  
Vol 70 (1-2) ◽  
pp. 1-116 ◽  
Author(s):  
I. Bossema

Abstract: The European jay (Garrulus g. glandarius) strongly depends on acorns for food. Many acorns are hoarded, enabling the jay to feed on them at times of the year when they would otherwise be unavailable. Many of the hoarded acorns germinate and become seedlings, so jays play an important role in the dispersal of acorns and the reproduction of oaks (in this study: Quercus robur, the pedunculate oak). These mutual relationships were analysed both with wild jays in the field (province of Drente, The Netherlands) and with tame birds in confinement. Variation in the composition of the food throughout the year is described quantitatively. Acorns were the stock diet of adults in most months of the year. Leaf-eating caterpillars, predominantly occurring on oak, were the main food items of nestlings. Acorns formed the bulk of the food of fledglings in June. A high rate of acorn consumption in winter, spring and early summer is possible because individual jays hoard several thousand acorns, mainly in October.

In experiments, acorns of pedunculate oak were not preferred over equal-sized acorns of sessile oak (which was not found in the study area). Acorns of pedunculate oak were strongly preferred over those of American oak and nuts of hazel and beech. Among acorns of pedunculate oak, ripe, sound, long-slim and big ones were preferred. Jays collect one or more (up to six) acorns per hoarding trip. In the latter case, the first ones are swallowed and the last one is usually carried in the bill. For swallowing, the dimensions of the beak imposed a limit on size preference; for bill transport, usually the biggest acorn was selected. The greater the number of acorns per trip, the longer the transportation distance during hoarding. From trip to trip, jays dispersed their acorns widely, and when several acorns were transported during one trip, these were generally buried at different sites. Burial took place by pushing acorns into the soil, followed by hammering and covering. Jays often selected rather open sites, transitions in the vegetation, and vertical structures such as saplings and tree trunks for burial of acorns. In captivity, jays also hoarded surplus food. Here, spacing out of burials was also observed, with previously used sites usually being avoided. In addition, hiding along substrate edges and near conspicuous objects was observed. Jays tended to hide near sticks presented in a horizontal position rather than near identical ones in a vertical position, especially when the colour of the sticks contrasted with the colour of the substrate. Also, rough-surfaced substrate was strongly preferred over similar but smooth-surfaced substrate.

Successful retrieval of and feeding on hoarded acorns were observed in winter, even when snow cover had considerably altered the scenery. No evidence was obtained that acorns could be traced by smell. Many indications were obtained that visual information from near and far beacons, memorized during hiding, was used in finding acorns. The use of beacons by captive jays was also studied. Experiments led to the conclusion that vertical beacons are more important to retrieving birds than identical horizontal ones. The discrepancy with the jay's preference for horizontal structures during hiding is discussed. Most seedlings emerge in May and June. The distribution pattern of seedlings and the bill prints on the shells of their acorns indicated that many seedlings emerged from acorns hidden by jays in the previous autumn.
The cotyledons of these plants remain underground and are in excellent condition in spring and early summer. Jays exploited acorns by pulling at the stem of seedlings and then removing the cotyledons. This did not usually damage the plants severely. Jays can find acorns in this situation partly because they remember where they buried them. In addition, it was shown that jays select seedlings of oak rather than those of other species, and that they preferentially inspected those seedlings that were most profitable in terms of cotyledon yield and quality. Experiments uncovered some of the visual cues used in this discrimination. The effects of hoarding on the preservation of acorns were examined in the field and the laboratory. Being buried reduced the chance that acorns were robbed by conspecifics and other acorn feeders. Scatter hoarding did not lead to better protection of buried acorns than larder hoarding, but the spread of risk was better in the former than in the latter. It was concluded that the way in which jays hoard acorns increases the chance that they can exploit them later. In addition, the condition of acorns is better preserved by being buried. An analysis was made of the consequences of the jay's behaviour for oaks. The oak does incur certain costs: some of its acorns are eaten by jays during the dispersal and storage phase, and some seedlings are damaged as a consequence of cotyledon removal. However, these costs are outweighed by the benefits the oak receives. Many of its most viable acorns are widely dispersed and buried at sites where the prospects for further development into mature oak are highly favourable. The adaptiveness of the characters involved in preferential feeding on and hoarding of acorns by jays is discussed in relation to several environmental pressures: competition with allied species; food fluctuations in the jay's niche; and food competitors better equipped to break up hard "dry" fruits. Conversely, jays exert several selective pressures which are likely to have evolutionary consequences for oaks, such as the selection of long-slim and large acorns with tight shells. In addition, oak seedlings with a long tap root and tough stem are selected for. Although factors other than mutual selective pressures may have affected the present-day fit between jays and oaks, it is concluded that several characters of jays and oaks can be considered as co-adapted features of a symbiotic relationship.

2021 ◽  
Vol 40 (3) ◽  
pp. 211-217
Author(s):  
Brayden Whitlock

Arsenic is both a chemotherapeutic drug and an environmental toxicant that affects hundreds of millions of people each year. Arsenic exposure in drinking water has been called the worst poisoning in human history. How arsenic is handled in the body is frequently studied using rodent models to investigate how arsenic both causes and treats disease. These models, used in a variety of arsenic-related tests, from tumor formation to drug toxicity monitoring, have virtually always been developed from animals with telomeres that are unnaturally long, likely because of accidental artificial selective pressures. Mice that have been bred in captivity under laboratory conditions, often for over 100 years, are the standard in creating animal models for this research. Using these mice introduces challenges to any work that can be affected by the length of telomeres and the related capacities for tissue repair and cancer resistance. However, arsenic research is particularly susceptible to the misuse of such animal models because of the multiple and varied interactions between arsenic and telomeres. Researchers in the field commonly find mouse models and humans behaving very differently upon exposure to acute and chronic arsenic, including drug therapies that seem safe in mice but are toxic in humans. Here, some complexities and apparent contradictions of the arsenic carcinogenicity and toxicity literature are reconciled by an explanatory model based on the telomere lengths produced by evolutionary pressures on laboratory mice. A low-risk hypothesis is proposed that can determine whether researchers could easily develop more powerful and accurate mouse models simply by avoiding mouse lineages that are very old and have abnormally long telomeres. Swapping in newer mouse lineages for the older, long-telomere mice may vastly improve our ability to research arsenic toxicity with virtually no increase in the cost or difficulty of research.


2018 ◽  
Vol 40 (1) ◽  
pp. 93-109
Author(s):  
YI ZHENG ◽  
ARTHUR G. SAMUEL

Abstract: It has been documented that lipreading facilitates the understanding of difficult speech, such as noisy speech and time-compressed speech. However, relatively little work has addressed the role of visual information in perceiving accented speech, another type of difficult speech. In this study, we focus specifically on accented word recognition. One hundred forty-two native English speakers made lexical decision judgments on English words or nonwords produced by speakers with Mandarin Chinese accents. The stimuli were presented either as videos of a relatively distant speaker or as videos zoomed in on the speaker’s head. Consistent with studies of degraded speech, listeners were more accurate at recognizing accented words when they saw lip movements from the closer apparent distance. The effect of apparent distance tended to be larger under nonoptimal conditions: when stimuli were nonwords rather than words, and when stimuli were produced by a speaker with a relatively strong accent. However, we did not find any influence of listeners’ prior experience with Chinese-accented speech, suggesting that cross-talker generalization is limited. The current study provides practical suggestions for effective communication between native and nonnative speakers: visual information is useful, and it is more useful in some circumstances than in others.


2017 ◽  
Vol 61 (7) ◽  
pp. 672-687 ◽  
Author(s):  
Ayellet Pelled ◽  
Tanya Zilberstein ◽  
Alona Tsirulnikov ◽  
Eran Pick ◽  
Yael Patkin ◽  
...  

The existing literature presents mixed evidence regarding the significance of visual cues, as opposed to textual cues, in the process of impression formation. While visual information may have a strong effect due to its vividness and immediate absorption, textual information might be more powerful due to its solid, unambiguous nature. This debate is particularly relevant in the context of online social networks, whose users share both textual and visual elements. To explore our main research question, “Which elements of one’s Facebook profile have a more significant influence on impression formation of extroversion: pictures or texts?”, we conducted two complementary online experiments, manipulating visual and textual cues inside and outside the context of Facebook. We then attempted to identify the relevant underlying mechanisms in impression formation. Our findings indicate that textual cues play a more dominant role online, whether on Facebook or not, supporting assertions of a text-based new-media literacy. Additionally, we found that participants’ level of need for cognition moderated the effect, such that individuals with a high need for cognition placed more emphasis on textual cues. The number of “likes” was also a significant predictor of perceptions of an individual’s social orientation, especially when the other cues were ambiguous.


2018 ◽  
Vol 5 (2) ◽  
pp. 171785 ◽  
Author(s):  
Martin F. Strube-Bloss ◽  
Wolfgang Rössler

Flowers attract pollinating insects such as honeybees with sophisticated compositions of olfactory and visual cues. Using honeybees as a model to study olfactory–visual integration at the neuronal level, we focused on mushroom body (MB) output neurons (MBONs). From a neuronal circuit perspective, MBONs represent a prominent level of sensory-modality convergence in the insect brain. We established an experimental design allowing electrophysiological characterization of olfactory, visual, and combined olfactory–visual activation of individual MBONs. Despite the obvious convergence of olfactory and visual pathways in the MB, we found numerous unimodal MBONs. However, a substantial proportion of MBONs (32%) responded to both modalities and thus integrated olfactory–visual information across MB input layers. In these neurons, the representation of the olfactory–visual compound was significantly increased compared with that of the single components, suggesting an additive but nonlinear integration. Population analyses of olfactory–visual MBONs revealed three response categories, corresponding to (i) olfactory, (ii) visual and (iii) olfactory–visual compound stimuli. Interestingly, no significant differentiation was apparent between different stimulus qualities within these categories. We conclude that encoding of stimulus quality within a modality is largely completed at the level of the MB input, and information at the MB output is integrated across modalities to efficiently categorize sensory information for downstream behavioural decision processing.
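
The unimodal/bimodal classification described above lends itself to a simple illustration. The sketch below is hypothetical throughout: the firing rates, neuron labels, and response threshold are invented for demonstration and are not data from the study; only the classification logic and the sub-additivity check mirror the abstract.

```python
# Hypothetical trial-averaged responses (spikes/s above baseline) of a few
# MBONs to odour-only, light-only, and compound stimulation.
responses = {
    "MBON_1": {"olfactory": 12.0, "visual": 0.4, "compound": 13.1},
    "MBON_2": {"olfactory": 0.2, "visual": 9.5, "compound": 9.8},
    "MBON_3": {"olfactory": 8.0, "visual": 7.2, "compound": 11.9},
}
THRESHOLD = 2.0  # assumed response criterion, spikes/s above baseline

for name, r in responses.items():
    driving = [m for m in ("olfactory", "visual") if r[m] > THRESHOLD]
    label = "bimodal" if len(driving) == 2 else f"unimodal ({driving[0]})"
    line = f"{name}: {label}"
    if len(driving) == 2:
        # For bimodal neurons, a compound response above each single
        # component but below their sum would be consistent with the
        # "additive but nonlinear" integration reported in the abstract.
        sub_additive = r["compound"] < r["olfactory"] + r["visual"]
        line += f"; compound sub-additive: {sub_additive}"
    print(line)
```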


Neurology ◽  
2018 ◽  
Vol 90 (11) ◽  
pp. e977-e984 ◽  
Author(s):  
Motoyasu Honma ◽  
Yuri Masaoka ◽  
Takeshi Kuroda ◽  
Akinori Futamura ◽  
Azusa Shiromaru ◽  
...  

Objective: To determine whether Parkinson disease (PD) affects cross-modal function of vision and olfaction, given that PD is known to impair various cognitive functions, including olfaction.
Methods: We conducted behavioral experiments to identify the influence of PD on cross-modal function by contrasting patient performance with that of age-matched normal controls (NCs). We examined visual effects on the perceived strength of and preference for odors by manipulating semantic connections between picture/odorant pairs. In addition, we used brain imaging to identify the role of striatal presynaptic dopamine transporter (DaT) deficits.
Results: We found that odor evaluation in participants with PD was unaffected by visual information, whereas NCs overestimated smell when sniffing an odorless liquid while viewing pleasant/unpleasant visual cues. Furthermore, DaT deficit in the striatum, in the posterior putamen in particular, correlated with reduced visual effects in participants with PD.
Conclusions: These findings suggest that PD impairs cross-modal function of vision and olfaction as a result of posterior putamen deficit. This cross-modal dysfunction may serve as the basis of a novel precursor assessment of PD.


2020 ◽  
Vol 31 (01) ◽  
pp. 030-039 ◽  
Author(s):  
Aaron C. Moberly ◽  
Kara J. Vasil ◽  
Christin Ray

Abstract: Adults with cochlear implants (CIs) are believed to rely more heavily on visual cues during speech recognition tasks than their normal-hearing peers. However, the relationship between auditory and visual reliance during audiovisual (AV) speech recognition is unclear and may depend on an individual’s auditory proficiency, duration of hearing loss (HL), age, and other factors. The primary purpose of this study was to examine whether visual reliance during AV speech recognition depends on auditory function for adult CI candidates (CICs) and adult experienced CI users (ECIs). Participants included 44 ECIs and 23 CICs. All participants were postlingually deafened and had met clinical candidacy requirements for cochlear implantation. Participants completed City University of New York sentence recognition testing. Three separate lists of twelve sentences each were presented: the first in the auditory-only (A-only) condition, the second in the visual-only (V-only) condition, and the third in combined AV fashion. Each participant’s amount of “visual enhancement” (VE) and “auditory enhancement” (AE) was computed (i.e., the benefit to AV speech recognition of adding visual or auditory information, respectively, relative to what could potentially be gained). The relative reliance of VE versus AE was also computed as a VE/AE ratio. The VE/AE ratio was inversely predicted by A-only performance. Visual reliance was not significantly different between ECIs and CICs. Duration of HL and age did not account for additional variance in the VE/AE ratio. A shift toward visual reliance may be driven by poor auditory performance in ECIs and CICs. The restoration of auditory input through a CI does not necessarily facilitate a shift back toward auditory reliance. The findings suggest that individual listeners with HL may rely on both auditory and visual information during AV speech recognition, to varying degrees based on their own performance and experience, to optimize communication performance in real-world listening situations.
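
The abstract describes VE and AE as benefits "relative to what could potentially be gained", which matches the headroom-normalized enhancement measures commonly used in the AV speech literature. The exact formula is not given in the abstract, so the sketch below should be read as one plausible formulation, with invented example scores:

```python
def visual_enhancement(av: float, a_only: float) -> float:
    """Benefit of adding vision, normalized by the headroom above A-only.

    Scores are proportions correct in [0, 1]; assumes a_only < 1.
    """
    return (av - a_only) / (1.0 - a_only)


def auditory_enhancement(av: float, v_only: float) -> float:
    """Benefit of adding audition, normalized by the headroom above V-only."""
    return (av - v_only) / (1.0 - v_only)


# Hypothetical listener: 40% correct A-only, 20% V-only, 70% AV.
ve = visual_enhancement(0.70, 0.40)    # 0.30 gained of 0.60 possible -> 0.50
ae = auditory_enhancement(0.70, 0.20)  # 0.50 gained of 0.80 possible -> 0.625
print(f"VE = {ve:.3f}, AE = {ae:.3f}, VE/AE ratio = {ve / ae:.3f}")
```

A VE/AE ratio above 1 would indicate that the listener gains relatively more from adding visual input than from adding auditory input, i.e., greater visual reliance.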


Vision ◽  
2019 ◽  
Vol 3 (4) ◽  
pp. 57 ◽  
Author(s):  
Pia Hauck ◽  
Heiko Hecht

Sound by itself can be a reliable source of information about an object’s size. For instance, we can estimate the size of objects merely on the basis of the sound they make when falling on the floor. Moreover, loudness and pitch are crossmodally linked to size. We investigated whether sound has an effect on size estimation even in the presence of visual information, that is, whether manipulating the sound produced by a falling object influences visual length estimation. Participants watched videos of wooden dowels hitting a hard floor and estimated their lengths. Sound was manipulated by (A) increasing or decreasing the overall sound pressure level, (B) swapping sounds among the different dowel lengths, and (C) increasing or decreasing the pitch. Results showed that dowels were perceived to be longer with increased sound pressure level (SPL), but there was no effect of swapped sounds or of the pitch manipulation. However, in a sound-only condition, main effects of length and of the pitch manipulation were found. We conclude that we are able to perceive subtle differences in the acoustic properties of impact sounds and use them to deduce object size when visual cues are eliminated. In contrast, when visual cues are available, only loudness is potent enough to exert a crossmodal influence on length perception.
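
Manipulation (A), raising or lowering the overall SPL, amounts to applying a broadband gain to the impact sound's waveform. A minimal sketch of that operation follows; the synthetic impact signal and the +6 dB figure are illustrative assumptions, not the study's stimuli:

```python
import numpy as np


def apply_gain_db(signal: np.ndarray, gain_db: float) -> np.ndarray:
    """Scale a waveform by a gain expressed in decibels.

    A +6 dB gain roughly doubles amplitude (factor 10**(6/20) ~= 1.995).
    """
    return signal * 10.0 ** (gain_db / 20.0)


# Example: a short synthetic impact-like transient, boosted by 6 dB.
t = np.linspace(0.0, 0.2, 8820)                    # 200 ms at 44.1 kHz
impact = np.exp(-30.0 * t) * np.sin(2 * np.pi * 800.0 * t)
louder = apply_gain_db(impact, 6.0)
print(f"peak before: {np.abs(impact).max():.3f}, after: {np.abs(louder).max():.3f}")
```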


2006 ◽  
Vol 95 (6) ◽  
pp. 3596-3616 ◽  
Author(s):  
Eiji Hoshi ◽  
Jun Tanji

We examined neuronal activity in the dorsal and ventral premotor cortex (PMd and PMv, respectively) to explore the role of each motor area in processing visual signals for action planning. We recorded neuronal activity while monkeys performed a behavioral task during which two visual instruction cues were given successively with an intervening delay. One cue instructed the location of the target to be reached, and the other indicated which arm was to be used. We found that the properties of neuronal activity in the PMd and PMv differed in many respects. After the first cue was given, PMv neuron response mostly reflected the spatial position of the visual cue. In contrast, PMd neuron response also reflected what the visual cue instructed, such as which arm to be used or which target to be reached. After the second cue was given, PMv neurons initially responded to the cue's visuospatial features and later reflected what the two visual cues instructed, progressively increasing information about the target location. In contrast, the activity of the majority of PMd neurons responded to the second cue with activity reflecting a combination of information supplied by the first and second cues. Such activity, already reflecting a forthcoming action, appeared with short latencies (<400 ms) and persisted throughout the delay period. In addition, both the PMv and PMd showed bilateral representation on visuospatial information and motor-target or effector information. These results further elucidate the functional specialization of the PMd and PMv during the processing of visual information for action planning.


2020 ◽  
pp. 026553222095150
Author(s):  
Aaron Olaf Batty

Nonverbal and other visual cues are well established as a critical component of human communication. Under most circumstances, visual information is available to aid in the comprehension and interpretation of spoken language. Citing these facts, many L2 assessment researchers have studied video-mediated listening tests through score comparisons with audio tests, by measuring the amount of time spent watching, and by attempting to determine examinee viewing behavior through self-reports. However, the specific visual cues to which examinees attend have heretofore not been measured objectively. The present research employs eye-tracking methodology to determine the amounts of time 12 participants viewed specific visual cues on a six-item, video-mediated L2 listening test. Seventy-two scanpath-overlaid videos of viewing behavior were manually coded for visual cues at 0.10-second intervals. Cued retrospective interviews based on eye-tracking data provided reasons for the observed behaviors. Faces were found to occupy the majority (81.74%) of visual dwell time, with participants largely splitting their time between the speaker’s eyes and mouth. Detected gesture viewing was negligible. The reason given for most viewing behavior was determining characters’ emotional states. These findings suggest that the primary difference between audio- and video-mediated L2 listening tests of conversational content is the absence or presence of facial expressions.
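
The dwell-time figures reported above come from interval coding: each 0.10-second slice of a scanpath-overlaid video is assigned a cue label, and per-cue dwell time is the count of slices times 0.10 s. A minimal sketch with hypothetical codes (the cue labels follow the abstract; the data are invented):

```python
from collections import Counter

# Hypothetical per-interval codes for one participant/video: one cue label
# per 0.10 s interval, as produced by manual coding of a scanpath overlay.
coded_intervals = ["eyes", "eyes", "mouth", "eyes", "gesture", "mouth",
                   "eyes", "mouth", "eyes", "eyes"]

counts = Counter(coded_intervals)
total = sum(counts.values())

# Dwell time per cue = number of coded intervals * 0.10 s; proportions are
# reported as percentages of total dwell time, as in the abstract.
for cue, n in counts.most_common():
    print(f"{cue}: {n * 0.10:.1f} s ({100 * n / total:.1f}% of dwell time)")
```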


1993 ◽  
Vol 77 (3) ◽  
pp. 995-1020 ◽  
Author(s):  
Jean-Charles Chebat ◽  
Claire Gelinas-Chebat ◽  
Pierre Filiatrault

This study explores the interactive effects of musical and visual cues on time perception in a specific situation: waiting in a bank. Videotapes are employed to simulate the situation, using a 2 × 3 factorial design (N = 427): two amounts of visual information (high vs. low) crossed with three music conditions (fast tempo, slow tempo, and no music). Two mediating variables, mood and attention, are tested in the relation between the independent variables (musical and visual) and the dependent variable (perceived waiting time). Results of multivariate analysis of variance and a system of simultaneous equations show that musical and visual cues do not have symmetrical effects: musical tempo has a global (moderating) effect on the whole structure of the relations among the dependent, independent, and mediating variables but no direct influence on time perception, whereas the visual cues do affect time perception, with the significance of this effect depending on musical tempo. Also, the “Resource Allocation Model of Time Estimation” predicts the attention-time relation better than Ornstein’s “storage-size theory.” Mood state serves as a substitute for time information with slow music, but its effects are cancelled with fast music.
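
For a concrete picture of the design, the 2 × 3 between-subjects layout maps onto a standard two-way factorial model. The sketch below simulates placeholder data (pure noise, no real effects) and fits a univariate ANOVA via statsmodels; the study itself used MANOVA and a system of simultaneous equations, so this is an analogy under stated assumptions, not a reproduction of the analysis:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
n = 427  # total sample size from the abstract; cell sizes need not be equal

# Hypothetical data: the two factors of the 2 x 3 design and a simulated
# perceived-waiting-time outcome (minutes). Real effects would come from
# the videotape experiment, not from this noise model.
df = pd.DataFrame({
    "visual_info": rng.choice(["high", "low"], size=n),
    "music": rng.choice(["fast", "slow", "none"], size=n),
})
df["perceived_wait"] = 5.0 + rng.normal(0.0, 1.5, size=n)

# Two-way factorial ANOVA with interaction (one DV shown for simplicity).
model = ols("perceived_wait ~ C(visual_info) * C(music)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```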

