relative weighting
Recently Published Documents

TOTAL DOCUMENTS: 75 (five years: 23)
H-INDEX: 17 (five years: 3)

2021 ◽  
Vol 263 (2) ◽  
pp. 4717-4723
Author(s):  
Song Li ◽  
Roman Schlieper ◽  
Jürgen Peissig

Active noise cancellation (ANC) headphones are becoming increasingly important as they can effectively attenuate perceived ambient noise. Fixed filters are commonly applied in commercially available ANC headphones because of their robustness, but they cannot adapt to changes in dynamic environments, which degrades ANC performance. In contrast, adaptive filters can update the ANC filters to compensate for noise in dynamic environments, but large estimation errors can occur after a sudden change in the direction or type of noise, or in the secondary path. Some studies have proposed ANC systems that combine fixed and adaptive filters. Building on this mechanism, we propose a semi-adaptive ANC system in which the fixed and adaptive filters are weighted in real time. Initially, the weighting for the fixed filter dominates the whole system to ensure robustness. Then, the residual error of the adaptive filter is simulated and compared with the measured residual error to determine the relative weighting between the fixed and adaptive filters. In this study, the approach is applied to a feedback ANC system. Simulation results show that the proposed approach achieves high noise attenuation while remaining robust to time-varying secondary paths.
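The weighting mechanism described in this abstract can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' implementation: the mismatch measure, the smoothing rate, and all names are hypothetical.

```python
import numpy as np

def blend_anc_outputs(y_fixed, y_adaptive, e_simulated, e_measured,
                      alpha_prev, rate=0.05):
    """Update the relative weighting between a fixed and an adaptive
    ANC filter for one block of samples (illustrative sketch).

    If the adaptive filter's simulated residual error is close to the
    measured residual, trust the adaptive filter more; otherwise shift
    weight back toward the robust fixed filter.
    """
    # Normalized mismatch between simulated and measured residual power.
    p_meas = np.mean(e_measured ** 2) + 1e-12
    mismatch = np.mean((e_simulated - e_measured) ** 2) / p_meas
    # Target weight for the adaptive filter: high when mismatch is low.
    alpha_target = 1.0 / (1.0 + mismatch)
    # Smooth the weight so the system stays robust to sudden changes.
    alpha = alpha_prev + rate * (alpha_target - alpha_prev)
    # Weighted combination of the two anti-noise signals.
    y = (1.0 - alpha) * y_fixed + alpha * y_adaptive
    return y, alpha
```

Starting with the fixed filter dominant (a small `alpha_prev`) and letting the weight drift toward the adaptive filter only while its simulated and measured residuals agree mirrors the robustness-first behavior the abstract describes.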


Author(s):  
Bojadzievski Andonova ◽  
Ramesh Kulkarni

This paper describes the interdisciplinary nature of Computing and Natural Science (CNS) in engineering and its relations to other fields. It discusses the phases by which CNS education evolved, from its initial growth in the 1980s to the present. The limitations and potential benefits of various CNS education methodologies are addressed, as is the emergence of a set of foundational elements common to most strategies. CNS course content, grading, and curricula are examined, and bachelor’s programs are surveyed. The curricula of the various programs are compared with respect to their relative weighting of the standard “toolkit.”


Author(s):  
Amichai Cohen ◽  
David Zlotogorski

When considering the term “proportionality,” most people intuitively assume that a quantitative value can be attached to it. In this chapter, this assumption is questioned. First, the chapter presents empirical evidence indicating that there is no agreement on a specific numerical formula among IHL experts or military officers. Second, the chapter critically evaluates attempts to create a formula for evaluating proportionality. Third, the chapter discusses the use of “rules of thumb” to reduce the uncertainties in applying proportionality. We conclude that the principle is inherently, and intentionally, vague. No amount of knowledge or experience will lead different decision-makers to the same results, nor is it the goal of the principle of proportionality to achieve the same results across the board. In this regard, proportionality in IHL is not about numbers so much as about ensuring a process that weighs competing interests. The relative weighting to be used in any given case is intentionally left open, beyond the purview of the principle.


Author(s):  
Xiao Cai ◽  
Yulong Yin ◽  
Qingfang Zhang

Purpose: Speech production requires the combined efforts of feedforward and feedback control subsystems. The primary purpose of this study was to explore whether the relative weighting of auditory feedback control differs between first language (L1) and second language (L2) production in late bilinguals. The authors also made an exploratory investigation into how bilinguals’ speech fluency and speech perception relate to their auditory feedback control. Method: Twenty Chinese–English bilinguals named Chinese or English bisyllabic words while being exposed to 30- or 60-dB unexpected brief masking noise. Language (L1 or L2) and noise condition (quiet, weak noise, or strong noise) were manipulated in the experiment. L1 and L2 speech fluency tests and an L2 perception test were also included to measure bilinguals’ speech fluency and auditory acuity. Results: Peak intensity analyses indicated that the intensity increases in the weak and strong noise conditions were larger in L2 (English) than in L1 (Chinese) production. Intensity contour analysis showed that the intensity increases in both languages had an onset around 80–140 ms, a peak around 220–250 ms, and persisted until 400 ms after vocalization onset. Correlation analyses further revealed that poorer speech fluency or poorer L2 auditory acuity was associated with a larger Lombard effect. Conclusions: For late bilinguals, reliance on auditory feedback control is heavier in L2 than in L1 production. We provide empirical support for a relation between speech fluency and the relative weighting of auditory feedback control, and the first evidence for the production–perception link in L2 speech motor control.


Languages ◽  
2021 ◽  
Vol 6 (2) ◽  
pp. 67
Author(s):  
Claudia Felser ◽  
Anna Jessen

Coordinated subjects often show variable number agreement with the finite verb, but linguistic approaches to this phenomenon have rarely been informed by systematically collected data. We report the results from three experiments investigating German speakers’ agreement preferences with complex subjects joined by the correlative conjunctions sowohl…als auch (‘both…and’), weder…noch (‘neither…nor’) or entweder…oder (‘either…or’). We examine to what extent conjunction type and a conjunct’s relative proximity to the verb affect the acceptability and processability of singular vs. plural agreement. Experiment 1 was an untimed acceptability rating task, Experiment 2 a timed sentence completion task, and Experiment 3 a self-paced reading task. Taken together, our results show that number agreement with correlative coordination in German is primarily determined by a default constraint triggering plural agreement, which interacts with linear order and semantic factors. Semantic differences between conjunctions affected speakers’ agreement preferences only in the absence of processing pressure, not their initial agreement computation. The combined results from our offline and online experimental measures suggest that the constraints under investigation differ not only in their relative weighting but also in their relative timing during agreement computation.


2021 ◽  
Vol 288 (1947) ◽  
Author(s):  
Johanna T. Schultz ◽  
Hendrik K. Beck ◽  
Tina Haagensen ◽  
Tasmin Proost ◽  
Christofer J. Clemente

Locomotion is a key aspect of many ecologically relevant tasks; survival therefore often depends on an organism’s ability to perform these tasks well. Despite this significance, we have little idea how different performance tasks are weighted when increased performance in one task comes at the cost of decreased performance in another. Additionally, the ability of natural systems to become optimized for a specific task can be limited by structural, historical or functional constraints. Climbing lizards provide a good example of these constraints, as climbing ability likely requires the optimization of tasks that may conflict with one another, such as increasing speed, avoiding falls and reducing the cost of transport (COT). Understanding how modifications to the lizard bauplan influence these tasks may allow us to understand the relative weighting of different performance objectives among species. Here, we reconstruct multiple performance landscapes of climbing locomotion using a 10 d.f. robot based upon the lizard bauplan, including an actuated spine, shoulders and feet, the latter of which interlock with the surface via claws. This design allows us to independently vary speed, foot angles and range of motion (ROM), while simultaneously collecting data on climbed distance, stability and efficiency. We first demonstrate a trade-off between speed and stability, with high speeds resulting in decreased stability and low speeds in an increased COT. By varying the orientation of the fore- and hindfeet independently, we found that geckos converge on a narrow optimum of foot angles (fore 20°, hind 100°) for both speed and stability, but avoid a secondary, wider optimum (fore −20°, hind −50°), highlighting a possible constraint. Modifying spine and limb ROM revealed a gradient in performance. Evolutionary modifications in movement among extant species appear to follow this gradient towards regions that promote speed and efficiency.


2021 ◽  
Vol 2 ◽  
Author(s):  
Emily A. Keshner ◽  
Anouk Lamontagne

Dynamic systems theory transformed our understanding of motor control by recognizing the continual interaction between the organism and the environment. Movement could no longer be visualized simply as a response to a pattern of stimuli or as a demonstration of prior intent; movement is context dependent and is continuously reshaped by the ongoing dynamics of the world around us. Virtual reality is one methodological variable that allows us to control and manipulate that environmental context. A large body of literature exists to support the impact of visual flow, visual conditions, and visual perception on the planning and execution of movement. In rehabilitative practice, however, this technology has been employed mostly as a tool for motivation and enjoyment of physical exercise. The opportunity to modulate motor behavior through the parameters of the virtual world is often ignored in practice. In this article we present the results of experiments from our laboratories and from others demonstrating that presenting particular characteristics of the virtual world through different sensory modalities will modify balance and locomotor behavior. We will discuss how movement in the virtual world opens a window into the motor planning processes and informs us about the relative weighting of visual and somatosensory signals. Finally, we discuss how these findings should influence future treatment design.


Author(s):  
Joost de Jong ◽  
Elkan G. Akyürek ◽  
Hedderik van Rijn

Estimation of time depends heavily on both global and local statistical context. Durations that are short relative to the global distribution are systematically overestimated; durations that are locally preceded by long durations are also overestimated. Context effects are prominent in duration discrimination tasks, where a standard duration and a comparison duration are presented on each trial. In this study, we compare and test two models that posit a dynamically updating internal reference that biases time estimation on global and local scales in duration discrimination tasks. The internal reference model suggests that the internal reference operates during postperceptual stages and only interacts with the first presented duration. In contrast, a Bayesian account of time estimation implies that any perceived duration updates the internal reference and therefore interacts with both the first and second presented duration. We implemented both models and tested their predictions in a duration discrimination task where the standard duration varied from trial to trial. Our results are in line with a Bayesian perspective on time estimation. First, the standard systematically biased estimation of the comparison, such that shorter standards increased the likelihood of reporting that the comparison was shorter. Second, both the previous standard and comparison systematically biased time estimation of subsequent trials in the same direction. Third, more precise observers showed smaller biases. In sum, our findings suggest a common dynamic prior for time that is updated by each perceived duration and where the relative weighting of old and new observations is determined by their relative precision.
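The precision-weighted updating that the Bayesian account implies can be written down directly. The sketch below assumes a Gaussian prior over duration and a Gaussian likelihood for each perceived duration; the function and variable names are hypothetical, not from the paper.

```python
def update_duration_prior(prior_mean, prior_prec, obs, obs_prec):
    """Precision-weighted Bayesian update of an internal duration reference.

    Each perceived duration (standard or comparison alike) pulls the
    prior toward itself, weighted by its precision (1 / variance).
    """
    post_prec = prior_prec + obs_prec          # precisions add
    post_mean = (prior_prec * prior_mean + obs_prec * obs) / post_prec
    return post_mean, post_prec
```

Note how this reproduces the third finding: an observer with high sensory precision (`obs_prec` large relative to `prior_prec`) weights the prior less, so each estimate regresses less toward the internal reference and shows a smaller bias.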


2021 ◽  
Vol 12 (1) ◽  
pp. 17
Author(s):  
Michael Peeters ◽  
M Kenneth Cor ◽  
Erik Maki

Description of the Problem: High-stakes decision-making should rest on sound validation evidence, and reliability is vital to this. Within didactic courses, a short exam may not be very reliable on its own, so supplementing it with quizzes might help. But by how much? This study’s objective was to quantify how much reliability (for overall module-grades) could be gained by adding quiz data to traditional exam data in a clinical-science module. The Innovation: Quizzes are a common instructional strategy in didactic coursework, though individual contexts and instructors vary in how formatively and/or summatively they are used. Second-year PharmD students took a clinical-science course in which a 5-week module focused on cardiovascular therapeutics. Generalizability Theory (G-Theory) was used to combine seven quizzes and the subsequent exam into one module-level reliability, based on a model in which students were crossed with items nested in eight fixed testing occasions (using mGENOVA). Furthermore, G-Theory decision-studies were planned to illustrate how module-grade reliability changes as the number of quiz items and the relative weighting of quizzes are altered. Critical Analysis: One hundred students took seven quizzes and one exam. Individually, the exam had 32 multiple-choice questions (MCQs; KR-20 reliability = 0.67), while the quizzes had a total of 50 MCQs (5–9 each), with most individual quiz KR-20s at or below 0.54. After combining the quizzes and exam using G-Theory, the estimated reliability of module-grades was 0.73, improved from the exam alone. Doubling the quiz weight from the syllabus’ 18% quizzes and 82% exam increased the composite reliability of module-grades to 0.77, and a reliability of 0.80 was achieved with equal weights for quizzes and exam. Next Steps: As expected, more items led to higher reliability. However, using quizzes predominantly formatively had little impact on reliability, while using them more summatively (i.e., increasing their relative weight in the module-grade) improved it further. Thus, depending on use, quizzes can add to a course’s rigor.
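The effect of re-weighting components on composite reliability can be illustrated with a classical-test-theory analogue. The sketch below uses Mosier’s formula for the reliability of a weighted composite; it is a generic illustration, not the G-Theory decision-study the abstract describes (which used mGENOVA), and all inputs are hypothetical.

```python
import numpy as np

def composite_reliability(weights, sds, rels, corr):
    """Mosier's formula for the reliability of a weighted composite.

    weights : component weights (e.g., quiz weight vs. exam weight)
    sds     : component score standard deviations
    rels    : component reliabilities (e.g., KR-20 values)
    corr    : correlation matrix of component observed scores
    """
    w = np.asarray(weights, float)
    s = np.asarray(sds, float)
    r = np.asarray(rels, float)
    R = np.asarray(corr, float)
    # Variance of the weighted composite score.
    var_comp = (np.outer(w * s, w * s) * R).sum()
    # Error variance contributed by each component.
    err = np.sum((w * s) ** 2 * (1.0 - r))
    return 1.0 - err / var_comp
```

With plausible inputs, shifting weight toward components whose errors are uncorrelated raises the composite reliability above any single component’s, which is the qualitative pattern the study reports when quiz weight is increased.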


2021 ◽  
Author(s):  
Ian Krajbich

Standard decision models include two components: subjective-value (utility) functions and stochastic choice rules. The first establishes the relative weighting of the attributes or dimensions and the second determines how consistently the higher utility option is chosen. For a decision problem with M attributes, researchers often estimate M-1 utility parameters and separately estimate a choice-consistency parameter. Instead, researchers sometimes estimate M parameters in the utility function and neglect choice consistency. I argue that while these two approaches are mathematically identical, the latter conflates utility and consistency parameters, leading to ambiguous interpretations and conclusions. At the same time, behavior arises from the interaction of utility and consistency parameters, so for choice prediction they should not be considered in isolation. Overall, I advocate for a clear separation between utility functions and stochastic choice rules when modeling decision-making, and reinforce the notion that researchers should use M-1 parameters for M-attribute decision problems.
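The conflation this abstract describes can be made concrete with a two-attribute logit model: an unconstrained pair of utility weights (a, b) fits exactly the same choice probabilities as one normalized weight w plus a consistency parameter beta, whenever a = beta * w and b = beta * (1 - w). A minimal sketch (all names hypothetical, not from the paper):

```python
import math

def choice_prob(xA, xB, w, beta):
    """P(choose A over B) with M-1 = 1 utility parameter (w) plus an
    explicit choice-consistency (inverse-temperature) parameter beta."""
    uA = w * xA[0] + (1.0 - w) * xA[1]
    uB = w * xB[0] + (1.0 - w) * xB[1]
    return 1.0 / (1.0 + math.exp(-beta * (uA - uB)))

def choice_prob_conflated(xA, xB, a, b):
    """P(choose A over B) with M = 2 free utility weights and no
    consistency parameter: a and b absorb both weighting and consistency."""
    uA = a * xA[0] + b * xA[1]
    uB = a * xB[0] + b * xB[1]
    return 1.0 / (1.0 + math.exp(-(uA - uB)))
```

Because (a, b) = (beta * w, beta * (1 - w)) yields identical predictions, a fitted increase in a could reflect either a heavier attribute weighting or greater overall consistency, which is exactly the interpretive ambiguity the author warns about.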

