Early phonetic learning without phonetic categories: Insights from large-scale simulations on realistic input

2021 ◽  
Vol 118 (7) ◽  
pp. e2001844118
Author(s):  
Thomas Schatz ◽  
Naomi H. Feldman ◽  
Sharon Goldwater ◽  
Xuan-Nga Cao ◽  
Emmanuel Dupoux

Before they even speak, infants become attuned to the sounds of the language(s) they hear, processing native phonetic contrasts more easily than nonnative ones. For example, between 6 to 8 mo and 10 to 12 mo, infants learning American English get better at distinguishing English [ɹ] and [l], as in “rock” vs. “lock,” relative to infants learning Japanese. Influential accounts of this early phonetic learning phenomenon initially proposed that infants group sounds into native vowel- and consonant-like phonetic categories—like [ɹ] and [l] in English—through a statistical clustering mechanism dubbed “distributional learning.” The feasibility of this mechanism for learning phonetic categories has been challenged, however. Here, we demonstrate that a distributional learning algorithm operating on naturalistic speech can predict early phonetic learning, as observed in Japanese and American English infants, suggesting that infants might learn through distributional learning after all. We further show, however, that, contrary to the original distributional learning proposal, our model learns units too brief and too fine-grained acoustically to correspond to phonetic categories. This challenges the influential idea that what infants learn are phonetic categories. More broadly, our work introduces a mechanism-driven approach to the study of early phonetic learning, together with a quantitative modeling framework that can handle realistic input. This allows accounts of early phonetic learning to be linked to concrete, systematic predictions regarding infants’ attunement.
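The core idea of distributional learning is statistical clustering: recurring modes in the distribution of acoustic input can be discovered without labels. A minimal sketch of that idea, using a plain k-means over synthetic one-dimensional "acoustic" frames (the data, feature dimensionality, and clustering method here are illustrative assumptions, not the paper's actual model):

```python
# Distributional learning as unsupervised clustering: two overlapping
# synthetic "sound" distributions are recovered from unlabeled frames.
import numpy as np

rng = np.random.default_rng(0)
# Toy 1-D acoustic frames, e.g., a formant-like value for two sound types.
frames = np.concatenate([rng.normal(500, 40, 200), rng.normal(700, 40, 200)])

def kmeans_1d(x, k, iters=50):
    """Plain k-means on 1-D data; returns sorted cluster centers."""
    centers = np.quantile(x, np.linspace(0.1, 0.9, k))
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return np.sort(centers)

centers = kmeans_1d(frames, k=2)
print(centers)  # two centers, near 500 and 700
```

The paper's finding is that on realistic speech the units such a mechanism discovers are far shorter and more numerous than phonetic categories, which is why the toy two-cluster picture above should be read only as the original proposal, not the result.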


Author(s):  
Hiromitsu Hattori ◽  
Yuu Nakajima ◽  
Shohei Yamane

As it becomes easier to obtain reams of data on human behavior via ubiquitous devices, it is becoming clear that we must pursue two conflicting research directions for realizing multiagent-based social simulations: creating large-scale simulations and elaborating fine-scale human behavior models. The challenge in this paper is to achieve massive urban traffic simulations with fine-grained levels of driving behavior. Toward this objective, we present the design and implementation of a multiagent-based simulation platform that enables us to execute massive yet sophisticated multiagent traffic simulations. We show the capability of the developed platform to reproduce urban traffic using a social experiment scenario, and we investigate its potential to analyze traffic from both macroscopic and microscopic viewpoints.


Author(s):  
Jian Tao ◽  
Werner Benger ◽  
Kelin Hu ◽  
Edwin Mathews ◽  
Marcel Ritter ◽  
...  

2019 ◽  
Vol 22 (3) ◽  
pp. 365-380 ◽  
Author(s):  
Matthias Olthaar ◽  
Wilfred Dolfsma ◽  
Clemens Lutz ◽  
Florian Noseleit

In a competitive business environment at the Bottom of the Pyramid, smallholders supplying global value chains may be thought to be at the whims of downstream large-scale players and local market forces, leaving no room for strategic entrepreneurial behavior. In such a context we test the relationship between the use of strategic resources and firm performance. We adopt resource-based theory and show that seemingly homogeneous smallholders deploy resources differently and, consequently, some outperform others. We argue that resource-based theory yields a more fine-grained understanding of smallholder performance than the approaches generally applied in agricultural economics. We develop a mixed-method approach that makes it possible to pinpoint relevant, industry-specific resources and to empirically identify the relative contribution of each resource to competitive advantage. The results show that proper use of quality labor, storage facilities, time of selling, and availability of animals are key capabilities.


Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 991
Author(s):  
Peidong Zhu ◽  
Peng Xun ◽  
Yifan Hu ◽  
Yinqiao Xiong

A large-scale Cyber-Physical System (CPS) such as a smart grid usually provides service to a vast number of users as a public utility, and security is one of the most vital aspects of such critical infrastructures. Existing CPS security work usually considers attacks from the information domain on the physical domain, such as injecting false data to damage sensing. Social Collective Attack on CPS (SCAC) is proposed as a new kind of attack that intrudes into the social domain and manipulates the collective behavior of social users to disrupt the physical subsystem. To provide a systematic description framework for such threats, we extend MITRE ATT&CK, the most widely used cyber adversary behavior modeling framework, to cover the social, cyber, and physical domains. We discuss how disinformation may be constructed and eventually lead to physical system malfunction through the social-cyber-physical interfaces, and we analyze how adversaries launch disinformation attacks to better manipulate collective behavior. Finally, a simulation analysis of SCAC in a smart grid is provided to demonstrate the possibility of such an attack.


SLEEP ◽  
2021 ◽  
Author(s):  
Dorothee Fischer ◽  
Elizabeth B Klerman ◽  
Andrew J K Phillips

Abstract
Study Objectives: Sleep regularity predicts many health-related outcomes. Currently, however, there is no systematic approach to measuring sleep regularity. Traditionally, metrics have assessed deviations in sleep patterns from an individual’s average. Traditional metrics include intra-individual standard deviation (StDev), Interdaily Stability (IS), and Social Jet Lag (SJL). Two metrics were recently proposed that instead measure variability between consecutive days: Composite Phase Deviation (CPD) and Sleep Regularity Index (SRI). Using large-scale simulations, we investigated the theoretical properties of these five metrics.
Methods: Multiple sleep-wake patterns were systematically simulated, including variability in daily sleep timing and/or duration. Average estimates and 95% confidence intervals were calculated for six scenarios that affect measurement of sleep regularity: ‘scrambling’ the order of days; daily vs. weekly variation; naps; awakenings; ‘all-nighters’; and length of study.
Results: SJL measured weekly but not daily changes. Scrambling did not affect StDev or IS, but did affect CPD and SRI; these metrics, therefore, measure sleep regularity on multi-day and day-to-day timescales, respectively. StDev and CPD did not capture sleep fragmentation. IS and SRI behaved similarly in response to naps and awakenings but differed markedly for all-nighters. StDev and IS required over a week of sleep-wake data for unbiased estimates, whereas CPD and SRI required larger sample sizes to detect group differences.
Conclusions: Deciding which sleep regularity metric is most appropriate for a given study depends on a combination of the type of data gathered, the study length and sample size, and which aspects of sleep regularity are most pertinent to the research question.
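Of the day-to-day metrics above, the Sleep Regularity Index has a particularly simple definition: the probability of being in the same sleep/wake state at the same clock time on consecutive days, rescaled so that 100 means a perfectly repeating schedule and 0 means chance-level agreement. A minimal sketch on toy epoch data (the half-hour epoch length and the example schedules are illustrative assumptions):

```python
# Sleep Regularity Index (SRI) from a binary sleep/wake time series.
import numpy as np

def sri(states, epochs_per_day):
    """states: binary array (1 = asleep), whole days concatenated in order."""
    s = np.asarray(states)
    same = s[:-epochs_per_day] == s[epochs_per_day:]  # pairs 24 h apart
    return -100.0 + 200.0 * same.mean()

# 48 half-hour epochs per day; asleep from 00:00 to 08:00.
day = np.array([1] * 16 + [0] * 32)
regular = np.tile(day, 7)  # identical schedule every day for a week
# Alternating days shifted by 3 h (6 epochs), i.e., a day-to-day irregularity.
shifted = np.concatenate([np.roll(day, 6 * (i % 2)) for i in range(7)])

print(sri(regular, 48))  # 100.0
print(sri(shifted, 48))  # 50.0
```

Note how a multi-day metric like the intra-individual StDev of sleep onset would score the alternating schedule as only mildly variable, whereas SRI penalizes every consecutive-day mismatch, matching the abstract's point that the metrics operate on different timescales.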


Author(s):  
Benjamin Wassermann ◽  
Nina Korshunova ◽  
Stefan Kollmannsberger ◽  
Ernst Rank ◽  
Gershon Elber

Abstract
This paper proposes an extension of the finite cell method (FCM) to V-rep models, a novel geometric framework for volumetric representations. This combination of an embedded domain approach (FCM) and a new modeling framework (V-rep) forms the basis for an efficient and accurate simulation of mechanical artifacts that are characterized not only by complex shapes but also by non-standard interior structure. Such objects are attracting increasing interest in the context of the new design opportunities opened up by additive manufacturing, in particular when graded or micro-structured material is applied. Two types of functionally graded materials (FGM) are considered. The first, multi-material FGM, is described using the inherent ability of V-rep models to assign different properties throughout the interior of a domain. For the second, single-material FGM, which is heterogeneously micro-structured, the effective material behavior of representative volume elements is characterized by homogenization, and large-scale simulations are performed using the embedded domain approach.


Geosciences ◽  
2021 ◽  
Vol 11 (2) ◽  
pp. 41
Author(s):  
Tim Jurisch ◽  
Stefan Cantré ◽  
Fokke Saathoff

A variety of recent studies have demonstrated the applicability of different dried, fine-grained dredged materials as replacement material for erosion-resistant sea dike covers. In Rostock, Germany, a large-scale field experiment was conducted in which different dredged materials were tested with regard to installation technology, stability, turf development, infiltration, and erosion resistance. The infiltration experiments to study the development of a seepage line in the dike body yielded unexpected measurement results. Due to the high complexity of the problem, standard geo-hydraulic models proved unable to explain these results. Therefore, different methods of inverse infiltration modeling were applied, namely the parameter estimation tool (PEST) and the AMALGAM algorithm. In the paper, the two approaches are compared and discussed. A sensitivity analysis confirmed the presumption of non-linear model behavior for the infiltration problem, and the eigenvalue ratio indicates that the dike infiltration is an ill-posed problem. Although this complicates the inverse modeling (e.g., termination in local minima), parameter sets close to an optimum were found with both the PEST and the AMALGAM algorithms. Together with the field measurement data, this information supports the assessment of the effective material properties of the dredged materials used as dike cover material.
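The inverse-modeling step described above boils down to searching for the model parameters that minimize the misfit between simulated and measured seepage data. A minimal sketch of that idea on a toy model (the exponential seepage curve, the parameter value, and the grid-search optimizer are illustrative assumptions, not the geo-hydraulic models or the PEST/AMALGAM algorithms used in the study):

```python
# Inverse modeling sketch: recover a hydraulic rate parameter by minimizing
# the sum-of-squares misfit between a toy forward model and "measurements".
import numpy as np

def seepage_model(t, k):
    """Toy seepage-line rise: saturated fraction approaching 1 at rate k."""
    return 1.0 - np.exp(-k * t)

t = np.linspace(0, 10, 50)
k_true = 0.7
observed = seepage_model(t, k_true)  # noise-free synthetic "measurements"

# Inverse step: grid search over candidate parameters for the best fit.
candidates = np.linspace(0.1, 2.0, 191)
errors = [np.sum((seepage_model(t, k) - observed) ** 2) for k in candidates]
k_est = candidates[int(np.argmin(errors))]
print(round(k_est, 2))  # 0.7
```

With a well-posed toy problem the misfit surface has a single clear minimum; the ill-posedness the paper reports means the real misfit surface is flat or multi-modal in some parameter directions, which is exactly why local-minimum termination becomes a practical concern.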


Algorithms ◽  
2021 ◽  
Vol 14 (5) ◽  
pp. 154
Author(s):  
Marcus Walldén ◽  
Masao Okita ◽  
Fumihiko Ino ◽  
Dimitris Drikakis ◽  
Ioannis Kokkinakis

Increasing processing capabilities and input/output constraints of supercomputers have increased the use of co-processing approaches, i.e., visualizing and analyzing data sets of simulations on the fly. We present a method that evaluates the importance of different regions of simulation data and a data-driven approach that uses the proposed method to accelerate in-transit co-processing of large-scale simulations. We use the importance metrics to simultaneously employ multiple compression methods on different data regions to accelerate the in-transit co-processing. Our approach strives to adaptively compress data on the fly and uses load balancing to counteract memory imbalances. We demonstrate the method’s efficiency through a fluid mechanics application, a Richtmyer–Meshkov instability simulation, showing how to accelerate the in-transit co-processing of simulations. The results show that the proposed method can expeditiously identify regions of interest, even when using multiple metrics. Our approach achieved a speedup of 1.29× in a lossless scenario, and the data decompression time was sped up by 2× compared to using a single compression method uniformly.
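The key mechanism here is per-region codec selection: score each region by an importance metric, then compress important regions lightly (fast to decompress for visualization) and unimportant ones heavily. A minimal sketch using variance as the importance metric and zlib compression levels as the two codecs (both are illustrative stand-ins for the paper's metrics and compression methods):

```python
# Importance-driven compression: each region of a 1-D array is scored and
# compressed with a light or heavy codec depending on that score.
import zlib
import numpy as np

def compress_regions(data, region_size, threshold):
    """Split data into regions; pick a zlib level per region by importance."""
    out = []
    for start in range(0, len(data), region_size):
        region = data[start:start + region_size]
        important = region.std() > threshold  # toy importance metric
        level = 1 if important else 9         # light vs heavy compression
        out.append((important, zlib.compress(region.tobytes(), level)))
    return out

rng = np.random.default_rng(1)
# A calm region (zeros) followed by a turbulent region (noise).
data = np.concatenate([np.zeros(4096), rng.standard_normal(4096)])
blocks = compress_regions(data, 4096, threshold=0.5)
print([imp for imp, _ in blocks])  # [False, True]
```

Because each region is compressed independently, regions can also be decompressed independently and in parallel on the co-processing side, which is where the reported decompression speedup comes from.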

