complexity measure
Recently Published Documents


TOTAL DOCUMENTS: 383 (FIVE YEARS: 55)

H-INDEX: 29 (FIVE YEARS: 3)

2021 ◽  
Author(s):  
Chris Gu ◽  
Yike Wang

Modern-day search platforms generally have two layers of information presentation. The outer layer displays the collection of search results with attributes selected by the platform, and consumers click on a product to reveal all of its attributes in the inner layer. The information revealed in the outer layer affects search costs and the probability of finding a match. To address the managerial question of optimal information layout, we create an information complexity measure of the outer layer, namely orderedness entropy, and study the consumer search process for information at the expense of time and cognitive costs. We first conduct online randomized experiments to show that consumers respond to, and actively reduce, cognitive cost, for which our information complexity measure provides a representation. Then, using a unique and rich panel tracking consumer search behaviors at a large online travel agency (OTA), we specify a novel sequential search model that jointly describes refinement search and product-clicking decisions. We find that cognitive cost is a major component of search cost, whereas loading-time cost has a much smaller share. By varying the information revealed in the outer layer, we propose information layouts that Pareto-improve both revenue and consumer welfare for our OTA. This paper was accepted by Juanjuan Zhang, marketing.
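
The paper's orderedness entropy is its own construct, so the sketch below is only an illustrative stand-in: it scores the outer layer of a results page by summing Shannon entropies of the displayed attribute values. All names and data here are hypothetical, not the authors' implementation.

    import math
    from collections import Counter

    def attribute_entropy(values):
        """Shannon entropy (bits) of one displayed attribute across the listed products."""
        counts = Counter(values)
        total = sum(counts.values())
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    def outer_layer_complexity(results, attributes):
        """Sum of per-attribute entropies as a rough proxy for outer-layer information complexity."""
        return sum(attribute_entropy([r[a] for r in results]) for a in attributes)

    # Example: hotel results with price band and star rating shown in the outer layer.
    results = [
        {"price_band": "low", "stars": 3},
        {"price_band": "low", "stars": 4},
        {"price_band": "high", "stars": 5},
    ]
    print(outer_layer_complexity(results, ["price_band", "stars"]))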


2021 ◽  
Vol 2021 (12) ◽  
pp. 124003
Author(s):  
Preetum Nakkiran ◽  
Gal Kaplun ◽  
Yamini Bansal ◽  
Tristan Yang ◽  
Boaz Barak ◽  
...  

We show that a variety of modern deep learning tasks exhibit a "double-descent" phenomenon where, as we increase model size, performance first gets worse and then gets better. Moreover, we show that double descent occurs not just as a function of model size but also as a function of the number of training epochs. We unify these phenomena by defining a new complexity measure that we call the effective model complexity, and we conjecture a generalized double descent with respect to this measure. Furthermore, our notion of model complexity allows us to identify certain regimes in which increasing (even quadrupling) the number of training samples actually hurts test performance.
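
Roughly, and paraphrasing the paper's definition rather than quoting it, the effective model complexity (EMC) of a training procedure T with respect to a data distribution D and a tolerance ε is the largest sample size on which T still attains near-zero training error:

    \[
      \mathrm{EMC}_{\mathcal{D},\varepsilon}(\mathcal{T}) \;:=\;
      \max\Bigl\{\, n \;\Big|\; \mathbb{E}_{S \sim \mathcal{D}^{n}}\bigl[\mathrm{Error}_{S}\bigl(\mathcal{T}(S)\bigr)\bigr] \le \varepsilon \Bigr\}
    \]

The conjectured generalized double descent then places the test-error peak roughly where the EMC is comparable to the number of training samples.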


2021 ◽  
Author(s):  
Remo Gresta ◽  
Elder Cirilo

Identifiers represent approximately 2/3 of the elements in source code, and their names directly impact code comprehension. Indeed, intention-revealing names make code easier to understand, especially in code review sessions, where developers examine each other's code for mistakes. We argue that names should be both understandable and pronounceable so that developers can review and discuss code effectively. Therefore, we carried out an empirical study based on 40 open-source projects to explore developers' naming practices with respect to word complexity and pronounceability. We applied the Word Complexity Measure (WCM) to discover complex names, and we analyzed the phonetic similarity between names and hard-to-pronounce English words. As a result, we observed that most of the analyzed names are composed, at least in part, of hard-to-pronounce words, and the overall word complexity scores of the projects tend to be high. Finally, the results show that code location impacts word complexity: names declared in small scopes tend to be simpler than names declared in large scopes.
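
The actual Word Complexity Measure assigns points for specific phonological patterns, and the sketch below does not reproduce it. It only shows the general shape of such an analysis: split identifiers into words, then score each word with a crude consonant-cluster heuristic as a rough pronounceability proxy (the heuristic and thresholds are arbitrary assumptions).

    import re

    def split_identifier(name):
        """Split camelCase and snake_case identifiers into lowercase words."""
        parts = re.sub(r"([a-z0-9])([A-Z])", r"\1 \2", name).replace("_", " ").split()
        return [p.lower() for p in parts]

    def cluster_score(word):
        """Count consonant clusters of length >= 3 as a crude hard-to-pronounce proxy."""
        return len(re.findall(r"[bcdfghjklmnpqrstvwxz]{3,}", word))

    def identifier_complexity(name):
        return sum(cluster_score(w) for w in split_identifier(name))

    print(identifier_complexity("strLengthCtxt"))   # 3: "str", "ngth", "ctxt" each count
    print(identifier_complexity("userName"))        # 0: simple, pronounceable words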


2021 ◽  
Author(s):  
Yana Nehme ◽  
Mona Abid ◽  
Guillaume Lavoue ◽  
Matthieu Perreira Da Silva ◽  
Patrick Le Callet

Author(s):  
Sally G. Haskell ◽  
Ling Han ◽  
Erica A. Abel ◽  
Lori Bastian ◽  
Mary Driscoll ◽  
...  

Author(s):  
Maria-Florina Balcan ◽  
Siddharth Prasad ◽  
Tuomas Sandholm

We develop a new framework for designing truthful, high-revenue (combinatorial) auctions for limited supply. Our mechanism learns within an instance. It generalizes and improves over previously studied random-sampling mechanisms. It first samples a participatory group of bidders, then samples several learning groups of bidders from the remaining pool of bidders, learns a high-revenue auction from the learning groups, and finally runs that auction on the participatory group. Previous work on random-sampling mechanisms focused primarily on unlimited supply. Limited supply poses additional significant technical challenges, since allocations of items to bidders must be feasible. We prove guarantees on the performance of our mechanism based on a market-shrinkage term and a new complexity measure we coin partition discrepancy. Partition discrepancy simultaneously measures the intrinsic complexity of the mechanism class and the uniformity of the set of bidders. We then introduce new auction classes that can be parameterized in a way that does not depend on the number of bidders participating, and prove strong guarantees for these classes. We show how our mechanism can be implemented efficiently by leveraging practically efficient routines for solving winner determination. Finally, we show how to use structural revenue maximization to decide what auction class to use with our framework when there is a constraint on the number of learning groups.
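
As a highly simplified illustration of the sampling structure described above, the sketch below uses a single-item, second-price setting with a learned reserve price standing in for the paper's richer combinatorial auction classes; all names and numbers are hypothetical.

    import random

    def split_bidders(bidders, num_learning_groups, rng):
        """Sample a participatory group, then split the remaining bidders into learning groups."""
        shuffled = bidders[:]
        rng.shuffle(shuffled)
        cut = len(shuffled) // (num_learning_groups + 1)
        participatory = shuffled[:cut]
        learning = [shuffled[cut + i * cut: cut + (i + 1) * cut] for i in range(num_learning_groups)]
        return participatory, learning

    def group_revenue(bids, reserve):
        """Revenue of a second-price auction with a reserve, run on one group of bids."""
        if not bids or max(bids) < reserve:
            return 0.0
        top_two = sorted(bids, reverse=True)[:2]
        second = top_two[1] if len(top_two) > 1 else 0.0
        return max(second, reserve)

    def best_reserve(learning_groups):
        """Pick the reserve price that maximizes total revenue over the learning groups."""
        candidates = sorted({b for g in learning_groups for b in g})
        return max(candidates, key=lambda r: sum(group_revenue(g, r) for g in learning_groups))

    rng = random.Random(0)
    bidders = [round(rng.uniform(0, 10), 2) for _ in range(30)]      # hypothetical valuations
    participatory, learning = split_bidders(bidders, num_learning_groups=2, rng=rng)
    reserve = best_reserve(learning)
    print("learned reserve:", reserve,
          "revenue on participatory group:", group_revenue(participatory, reserve))

Because the reserve is learned only from the learning groups, the participatory bidders' bids never influence the prices they face, which is the intuition behind truthfulness in this family of random-sampling mechanisms.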


2021 ◽  
Vol 12 ◽  
Author(s):  
Ellen Marklund ◽  
Ulrika Marklund ◽  
Lisa Gustavsson

Extreme or exaggerated articulation of vowels, or vowel hyperarticulation, is a characteristic commonly found in infant-directed speech (IDS). High degrees of vowel hyperarticulation in parents' IDS have been tied to better speech sound category development and bigger vocabulary size in infants. In the present study, the relationship between vowel hyperarticulation in Swedish IDS directed to 12-month-olds and the phonetic complexity of infant vocalizations is investigated. Articulatory adaptation toward hyperarticulation is quantified as the difference in vowel space area between IDS and adult-directed speech (ADS). Phonetic complexity is estimated using the Word Complexity Measure for Swedish (WCM-SE). The results show that vowels in IDS were more hyperarticulated than vowels in ADS, and that parents' articulatory adaptation in terms of hyperarticulation correlates with the phonetic complexity of infant vocalizations. This can be explained by the parents' articulatory behavior impacting the infants' vocalization behavior, by the infants' social and communicative cues eliciting hyperarticulation in the parents' speech, or by both variables being impacted by a third, underlying variable such as the parents' general communicative adaptiveness.
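
A common way to quantify vowel space area is the area of the polygon spanned by the corner vowels in F1-F2 space (shoelace formula); the paper's exact computation may differ, and the formant values below are invented for illustration.

    def vowel_space_area(formants):
        """Area (shoelace formula) of the polygon spanned by (F1, F2) corner-vowel points."""
        pts = list(formants.values())
        n = len(pts)
        s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
                for i in range(n))
        return abs(s) / 2.0

    # Invented formant values (Hz) for the corner vowels /i/, /a/, /u/.
    ads = {"i": (300, 2300), "a": (750, 1300), "u": (320, 900)}   # adult-directed speech
    ids = {"i": (280, 2500), "a": (820, 1250), "u": (300, 820)}   # infant-directed speech

    adaptation = vowel_space_area(ids) - vowel_space_area(ads)    # hyperarticulation proxy
    print(round(adaptation, 1))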


2021 ◽  
Vol 13 (3) ◽  
pp. 1-21
Author(s):  
Suryajith Chillara

In this article, we are interested in understanding the complexity of computing multilinear polynomials using depth four circuits in which the polynomial computed at every node has a bound of r ≥ 1 on the individual degree with respect to each of its variables (referred to as multi-r-ic circuits). The goal of this study is to make progress towards proving superpolynomial lower bounds for general depth four circuits computing multilinear polynomials, by proving better bounds as the value of r increases. Recently, Kayal, Saha and Tavenas (Theory of Computing, 2018) showed that any depth four arithmetic circuit of bounded individual degree r computing an explicit multilinear polynomial on n^O(1) variables and degree d must have size at least (n/r^1.1)^Ω(√(d/r)). This bound, however, deteriorates as the value of r increases. It is a natural question to ask whether we can prove a bound that does not deteriorate as the value of r increases, or a bound that holds for a larger regime of r. In this article, we prove a lower bound that does not deteriorate with increasing values of r, albeit for a specific instance of d = d(n) but for a wider range of r. Formally, for all large enough integers n and a small constant η, we show that there exists an explicit polynomial on n^O(1) variables and degree Θ(log² n) such that any depth four circuit of bounded individual degree r ≤ n^η must have size at least exp(Ω(log² n)). This improvement is obtained by suitably adapting the complexity measure of Kayal et al. (Theory of Computing, 2018). This adaptation of the measure is inspired by the complexity measure used by Kayal et al. (SIAM J. Computing, 2017).
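
For readability, the two size bounds discussed in the abstract can be restated in LaTeX:

    \[
      \text{Kayal--Saha--Tavenas (2018):}\qquad
      \mathrm{size} \;\ge\; \Bigl(\tfrac{n}{r^{1.1}}\Bigr)^{\Omega\left(\sqrt{d/r}\right)}
    \]
    \[
      \text{this paper } \bigl(d = \Theta(\log^{2} n),\; r \le n^{\eta}\bigr):\qquad
      \mathrm{size} \;\ge\; \exp\bigl(\Omega(\log^{2} n)\bigr)
    \]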


Entropy ◽  
2021 ◽  
Vol 23 (7) ◽  
pp. 832
Author(s):  
Marta Borowska

This paper analyses the complexity of electroencephalogram (EEG) signals at different temporal scales for the analysis and classification of focal and non-focal EEG signals. Features were obtained from an original multiscale permutation Lempel–Ziv complexity measure (MPLZC). The MPLZC measure combines a multiscale structure, ordinal analysis, and permutation Lempel–Ziv complexity to quantify the dynamic changes of an EEG. We also show the dependency of MPLZC on several straightforward signal-processing concepts, which appear in biomedical EEG activity, via a set of synthetic signals. The main material of the study consists of EEG signals obtained from the Bern-Barcelona EEG database. The signals were divided into two groups: focal EEG signals (n = 100) and non-focal EEG signals (n = 100); statistical analysis was performed by means of the non-parametric Mann–Whitney test. The mean MPLZC values in the non-focal group are significantly higher than those in the focal group for scales above 1 (p < 0.05), indicating that the non-focal EEG signals are more complex. MPLZC feature sets are then used with a least squares support vector machine (LS-SVM) classifier to classify focal and non-focal EEG signals. Our experimental results confirm the usefulness of the MPLZC method for distinguishing focal from non-focal EEG signals, with a classification accuracy of 86%.
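
The exact MPLZC formulation is the author's; the sketch below only chains the three ingredients named in the abstract, namely coarse-graining at each temporal scale, ordinal (permutation) symbolization, and a simplified Lempel–Ziv phrase count of the resulting symbol sequence.

    import itertools
    import numpy as np

    def coarse_grain(x, scale):
        """Average consecutive, non-overlapping windows of length `scale`."""
        n = len(x) // scale
        return np.asarray(x[:n * scale]).reshape(n, scale).mean(axis=1)

    def ordinal_symbols(x, order=3):
        """Map each window of length `order` to the index of its ordinal (permutation) pattern."""
        perms = {p: i for i, p in enumerate(itertools.permutations(range(order)))}
        return [perms[tuple(np.argsort(x[i:i + order]))] for i in range(len(x) - order + 1)]

    def lz_phrase_count(seq):
        """Simplified Lempel-Ziv complexity: number of new phrases in a left-to-right parse."""
        phrases, count, i = set(), 0, 0
        while i < len(seq):
            j = i + 1
            while j <= len(seq) and tuple(seq[i:j]) in phrases:
                j += 1
            phrases.add(tuple(seq[i:j]))
            count += 1
            i = j
        return count

    def mplzc(x, scales=(1, 2, 3, 4, 5), order=3):
        """One complexity value per temporal scale."""
        return [lz_phrase_count(ordinal_symbols(coarse_grain(x, s), order)) for s in scales]

    rng = np.random.default_rng(0)
    print(mplzc(rng.standard_normal(2000)))   # synthetic signal for illustration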

