Multiobjective multitasking optimization assisted by multidirectional prediction method

Author(s):  
Qianlong Dang ◽  
Weifeng Gao ◽  
Maoguo Gong

Abstract Multiobjective multitasking optimization (MTO) is an emerging research topic in the field of evolutionary computation that has attracted extensive attention, and many evolutionary multitasking (EMT) algorithms have been proposed. One of the core issues, designing an efficient transfer strategy, has scarcely been explored. With this in mind, this paper is the first attempt to design an efficient transfer strategy based on a multidirectional prediction method. Specifically, the population is divided into multiple classes by a binary clustering method, and the representative point of each class is calculated. Then, an effective prediction method is developed to generate multiple prediction directions from the representative points. Afterward, a mutation strength adaptation method is proposed according to the improvement degree of each class. Finally, predictive transferred solutions are generated as transfer knowledge from the prediction directions and mutation strengths. Through this process, a multiobjective EMT algorithm based on the multidirectional prediction method is presented. Experiments on two MTO test suites indicate that the proposed algorithm is effective and competitive with other state-of-the-art EMT algorithms.
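As a rough illustration of the transfer pipeline described above (not the authors' exact algorithm), the sketch below clusters two consecutive generations of the source population, derives one prediction direction per class from the movement of its representative point, adapts a mutation strength to each class's improvement degree, and emits predicted transfer solutions. The k-means stand-in for the binary clustering, the index-based pairing of classes across generations, and the Gaussian noise model are all simplifying assumptions.

```python
import numpy as np

def predict_transfer_solutions(pop_prev, pop_curr, n_classes=2, rng=None):
    """Generate predictive transferred solutions from two consecutive
    generations (pop_prev, pop_curr: (N, D) decision-variable arrays)
    of the source task's population."""
    rng = np.random.default_rng(rng)

    def centroids(pop, k):
        # Simple k-means stand-in for the paper's binary clustering.
        centers = pop[rng.choice(len(pop), size=k, replace=False)]
        for _ in range(10):
            d = ((pop[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            labels = d.argmin(axis=1)
            centers = np.array([pop[labels == j].mean(axis=0)
                                if np.any(labels == j) else centers[j]
                                for j in range(k)])
        return centers

    prev_c = centroids(pop_prev, n_classes)  # representative points, gen t-1
    curr_c = centroids(pop_curr, n_classes)  # representative points, gen t

    directions = curr_c - prev_c             # one prediction direction per class
    improvement = np.linalg.norm(directions, axis=1)
    # Mutation strength adapted to each class's degree of improvement.
    strengths = improvement / (improvement.sum() + 1e-12)

    # Step along each direction and perturb by the adaptive strength.
    noise = rng.normal(scale=strengths[:, None], size=curr_c.shape)
    return curr_c + directions + noise
```

The returned points would then be injected into the target task's population as transfer knowledge.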

Information ◽  
2018 ◽  
Vol 10 (1) ◽  
pp. 1 ◽  
Author(s):  
Bingkun Wang ◽  
Bing Chen ◽  
Li Ma ◽  
Gaiyun Zhou

With the explosive growth of product reviews, review rating prediction has become an important research topic with a wide range of applications. Existing review rating prediction methods use a unified model to predict ratings for reviews published by different users, ignoring the differences among the users behind these reviews. Constructing a separate personalized model for each user, to capture that user's personalized sentiment expression, is an effective way to improve the performance of review rating prediction. User-personalized sentiment information can be obtained not only from the review text but also from the user-item rating matrix. Therefore, we propose a user-personalized review rating prediction method that integrates the review text and the user-item rating matrix. In our approach, each user has a personalized review rating prediction model that is decomposed into two components: one based on the review text and the other on the user-item rating matrix. Through extensive experiments on the Yelp and Douban datasets, we validate that our method significantly outperforms state-of-the-art methods.
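The two-component decomposition described above can be sketched as a simple linear mix; the mixing weight `alpha`, the bag-of-words text features, and the biased matrix-factorization form of the rating-matrix component are assumptions for illustration, not details from the paper.

```python
import numpy as np

def personalized_rating(user, item, review_feats, text_w, P, Q,
                        alpha=0.5, global_mean=3.5):
    """Predict a rating as a mix of two per-user components.

    review_feats : feature vector of the review text (e.g. bag of words)
    text_w       : this user's personalized weights over text features
    P, Q         : user/item latent factors learned from the rating matrix
    alpha        : mixing weight between the components (an assumed value)
    """
    text_score = review_feats @ text_w           # text-based component
    mf_score = global_mean + P[user] @ Q[item]   # rating-matrix component
    return alpha * text_score + (1 - alpha) * mf_score
```

In a full system, `text_w`, `P`, and `Q` would be fit jointly per user on that user's reviews and ratings.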


Author(s):  
Yu Zhang ◽  
Houquan Zhou ◽  
Zhenghua Li

Estimating probability distributions is one of the core issues in the NLP field. However, in both the deep learning (DL) and pre-DL eras, unlike the vast applications of linear-chain CRFs in sequence labeling tasks, very few works have applied tree-structured CRFs to constituency parsing, mainly due to the complexity and inefficiency of the inside-outside algorithm. This work presents a fast and accurate neural CRF constituency parser. The key idea is to batchify the inside algorithm for loss computation via direct large tensor operations on the GPU, while avoiding the outside algorithm for gradient computation via efficient back-propagation. We also propose a simple two-stage bracketing-then-labeling parsing approach to further improve efficiency. To improve parsing performance, inspired by recent progress in dependency parsing, we introduce a new scoring architecture based on boundary representations and biaffine attention, along with a beneficial dropout strategy. Experiments on PTB, CTB5.1, and CTB7 show that our two-stage CRF parser achieves new state-of-the-art performance in both settings (w/ and w/o BERT) and can parse over 1,000 sentences per second. We release our code at https://github.com/yzhangcs/crfpar.
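The inside algorithm that the parser batchifies can be sketched as follows for a simplified 0th-order model with only span scores (the paper's parser is richer than this). Spans are processed width by width; on GPU, each width's inner loop is what collapses into one large tensor operation.

```python
import numpy as np

def logsumexp(a, axis=0):
    # Numerically stable log-sum-exp along one axis.
    m = a.max(axis=axis, keepdims=True)
    return (m + np.log(np.exp(a - m).sum(axis=axis, keepdims=True))).squeeze(axis)

def inside(span_scores):
    """Inside algorithm for a 0th-order CRF over binary trees.
    span_scores[i, j] scores span (i, j), j > i; returns log Z, the
    log-partition over all binarized trees of the sentence."""
    n = span_scores.shape[0] - 1                  # sentence length
    chart = np.full((n + 1, n + 1), -np.inf)
    for i in range(n):                            # width-1 spans
        chart[i, i + 1] = span_scores[i, i + 1]
    for width in range(2, n + 1):                 # widths in increasing order;
        for i in range(n - width + 1):            # on GPU, all (i, j) of one
            j = i + width                         # width go in a single tensor op
            ks = np.arange(i + 1, j)              # candidate split points
            chart[i, j] = logsumexp(chart[i, ks] + chart[ks, j]) + span_scores[i, j]
    return chart[0, n]
```

With all-zero scores and three words, `inside` returns log 2, matching the two possible binarizations; the training loss is then log Z minus the gold tree's score, with gradients obtained by back-propagation instead of the outside algorithm.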


Author(s):  
Yuta Abe ◽  
Yu-ichi Hayashi ◽  
Takaaki Mizuki ◽  
Hideaki Sone

Abstract In card-based cryptography, designing AND protocols in committed format is a major research topic. The state-of-the-art AND protocol proposed by Koch, Walzer, and Härtel in ASIACRYPT 2015 uses only four cards, which is the minimum permissible number. The minimality of their protocol relies on somewhat complicated shuffles having non-uniform probabilities of possible outcomes. Restricting the allowed shuffles to uniform closed ones entails that, to the best of our knowledge, six cards are sufficient: the six-card AND protocol proposed by Mizuki and Sone in 2009 utilizes the random bisection cut, which is a uniform and cyclic (and hence, closed) shuffle. Thus, a question has arisen: “Can we improve upon this six-card protocol using only uniform closed shuffles?” In other words, the existence or otherwise of a five-card AND protocol in committed format using only uniform closed shuffles has been one of the most important open questions in this field. In this paper, we answer the question affirmatively by designing five-card committed-format AND protocols using only uniform cyclic shuffles. The shuffles that our protocols use are the random cut and random bisection cut, both of which are uniform cyclic shuffles and can be easily implemented by humans.
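The two shuffles the protocols rely on, the random cut and the random bisection cut, are both uniform cyclic shuffles and straightforward to simulate (this sketch models only the shuffles themselves, not the full five-card protocol):

```python
import random

def random_cut(cards, rng=random):
    """Random cut: cyclically rotate the sequence by an offset chosen
    uniformly at random and hidden from all parties."""
    r = rng.randrange(len(cards))
    return cards[r:] + cards[:r]

def random_bisection_cut(cards, rng=random):
    """Random bisection cut: swap the two halves with probability 1/2
    (a uniform cyclic shuffle generated by the half rotation)."""
    assert len(cards) % 2 == 0
    half = len(cards) // 2
    return cards[half:] + cards[:half] if rng.random() < 0.5 else list(cards)
```

Uniformity means every attainable outcome is equally likely; cyclicity means the possible permutations form a cyclic group, which is what makes these shuffles easy for humans to perform with real cards.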


Author(s):  
Sakiko Fukuda-Parr ◽  
Thea Smaavik Hegstad

Abstract One of the most important elements of the 2030 Agenda and the SDGs is the strong commitment to inclusive development, and “leaving no one behind” (LNOB) has emerged as a central theme of the agenda. How did this consensus come about? What does the term mean, and how is it being interpreted? This matters because the SDGs shift international norms: the influence that global goals exert on the policies and actions of governments and development stakeholders operates through discourse. The language used in formulating the UN agenda is therefore a terrain of active contestation. This paper aims to explain the politics that led to this term becoming a core theme. It argues that LNOB was promoted to frame the SDG inequality agenda as inclusive development, focusing on the exclusion of marginalized and vulnerable groups from social opportunities and deflecting attention from the core issues of the distribution of income and wealth and the challenge of “extreme inequality.” The term is sufficiently vague to accommodate wide-ranging interpretations. Through a content analysis of LNOB in 43 VNRs, the paper finds that the majority of country strategies identify LNOB as a priority for the very poor and equate it with a strategy of social protection. This narrow interpretation does not respond to the 2030 Agenda's ambition of transformative change, nor to the human rights principles it lays out.


2021 ◽  
pp. 1-22
Author(s):  
Qiang Zha

Abstract This paper examines several research questions relating to equality and equity in Chinese higher education via an extended literature review, which in turn sheds light on evolving scholarly explorations into this theme. First, in the post-massification era, has the Chinese situation of equality and equity in higher education improved or deteriorated since the late 1990s? Second, what are the core issues with respect to equality and equity in Chinese higher education? Third, how have those core issues evolved or changed over time and what does the evolution indicate and entail? Methodologically, this paper uses a bibliometric analysis to detect the topical hotspots in scholarly literature and their changes over time. The study then investigates each of those topical terrains against their temporal contexts in order to gain insights into the core issues.


Algorithms ◽  
2021 ◽  
Vol 14 (2) ◽  
pp. 39
Author(s):  
Carlos Lassance ◽  
Vincent Gripon ◽  
Antonio Ortega

Deep Learning (DL) has attracted a lot of attention for its ability to reach state-of-the-art performance in many machine learning tasks. The core principle of DL methods consists of training composite architectures in an end-to-end fashion, where inputs are associated with outputs trained to optimize an objective function. Because of their compositional nature, DL architectures naturally exhibit several intermediate representations of the inputs, which belong to so-called latent spaces. When treated individually, these intermediate representations are most of the time unconstrained during the learning process, as it is unclear which properties should be favored. However, when processing a batch of inputs concurrently, the corresponding set of intermediate representations exhibits relations (what we call a geometry) on which desired properties can be sought. In this work, we show that it is possible to introduce constraints on these latent geometries to address various problems. In more detail, we propose to represent geometries by constructing similarity graphs from the intermediate representations obtained when processing a batch of inputs. By constraining these Latent Geometry Graphs (LGGs), we address the three following problems: (i) reproducing the behavior of a teacher architecture is achieved by mimicking its geometry, (ii) designing efficient embeddings for classification is achieved by targeting specific geometries, and (iii) robustness to deviations on inputs is achieved via enforcing smooth variation of geometry between consecutive latent spaces. Using standard vision benchmarks, we demonstrate the ability of the proposed geometry-based methods to solve the considered problems.
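A minimal sketch of the LGG idea for problem (i): build a similarity graph over the batch in each latent space and penalize the student for deviating from the teacher's geometry. The cosine-similarity choice and the squared-Frobenius penalty are assumptions for illustration; the paper's exact graph construction may differ.

```python
import numpy as np

def latent_geometry_graph(feats):
    """Build one Latent Geometry Graph: a cosine-similarity matrix over
    the batch of intermediate representations from one latent space."""
    f = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-12)
    return f @ f.T                      # (batch, batch) adjacency

def geometry_distillation_loss(student_feats, teacher_feats):
    """Make the student mimic the teacher's geometry by penalizing the
    squared difference between their LGGs. The feature dimensions may
    differ; only the batch-level geometries are compared."""
    gs = latent_geometry_graph(student_feats)
    gt = latent_geometry_graph(teacher_feats)
    return ((gs - gt) ** 2).mean()
```

Because only the (batch, batch) graphs are compared, the teacher and student layers need not share a feature dimension, which is one appeal of geometry-based distillation.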


2009 ◽  
Vol 51 (3) ◽  
pp. 563-589 ◽  
Author(s):  
Raf Gelders

In the aftermath of Edward Said's Orientalism (1978), European representations of Eastern cultures have returned to preoccupy the Western academy. Much of this work reiterates the point that nineteenth-century Orientalist scholarship was a corpus of knowledge that was implicated in and reinforced colonial state formation in India. The pivotal role of native informants in the production of colonial discourse and its subsequent use in servicing the material adjuncts of the colonial state notwithstanding, there has been some recognition in South Asian scholarship of the moot point that the colonial constructs themselves built upon an existing, precolonial European discourse on India and its indigenous culture. However, there is as yet little scholarly consensus or indeed literature on the core issues of how and when these edifices came to be formed, or the intellectual and cultural axes they drew from. This genealogy of colonial discourse is the subject of this essay. Its principal concerns are the formalization of a conceptual unit in the sixteenth and seventeenth centuries, called “Hinduism” today, and the larger reality of European culture and religion that shaped the contours of representation.


2021 ◽  
Vol 7 (4) ◽  
pp. 1-24
Author(s):  
Douglas Do Couto Teixeira ◽  
Aline Carneiro Viana ◽  
Jussara M. Almeida ◽  
Mário S. Alvim

Predicting mobility-related behavior is an important yet challenging task. On the one hand, factors such as one’s routine or preferences for a few favorite locations may help in predicting their mobility. On the other hand, several contextual factors, such as variations in individual preferences, weather, traffic, or even a person’s social contacts, can affect mobility patterns and make their modeling significantly more challenging. A fundamental approach to studying mobility-related behavior is to assess how predictable such behavior is, deriving theoretical limits on the accuracy that a prediction model can achieve given a specific dataset. This approach focuses on the inherent nature and fundamental patterns of human behavior captured in that dataset, filtering out factors that depend on the specificities of the prediction method adopted. However, the current state-of-the-art method for estimating predictability in human mobility suffers from two major limitations: low interpretability and the difficulty of incorporating external factors that are known to help mobility prediction (i.e., contextual information). In this article, we revisit this state-of-the-art method, aiming to tackle these limitations. Specifically, we conduct a thorough analysis of how this widely used method works by looking into two different metrics that are easier to understand and, at the same time, capture reasonably well the effects of the original technique. We evaluate these metrics in the context of two different mobility prediction tasks, notably next-cell and next-distinct-cell prediction, which have different degrees of difficulty. Additionally, we propose alternative strategies to incorporate different types of contextual information into the existing technique. Our evaluation of these strategies offers quantitative measures of the impact of adding context to the predictability estimate, revealing the challenges associated with doing so in practical scenarios.
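The predictability machinery the article revisits follows a standard recipe in this literature: estimate the entropy rate of a visit sequence (commonly with a Lempel-Ziv estimator) and solve Fano's inequality for the maximum achievable prediction accuracy. The sketch below assumes that standard formulation; details of the article's own variants differ.

```python
import math

def lz_entropy(seq):
    """Lempel-Ziv estimate of the entropy rate (bits per symbol) of a
    location-visit sequence."""
    n = len(seq)
    lambdas = []
    for i in range(n):
        k = 1
        # Length of the shortest substring starting at i that never
        # appeared starting before position i.
        while i + k <= n and any(seq[j:j + k] == seq[i:i + k] for j in range(i)):
            k += 1
        lambdas.append(k)
    return (n / sum(lambdas)) * math.log2(n)

def max_predictability(S, N, tol=1e-9):
    """Solve Fano's inequality S = H(p) + (1 - p) * log2(N - 1) for the
    largest p: the theoretical upper bound on prediction accuracy given
    entropy rate S and N distinct visited locations (N >= 2)."""
    def fano(p):
        h = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
        return h + (1 - p) * math.log2(N - 1)
    lo, hi = 1.0 / N, 1.0 - 1e-12   # fano is decreasing on this interval
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if fano(mid) > S:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

A highly regular sequence yields a low entropy estimate and hence a predictability bound near 1, while an erratic one pushes the bound toward 1/N; the article's two alternative metrics aim to make this bound easier to interpret.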

