Review-based recommendation utilizes both users’ rating records and the associated reviews for recommendation. Recently, with the growing demand for explanations of recommendation results, reviews have been used to train encoder–decoder models for explanation text generation. As most reviews are general text without detailed evaluation, some researchers have leveraged auxiliary information about users or items to enrich the generated explanation text. Nevertheless, such auxiliary data is unavailable in most scenarios and may raise data privacy problems. In this article, we argue that reviews contain abundant semantic information expressing users’ feelings about various aspects of items, yet this information is not fully exploited in the current explanation text generation task. To this end, we study how to generate more fine-grained explanation text in review-based recommendation without any auxiliary data. Though the idea is simple, it is non-trivial, since the aspects are hidden and unlabeled. Besides, it is also very challenging to inject aspect information when generating explanation text from noisy review input. To address these challenges, we first leverage an advanced unsupervised neural aspect extraction model to learn an aspect-aware representation of each review sentence. Users and items can then be represented in the aspect space based on their historical reviews. After that, we detail how to better predict ratings and generate explanation text with the user and item representations in the aspect space. We further dynamically assign larger weights to review sentences that contain a higher proportion of aspect words to control the text generation process, and we jointly optimize rating prediction accuracy and explanation text generation quality within a multi-task learning framework.
Finally, extensive experimental results on three real-world datasets demonstrate the superiority of our proposed model for both recommendation accuracy and explainability.
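The two mechanisms described above can be illustrated with a minimal, framework-free sketch. All function names are invented for illustration, and the loss terms stand in for whatever rating and generation losses the model actually uses; this is not the paper's implementation.

```python
# Sketch of (1) weighting review sentences by their fraction of aspect
# words and (2) a multi-task objective combining rating prediction and
# explanation generation. Names and the alpha trade-off are assumptions.

def sentence_weights(sentences, aspect_vocab):
    """Weight each tokenized sentence by its proportion of aspect words,
    normalized so the weights sum to 1."""
    raw = []
    for tokens in sentences:
        aspect_frac = sum(t in aspect_vocab for t in tokens) / max(len(tokens), 1)
        raw.append(aspect_frac)
    total = sum(raw) or 1.0
    return [w / total for w in raw]

def multitask_loss(rating_mse, explanation_nll, alpha=0.5):
    """Jointly optimize rating accuracy and explanation quality."""
    return alpha * rating_mse + (1 - alpha) * explanation_nll
```

A sentence rich in aspect words (e.g. "battery", "life") receives a larger weight than a generic one, so it exerts more control over the generated explanation.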
Emotional dialogue generation aims to generate appropriate responses whose content is relevant to the query and whose emotion is consistent with a given emotion tag. Previous work mainly focuses on incorporating emotion information into sequence-to-sequence or conditional variational auto-encoder (CVAE) models, usually utilizing the given emotion tag as a conditional feature to influence the response generation process. However, using the emotion tag as a feature cannot guarantee emotion consistency between the response and the given tag. In this article, we propose a novel Dual-View CVAE model that explicitly models content relevance and emotion consistency jointly. The two views gather emotional information and content-relevant information from the latent distribution of responses, respectively. We jointly model the dual views via the VAE to obtain richer and complementary information. Extensive experiments on both English and Chinese emotional dialogue datasets demonstrate the effectiveness of the proposed Dual-View CVAE model, which significantly outperforms strong baseline models in terms of both content relevance and emotion consistency.
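The dual-view idea can be sketched without a deep learning framework: each view samples its own latent code with the standard CVAE reparameterization trick, and the decoder is conditioned on the concatenation. This is a toy illustration with invented names, not the paper's architecture.

```python
import math
import random

def reparameterize(mu, log_var, rng):
    """z = mu + sigma * eps, with eps ~ N(0, 1)  (reparameterization trick)."""
    return [m + math.exp(0.5 * lv) * rng.gauss(0.0, 1.0)
            for m, lv in zip(mu, log_var)]

def dual_view_latent(mu_emo, lv_emo, mu_con, lv_con, seed=0):
    """Sample an emotion-consistency view and a content-relevance view,
    then concatenate them into one latent code for the decoder."""
    rng = random.Random(seed)
    z_emotion = reparameterize(mu_emo, lv_emo, rng)
    z_content = reparameterize(mu_con, lv_con, rng)
    return z_emotion + z_content
```

With a near-zero variance (large negative log-variance) each view collapses to its mean, which makes the behavior easy to check.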
Sequential recommenders capture dynamic aspects of users’ interests by modeling sequential behavior. Previous studies on sequential recommendation mostly aim to identify users’ main recent interests to optimize recommendation accuracy; they often neglect the fact that users display multiple interests over extended periods of time, which could be used to improve the diversity of the recommended lists. Existing work on diversified recommendation typically assumes that users’ preferences are static and relies on post-processing the candidate list of recommended items; neither condition suits sequential recommendation. We tackle sequential recommendation as a list generation process and propose a unified approach, called multi-interest, diversified, sequential recommendation, that takes both accuracy and diversity into consideration. Particularly, an implicit interest mining module is first used to mine users’ multiple interests, which are reflected in users’ sequential behavior. An interest-aware, diversity-promoting decoder is then designed to produce recommendations that cover those interests. For training, we introduce an interest-aware, diversity-promoting loss function that supervises the model to recommend accurate as well as diversified items. We conduct comprehensive experiments on four public datasets and the results show that our proposal outperforms state-of-the-art methods regarding diversity while producing comparable or better accuracy for sequential recommendation.
Abstract: In this paper we attempt to explain and establish frameworks that can be assessed for implementing security systems against cyber-threats and cyber-criminals. We give a brief overview of electronic signature generation procedures, including their validation and efficiency, for promoting cyber security for confidential documents and information stored in the cloud. We deliberately avoid mathematical modelling of the electronic signature generation process, as it is beyond the scope of this paper; instead, we take a theoretical approach to explain the procedures. We also model the threats posed by a malicious hacker seeking to induce disturbances in the functioning of a power transmission grid via cyber-physical networks and systems. We use the strategy of a load redistribution attack, while clearly acknowledging that the hacker would form their decision policy on inadequate information. Our research indicates that inaccurate admittance values often cause moderately invasive cyber-attacks that still compromise grid security, while inadequate capacity values result in comparatively less effective attacks. Finally, we propose a security framework for the security systems utilised by companies and corporations at a global scale to conduct cyber-security operations. Keywords: electronic signature, key pair, sequence modelling, hacker, power transmission grid, threat response, framework.
Abstract. This article reviews river flood generation processes and flow paths across space scales. The scale steps include the pore, profile, hillslope, catchment, regional and continental scales, representing a scale range of a total of 10 orders of magnitude. Although the processes differ between the scales, there are notable similarities. At all scales, there are media patterns that control the flow of water, and are themselves influenced by the flow of water. The processes are therefore not spatially random (as in thermodynamics) but organised, and preferential flow is the rule rather than the exception. Hydrological connectivity, i.e. the presence of coherent flow paths, is an essential characteristic at all scales. There are similar controls on water flow and thus on flood generation at all scales, however, with different relative magnitudes. Processes at lower scales affect flood generation at the larger scales not simply as a multiple repetition of pore scale processes, but through interactions, which cause emergent behaviour of process patterns. For this reason, when modelling these processes, the scale transitions need to be simplified in a way that reflects the relevant structures (e.g. connectivity) and boundary conditions (e.g. groundwater table) at each scale. In conclusion, it is argued that upscaling as the mere multiple application of small scale process descriptions will not capture the larger scale patterns of flood generation. Instead, there is a need to learn from observed patterns of flood generation processes at all spatial scales.
Smart grids (SGs) aim, as one of their basic propositions, to incorporate intelligence into the electric grid through computing and communication technologies, seeking greater efficiency and effectiveness in operation and control. Power losses, quality issues, and failures are inherent in the generation, transmission, and distribution of electricity and, in the context of SGs, should be minimized to ensure greater resilience and system efficiency. Dynamic and efficient distribution network reconfiguration is one example of SG functionality. The reconfiguration process consists of adjusting or changing the topology of the distribution network by opening and closing switches to minimize technical losses, optimize operating parameters, and restore power supply in contingency situations. The network reconfiguration problem is combinatorial, complex, and non-linear; to minimize convergence time when searching for solutions on medium and large topologies, heuristic and optimization techniques are an alternative. This dissertation proposes a new genetic algorithm, GAEnhanced (Genetic Algorithm Enhanced), to solve network reconfiguration, and presents a comparative study of its performance against other solutions and algorithmic strategies. The main goal is to evaluate algorithm implementation strategies for dynamic, on-the-fly reconfiguration of distribution networks from a broader perspective, in addition to proposing a new solution with the GAEnhanced algorithm. A simulator (DNRSim) with basic functionality for implementing and testing network reconfiguration algorithms for the Smart Grid was developed within the scope of this dissertation. The comparative study of GAEnhanced and other solutions in DNRSim uses the IEEE models for system tests (14-bus, 30-bus, 57-bus, 118-bus, and 330-bus).
The comparative study results illustrate different ways to efficiently compute network reconfiguration solutions (in terms of scalability, time, and quality) and demonstrate the feasibility of the GAEnhanced algorithm in the context of Smart Grids, with a view to deploying more autonomic and intelligent solutions.
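A compact genetic-algorithm skeleton illustrates the general approach: individuals encode binary switch states, and selection, crossover, and mutation search for a low-loss topology. GAEnhanced itself is not reproduced here; the toy fitness (Hamming distance to a known-good configuration) and all names are stand-ins for a real power-flow-based objective with radiality constraints.

```python
import random

def toy_fitness(switches, target):
    """Stand-in objective: distance to a known-good switch configuration.
    A real implementation would evaluate technical losses and radiality."""
    return sum(a != b for a, b in zip(switches, target))

def genetic_algorithm(n_switches, target, pop_size=20, generations=60, seed=1):
    rng = random.Random(seed)
    # Random initial population of binary switch-state vectors.
    pop = [[rng.randint(0, 1) for _ in range(n_switches)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: toy_fitness(ind, target))
        survivors = pop[: pop_size // 2]            # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_switches)      # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.2:                  # bit-flip mutation
                child[rng.randrange(n_switches)] ^= 1
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda ind: toy_fitness(ind, target))
```

Elitism guarantees the best individual found so far is never lost, which is why convergence is monotone in this sketch.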
Development of biotherapeutics requires pharmacokinetic/pharmacodynamic (PK/PD) and immunogenicity assays that are frequently in a ligand-binding assay (LBA) format. Conjugated critical reagents for LBAs are generated by conjugating the biotherapeutic drug or an anti-drug molecule with a label. Since the quality of conjugated critical reagents impacts LBA performance, control of the generation process is essential. Our perspective is that process development methodologies should be integrated into critical reagent production to understand the impact of conjugation reactions, purification techniques, and formulation conditions on reagent quality. In this article, case studies highlight our approach to developing process conditions for different molecular classes of critical reagents, including antibodies and a peptide. This development approach can be applied to the generation of future conjugated critical reagents.
Abstract. Search-based test generation is guided by feedback from one or more fitness functions: scoring functions that judge solution optimality. Choosing informative fitness functions is crucial to meeting the goals of a tester. Unfortunately, many goals, such as forcing the class-under-test to throw exceptions, increasing test suite diversity, and attaining Strong Mutation Coverage, do not have effective fitness function formulations. We propose that meeting such goals requires treating fitness function identification as a secondary optimization step. An adaptive algorithm that can vary the selection of fitness functions could adjust its selection throughout the generation process to maximize goal attainment, based on the current population of test suites. To test this hypothesis, we implemented two reinforcement learning algorithms in the EvoSuite unit test generation framework and used them to dynamically set the fitness functions used during generation for the three goals identified above. We evaluated our framework, EvoSuiteFIT, on a set of Java case examples. EvoSuiteFIT techniques attain significant improvements for two of the three goals, and show limited improvements on the third when the number of generations of evolution is fixed. Additionally, for two of the three goals, EvoSuiteFIT detects faults missed by the other techniques. The ability to adjust fitness functions allows strategic choices that efficiently produce more effective test suites, and examining these choices offers insight into how to attain our testing goals. We find that adaptive fitness function selection is a powerful technique to apply when an effective fitness function does not already exist for achieving a testing goal.
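Adaptive fitness function selection can be sketched as a multi-armed bandit: each candidate fitness function is an arm, and the observed improvement in goal attainment is the reward. The epsilon-greedy rule below is a simplified stand-in for EvoSuiteFIT's actual reinforcement learning algorithms, and all names and the reward bookkeeping are assumptions.

```python
import random

def select_fitness_function(rewards_log, arms, epsilon=0.1, rng=None):
    """Epsilon-greedy choice among candidate fitness functions:
    explore a random arm with probability epsilon, otherwise exploit
    the arm with the best average observed reward (unseen arms first)."""
    rng = rng or random.Random(0)
    if rng.random() < epsilon:
        return rng.choice(arms)                              # explore
    def avg(arm):
        hist = rewards_log.get(arm, [])
        return sum(hist) / len(hist) if hist else float("inf")
    return max(arms, key=avg)                                # exploit
```

Across generations, the log accumulates per-arm rewards, so the selection shifts toward whichever fitness function is currently driving the population toward the tester's goal.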
Abstract. Generation and control of humidity in a testing environment is crucial when evaluating a chemical vapor sensor, as water vapor in the air can not only interfere with the sensor itself but also react with a chemical analyte, changing its composition. Upon constructing a split-flow humidity generator for chemical vapor sensor development, numerous issues were observed due to instability of the generated relative humidity level and drift of the humidity over time. By first fixing the initial relative humidity output of the system at 50%, we studied the effects of flowrate on stabilization time along with long-term stability for extended testing events. It was found that the stabilization time can be upwards of 7 h, but the output can then be maintained for greater than 90 h, allowing for extended experiments. Once the stabilization time was known for 50% relative humidity output, additional studies at differing humidity levels and flowrates were performed to better characterize the system. At a relative humidity of 20% no stabilization time was required, but at 80% the stabilization time increased to over 4 h. With this information we were better able to understand the generation process and characterize the humidity generation system, its output stabilization, and possible modifications to limit future testing issues.
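The operating principle of a split-flow generator reduces to a simple mixing balance: a fully saturated wet branch is blended with a dry branch, and the output RH follows the flow ratio. The sketch below is an idealization (saturated wet stream, 0% RH dry stream, perfect mixing, equal temperatures) with invented function names; real systems show the stabilization and drift effects the abstract describes.

```python
# Idealized split-flow mixing: output RH is set by the wet/total flow ratio.

def mixed_rh(wet_flow, dry_flow):
    """RH (%) of the combined stream when the wet branch is at 100% RH
    and the dry branch is at 0% RH."""
    total = wet_flow + dry_flow
    return 100.0 * wet_flow / total

def wet_flow_for_target(target_rh, total_flow):
    """Wet-branch flowrate needed to hit a target RH at a fixed total flow."""
    return total_flow * target_rh / 100.0
```

For example, an equal split of the two branches nominally yields 50% RH, the setpoint used for the initial characterization above.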