Joint Universal Syntactic and Semantic Parsing

2021 ◽  
Vol 9 ◽  
pp. 756-773
Author(s):  
Elias Stengel-Eskin ◽  
Kenton Murray ◽  
Sheng Zhang ◽  
Aaron Steven White ◽  
Benjamin Van Durme

While numerous attempts have been made to jointly parse syntax and semantics, high performance in one domain typically comes at the price of performance in the other. This trade-off contradicts the large body of research focusing on the rich interactions at the syntax–semantics interface. We explore multiple model architectures that allow us to exploit the rich syntactic and semantic annotations contained in the Universal Decompositional Semantics (UDS) dataset, jointly parsing Universal Dependencies and UDS to obtain state-of-the-art results in both formalisms. We analyze the behavior of a joint model of syntax and semantics, finding patterns supported by linguistic theory at the syntax–semantics interface. We then investigate to what degree joint modeling generalizes to a multilingual setting, where we find similar trends across 8 languages.
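As an illustration of the joint-modelling idea, the sketch below shows one common way a shared encoder's syntactic (UD) and semantic (UDS) losses can be interpolated during training; the weighting scheme and the names used here are assumptions for illustration, not the architectures explored in the paper.

```python
# Hypothetical sketch of a joint syntax-semantics training objective:
# a shared encoder feeds a UD (syntax) head and a UDS (semantics) head,
# and their losses are combined with an interpolation weight.
# The names and the weighting scheme are illustrative, not the paper's.

def joint_loss(ud_loss: float, uds_loss: float, alpha: float = 0.5) -> float:
    """Interpolate the syntactic (UD) and semantic (UDS) losses."""
    return alpha * ud_loss + (1.0 - alpha) * uds_loss

# Example: a training step would compute both losses from the shared
# encoder's outputs and back-propagate their weighted sum.
print(joint_loss(ud_loss=1.2, uds_loss=0.8, alpha=0.4))  # 0.96
```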

2016 ◽  
Vol 2 (1) ◽  
pp. 1
Author(s):  
Jan-Erik Lane

The COP21 Agreement harbours a conflict between Third World and First World countries that has cropped up as tensions at all meetings of the UNFCCC. On the one hand, there is the catch-up set of countries—emerging economies—that have recently “taken off” economically and that will not accept a trade-off between economic development and the environmental need to cut emissions. On the other hand, there is the set of mature economies that grow sluggishly and have started to cut back on fossil fuels, especially coal. The first set of nations wants the second set to pay for their gigantic energy transformation over a few decades—decarbonisation. The first set claimed that they had not created the big problem originally, and that fairness requires that the rich help the poor. At the COP21 summit, a deal was struck, worth 100 billion dollars per year, to fund a Stern (2007)-like Super Fund. But will it really be put in place and made operational?


2020 ◽  
Vol 38 (3-4) ◽  
pp. 1-30
Author(s):  
Rakesh Kumar ◽  
Boris Grot

The front-end bottleneck is a well-established problem in server workloads owing to their deep software stacks and large instruction footprints. Despite years of research into effective L1-I and BTB prefetching, state-of-the-art techniques force a trade-off between metadata storage cost and performance. Temporal Stream prefetchers deliver high performance but require a prohibitive amount of metadata to accommodate the temporal history. Meanwhile, BTB-directed prefetchers incur low cost by using the existing in-core branch prediction structures but fall short on performance due to the BTB’s inability to capture the massive control flow working set of server applications. This work overcomes the fundamental limitation of BTB-directed prefetchers, which is capturing a large control flow working set within an affordable BTB storage budget. We re-envision the BTB organization to maximize its control flow coverage by observing that an application’s instruction footprint can be mapped as a combination of its unconditional branch working set and, for each unconditional branch, a spatial encoding of the cache blocks around the branch target. Effectively capturing a map of the application’s instruction footprint in the BTB enables highly effective BTB-directed prefetching that outperforms the state-of-the-art prefetchers by up to 10% for an equivalent storage budget.
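A minimal sketch of the spatial-encoding idea: each BTB entry for an unconditional branch carries a small bit vector marking which cache blocks near the branch target were fetched, so a BTB hit can expand into a set of prefetch candidates. The field names, block size, and footprint width below are illustrative assumptions, not the paper's exact organization.

```python
BLOCK_SIZE = 64        # assumed L1-I cache block size in bytes
FOOTPRINT_BITS = 8     # assumed width of the spatial footprint vector

class BTBEntry:
    """One BTB entry: branch target plus a spatial footprint of nearby blocks."""
    def __init__(self, target: int):
        self.target = target
        self.footprint = 0  # bit i set => block (target_block + i) was fetched

    def record_fetch(self, addr: int) -> None:
        """Mark the cache block containing addr, if it falls in the footprint window."""
        offset = (addr // BLOCK_SIZE) - (self.target // BLOCK_SIZE)
        if 0 <= offset < FOOTPRINT_BITS:
            self.footprint |= 1 << offset

    def prefetch_candidates(self) -> list:
        """Expand the footprint into block addresses to prefetch on a BTB hit."""
        base = (self.target // BLOCK_SIZE) * BLOCK_SIZE
        return [base + i * BLOCK_SIZE
                for i in range(FOOTPRINT_BITS)
                if self.footprint & (1 << i)]

# Example: train the entry on a few fetches, then query it on a later BTB hit.
entry = BTBEntry(target=0x4000)
for addr in (0x4000, 0x4010, 0x4080, 0x40C0):
    entry.record_fetch(addr)
print([hex(a) for a in entry.prefetch_candidates()])  # ['0x4000', '0x4080', '0x40c0']
```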


Britannia ◽  
2011 ◽  
Vol 42 ◽  
pp. 167-242 ◽  
Author(s):  
Steven Willis

Samian ware, being widely present, of striking quality, and highly useful to the archaeologist, has a special position within Roman studies. This article brings together a large body of samian ware data to explore the nature of its incidence at settlements and in graves. The examination shows how the distribution of samian ware is highly structured between different types of site and between different consumers. This is shown to be so in the case of both Britain and the other Western provinces. The findings raise issues around the use of samian ware in society and point the way to harnessing the rich potential of samian as a source of information as understanding of its utility for the archaeologist expands.


2020 ◽  
Vol 4 (3) ◽  
pp. 42-60
Author(s):  
Mehdi Imani ◽  
Maaruf Ali ◽  
Hamid R. Arabnia

The discovery of neighbouring active nodes is one of the most challenging problems in asynchronous ad hoc networks. Since time synchronization is extremely costly in these networks, asynchronous methods such as quorum-based protocols have attracted increased interest for their suitability. This is because quorum-based protocols can guarantee that two nodes with differing clock times have an intersection within at least one timeslot. A higher neighbour discovery rate of active nodes is desired, but it also results in a higher active ratio and, consequently, higher overall power consumption of the nodes and a shorter network lifetime. There must be a trade-off between extensive neighbour discovery and the active ratio in order to design high-performance and efficient protocols. In this paper, two novel asynchronous quorum-based protocols that maximize neighbour discovery and minimize the active ratio are designed and presented. A new metric, the Quorum Efficiency Ratio (QER), is also introduced to evaluate and compare the performance of quorum-based protocols in terms of their neighbour discovery (the Expected Quorum Overlap Size, EQOS) and their active ratio. The EQOS, active ratio, and QER values are derived theoretically for the proposed novel protocols and for other contemporary protocols. Finally, the proposed methods are evaluated and compared against the other methods based on both the existing metrics and the new metric.
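To make the overlap guarantee concrete, the sketch below builds the classic grid quorum over an n x n slot cycle and checks, by brute force, that two nodes with arbitrarily shifted clocks still share at least one awake slot; it also reports the resulting active ratio. The grid construction is a standard textbook example, not the novel protocols proposed in the paper, and the paper's EQOS and QER formulas are not reproduced here.

```python
from itertools import product

def grid_quorum(n: int, row: int = 0, col: int = 0) -> set:
    """Slots of one grid quorum over an n*n slot cycle: a full row plus a full column."""
    return {row * n + c for c in range(n)} | {r * n + col for r in range(n)}

def active_ratio(quorum: set, cycle_len: int) -> float:
    """Fraction of slots in which a node using this quorum stays awake."""
    return len(quorum) / cycle_len

def min_overlap_under_clock_shift(n: int) -> int:
    """Smallest intersection between one grid quorum and any other grid quorum
    shifted by an arbitrary number of slots (modelling unsynchronized clocks)."""
    cycle_len = n * n
    q = grid_quorum(n)
    smallest = cycle_len
    for shift in range(cycle_len):
        for r, c in product(range(n), repeat=2):
            other = {(s + shift) % cycle_len for s in grid_quorum(n, r, c)}
            smallest = min(smallest, len(q & other))
    return smallest

n = 4
q = grid_quorum(n)
print(f"active ratio = {active_ratio(q, n * n):.3f}")                        # (2n-1)/n^2 = 0.438
print(f"guaranteed overlapping slots = {min_overlap_under_clock_shift(n)}")  # at least 1
```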


Author(s):  
Andreas Lund ◽  
Zain Alabedin Haj Hammadeh ◽  
Patrick Kenny ◽  
Vishav Vishav ◽  
Andrii Kovalov ◽  
...  

Designing on-board computers (OBC) for future space missions is determined by the trade-off between reliability and performance. Space applications with higher computational demands are not supported by currently available, state-of-the-art, space-qualified computing hardware, since their requirements exceed the capabilities of these components. Such space applications include Earth observation with high-resolution cameras, on-orbit real-time servicing, as well as autonomous spacecraft and rover missions on distant celestial bodies. An alternative to state-of-the-art space-qualified computing hardware is the use of commercial-off-the-shelf (COTS) components for the OBC. Not only are these components cheap and widely available, but they also achieve high performance. Unfortunately, they are also significantly more vulnerable to errors induced by radiation than space-qualified components. The ScOSA (Scalable On-board Computing for Space Avionics) Flight Experiment project aims to develop an OBC architecture which avoids this trade-off by combining space-qualified radiation-hardened components (the reliable computing nodes, RCNs) with COTS components (the high performance nodes, HPNs) into a single distributed system. To abstract this heterogeneous architecture for application developers, we are developing a middleware for the aforementioned OBC architecture. Besides providing a monolithic abstraction of the distributed system, the middleware shall also enhance the architecture by providing additional reliability and fault tolerance. In this paper, we present the individual components comprising the middleware, alongside the features the middleware offers. Since the ScOSA Flight Experiment project is a successor of the OBC-NG and ScOSA projects, its middleware is also a further development of the existing middleware. Therefore, we present and discuss our contributions and plans for enhancing the middleware in the course of the current project. Finally, we present first results on the scalability of the middleware, obtained by conducting software-in-the-loop experiments with scenarios of different sizes.
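A toy sketch of the kind of abstraction such a middleware provides: tasks are placed on a mix of reliable (RCN) and high-performance (HPN) nodes and re-placed when a node fails. All names and the placement policy are hypothetical illustrations, not the ScOSA middleware's actual design.

```python
from dataclasses import dataclass, field
from enum import Enum

class NodeType(Enum):
    RCN = "reliable computing node"   # radiation-hardened, space-qualified
    HPN = "high performance node"     # COTS, fast but more error-prone

@dataclass
class Node:
    name: str
    kind: NodeType
    alive: bool = True
    tasks: list = field(default_factory=list)

class Middleware:
    """Toy view of a distributed OBC: place tasks on nodes, migrate them on failure."""
    def __init__(self, nodes):
        self.nodes = nodes

    def place(self, task: str, prefer: NodeType) -> Node:
        candidates = ([n for n in self.nodes if n.alive and n.kind is prefer]
                      or [n for n in self.nodes if n.alive])
        node = min(candidates, key=lambda n: len(n.tasks))  # least-loaded node
        node.tasks.append(task)
        return node

    def fail(self, name: str) -> None:
        """Mark a node failed and re-place its tasks, favouring reliable nodes."""
        node = next(n for n in self.nodes if n.name == name)
        node.alive, orphaned = False, node.tasks
        node.tasks = []
        for task in orphaned:
            self.place(task, prefer=NodeType.RCN)

mw = Middleware([Node("rcn0", NodeType.RCN), Node("hpn0", NodeType.HPN), Node("hpn1", NodeType.HPN)])
mw.place("image-compression", prefer=NodeType.HPN)
mw.fail("hpn0")
print([(n.name, n.tasks) for n in mw.nodes])
```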


2020 ◽  
Vol 12 (7) ◽  
pp. 2767 ◽  
Author(s):  
Víctor Yepes ◽  
José V. Martí ◽  
José García

The optimization of the cost and CO2 emissions of earth-retaining walls is of relevance, since these structures are often used in civil engineering. The optimization of costs is essential for the competitiveness of the construction company, and the optimization of emissions is relevant to the environmental impact of construction. To address the optimization, the black hole metaheuristic was used, along with a discretization mechanism based on min–max normalization. The stability of the algorithm was evaluated with respect to the solutions obtained; the steel and concrete quantities obtained in both optimizations were analyzed. Additionally, the geometric variables of the structure were compared. Finally, the results obtained were compared with those of another algorithm that solved the problem. The results show that there is a trade-off between the use of steel and concrete: the solutions that minimize CO2 emissions favour concrete more than those that minimize cost. On the other hand, when comparing the geometric variables, most remain similar in both optimizations except for the distance between buttresses. The comparison with another algorithm shows the good performance of the black hole algorithm in this optimization.
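A minimal sketch of a min–max-normalization discretization step of the kind described above: a continuous candidate produced by the metaheuristic is normalized component-wise into [0, 1] and mapped onto the allowed discrete design values. The variable bounds, value lists, and mapping rule are illustrative assumptions, not the paper's exact mechanism.

```python
import random

def minmax_normalize(x: float, lo: float, hi: float) -> float:
    """Map a continuous position component into [0, 1]."""
    return (x - lo) / (hi - lo) if hi > lo else 0.0

def discretize(position, bounds, choices):
    """Turn a continuous candidate into a discrete design by normalizing each
    component and picking the corresponding entry of its allowed value list."""
    design = []
    for x, (lo, hi), values in zip(position, bounds, choices):
        t = minmax_normalize(x, lo, hi)
        idx = min(int(t * len(values)), len(values) - 1)
        design.append(values[idx])
    return design

# Example with two made-up design variables: footing thickness (m) and steel grade.
bounds = [(0.0, 10.0), (0.0, 10.0)]
choices = [[0.30, 0.35, 0.40, 0.45, 0.50],   # thickness options
           ["B400S", "B500S"]]               # reinforcement grades
random.seed(1)
star = [random.uniform(lo, hi) for lo, hi in bounds]   # one candidate "star"
print(discretize(star, bounds, choices))
```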


Author(s):  
Mark Endrei ◽  
Chao Jin ◽  
Minh Ngoc Dinh ◽  
David Abramson ◽  
Heidi Poxon ◽  
...  

Rising power costs and constraints are driving a growing focus on the energy efficiency of high performance computing systems. The unique characteristics of a particular system and workload and their effect on performance and energy efficiency are typically difficult for application users to assess and to control. Settings for optimum performance and energy efficiency can also diverge, so we need to identify trade-off options that guide a suitable balance between energy use and performance. We present statistical and machine learning models that only require a small number of runs to make accurate Pareto-optimal trade-off predictions using parameters that users can control. We study model training and validation using several parallel kernels and more complex workloads, including Algebraic Multigrid (AMG), the Large-scale Atomic/Molecular Massively Parallel Simulator, and Livermore Unstructured Lagrangian Explicit Shock Hydrodynamics. We demonstrate that we can train the models using as few as 12 runs, with prediction error of less than 10%. Our AMG results identify trade-off options that provide up to 45% improvement in energy efficiency for around 10% performance loss. We reduce the sample measurement time required for AMG by 90%, from 13 h to 74 min.
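Once a model has predicted energy and runtime for candidate settings, the Pareto-optimal trade-off options can be read off by discarding dominated configurations. The sketch below illustrates only that selection step, with made-up predictions; it is not the paper's statistical or machine learning models.

```python
def pareto_front(points):
    """Keep configurations not dominated in (energy, runtime): lower is better in both."""
    front = []
    for cfg, energy, runtime in points:
        dominated = any(e <= energy and r <= runtime and (e, r) != (energy, runtime)
                        for _, e, r in points)
        if not dominated:
            front.append((cfg, energy, runtime))
    return front

# Hypothetical model predictions for a few (threads, frequency) settings:
# (configuration, predicted energy in J, predicted runtime in s)
predictions = [
    ("8t@2.0GHz",  950, 120),
    ("16t@2.0GHz", 900, 100),
    ("16t@2.6GHz", 1000, 90),
    ("32t@2.6GHz", 1200, 88),
]
for cfg, e, r in pareto_front(predictions):
    print(f"{cfg}: {e} J, {r} s")   # the 8-thread point is dominated and dropped
```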


2021 ◽  
Vol 20 (3) ◽  
pp. 1-25
Author(s):  
Elham Shamsa ◽  
Alma Pröbstl ◽  
Nima TaheriNejad ◽  
Anil Kanduri ◽  
Samarjit Chakraborty ◽  
...  

Smartphone users require high Battery Cycle Life (BCL) and high Quality of Experience (QoE) during their usage. These two objectives can conflict, depending on the user’s preference at run-time. Finding the best trade-off between QoE and BCL requires an intelligent resource management approach that considers and learns user preference at run-time. Current approaches focus on one of these two objectives and neglect the other, limiting their efficiency in meeting users’ needs. In this article, we present UBAR, User- and Battery-aware Resource management, which considers dynamic workload, user preference, and user plug-in/out patterns at run-time to provide a suitable trade-off between BCL and QoE. UBAR personalizes this trade-off by learning the user’s habits and using them to satisfy QoE, while considering battery temperature and State of Charge (SOC) patterns to maximize BCL. The evaluation results show that UBAR achieves 10% to 40% improvement compared to the existing state-of-the-art approaches.
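One simple way to picture the QoE/BCL trade-off is a utility that weights the two objectives by a learned user preference and picks the best-scoring setting at run-time. The linear form, the candidate settings, and the numbers below are illustrative assumptions, not UBAR's actual policy, which additionally accounts for battery temperature and SOC patterns.

```python
def runtime_utility(qoe: float, bcl: float, preference: float) -> float:
    """Weighted trade-off between Quality of Experience and Battery Cycle Life.
    Both inputs are normalized to [0, 1]; preference near 1 favours QoE, near 0 favours BCL."""
    return preference * qoe + (1.0 - preference) * bcl

def choose_setting(candidates, preference):
    """Pick the frequency/governor setting with the best predicted utility."""
    return max(candidates, key=lambda c: runtime_utility(c[1], c[2], preference))

# Hypothetical predictions per setting: (name, predicted QoE, predicted BCL impact)
candidates = [("powersave", 0.55, 0.95), ("balanced", 0.80, 0.75), ("performance", 0.95, 0.40)]
print(choose_setting(candidates, preference=0.3)[0])   # battery-leaning user -> powersave
print(choose_setting(candidates, preference=0.9)[0])   # experience-leaning user -> performance
```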


Author(s):  
Alexandru-Lucian Georgescu ◽  
Alessandro Pappalardo ◽  
Horia Cucu ◽  
Michaela Blott

The last decade brought significant advances in automatic speech recognition (ASR) thanks to the evolution of deep learning methods. ASR systems evolved from pipeline-based systems, which modeled hand-crafted speech features with probabilistic frameworks and generated phone posteriors, to end-to-end (E2E) systems, which translate the raw waveform directly into words using one deep neural network (DNN). Transcription accuracy greatly increased, leading to ASR technology being integrated into many commercial applications. However, few of the existing ASR technologies are suitable for integration in embedded applications, due to their hard constraints on computing power and memory usage. This overview paper serves as a guided tour through the recent literature on speech recognition and compares the most popular ASR implementations. The comparison emphasizes the trade-off between ASR performance and hardware requirements, to further serve decision makers in choosing the system that best fits their embedded application. To the best of our knowledge, this is the first study to provide this kind of trade-off analysis for state-of-the-art ASR systems.
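The kind of decision such a comparison supports can be sketched as a simple feasibility screen: keep only the systems whose memory and compute footprints fit the embedded platform's budget, then pick the most accurate one. The catalogue entries and figures below are invented for illustration and do not come from the paper.

```python
# Hypothetical catalogue of ASR systems with made-up accuracy and resource figures,
# used only to illustrate screening candidates against an embedded platform's limits.
systems = [
    {"name": "hybrid-HMM", "wer": 9.5, "ram_mb": 400,  "gflops": 5},
    {"name": "E2E-small",  "wer": 8.0, "ram_mb": 900,  "gflops": 20},
    {"name": "E2E-large",  "wer": 5.5, "ram_mb": 4000, "gflops": 120},
]

def best_fit(systems, ram_budget_mb, gflops_budget):
    """Among systems that fit the memory and compute budget, pick the lowest WER."""
    feasible = [s for s in systems
                if s["ram_mb"] <= ram_budget_mb and s["gflops"] <= gflops_budget]
    return min(feasible, key=lambda s: s["wer"]) if feasible else None

print(best_fit(systems, ram_budget_mb=1024, gflops_budget=25)["name"])  # E2E-small
```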

