To breed or not: a novel approach to estimate breeding propensity and potential trade-offs in an Arctic-nesting species

Ecology ◽  
2014 ◽  
Vol 95 (10) ◽  
pp. 2745-2756 ◽  
Author(s):  
Guillaume Souchay ◽  
Gilles Gauthier ◽  
Roger Pradel


2015 ◽
Vol 119 (1217) ◽  
pp. 833-854
Author(s):  
L. Cameron ◽  
J. Early ◽  
R. McRoberts ◽  
M. Price

Abstract A novel approach for the multi-objective design optimisation of aerofoil profiles is presented. The proposed method aims to exploit the relative strengths of global and local optimisation algorithms, whilst using surrogate models to limit the number of computationally expensive CFD simulations required. The local search stage utilises a re-parameterisation scheme that increases the flexibility of the geometry description by iteratively increasing the number of design variables, enabling superior designs to be generated with minimal user intervention. Capability of the algorithm is demonstrated via the conceptual design of aerofoil sections for use on a lightweight laminar flow business jet. The design case is formulated to account for take-off performance while reducing sensitivity to leading edge contamination. The algorithm successfully manipulates boundary layer transition location to provide a potential set of aerofoils that represent the trade-offs between drag at cruise and climb conditions in the presence of a challenging constraint set. Variations in the underlying flow physics between Pareto-optimal aerofoils are examined to aid understanding of the mechanisms that drive the trade-offs in objective functions.
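
To make the structure of such a surrogate-assisted search concrete, the following is a minimal Python sketch on an invented two-variable, two-objective problem; the quadratic surrogates, the weighted-sum candidate scan and the analytic stand-ins for the CFD objectives are assumptions for illustration and do not reproduce the paper's global/local algorithms or its re-parameterisation scheme.

```python
# Sketch: surrogate-assisted bi-objective search on a toy problem.
# The aerofoil/CFD objectives of the paper are replaced by cheap analytic
# stand-ins; "expensive_eval" marks where a CFD run would sit.
import numpy as np

rng = np.random.default_rng(0)

def expensive_eval(x):
    """Stand-in for an expensive evaluation returning (cruise drag, climb drag)."""
    f1 = (x[0] - 0.3) ** 2 + 0.1 * x[1]
    f2 = (x[1] - 0.7) ** 2 + 0.1 * x[0]
    return np.array([f1, f2])

# 1) Global stage: sample the design space and fit simple quadratic surrogates.
X = rng.uniform(0.0, 1.0, size=(20, 2))
Y = np.array([expensive_eval(x) for x in X])

def fit_quadratic(X, y):
    # Least-squares fit of y ~ [1, x1, x2, x1^2, x2^2, x1*x2]
    A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                         X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda x: coef @ np.array([1.0, x[0], x[1],
                                      x[0] ** 2, x[1] ** 2, x[0] * x[1]])

s1 = fit_quadratic(X, Y[:, 0])
s2 = fit_quadratic(X, Y[:, 1])

# 2) Cheap search on the surrogates: scan a weighted sum over many candidates.
cands = rng.uniform(0.0, 1.0, size=(5000, 2))
front = []
for w in np.linspace(0.0, 1.0, 11):
    scores = np.array([w * s1(c) + (1 - w) * s2(c) for c in cands])
    best = cands[np.argmin(scores)]
    front.append((best, expensive_eval(best)))   # 3) verify with the expensive model

# Approximate trade-off front (weighted-sum scan; not dominance-filtered).
for x, f in front:
    print(f"x = {x.round(3)}  ->  cruise = {f[0]:.4f}, climb = {f[1]:.4f}")
```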


2021 ◽  
Vol 247 ◽  
pp. 20004
Author(s):  
C. A. Manring ◽  
A. I. Hawari

Modern multi-physics codes, often employed in the simulation and development of thermal nuclear systems, depend heavily on thermal neutron interaction data to determine the space-time distribution of fission events. Therefore, the computationally expensive analysis of such systems motivates the advancement of thermal scattering law (TSL) data delivery methods. Despite considerable improvements over past strategies, current implementations are limited by trade-offs between speed, accuracy, and memory allocation. Furthermore, many of these implementations are not easily adaptable to additional input parameters (e.g., temperature), relying instead on various interpolation schemes. In this work, a novel approach to this problem is demonstrated with a neural network trained on beryllium oxide thermal scattering data generated by the FLASSH nuclear data code of the Low Energy Interaction Physics (LEIP) group at North Carolina State University. Using open-source deep learning libraries, this approach maps a unique functional form to the S(α,β,T) probability distribution function, providing a continuous representation of the TSL across the input phase space. For a given material, the result is a highly accurate, neural thermal scattering (NeTS) module that enables rapid sampling and execution with minimal memory requirements. Moreover, the NeTS phase space can readily be extended to other parameters of interest (e.g., pressure, radiation damage). Consequently, NeTS modules for different materials under various conditions can be stored together in material “lockers” and accessed on-the-fly to generate problem-specific cross-sections.
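
As a rough illustration of the NeTS idea, the sketch below fits a small feed-forward network to a synthetic S(α,β,T) surface so that it can be queried continuously across the phase space; the network size, the analytic training data and the input normalisation are assumptions standing in for the paper's FLASSH-generated data and actual architecture.

```python
# Sketch: a small feed-forward network mapping (alpha, beta, T) -> S(alpha, beta, T).
# Training data here is synthetic; in the paper it comes from the FLASSH code.
import torch
import torch.nn as nn

torch.manual_seed(0)

scale = torch.tensor([20.0, 10.0, 1200.0])        # assumed alpha, beta, T [K] ranges
x = torch.rand(4096, 3)                           # normalised (alpha, beta, T) inputs
alpha, beta, temp = (x * scale).unbind(dim=1)
# Analytic stand-in for the thermal scattering law S(alpha, beta, T):
y = (torch.exp(-alpha / 5.0 - beta / 3.0) * (1.0 + temp / 1200.0)).unsqueeze(1)

model = nn.Sequential(
    nn.Linear(3, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for _ in range(500):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

# The trained module gives a continuous representation: query any phase-space point.
query = torch.tensor([[2.5, 1.0, 600.0]]) / scale
print(model(query).item())
```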


2016 ◽  
Vol 7 (2) ◽  
pp. 190-224 ◽  
Author(s):  
Abdifatah Ahmed Haji ◽  
Mutalib Anifowose

Purpose: The purpose of this paper is to examine the trend of integrated reporting (IR) practice following the introduction of an “apply or explain” IR requirement in South Africa. In particular, the authors examine whether the IR practice is ceremonial or substantive in the context of a soft regulatory environment.

Design/methodology/approach: By way of content analyses, the authors examine the extent and quality of IR practice using an IR checklist developed from a normative understanding of existing IR guidelines. The evidence is drawn from 246 integrated reports of large South African companies over a three-year period (2011-2013), following the introduction of the IR requirement in South Africa.

Findings: The results show a significant increase in the extent and quality of IR practice. The findings also reveal significant improvements in individual IR categories such as connectivity of information, the materiality determination process, and the reliability and completeness of the integrated reports. However, despite the increasing trend and evidence of both symbolic and substantive IR practice, the authors conclude that current IR practice is largely ceremonial in nature, produced to acquire organisational legitimacy.

Practical implications: For academics, the authors argue that there is a need to move away from the “what” and “why” aspects of the IR agenda to “how” IR should work inside organisations. In particular, academics should engage with firms through interventionist research to help firms implement integrated thinking and substantive reporting practices. For organisations, the findings draw attention to specific aspects of IR that require improvement. For policymakers, the study provides evidence on the developmental stage of IR practice and draws attention to certain areas that need clarification. In particular, the International Integrated Reporting Council and the Integrated Reporting Committee of South Africa should provide detailed guidelines on connectivity of information, material issues, and the disclosure of multiple capitals and their trade-offs. Finally, for educators, in line with the ACCA's embedding of IR in its accounting courses, there is a need to incorporate IR in the curriculum; in particular, the authors argue that the best way to advance IR is through a “ubiquitous” spread across accounting and management courses.

Originality/value: This study provides an empirical account of IR practice over time in the context of a regulatory IR environment. The construction of an IR checklist based on a normative understanding of local and international IR guidelines is another novel aspect of this study.
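
A hypothetical sketch of how such checklist-based extent and quality indices might be computed is shown below; the categories, item scores and 0-4 quality scale are invented for illustration and are not the authors' actual IR checklist.

```python
# Hypothetical sketch of checklist-based content-analysis scoring.
# Categories and scores are illustrative, not the authors' actual IR checklist.
from statistics import mean

# Quality scored 0-4 per checklist item; "extent" = item addressed at all (score > 0).
report_scores = {
    "connectivity_of_information": [3, 2, 4],
    "materiality_determination":   [2, 0, 3],
    "reliability_completeness":    [4, 3, 3],
}

def extent(scores):
    """Share of checklist items addressed at all (binary coverage)."""
    items = [s for cat in scores.values() for s in cat]
    return sum(s > 0 for s in items) / len(items)

def quality(scores):
    """Mean quality score (0-4 scale) across all checklist items."""
    items = [s for cat in scores.values() for s in cat]
    return mean(items)

print(f"extent  = {extent(report_scores):.2f}")
print(f"quality = {quality(report_scores):.2f}")
for cat, vals in report_scores.items():
    print(f"{cat}: mean quality {mean(vals):.2f}")
```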


2020 ◽  
Vol 2 (1) ◽  
Author(s):  
Darius Sas ◽  
Paris Avgeriou ◽  
Ronald Kruizinga ◽  
Ruben Scheedler

Abstract The interplay between Maintainability and Reliability can be particularly complex, and different kinds of trade-offs may arise when developers try to optimise for either one of these two qualities. To further understand how Maintainability and Reliability influence each other, we perform an empirical study using architectural smells and source code file co-changes as proxies for these two qualities, respectively. The study is designed as an exploratory multiple-case study following well-known guidelines and using fourteen open source Java projects. Three different research questions are identified and investigated through statistical analysis. Co-changes are detected by using both a state-of-the-art algorithm and a novel approach. The three architectural smells selected are among the most important from the literature and are detected using open source tools. The results show that 50% of co-changes eventually end up taking part in an architectural smell. Moreover, statistical tests indicate that in 50% of the projects, files and packages taking part in smells are more likely to co-change than non-smelly files. Finally, when a smell and a co-change appear in the same file pair, the co-change was found to appear before the smell 90% of the time. Our findings show that Reliability is indirectly affected by low levels of Maintainability even at the architectural level. This is because low-quality components require more frequent changes by developers, increasing the chances of eventually introducing faults.
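
For illustration, a naive way to detect co-changes from a commit history is to count how often file pairs are committed together, as in the hypothetical sketch below; this simple support-count rule stands in for, and is not, the state-of-the-art algorithm or the novel detector used in the study, and the file names are invented.

```python
# Sketch: naive co-change detection from a commit history.
# A file pair is flagged as co-changing if it appears together in at least
# MIN_SUPPORT commits.
from collections import Counter
from itertools import combinations

MIN_SUPPORT = 3

# Hypothetical commit log: each entry is the set of files touched by one commit.
commits = [
    {"OrderService.java", "OrderRepository.java"},
    {"OrderService.java", "OrderRepository.java", "Invoice.java"},
    {"OrderService.java", "OrderRepository.java"},
    {"Invoice.java", "Pdf.java"},
    {"OrderService.java", "OrderRepository.java", "Pdf.java"},
]

pair_counts = Counter()
for files in commits:
    for a, b in combinations(sorted(files), 2):
        pair_counts[(a, b)] += 1

co_changes = {pair: n for pair, n in pair_counts.items() if n >= MIN_SUPPORT}
print(co_changes)
# Such pairs can then be cross-referenced with files taking part in
# architectural smells to test whether smelly files co-change more often.
```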


Author(s):  
S. González‐Gallardo ◽  
C.O. Henriques ◽  
O.D. Marcenaro‐Gutierrez ◽  
M. Luque

2021 ◽  
Author(s):  
David Hammer

This thesis presents research which spans three conference papers and one manuscript which has not yet been submitted for peer review. The topic of Paper 1 is the inherent complexity of maintaining perfect height in B-trees. We consider the setting in which a B-tree of optimal height contains n = (1−ϵ)N elements, where N is the number of elements in a full B-tree of the same height (the capacity of the tree). We show that the rebalancing cost when updating the tree—while maintaining optimal height—depends on ϵ. Specifically, our analysis gives a lower bound for the rebalancing cost of Ω(1/(ϵB)). We then describe a rebalancing algorithm which has an amortized rebalancing cost with an almost matching upper bound of O(1/(ϵB)⋅log²(min{1/ϵ, B})). We additionally describe a scheme utilizing this algorithm which, given a rebalancing budget f(n), maintains optimal height for decreasing ϵ until the cost exceeds the budget, at which time it maintains optimal height plus one. Given a rebalancing budget of Θ(log n), this scheme maintains optimal height for all but a vanishing fraction of sizes in the intervals between tree capacities. Manuscript 2 presents an empirical analysis of practical randomized external-memory algorithms for computing the connected components of graphs. The best known theoretical results for this problem are essentially all derived from results for minimum spanning tree algorithms. In the realm of randomized external-memory MST algorithms, the best asymptotic result has I/O-complexity O(sort(|E|)) in expectation, while an empirically studied practical algorithm has a bound of O(sort(|E|)⋅log(|V|/M)). We implement and evaluate an algorithm for connected components with expected I/O-complexity O(sort(|E|))—a simplification of the MST algorithm with this asymptotic cost—and show that this approach may also yield good results in practice. In Paper 3, we present a novel approach to simulating large-scale population protocol models. Naive simulation of N interactions of a population protocol with n agents and m states requires Θ(n log m) bits of memory and Θ(N) time. For very large n, this is prohibitive both in memory consumption and time, as interesting protocols will typically require N > n interactions for convergence. We describe a histogram-based simulation framework which requires Θ(m log n) bits of memory instead—an improvement, as it is typically the case that n ≫ m. We analyze, implement, and compare a number of different data structures to perform correct agent sampling in this regime. For this purpose, we develop dynamic alias tables which allow sampling an interaction in expected amortized constant time. We then show how to use sampling techniques to process agent interactions in batches, giving a simulation approach which uses subconstant time per interaction under reasonable assumptions. With Paper 4, we introduce the new model of fragile complexity for comparison-based algorithms. Within this model, we analyze classical comparison-based problems such as finding the minimum value of a set, selection (or finding the median), and sorting. We prove a number of lower and upper bounds and, in particular, give a number of randomized results which describe trade-offs not achievable by deterministic algorithms.
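
As an illustration of the histogram idea from Paper 3, the sketch below simulates a toy three-state protocol by keeping only a count per state and sampling the two interacting agents from those counts; the protocol, the naive linear-scan sampling and the population sizes are assumptions, and the thesis's dynamic alias tables and batch processing are not reproduced here.

```python
# Sketch: histogram-based simulation of a population protocol.
# Instead of storing one state per agent (Theta(n log m) bits), only a count
# per state is kept (Theta(m log n) bits). Pair sampling here is a naive linear
# scan over the counts; the thesis uses dynamic alias tables and batching.
import random

random.seed(0)

def transition(a, b):
    """Example three-state majority-like protocol (illustrative only)."""
    if a != b and "undecided" not in (a, b):
        return a, "undecided"
    if "undecided" in (a, b):
        winner = a if a != "undecided" else b
        return winner, winner
    return a, b

counts = {"yes": 600_000, "no": 400_000, "undecided": 0}
n = sum(counts.values())

def sample_state(counts, total):
    """Draw one agent's state proportionally to the current counts."""
    r = random.randrange(total)
    for state, c in counts.items():
        if r < c:
            return state
        r -= c
    raise AssertionError("unreachable")

for _ in range(1_000_000):                  # N interactions
    a = sample_state(counts, n)
    counts[a] -= 1                          # draw without replacement
    b = sample_state(counts, n - 1)
    counts[b] -= 1
    a2, b2 = transition(a, b)
    counts[a2] += 1
    counts[b2] += 1

print(counts)
```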


2022 ◽  
pp. 1-19
Author(s):  
Nökkvi S. Sigurdarson ◽  
Tobias Eifler ◽  
Martin Ebro ◽  
Panos Y. Papalambros

Abstract Configuration (or topology, or embodiment) design remains a ubiquitous challenge in product design optimization and design automation, and in industrial practice configuration design is still largely driven by experience. In this article, we introduce a novel configuration redesign process founded on the interaction of the designer with results from rigorous multiobjective monotonicity analysis. Guided by Pareto-set dependencies, the designer seeks to reduce trade-offs among objectives or improve optimality overall, deriving redesigns that eliminate dependencies or relax active constraints. The method is demonstrated on an ingestible medical device for oral drug delivery, currently in early concept development.
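
A toy illustration of the monotonicity reasoning involved: in the invented two-variable, two-objective problem below, both objectives increase monotonically in x2, so the lower bound on x2 must be active at every Pareto-optimal point and the trade-off collapses onto x1 alone; the functions, bounds and variable names are hypothetical and unrelated to the paper's drug-delivery device model.

```python
# Toy illustration of multiobjective monotonicity analysis (hypothetical
# problem, not the paper's device model).
import numpy as np

X2_LOWER = 0.1

def f1(x1, x2):          # increasing in both x1 and x2
    return x1 + 2.0 * x2

def f2(x1, x2):          # decreasing in x1, increasing in x2
    return 1.0 / x1 + 2.0 * x2

# Monotonicity argument: both objectives increase with x2, so decreasing x2
# improves both until the bound x2 >= X2_LOWER becomes active. The Pareto set
# therefore lies on x2 = X2_LOWER and is parameterised by x1 alone, which
# carries the entire f1/f2 trade-off.
for x1 in np.linspace(0.5, 4.0, 8):
    print(f"x1 = {x1:.2f}, x2 = {X2_LOWER}  ->  "
          f"f1 = {f1(x1, X2_LOWER):.3f}, f2 = {f2(x1, X2_LOWER):.3f}")
```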


2018 ◽  
Author(s):  
R. Schuster ◽  
S. Wilson ◽  
A.D. Rodewal ◽  
P. Arcese ◽  
D. Fink ◽  
...  

Abstract Limited knowledge of the distribution, abundance, and habitat associations of migratory species introduces uncertainty about the most effective conservation actions. We used Neotropical migratory birds as a model group to evaluate contrasting approaches to land prioritization to support ≥30% of the global abundances of 117 species throughout the annual cycle in the Western hemisphere. Conservation targets were achieved in 43% less land area in plans based on annual vs. weekly optimizations. Plans agnostic to population structure required comparatively less land area to meet targets, but at the expense of representation. Less land area was also needed to meet conservation targets when human-dominated lands were included rather than excluded from solutions. Our results point to key trade-offs between efforts minimizing the opportunity costs of conservation vs. those ensuring spatiotemporal representation of populations, and demonstrate a novel approach to the conservation of migratory species based on leading-edge abundance models and linear programming to identify portfolios of priority landscapes and inform conservation planners.
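
A minimal sketch of the kind of linear program involved is given below: choose planning units so as to minimise total area while retaining at least 30% of each species' abundance. The unit areas, abundances and the continuous (non-integer) relaxation are invented for illustration; scipy.optimize.linprog is used here simply as a generic LP solver and does not reproduce the authors' formulation or data.

```python
# Sketch: a tiny linear-programming prioritisation in the spirit of the paper.
# Planning units, areas, and species abundances are random placeholders; the
# real analysis uses abundance models over many more units and weekly layers.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)

n_units, n_species = 30, 5
area = rng.uniform(10.0, 100.0, size=n_units)             # km^2 per planning unit
abundance = rng.uniform(0.0, 1.0, size=(n_species, n_units))
targets = 0.30 * abundance.sum(axis=1)                    # >= 30% of each species

# minimise total area,  s.t.  abundance @ x >= targets,  0 <= x <= 1
res = linprog(
    c=area,
    A_ub=-abundance,            # flip sign to express ">=" as "<="
    b_ub=-targets,
    bounds=[(0.0, 1.0)] * n_units,
    method="highs",
)

selected = np.where(res.x > 0.5)[0]
print(f"total area selected: {res.x @ area:.1f} km^2")
print(f"units with fraction > 0.5: {selected}")
```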


Sensors ◽  
2019 ◽  
Vol 19 (3) ◽  
pp. 720 ◽  
Author(s):  
Carlos Lara-Nino ◽  
Arturo Diaz-Perez ◽  
Miguel Morales-Sandoval

Making Elliptic Curve Cryptography (ECC) available for the Internet of Things (IoT) and related technologies is a recent topic of interest. Modern IoT applications transfer sensitive information which needs to be protected. This is a difficult task due to the processing power and memory availability constraints of the physical devices. ECC relies mainly on scalar multiplication (kP), an operation-intensive procedure. The vast majority of kP proposals in the literature focus on performance improvements and often overlook the energy footprint of the solution. Some IoT technologies—Wireless Sensor Networks (WSN) in particular—are critically sensitive in that regard. In this paper we explore energy-oriented improvements applied to a low-area scalar multiplication architecture for Binary Edwards Curves (BEC), selected given their efficiency. The design and implementation costs of each of these energy-oriented techniques in hardware are reported. We propose an evaluation method for measuring the effectiveness of these optimizations. Under this novel approach, the energy-reducing techniques explored in this work contribute to achieving the scalar multiplication architecture with the most efficient area/energy trade-offs in the literature, to the best of our knowledge.
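
For readers unfamiliar with why kP dominates the cost of ECC, the sketch below shows textbook double-and-add scalar multiplication on a toy prime-field short-Weierstrass curve; the curve parameters are invented, and this is neither the Binary Edwards Curve arithmetic nor the hardware architecture of the paper, and it is not side-channel resistant.

```python
# Sketch: textbook double-and-add scalar multiplication kP on a toy
# short-Weierstrass curve over a small prime field (illustration only).
P_MOD, A, B = 97, 2, 3          # toy curve: y^2 = x^3 + 2x + 3 (mod 97)
G = (3, 6)                      # a point on the curve: 6^2 = 36 = 3^3 + 2*3 + 3 (mod 97)

def inv(x):
    return pow(x, P_MOD - 2, P_MOD)           # modular inverse (p prime)

def point_add(P, Q):
    """Affine point addition; None represents the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                           # P + (-P) = point at infinity
    if P == Q:
        lam = (3 * x1 * x1 + A) * inv(2 * y1) % P_MOD
    else:
        lam = (y2 - y1) * inv(x2 - x1) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    y3 = (lam * (x1 - x3) - y1) % P_MOD
    return (x3, y3)

def scalar_mult(k, P):
    """Left-to-right double-and-add: one doubling per bit, one add per set bit."""
    R = None
    for bit in bin(k)[2:]:
        R = point_add(R, R)
        if bit == "1":
            R = point_add(R, P)
    return R

print(scalar_mult(13, G))
```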


2020 ◽  
Vol 12 (20) ◽  
pp. 8729 ◽  
Author(s):  
Håvard Hegre ◽  
Kristina Petrova ◽  
Nina von Uexkull

The Sustainable Development Goals (SDGs) adopted in 2015 integrate diverse issues such as addressing hunger, gender equality and clean energy, and set a common agenda for all United Nations member states until 2030. The 17 SDGs interact, and by working towards achieving one goal countries may further—or jeopardise—progress on others. However, the direction and strength of these interactions are still poorly understood, and it remains an analytical challenge to capture the relationships between the multi-dimensional goals, comprising 169 targets and over 200 indicators. Here, we use principal component analysis (PCA), an approach that is novel in this context, to summarise each goal and the interactions within the global SDG agenda. Applying PCA allows us to map trends, synergies and trade-offs at the level of goals for all SDGs while using all available information on indicators. While our approach does not allow us to investigate causal relationships, it provides important evidence of the degree of compatibility of goal attainment over time. Based on global data for 2000–2016, our results indicate that synergies between and within the SDGs prevail, both in terms of levels and of change over time. An exception is SDG 10 ‘Reducing inequalities’, which has not progressed in tandem with other goals.
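
A minimal sketch of the PCA step: take the indicators belonging to one goal, standardise them, and use the first principal component as a single summary index for that goal; the random indicator matrix below is a placeholder for the actual SDG data, and the country-year layout is an assumption for illustration.

```python
# Sketch: summarising one goal's indicators with its first principal component.
# The indicator matrix below is random; in the paper it would hold the SDG
# indicators for one goal across country-years (2000-2016).
import numpy as np

rng = np.random.default_rng(0)
indicators = rng.normal(size=(170, 12))           # rows: country-years, cols: indicators

# Standardise each indicator, then take the leading principal component via SVD.
Z = (indicators - indicators.mean(axis=0)) / indicators.std(axis=0)
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
goal_index = Z @ Vt[0]                            # PC1 score per country-year
explained = S[0] ** 2 / np.sum(S ** 2)

print(f"variance explained by PC1: {explained:.2%}")
# Correlating the PC1 indices of two goals (in levels or in changes over time)
# then indicates whether the goals move in synergy or in a trade-off.
```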

