intractable problem
Recently Published Documents

TOTAL DOCUMENTS: 218 (five years: 73)
H-INDEX: 17 (five years: 5)

Author(s):  
David Samways

At a high level of abstraction, causally connecting population growth and environmental degradation is intuitively appealing. However, while it is clear that population size is a critical factor in the size and power of social systems, and hence in environmental impact, the relationship between human numbers and environmental change is complex. In particular, the long timescales involved in population growth and decline, along with the shifting role of economic development in both population growth itself and environmental impact, obfuscate the role of population size as a multiplier of impact. Moreover, the protracted nature of demographic change makes population size seem like an intractable problem, the outcome of natural processes which are not only beyond choice, but, critically, morally perilous. In this review of the role of population size in environmental impact, I argue that choices, norms, and values, as well as material factors, are interwoven and inseparable in the environmental impact of our species. Furthermore, the consideration of human welfare and wellbeing is central to arguments regarding an environmentally sustainable population.


Author(s):  
Lia van Broekhoven ◽  
Sangeeta Goswami

Abstract: Counterterrorism architecture has grown exponentially in the last two decades, with counterterrorism measures impacting humanitarian, development, peacebuilding and human rights action across the world. Addressing and mitigating the impact of these measures takes various forms in different contexts, local and global. This article addresses one particular form of engagement and redressal – the multi-stakeholder dialogue process – for dealing with the unintended consequences for civil society of rules and regulations for countering the financing of terrorism. The impact is seen in the difficulties that non-profit organizations face across the world in terms of financial access. Involving civil society, banks, government, financial intelligence, regulators, supervisors and banking associations, among others, in a dialogue process with clearly defined objectives is considered by policymakers and civil society to be the most appropriate and effective form of engagement for dealing with and overcoming this particular set of challenges. Multiple examples are provided of ongoing initiatives, with the nuances of each drawn out for a closer look at the conditions needed to sustain such dialogue, and an examination of whether such stakeholder dialogue processes are fit for purpose for solving the seemingly intractable problem at hand.


2021 ◽  
Author(s):  
Isidro M. Alvarez

Learning is an important activity through which humanity has incrementally improved at accomplishing tasks by adapting knowledge and methods in response to feedback. Although learning is natural to humans, it is much more difficult to achieve in the technological world, as tasks are often learned in isolation. Software is capable of learning novel techniques and algorithms in order to solve these basic, individual problems; however, transferring that knowledge to other problems in the same or related domains presents challenges. Solutions often cannot be enumerated to discover the best one, as many problems of interest are intractable in terms of the resources needed to complete them. However, many such problems contain key building blocks of knowledge that can be leveraged to achieve a suitable solution. These building blocks encapsulate important structural regularities of the problem. A technique that can learn these regularities without enumeration may produce general solutions that apply to similar problems of any length. This implies reusing learned information.

In order to reuse learned blocks of knowledge, it is important that a program be scalable and flexible. This requires a program capable of taking knowledge from a previous task and applying it to a more complex problem, or to a problem with a similar pattern. This is anticipated to enable the program to complete the new task in a practical amount of time and with reasonable amounts of resources.

In machine learning, the degree of human intervention in solving problems is often important. It is generally necessary for a human to provide input to direct and improve learning. In the field of Developmental Learning there is the idea known as the Threshold Concept (TC). A TC is transformative information that advances learning. TCs are important because without them the learner cannot progress. In addition, TCs need to be learned in a particular order, much like a curriculum, thus providing the student with viable progress towards learning more difficult ideas at a faster pace than otherwise. Therefore, human input to a learning algorithm can take the form of partitioning a problem into constituent sub-problems. This is a principal concept of Layered Learning (LL), where a sequence of sub-problems is learned. The sub-problems are self-contained stages that have been separated by a human. This technique is necessary for tasks in which learning a direct mapping from inputs to outputs is intractable given existing learning algorithms.

Among the first artificial learning systems developed were Learning Classifier Systems (LCSs). Past work has extended LCSs to provide more expressivity through richer representations. One such representation is tree-based and is common to the Genetic Programming (GP) technique. GP is part of the Evolutionary Computation (EC) paradigm and produces solutions represented by trees. The tree nodes can contain functions, and the leaf nodes problem features, giving GP a rich representation. A more recent technique is Code Fragments (CFs). CFs are GP-like sub-trees with an initial maximum height of two. Initially, CFs contained hard-coded functions at the root nodes and problem features or previously learned CFs at the leaf nodes of the sub-trees. CFs provided improved expressivity and scalability over the original ternary alphabet used by LCSs. Additionally, CF-based systems have successfully learned previously intractable problems, e.g. the 135-bit multiplexer.

Although CFs have provided increased scalability, they suffer from a structural weakness: as the problem scales, the chains of CFs grow to intractable lengths, which means that at some point the LCS will stop learning. In addition, CFs were originally meant to scale to more complex problems within the same domain. However, it is advantageous to compile cross-domain solutions, as the regularities of a problem might come from domains different from that expressed by the data.

The proposed thesis is that a CF-based LCS can scale to complex problems by reusing learned solutions of problems as functions at the inner nodes of CFs, together with compaction and Layered Learning. The overall goal is divided into three sub-goals: reuse learned functionality from smaller problems in the root nodes of CF sub-trees; identify a compaction technique that reduces solution size and so improves the evaluation time of CFs; and develop a layered learning methodology for a CF system, demonstrated by learning a general solution to an intractable problem, i.e. the n-bit multiplexer.

In this novel work, Code Fragments are extended to include learned functionality at the root nodes of the sub-trees, in a technique known as XCSCF². A new compaction technique is designed that produces an equivalent set of ternary rules from CF rules; this technique is known as XCSCF³. The work culminates in a new technique, XCSCF*, which combines Layered Learning, Code Fragments and Transfer Learning (TL) of knowledge and functionality to produce scalable and general solutions, i.e. to the n-bit multiplexer problem.

The novel ideas are tested on the multiplexer and hidden multiplexer problems. These problems are chosen because they are difficult due to epistasis, sparsity and non-linearity, and therefore provide ample opportunity for testing the new contributions.

The thesis work has shown that CFs can be used in various ways to increase scalability and to discover solutions to complex problems. Specifically, the following three contributions were produced: learned functionality was captured in LCS populations from smaller problems and reused in the root nodes of CF sub-trees; an online compaction technique that reduces the evaluation time of CFs was designed; and a layered learning method to train a CF system towards a general solution was developed, demonstrated by learning a solution to a previously intractable problem, i.e. the n-bit multiplexer. The thesis concludes with suggestions for future work aimed at providing better scalability when using compaction techniques.
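To make the benchmark and the Code Fragment representation concrete, here is a minimal Python sketch written for this summary rather than taken from the thesis: it implements the standard n-bit multiplexer target function and evaluates a toy depth-two, Code-Fragment-style sub-tree over one bit-string instance (the names `multiplexer` and `evaluate_cf` are illustrative assumptions, not the XCSCF²/XCSCF* API).

```python
# Illustrative sketch only (not thesis code): the n-bit multiplexer benchmark
# and a toy Code-Fragment-style sub-tree evaluated over one bit-string instance.

def multiplexer(bits):
    """n-bit multiplexer: the first k bits address one of the 2**k data bits,
    so n = k + 2**k (valid lengths are 6, 11, 20, 37, 70, 135, ...)."""
    k = 0
    while k + 2 ** k < len(bits):
        k += 1
    assert k + 2 ** k == len(bits), "invalid multiplexer length"
    address = int("".join(map(str, bits[:k])), 2)
    return bits[k + address]

def evaluate_cf(node, bits):
    """Evaluate a small GP-like sub-tree: a function at the root, problem
    features (bit indices) or nested fragments at the leaves."""
    if isinstance(node, int):              # leaf: index of a problem feature
        return bits[node]
    op, children = node                    # inner node: (function name, children)
    values = [evaluate_cf(child, bits) for child in children]
    if op == "AND":
        return int(all(values))
    if op == "OR":
        return int(any(values))
    if op == "NOT":
        return int(not values[0])
    raise ValueError(f"unknown operator {op!r}")

# Depth-two fragment over the 6-bit multiplexer (bits 0-1 address, bits 2-5 data):
cf = ("AND", [("NOT", [0]), 2])            # "address bit 0 is low AND data bit 2 is set"
instance = [0, 1, 1, 0, 1, 0]
print(multiplexer(instance), evaluate_cf(cf, instance))
```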



2021 ◽  
pp. 869-890

This chapter describes remote and rural surgery. For many years, surgeons working in remote and isolated areas have failed to receive the recognition they deserve. Anyone living in a remote and rural area will know of lives saved and diseases cured by locally based surgeons. Delivery of surgical services to remote and rural areas remains an intractable problem in many countries. The Royal College of Surgeons of Edinburgh has recently published a report, ‘Standards informing delivery of care in rural surgery’ (2016), which explains how surgical services can be provided to remote and rural communities in a safe and appropriate manner. The chapter then looks at rural surgical practice; rural general hospitals; care pathways; and the recruitment, retention, and training of rural surgeons.


Cancers ◽  
2021 ◽  
Vol 13 (21) ◽  
pp. 5296
Author(s):  
Jin Li ◽  
Tao Wei ◽  
Jian Zhang ◽  
Tingbo Liang

The intraductal papillary mucinous neoplasm (IPMN) is attracting research attention because of its increasing incidence and proven potential to progress into invasive pancreatic ductal adenocarcinoma (PDAC). In this review, we summarized the key signaling pathways and protein complexes (GPCR, TGF, SWI/SNF, WNT, and PI3K) that appear to be involved in IPMN pathogenesis. In addition, we collected information regarding all the genetic mouse models that mimic the human IPMN phenotype, together with the specific immunohistochemistry techniques used to characterize them. These mouse models give insight into the complex mechanism of IPMN origin, revealing that it can develop from either acinar cells or duct cells, depending on the model. Furthermore, recent genomic studies describe the potential mechanisms by which heterogeneous IPMN gives rise to malignant carcinoma through sequential, branch-off, or de novo pathways. The most intractable problem is that the risk of malignancy persists to some extent even if the primary IPMN is excised with a perfect margin, calling for the re-evaluation and improvement of diagnostic, pre-emptive, and therapeutic measures.


2021 ◽  
Author(s):  
Timon Wittenstein ◽  
Nava Leibovich ◽  
Andreas Hilfinger

Quantifying biochemical reaction rates within complex cellular processes remains a key challenge of systems biology even as high-throughput single-cell data have become available to characterize snapshots of population variability. That is because complex systems with stochastic and non-linear interactions are difficult to analyze when not all components can be observed simultaneously and systems cannot be followed over time. Instead of using descriptive statistical models, we show that incompletely specified mechanistic models can be used to translate qualitative knowledge of interactions into reaction rate functions from covariability data between pairs of components. This promises to turn a globally intractable problem into a sequence of solvable inference problems to quantify complex interaction networks from incomplete snapshots of their stochastic fluctuations.
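As a minimal illustration of the kind of "covariability data between pairs of components" the abstract refers to, and not of the authors' inference method itself, the following Python sketch simulates a toy two-component birth-death cascade with Gillespie's algorithm and computes the covariance of the two counts across many simulated cells (all rate constants and names are assumptions made for the example).

```python
# Illustrative sketch only (not the authors' method): snapshot covariability data
# for a toy two-component birth-death cascade, simulated with Gillespie's algorithm.
import random

def gillespie_snapshot(t_end, k_x=10.0, g_x=1.0, k_y=5.0, g_y=0.5, rng=random):
    """One snapshot (x, y) of the cascade: 0 -> x, x -> 0, x -> x + y, y -> 0."""
    t, x, y = 0.0, 0, 0
    while True:
        rates = [k_x, g_x * x, k_y * x, g_y * y]   # x birth, x decay, y birth, y decay
        dt = rng.expovariate(sum(rates))
        if t + dt > t_end:
            return x, y
        t += dt
        i = rng.choices(range(4), weights=rates)[0]
        x += (1, -1, 0, 0)[i]
        y += (0, 0, 1, -1)[i]

# Covariance of (x, y) across many simulated "cells" -- the pairwise snapshot
# statistic the abstract refers to.
cells = [gillespie_snapshot(t_end=20.0) for _ in range(2000)]
mx = sum(x for x, _ in cells) / len(cells)
my = sum(y for _, y in cells) / len(cells)
cov_xy = sum((x - mx) * (y - my) for x, y in cells) / len(cells)
print(f"mean x = {mx:.1f}, mean y = {my:.1f}, cov(x, y) = {cov_xy:.2f}")
```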


2021 ◽  
Vol 53 (3) ◽  
pp. 687-715
Author(s):  
Iker Perez ◽  
Giuliano Casale

Abstract: Queueing networks are stochastic systems formed by interconnected resources routing and serving jobs. They induce jump processes with distinctive properties, and find widespread use in inferential tasks. Here, service rates for jobs and potential bottlenecks in the routing mechanism must be estimated from a reduced set of observations. However, this calls for the derivation of complex conditional density representations, over both the stochastic network trajectories and the rates, which is considered an intractable problem. Numerical simulation procedures designed for this purpose do not scale because of high computational costs; furthermore, variational approaches relying on approximating measures and full independence assumptions are unsuitable. In this paper, we offer a probabilistic interpretation of variational methods applied to inference tasks with queueing networks, and show that approximating measure choices routinely used with jump processes yield ill-defined optimization problems. Yet we demonstrate that it is still possible to enable a variational inferential task, by considering a novel space expansion treatment over an analogous counting process for job transitions. We present and compare exemplary use cases with practical queueing networks, showing that our framework offers an efficient and improved alternative where existing variational or numerically intensive solutions fail.
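For orientation only, and not as a rendering of the paper's variational framework: the sketch below simulates a two-station tandem queueing network and recovers each service rate by maximum likelihood from fully observed service times; the paper's difficulty arises precisely because such complete trajectories are usually not observed (all parameter values and function names are assumptions).

```python
# Illustrative sketch only (not the paper's variational method): a two-station
# tandem queueing network with fully observed service times, from which each
# service rate is recovered by maximum likelihood.
import random

def simulate_tandem(n_jobs, arrival_rate=1.0, mu1=2.0, mu2=1.5, seed=0):
    """Return the sampled service times at station 1 and station 2."""
    rng = random.Random(seed)
    t = 0.0
    free1 = free2 = 0.0                     # time each server next becomes free
    s1_times, s2_times = [], []
    for _ in range(n_jobs):
        t += rng.expovariate(arrival_rate)  # Poisson arrivals
        s1 = rng.expovariate(mu1)           # exponential service at station 1
        done1 = max(t, free1) + s1
        free1 = done1
        s2 = rng.expovariate(mu2)           # exponential service at station 2
        free2 = max(done1, free2) + s2
        s1_times.append(s1)
        s2_times.append(s2)
    return s1_times, s2_times

s1, s2 = simulate_tandem(n_jobs=10_000)
# MLE of an exponential rate: number of completions / total observed service time.
print(f"mu1 ~ {len(s1) / sum(s1):.2f}   mu2 ~ {len(s2) / sum(s2):.2f}")
```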


2021 ◽  
Vol 14 (13) ◽  
pp. 3362-3375
Author(s):  
Remmelt Ammerlaan ◽  
Gilbert Antonius ◽  
Marc Friedman ◽  
H M Sajjad Hossain ◽  
Alekh Jindal ◽  
...  

Modern data processing systems require optimization at massive scale, and using machine learning to optimize these systems (ML-for-systems) has shown promising results. Unfortunately, ML-for-systems is prone to overgeneralization: models fail to capture the large variety of workload patterns, and tend to improve the performance of certain subsets of the workload while regressing performance for others. In this paper, we introduce a performance safeguard system, called PerfGuard, that designs pre-production experiments for deploying ML-for-systems. Instead of searching the entire space of query plans (a well-known, intractable problem), we focus on query plan deltas (a significantly smaller space). PerfGuard formalizes these differences and correlates plan deltas with important feedback signals, like execution cost. We describe the deep learning architecture and the end-to-end pipeline in PerfGuard, which could be used with general relational databases. We show that this architecture improves on baseline models, and that our pipeline identifies key query plan components as major contributors to plan disparity. Offline experimentation shows PerfGuard to be a promising approach, with many opportunities for future improvement.
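As a hypothetical illustration of what a "query plan delta" could look like, and not PerfGuard's actual featurization or API, the following sketch represents two query plans as operator trees and takes the signed difference of their operator counts.

```python
# Hypothetical sketch (not PerfGuard's featurization or API): flatten two query
# plans into operator counts and take their signed difference as a "plan delta".
from collections import Counter

def plan_operators(plan):
    """Flatten a nested (operator, [children]) plan tree into operator counts."""
    op, children = plan
    counts = Counter([op])
    for child in children:
        counts += plan_operators(child)
    return counts

def plan_delta(before, after):
    """Signed per-operator difference between two query plans."""
    b, a = plan_operators(before), plan_operators(after)
    return {op: a.get(op, 0) - b.get(op, 0) for op in set(a) | set(b)}

# Example: the optimizer swaps a hash join for a nested-loop join and adds a sort.
plan_v1 = ("HashJoin", [("Scan:orders", []), ("Scan:customers", [])])
plan_v2 = ("NestedLoopJoin", [("Sort", [("Scan:orders", [])]), ("Scan:customers", [])])
print(plan_delta(plan_v1, plan_v2))
```

A learned model could then map such deltas, suitably featurized, to feedback signals like execution-cost regressions before a new optimizer build is deployed.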

