problem domain
Recently Published Documents

TOTAL DOCUMENTS: 246 (FIVE YEARS: 59)
H-INDEX: 19 (FIVE YEARS: 3)
2022, Vol 7 (4), pp. 5634-5661
Author(s): M. Adams, J. Finden, P. Phoncharon, P. H. Muir

The high-quality COLSYS/COLNEW collocation software package is widely used for the numerical solution of boundary value ODEs (BVODEs), often through interfaces to computing environments such as Scilab, R, and Python. The continuous collocation solution returned by the code is much more accurate at the set of mesh points that partition the problem domain than it is elsewhere; the mesh-point values are said to be superconvergent. To improve the accuracy of the continuous solution approximation at non-mesh points when the BVODE is expressed in first-order system form, an approach based on continuous Runge-Kutta (CRK) methods has been used to obtain a superconvergent interpolant (SCI) across the problem domain. Based on this approach, recent work has produced a new, more efficient version of COLSYS/COLNEW that returns an error-controlled SCI.

However, most systems of BVODEs include higher derivatives, and a feature of COLSYS/COLNEW is that it can directly treat such mixed-order BVODE systems, resulting in improved efficiency, continuity of the approximate solution, and user convenience. In this paper we generalize the approach mentioned above for first-order systems to obtain SCIs for collocation solutions of mixed-order BVODE systems. The main contribution of this paper is the derivation of generalizations of continuous Runge-Kutta-Nyström methods that form the basis for SCIs for this more general problem class. We provide numerical results that (i) show that the SCIs are much more accurate than the collocation solutions at non-mesh points, (ii) verify the order of accuracy of these SCIs, and (iii) show that the cost of utilizing the SCIs is a small fraction of the cost of computing the collocation solution upon which they are based.
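The mesh-point versus non-mesh-point accuracy contrast at the heart of this abstract can be probed with any collocation-based BVODE solver. The sketch below is a minimal illustration using SciPy's solve_bvp (a collocation solver, though not COLSYS/COLNEW) on a toy first-order system with a known exact solution; the problem, mesh, and tolerance are illustrative choices, and the size of the gap one observes depends on the solver's own error control.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Toy BVODE: y'' = -y, y(0) = 0, y(pi/2) = 1; exact solution y = sin(x).
# Written in first-order system form u = (y, y').
def fun(x, u):
    return np.vstack([u[1], -u[0]])

def bc(ua, ub):
    return np.array([ua[0], ub[0] - 1.0])

x = np.linspace(0.0, np.pi / 2.0, 6)   # coarse initial mesh
u0 = np.zeros((2, x.size))             # trivial initial guess
sol = solve_bvp(fun, bc, x, u0, tol=1e-6)

mesh = sol.x                           # final mesh points
mids = 0.5 * (mesh[:-1] + mesh[1:])    # non-mesh (midpoint) locations

# Compare the continuous solution's error at mesh points vs between them.
err_mesh = np.abs(sol.sol(mesh)[0] - np.sin(mesh)).max()
err_mids = np.abs(sol.sol(mids)[0] - np.sin(mids)).max()
print(f"max error at mesh points: {err_mesh:.2e}")
print(f"max error at midpoints:   {err_mids:.2e}")
```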


Author(s): Tarang Singhal

Abstract: In today's fast-growing world, information is the most powerful tool. A doctor is an expert in medical science, an engineer is an expert in technical matters, and, similarly, a fitness trainer is an expert in fitness. But what happens when a person faces a problem, such as a technical one, outside their own expertise? In such situations they need expert advice and may want to communicate with someone who is an expert in that problem domain. The goal of this project is to reduce the gap between problem seekers and experts.


2021, Vol 2131 (2), pp. 022123
Author(s): E N Ostroukh, M V Privalov, S D Markin

Abstract: In this paper, oil mining was chosen as the problem domain, with attention to its peculiarities related to early fire diagnostics. The main feature of the described method of early fire diagnostics is the application of a color-detection algorithm to video sequences acquired from surveillance cameras. A drawback of the known fire-diagnostic algorithms that also use video streams is that they select only one color of the visible spectrum. The proposed algorithm preprocesses frames to suppress white noise and Gaussian noise. Its main feature is the joint registration of the color components of fire images that are specific to the chosen problem domain. Results of the practical application of the proposed color-detection approach are described. Experiments were carried out on test video sequences from Bilkent University and the DynTex database. It is shown that the advantage of the proposed approach is its ability to select different color components and process them jointly during color detection.
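As a rough illustration of the kind of pipeline this abstract describes, denoising followed by multi-component color detection, here is a minimal OpenCV sketch. The HSV threshold bands, input filename, and alarm threshold are illustrative assumptions, not the authors' tuned algorithm.

```python
import cv2
import numpy as np

def detect_fire_regions(frame: np.ndarray) -> np.ndarray:
    # Suppress white/Gaussian noise before color analysis.
    smoothed = cv2.GaussianBlur(frame, (5, 5), 0)
    hsv = cv2.cvtColor(smoothed, cv2.COLOR_BGR2HSV)

    # Combine two hue bands (reds/oranges and yellows) instead of
    # thresholding a single color of the visible spectrum.
    mask_red = cv2.inRange(hsv, (0, 120, 150), (15, 255, 255))
    mask_yel = cv2.inRange(hsv, (16, 100, 150), (35, 255, 255))
    mask = cv2.bitwise_or(mask_red, mask_yel)

    # Remove small speckles that survive the blur.
    kernel = np.ones((3, 3), np.uint8)
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

cap = cv2.VideoCapture("test_sequence.avi")  # hypothetical input file
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = detect_fire_regions(frame)
    if cv2.countNonZero(mask) > 500:         # illustrative alarm threshold
        print("possible fire detected in frame")
cap.release()
```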


Author(s): Paul Heistracher, Claas Abert, Florian Bruckner, Thomas Schrefl, Dieter Suess

Mathematics, 2021, Vol 9 (22), pp. 2947
Author(s): Anton A. Romanov, Aleksey A. Filippov, Valeria V. Voronina, Gleb Guskov, Nadezhda G. Yarushkina

Data analysis in the context of the features of the problem domain and the dynamics of processes is significant in various industries. Uncertainty modeling based on fuzzy logic allows building approximators for solving a large class of problems; in some cases, type-2 fuzzy sets are used in the model. The article describes the construction of fuzzy time series models of the analyzed processes within the context of the problem domain. An algorithm for fuzzy modeling of time series was developed, and a new time series forecasting scheme is proposed. An illustrative example of time series modeling is presented, and the benefits of contextual modeling are demonstrated.
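For readers unfamiliar with fuzzy time series modeling, the following minimal sketch implements a classic first-order scheme in the style of Chen's method: fuzzify the series into intervals, build fuzzy logical relationship groups, and defuzzify to forecast. The interval count and toy data are illustrative; this is not the article's contextual algorithm.

```python
import numpy as np

def fuzzy_forecast(series, n_intervals=7):
    # Universe of discourse, padded slightly, split into equal intervals.
    lo, hi = min(series) - 1, max(series) + 1
    edges = np.linspace(lo, hi, n_intervals + 1)
    mids = 0.5 * (edges[:-1] + edges[1:])

    # Fuzzify: map each observation to its interval (fuzzy set) index.
    labels = np.clip(np.digitize(series, edges) - 1, 0, n_intervals - 1)

    # Fuzzy logical relationship groups: A_i -> {A_j, ...}.
    flrg = {}
    for a, b in zip(labels[:-1], labels[1:]):
        flrg.setdefault(a, set()).add(b)

    # Forecast one step ahead: defuzzify as the mean midpoint of the group.
    successors = flrg.get(labels[-1], {labels[-1]})
    return float(np.mean([mids[j] for j in successors]))

data = [30, 33, 35, 40, 43, 41, 45, 47, 44, 49]  # toy series
print(f"next-step forecast: {fuzzy_forecast(data):.1f}")
```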


2021
Author(s): Richard J. Marshall

The development of a heuristic to solve an optimisation problem in a new domain, or a specific variation of an existing problem domain, is often beyond the means of many smaller businesses. This is largely because the task normally needs to be assigned to a human expert, and such experts tend to be scarce and expensive. One aim of hyper-heuristic research is to automate all or part of the heuristic development process and thereby bring the generation of new heuristics within the means of more organisations. A second aim is to ensure that the process by which a domain-specific heuristic is developed is itself independent of the problem domain. This enables a hyper-heuristic to exist and operate above the combinatorial optimisation problem "domain barrier" and generalise across different problem domains.

A common issue with heuristic development is that a heuristic is often designed or evolved using small problem instances and then assumed to perform well on larger instances. The goal of this thesis is to extend current hyper-heuristic research towards answering the question: how can a hyper-heuristic efficiently and effectively adapt the selection, generation and manipulation of domain-specific heuristics as one moves from small and/or narrow-domain problems to larger and/or wider-domain problems? In other words, how can different hyper-heuristics respond to scalability issues?

Each hyper-heuristic has its own strengths and weaknesses. In the context of hyper-heuristic research, this thesis contributes towards understanding scalability issues by firstly developing a compact and effective heuristic that can be applied to other problem instances of differing sizes in a compatible problem domain. We construct a hyper-heuristic for the Capacitated Vehicle Routing Problem domain to establish whether a compact and easily interpreted heuristic can be developed for a specific problem domain. The results show that generation of a simple but effective heuristic is possible.

Secondly, we develop two different types of hyper-heuristic and compare their performance across different combinatorial optimisation problem domains. We construct and compare simplified versions of two existing hyper-heuristics (adaptive and grammar-based), and analyse how each handles the trade-off between computation speed and solution quality. The performance of the two hyper-heuristics is tested on seven problem domains compatible with the HyFlex (Hyper-heuristic Flexible) framework. The results indicate that the adaptive hyper-heuristic can deliver solutions of a pre-defined quality in a shorter computational time than the grammar-based hyper-heuristic.

Thirdly, we investigate how the adaptive hyper-heuristic developed in the second stage can respond to problem instances of the same size but with different features and complexity. We investigate how, with minimal knowledge of the problem domain and of the features of the instance being worked on, a hyper-heuristic can modify its processes to respond to instances with different features and to problem domains of different complexity. In this stage we allow the adaptive hyper-heuristic to select alternative vectors for the selection of problem-domain operators, and alternative acceptance criteria for deciding whether solutions should be retained or discarded. We identify a consistent difference between the best-performing pairings of selection vector and acceptance criteria and those pairings which perform poorly.

This thesis shows that hyper-heuristics can respond to scalability issues, although not all do so with equal ease. The flexibility of an adaptive hyper-heuristic enables it to perform faster than the more rigid grammar-based hyper-heuristic, but at the expense of losing a reusable heuristic.
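To make the adaptive selection idea concrete, here is a minimal sketch of a score-based selection hyper-heuristic with a simple acceptance criterion. The move operators, scoring rule, and toy cost function are illustrative assumptions, not the thesis implementation or the HyFlex API.

```python
import random

def adaptive_hyper_heuristic(initial, low_level_heuristics, cost, iters=1000):
    scores = [1.0] * len(low_level_heuristics)
    current, best = initial, initial
    for _ in range(iters):
        # Select a low-level heuristic proportionally to its score;
        # this selection logic stays above the domain barrier.
        i = random.choices(range(len(scores)), weights=scores)[0]
        candidate = low_level_heuristics[i](current)

        # Acceptance criterion: keep improving or equal-cost solutions.
        if cost(candidate) <= cost(current):
            current = candidate
            scores[i] += 1.0                       # reward useful heuristics
        else:
            scores[i] = max(0.1, scores[i] * 0.9)  # decay unhelpful ones

        if cost(current) < cost(best):
            best = current
    return best

# Toy domain: minimise a 1-D quadratic with two simple move operators.
cost = lambda x: (x - 3.0) ** 2
moves = [lambda x: x + random.uniform(-1.0, 1.0),
         lambda x: x + random.uniform(-0.1, 0.1)]
print(f"best x found: {adaptive_hyper_heuristic(0.0, moves, cost):.3f}")
```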



2021
Author(s): Muhammad Iqbal

Using evolutionary intelligence and machine learning techniques, a broad range of intelligent machines have been designed to perform different tasks. An intelligent machine learns by perceiving its environmental status and taking an action that maximizes its chances of success. Human beings can apply knowledge learned from a smaller problem to more complex, large-scale problems of the same or a related domain, but currently the vast majority of evolutionary machine learning techniques lack this ability. This inability to apply already-learned domain knowledge means that more resources and time than necessary are consumed in solving complex, large-scale problems of the domain. As a problem increases in size, it becomes difficult, and sometimes impractical (if not impossible), to solve due to the resources and time required. Therefore, in order to scale in a problem domain, a system is needed that can reuse the learned knowledge of the domain and/or encapsulate the underlying patterns in the domain.

To extract and reuse building blocks of knowledge, or to encapsulate the underlying patterns in a problem domain, a rich encoding is needed, but the search space can then expand undesirably and cause bloat, e.g. as in some forms of genetic programming (GP). Learning classifier systems (LCSs) are a well-structured evolutionary computation based learning technique with pressures that implicitly avoid bloat, such as fitness sharing through niche-based reproduction. The proposed thesis is that an LCS can scale to complex problems in a domain by reusing the knowledge learnt from simpler problems of the domain and/or encapsulating the underlying patterns in the domain. Wilson's XCS, a well-tested, online-learning and accuracy-based LCS model, is used to implement and test the proposed systems. To extract the reusable building blocks of knowledge, GP-tree-like code fragments are introduced, which are more than simply another representation (e.g. ternary or real-valued alphabets). This thesis is extended to capture the underlying patterns in a problem using a cyclic representation. Experiments on hard problems test the newly developed scalable systems and compare them with benchmark techniques.

Specifically, this work develops four systems to improve the scalability of XCS-based classifier systems. (1) Building blocks of knowledge are extracted from smaller problems of a Boolean domain and reused in learning more complex, large-scale problems in the domain, for the first time. By utilizing the knowledge learnt from small-scale problems, the developed XCSCFC (XCS with Code-Fragment Conditions) system readily solves problems of a scale that existing LCS and GP approaches cannot, e.g. the 135-bit MUX problem. (2) The introduction of code fragments in classifier actions in XCSCFA (XCS with Code-Fragment Actions) enables the rich representation of GP which, when coupled with the divide-and-conquer approach of LCS, successfully solves various complex, overlapping and niche-imbalance Boolean problems that are difficult to solve using numeric-action-based XCS. (3) The underlying patterns in a problem domain are encapsulated in classifier rules encoded by a cyclic representation. The developed XCSSMA system produces general solutions of any scale n for a number of important Boolean problems, for the first time in the field of LCS, e.g. parity problems. (4) Optimal solutions for various real-valued problems are evolved by extending the existing real-valued XCSR system with code-fragment actions to XCSRCFA. Exploiting the combined power of GP and LCS techniques, XCSRCFA successfully learns various continuous-action and function approximation problems that are difficult to learn using the base techniques.

This research has shown that LCSs can scale to complex, large-scale problems through reusing learnt knowledge. The messy nature, disassociation of message-to-condition order, masking, feature construction, and reuse of extracted knowledge add abilities to the XCS family of LCSs. The ability to use rich encoding, in antecedent GP-like code fragments or consequent cyclic representations, leads to the evolution of accurate, maximally general and compact solutions in learning various complex Boolean as well as real-valued problems. Effectively exploiting the combined power of GP and LCS techniques, various continuous-action and function approximation problems are solved in a simple and straightforward manner. The analysis of the evolved rules reveals, for the first time in XCS, that no matter how specific or general the initial classifiers are, all the optimal classifiers converge through the mechanism 'be specific then generalize' near the final stages of evolution. It also shows that standard XCS does not use all available information or all available genetic operators to evolve optimal rules, whereas the developed code-fragment-action-based systems effectively use figure and ground information during the training process. This work has created a platform to explore the reuse of learnt functionality, not just terminal knowledge as at present, which is needed to replicate human capabilities.
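To illustrate what a code-fragment condition looks like, the sketch below evaluates a small GP-style Boolean expression tree against an input bit string, in the spirit of XCSCFC. The tree encoding and helper names are illustrative, not the thesis implementation.

```python
from itertools import product

# Prefix-encoded tree: ("AND", l, r), ("OR", l, r), ("NOT", x),
# or an integer leaf naming an input bit.
def eval_fragment(node, bits):
    if isinstance(node, int):
        return bits[node]           # leaf: look up the input bit
    op = node[0]
    if op == "NOT":
        return not eval_fragment(node[1], bits)
    a = eval_fragment(node[1], bits)
    b = eval_fragment(node[2], bits)
    return (a and b) if op == "AND" else (a or b)

# A classifier condition matching inputs where bit0 AND (NOT bit1) holds,
# i.e. a richer condition than a ternary 0/1/# string could express directly.
condition = ("AND", 0, ("NOT", 1))

for bits in product([False, True], repeat=2):
    status = "matches" if eval_fragment(condition, bits) else "no match"
    print(bits, status)
```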


