Recursive Learning of Genetic Algorithms with Task Decomposition and Varied Rule Set

2011 ◽  
Vol 2 (4) ◽  
pp. 1-24 ◽  
Author(s):  
Lei Fang ◽  
Sheng-Uei Guan ◽  
Haofan Zhang

Rule-based Genetic Algorithms (GAs) have been used in the application of pattern classification (Corcoran & Sen, 1994), but conventional GAs have weaknesses: the time spent on learning is long, and the classification accuracy achieved by a GA is not satisfactory. These drawbacks are due to undesirable features embedded in conventional GAs. First, the number of rules within the chromosome of a GA classifier is usually set and fixed before training and is not problem-dependent. Second, conventional approaches train on the data in batch without considering whether decomposition would help solve the problem. Third, when facing large-scale real-world problems, GAs cannot utilise resources efficiently, leading to premature convergence. Based on these observations, this paper develops a novel algorithmic framework featuring automatic domain and task decomposition and problem-dependent chromosome length (rule number) selection to resolve these undesirable features. The proposed Recursive Learning of Genetic Algorithm with Task Decomposition and Varied Rule Set (RLGA) method is recursive: it trains and evolves a team of learners, using the concept of local fitness to decompose the original problem into sub-problems. According to the experimental results, RLGA performs better than GAs and other related solutions in terms of training duration and generalization accuracy.
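The abstract above outlines the key RLGA ingredients: a variable-length rule set per learner, a local-fitness criterion, and recursive hand-off of the unsolved remainder to a new learner. The Python sketch below illustrates that control flow only; the rule encoding, GA operators, and local-fitness test are simplified placeholders rather than the authors' implementation.

```python
# Minimal sketch of the recursive-decomposition idea described in the abstract:
# a GA-style learner with a variable-sized rule set is trained on the data,
# the patterns it already classifies correctly (its "local fitness" region)
# are set aside, and the remaining patterns are passed recursively to a new
# learner. NOT the authors' RLGA; encoding and operators are placeholders.
import random

def train_rule_set(patterns, labels, max_rules=10, generations=50):
    """Evolve a small rule set (hyper-rectangles with a class label) by GA."""
    dim = len(patterns[0])
    classes = list(set(labels))

    def random_rule():
        centre = [random.random() for _ in range(dim)]   # assumes features in [0, 1]
        width = [random.uniform(0.05, 0.5) for _ in range(dim)]
        return (centre, width, random.choice(classes))

    def covers(rule, x):
        centre, width, _ = rule
        return all(abs(x[d] - centre[d]) <= width[d] for d in range(dim))

    def classify(rules, x):
        for rule in rules:
            if covers(rule, x):
                return rule[2]
        return None  # unclassified: left for the next learner in the team

    def fitness(rules):
        correct = sum(1 for x, y in zip(patterns, labels) if classify(rules, x) == y)
        return correct / len(patterns)

    # population of candidate rule sets; the rule count varies per individual
    population = [[random_rule() for _ in range(random.randint(1, max_rules))]
                  for _ in range(30)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[:10]
        children = []
        for _ in range(20):
            parent = random.choice(survivors)
            child = [r for r in parent if random.random() > 0.1] or [random_rule()]
            if random.random() < 0.5 and len(child) < max_rules:
                child.append(random_rule())   # let the rule set grow
            children.append(child)
        population = survivors + children
    return max(population, key=fitness), classify

def rlga(patterns, labels, min_size=5):
    """Recursively train a team of learners; each keeps the region it solves."""
    if len(patterns) <= min_size:
        return []
    rules, classify = train_rule_set(patterns, labels)
    solved = {i for i, (x, y) in enumerate(zip(patterns, labels))
              if classify(rules, x) == y}
    if not solved:                      # no progress: stop recursing
        return [(rules, classify)]
    rest_x = [patterns[i] for i in range(len(patterns)) if i not in solved]
    rest_y = [labels[i] for i in range(len(labels)) if i not in solved]
    return [(rules, classify)] + rlga(rest_x, rest_y, min_size)
```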


2018 ◽  
Vol 16 (1) ◽  
pp. 67-76
Author(s):  
Disyacitta Neolia Firdana ◽  
Trimurtini Trimurtini

This research aimed to determine the feasibility and effectiveness of big book media for learning equivalent fractions in the fourth grade. The research method is Research and Development (R&D). The study was conducted in the fourth grade of SDN Karanganyar 02 Kota Semarang. Data sources were media validation, material validation, learning outcomes, and teacher and student responses to the developed media. The design was pre-experimental, with a one-group pretest-posttest design. The big book, developed on the basis of student and teacher needs, consists of equivalent-fraction material, student learning activity sheets with rectangle and circle shape pictures, and questions about equivalent fractions. The big book scored 3.75 on media validity (very good criteria) and 3 from material experts (good criteria). In the large-scale trial, the students' posttest results showed a learning-outcome completeness of 82.14%. The N-gain calculation yielded 0.55, which falls in the "medium" criterion. The t-test result of 9.6320 > 2.0484 means that the average posttest outcome is better than the average pretest outcome. Based on these data, this study produced big book media that is feasible and effective for learning equivalent fractions in the fourth grade of elementary school.
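The two statistics quoted above, the N-gain of 0.55 and the t-test of 9.6320 > 2.0484, follow standard formulas. The snippet below shows how they are typically computed (Hake's normalized gain and a paired t-test) on hypothetical score vectors, not the study's data.

```python
# Illustration (not the study's data) of the two statistics in the abstract:
# Hake's normalized gain (N-gain) and a paired t-test on pre/post scores.
from scipy import stats

def normalized_gain(pre_mean, post_mean, max_score=100.0):
    """Hake's N-gain: g = (post - pre) / (max - pre); 0.3 <= g < 0.7 is 'medium'."""
    return (post_mean - pre_mean) / (max_score - pre_mean)

pretest  = [55, 60, 48, 62, 70, 58, 65, 52]   # hypothetical scores
posttest = [80, 82, 75, 85, 88, 78, 90, 74]

g = normalized_gain(sum(pretest) / len(pretest), sum(posttest) / len(posttest))
t_stat, p_value = stats.ttest_rel(posttest, pretest)   # paired (dependent) t-test
print(f"N-gain = {g:.2f}, t = {t_stat:.4f}, p = {p_value:.4f}")
```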


2021 ◽  
Vol 9 (3) ◽  
pp. 264
Author(s):  
Shanti Bhushan ◽  
Oumnia El Fajri ◽  
Graham Hubbard ◽  
Bradley Chambers ◽  
Christopher Kees

This study evaluates the capability of Navier–Stokes solvers in predicting forward and backward plunging breaking, including an assessment of the effect of grid resolution, turbulence model, and VoF and CLSVoF interface models on the predictions. For this purpose, 2D simulations are performed for four test cases: dam break, solitary wave run-up on a slope, flow over a submerged bump, and solitary wave over a submerged rectangular obstacle. Plunging wave breaking involves a high wave crest, plunger formation, and splash-up, followed by a second plunger and chaotic water motions. Coarser grids reasonably predict the wave-breaking features, but finer grids are required for accurate prediction of the splash-up events. However, instabilities are triggered at the air–water interface (primarily in the air flow) on very fine grids, which induces surface peel-off or kinks and roll-up of the plunger tips. Reynolds-averaged Navier–Stokes (RANS) turbulence models result in high eddy viscosity in the air–water region, which dissipates the fluid momentum and adversely affects the predictions. Both the VoF and CLSVoF methods predict the large-scale plunging-breaking characteristics well; however, they differ in the prediction of the finer details. The CLSVoF solver predicts the splash-up event and the secondary plunger better than the VoF solver; however, the latter predicts the plunger shape better than the former for the solitary-wave run-up on a slope case.


2012 ◽  
Vol 8 (S291) ◽  
pp. 375-377 ◽  
Author(s):  
Gregory Desvignes ◽  
Ismaël Cognard ◽  
David Champion ◽  
Patrick Lazarus ◽  
Patrice Lespagnol ◽  
...  

Abstract. We present an ongoing survey with the Nançay Radio Telescope at L-band. The targeted area is 74° ≲ l < 150° and 3.5° < |b| < 5°. This survey is characterized by a long integration time (18 min), large bandwidth (512 MHz) and high time and frequency resolution (64 μs and 0.5 MHz), giving a nominal sensitivity limit of 0.055 mJy for long-period pulsars. This is about two times better than the mid-latitude HTRU survey, and the survey is designed to be complementary to current large-scale surveys. It will be more sensitive to transients (RRATs, intermittent pulsars), distant and faint millisecond pulsars, and scintillating sources (or any other kind of radio-faint source) than all previous short-integration surveys.
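As a rough cross-check, a sensitivity limit of this order follows from the standard pulsar radiometer equation when plausible receiver parameters are assumed. In the sketch below only the 18-min integration time and 512 MHz bandwidth come from the abstract; the system temperature, gain, S/N threshold and duty cycle are illustrative assumptions, not figures from the survey paper.

```python
# Back-of-the-envelope check of the quoted sensitivity using the standard
# pulsar radiometer equation. Tsys, gain, S/N threshold and duty cycle below
# are assumed illustrative values; integration time and bandwidth are from
# the abstract.
import math

def smin_mJy(snr_min, t_sys_K, gain_K_per_Jy, n_pol, t_int_s, bw_Hz, duty_cycle):
    """Minimum detectable flux density (mJy) for a periodic pulsed signal."""
    radiometer = t_sys_K / (gain_K_per_Jy * math.sqrt(n_pol * t_int_s * bw_Hz))
    pulse_factor = math.sqrt(duty_cycle / (1.0 - duty_cycle))
    return 1e3 * snr_min * radiometer * pulse_factor

print(smin_mJy(snr_min=10, t_sys_K=35, gain_K_per_Jy=1.4, n_pol=2,
               t_int_s=18 * 60, bw_Hz=512e6, duty_cycle=0.05))
# ~0.055 mJy with these illustrative numbers
```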


2018 ◽  
Vol 5 (3) ◽  
pp. 172265 ◽  
Author(s):  
Alexis R. Hernández ◽  
Carlos Gracia-Lázaro ◽  
Edgardo Brigatti ◽  
Yamir Moreno

We introduce a general framework for exploring the problem of selecting a committee of representatives, with the aim of studying a networked voting rule based on a decentralized large-scale platform that can assure strong accountability of the elected. The results of our simulations suggest that this algorithm-based approach is able to achieve high representativeness with relatively small committees, performing even better than a classical voting rule based on a closed list of candidates. We show that a general relation between committee size and representativeness exists in the form of an inverse square root law, and that the normalized committee size approximately scales with the inverse of the community size, allowing scalability to very large populations. These findings are not strongly influenced by the different networks used to describe the individuals' interactions, except for the presence of a few individuals with very high connectivity, which can have a marginally negative effect on the committee selection process.
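One statistical intuition behind an inverse square root law is that the sampling error of a committee of size k shrinks as 1/sqrt(k). The toy simulation below illustrates only that intuition, using uniformly random committees; it is not the networked voting rule studied in the paper.

```python
# Toy illustration (not the paper's model): a randomly sampled committee of
# size k misrepresents the population's opinion fractions with an error that
# shrinks roughly as 1/sqrt(k).
import random
import statistics

def sampling_error(population_opinions, k, trials=2000):
    """Mean absolute gap between committee and population support for option 1."""
    p = sum(population_opinions) / len(population_opinions)
    gaps = []
    for _ in range(trials):
        committee = random.sample(population_opinions, k)
        gaps.append(abs(sum(committee) / k - p))
    return statistics.mean(gaps)

population = [1 if random.random() < 0.6 else 0 for _ in range(10_000)]
for k in (10, 40, 160, 640):
    print(k, round(sampling_error(population, k), 4))
# Each fourfold increase in k roughly halves the gap, i.e. error ~ 1/sqrt(k).
```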


Energies ◽  
2021 ◽  
Vol 14 (21) ◽  
pp. 7422
Author(s):  
Min-Kyu Son

Upscaling of photoelectrodes for a practical photoelectrochemical (PEC) water-splitting system is still challenging because the PEC performance of large-scale photoelectrodes is significantly lower than that of lab-scale photoelectrodes. To overcome this challenge, sputtered gold (Au) and copper (Cu) grid lines were introduced in this work to improve the PEC performance of a large-scale cuprous oxide (Cu2O) photocathode. Cu grid lines were shown to be more effective than Au grid lines at improving the PEC performance of the large-scale Cu2O photocathode because of their higher intrinsic conductivity and better grid-line quality. As a result, the PEC performance of a 25-cm2 Cu2O photocathode with Cu grid lines was almost double that of one without grid lines, owing to the improved charge transport across the large-area substrate provided by the Cu grid lines. Finally, a 50-cm2 Cu2O photocathode with Cu grid lines was tested outdoors under natural sunlight. This is the first outdoor PEC demonstration of a large-scale Cu2O photocathode with Cu grid lines, and it gives insight into the development of efficient upscaled PEC photoelectrodes.


2016 ◽  
Author(s):  
Dominik Paprotny ◽  
Oswaldo Morales Nápoles

Abstract. Large-scale hydrological modelling of flood hazard requires adequate extreme-discharge data. Models based on physics are applied alongside those utilizing only statistical analysis; the former require enormous computation power, while the latter are often limited in accuracy and spatial coverage. In this paper we introduce an alternative statistical approach based on Bayesian Networks (BN), a graphical model for dependent random variables. We use a non-parametric BN to describe the joint distribution of extreme discharges in European rivers and variables describing the geographical characteristics of their catchments. Data on annual maxima of daily discharges from more than 1800 river gauge stations were collected, together with information on the terrain, land use and climate of the catchments that drain to those locations. The (conditional) correlations between the variables are modelled through copulas, with the dependency structure defined in the network. The results show that, using this method, mean annual maxima and return periods of discharges can be estimated with an accuracy similar to existing studies using physical models for Europe, and better than a comparable global statistical method. Performance of the model varies slightly between regions of Europe, but is consistent between different time periods, and is not affected by a split-sample validation. The BN was applied to a large domain covering rivers of all sizes in the continent, for both present and future climate, showing large variation in the influence of climate change on river discharges, as well as large differences between emission scenarios. The method could be used to provide quick estimates of extreme discharges at any location as input for hydraulic modelling.
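To make the copula idea concrete, the sketch below links a single catchment descriptor to annual-maximum discharge through a Gaussian copula on synthetic data. It is a two-variable illustration of rank-based dependence modelling, not the authors' non-parametric Bayesian Network or their dataset.

```python
# Two-variable sketch of copula-based dependence modelling: catchment area and
# annual-max discharge are linked in rank space by a Gaussian copula, while
# each margin keeps its own empirical distribution. Synthetic placeholder data;
# NOT the authors' non-parametric BN.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# synthetic "observations": catchment area (km^2) and annual-max discharge (m^3/s)
area = rng.lognormal(mean=6.0, sigma=1.0, size=500)
discharge = 0.5 * area ** 0.8 * rng.lognormal(mean=0.0, sigma=0.4, size=500)

def to_normal_scores(x):
    """Transform a margin to standard-normal scores via its empirical ranks."""
    ranks = stats.rankdata(x) / (len(x) + 1)      # empirical CDF in (0, 1)
    return stats.norm.ppf(ranks)

z_area, z_q = to_normal_scores(area), to_normal_scores(discharge)

# the correlation of the normal scores defines the Gaussian copula
rho = np.corrcoef(z_area, z_q)[0, 1]

# conditional estimate: given a new catchment area, predict the median discharge
# by conditioning in normal-score space and mapping back through the empirical
# discharge distribution
new_area = 1000.0
z_new = stats.norm.ppf(stats.percentileofscore(area, new_area) / 100.0)
z_cond_median = rho * z_new                        # median of the conditional normal
q_median = np.quantile(discharge, stats.norm.cdf(z_cond_median))
print(f"copula correlation rho = {rho:.2f}, "
      f"median Q given area = {new_area:g} km^2 is about {q_median:.1f} m^3/s")
```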


2006 ◽  
Vol 3 (4) ◽  
pp. 777-803
Author(s):  
W. Connolley ◽  
A. Keen ◽  
A. McLaren

Abstract. We present results of an implementation of the Elastic Viscous Plastic (EVP) sea ice dynamics scheme in the Hadley Centre coupled ocean-atmosphere climate model HadCM3. Although the large-scale simulation of sea ice in HadCM3 is quite good, the lack of a full dynamical model leads to errors in the detailed representation of sea ice and limits our confidence in its future predictions. We find that introducing the EVP scheme initially results in a worse simulation of the sea ice. This paper documents the modifications made to improve the simulation, resulting in a sea ice simulation that is better overall than the original HadCM3 scheme. Importantly, it is more physically based and provides a more solid foundation for future improvement. We then consider the interannual variability of the sea ice in the new model and demonstrate improvements over the HadCM3 simulation.

