Symmetry ◽  
2020 ◽  
Vol 12 (6) ◽  
pp. 881
Author(s):  
Catalina-Lucia Cocianu ◽  
Alexandru Daniel Stan ◽  
Mihai Avramescu

The main aim of the reported work is to solve the registration problem for recognition purposes. We introduce two new evolutionary algorithms (EAs), each consisting of a population-based search method followed by, or combined with, a local search scheme. We used a variant of the Firefly algorithm to conduct the population-based search, while the local exploration was implemented by the Two-Membered Evolutionary Strategy (2M-ES). Both algorithms use a fitness function based on mutual information (MI) to direct the exploration toward an appropriate candidate solution. A good similarity measure is one that enables accurate prediction, and with symmetric MI we tie the similarity between two objects A and B directly to how well A predicts B, and vice versa. Since the search landscape of normalized mutual information proved more amenable to evolutionary computation algorithms than simple MI, we use normalized mutual information (NMI) defined as symmetric uncertainty. The proposed algorithms are tested against the well-known Principal Axes Transformation technique (PAT), a standard evolutionary strategy, and a version of the Firefly algorithm developed to align images. The accuracy and efficiency of the proposed algorithms are experimentally confirmed by our tests, with both methods proving well suited to image registration.
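The symmetric-uncertainty form of NMI used here, NMI(A, B) = 2·I(A; B) / (H(A) + H(B)), can be sketched over discrete intensity sequences as follows (a minimal illustration, not the authors' implementation):

```python
import math
from collections import Counter

def symmetric_uncertainty(a, b):
    """Normalized mutual information as symmetric uncertainty:
    NMI(A, B) = 2 * I(A; B) / (H(A) + H(B)), bounded in [0, 1]."""
    n = len(a)
    pa, pb, pab = Counter(a), Counter(b), Counter(zip(a, b))
    h_a = -sum((c / n) * math.log2(c / n) for c in pa.values())
    h_b = -sum((c / n) * math.log2(c / n) for c in pb.values())
    h_ab = -sum((c / n) * math.log2(c / n) for c in pab.values())
    mi = h_a + h_b - h_ab          # I(A; B) = H(A) + H(B) - H(A, B)
    if h_a + h_b == 0:
        return 1.0                 # both signals constant
    return 2.0 * mi / (h_a + h_b)

# Identical intensity sequences are perfectly mutually predictive:
print(symmetric_uncertainty([0, 1, 2, 0], [0, 1, 2, 0]))  # 1.0
```

In a registration setting, `a` and `b` would be the (quantized) intensities of corresponding pixels under a candidate transformation, and the evolutionary search maximizes this score.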


2013 ◽  
Vol 31 (7) ◽  
pp. 938-943 ◽  
Author(s):  
Silva Saarinen ◽  
Eero Pukkala ◽  
Pia Vahteristo ◽  
Markus J. Mäkinen ◽  
Kaarle Franssila ◽  
...  

Purpose
Nodular lymphocyte-predominant Hodgkin lymphoma (NLPHL) is one of the two established Hodgkin lymphoma (HL) subtypes. The risk factors of NLPHL are largely unknown. In general, genetic factors are known to have a modest effect on the risk of HL; however, familial risk in NLPHL has not been previously examined. We conducted a population-based study by using the Finnish registries and evaluated the familial risk in NLPHL.
Patients and Methods
We launched a population-based search to identify patients with NLPHL and their relatives by examining the records of the Finnish Cancer Registry, established in 1953, and the official Finnish population registries. We collected a data set of 692 patients with NLPHL, identified their 4,280 first-degree relatives, and calculated the registry-based standardized incidence ratios (SIRs) for different cancers in the first-degree relatives. In addition, the primary tumor biopsies of HL-affected relatives were collected when possible, the HL diagnoses were re-reviewed by a hematopathologist, and the SIR for NLPHL was calculated on the basis of confirmed NLPHL diagnoses.
Results
On the basis of confirmed NLPHL diagnoses, the SIR for NLPHL was 19 (95% CI, 8.8 to 36) in the first-degree relatives. The risk was most prominent in female relatives of young patients. The registry-based SIR for classical HL was 5.3 (95% CI, 3.0 to 8.8), and for non-Hodgkin lymphoma, it was 1.9 (95% CI, 1.3 to 2.6).
Conclusion
Our results indicate an unexpectedly high familial component in the development of NLPHL. Research is warranted to identify the putative genetic and environmental factors underlying this finding and to develop strategies for better management of patients with NLPHL and their relatives.
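A standardized incidence ratio is the observed case count divided by the count expected from population rates. The sketch below (with made-up counts, not the study's data) adds an approximate 95% Poisson confidence interval via Byar's approximation:

```python
import math

def sir_with_ci(observed, expected, z=1.96):
    """Standardized incidence ratio O/E with an approximate 95% CI
    (Byar's approximation to the exact Poisson limits)."""
    sir = observed / expected
    lo = observed * (1 - 1 / (9 * observed)
                     - z / (3 * math.sqrt(observed))) ** 3 / expected
    hi = (observed + 1) * (1 - 1 / (9 * (observed + 1))
                           + z / (3 * math.sqrt(observed + 1))) ** 3 / expected
    return sir, lo, hi

# Illustrative counts only: 8 observed cases vs 0.42 expected
sir, lo, hi = sir_with_ci(8, 0.42)
print(round(sir, 1), round(lo, 1), round(hi, 1))
```

With small observed counts, as in rare-lymphoma family studies, the interval is wide even when the point estimate is large, which is why the abstract reports the CI alongside each SIR.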


2021 ◽  
Author(s):  
Benjamin Evans

<p>Ensemble learning is one of the most powerful extensions for improving upon individual machine learning models. Rather than using a single model, several models are trained and their predictions combined to make a more informed decision. Such combinations will ideally overcome the shortcomings of any individual member of the ensemble. Most machine learning competition winners feature an ensemble of some sort, and there is also sound theoretical support for the performance of certain ensembling schemes. The benefits of ensembling are clear in both theory and practice.

Despite this strong performance, ensemble learning is not a trivial task. One of the main difficulties is designing appropriate ensembles. For example, how large should an ensemble be? Which members should be included in an ensemble? How should these members be weighted? Our first contribution addresses these concerns using a strongly-typed population-based search (genetic programming) to construct well-performing ensembles, where the entire ensemble (members, hyperparameters, structure) is learnt automatically. The proposed method was found, in general, to be significantly better than all base members and the commonly used comparison methods trialled.

With automatically designed ensembles, there is a range of applications, such as competition entries, forecasting, and state-of-the-art predictions. However, these applications often also require additional preprocessing of the input data. The ensemble above considers only the original training data, yet many machine learning scenarios require a pipeline (for example, performing feature selection before classification). For the second contribution, a novel automated machine learning method based on ensemble learning is proposed. This method uses a random population-based search over appropriate tree structures and is therefore embarrassingly parallel, an important consideration for automated machine learning. The proposed method achieves equivalent or improved results over the current state-of-the-art methods and does so in a fraction of the time (six times as fast).

Finally, while complex ensembles offer great performance, one large limitation is their interpretability. For example, why does a forest of 500 trees predict a particular class for a given instance? Several methods have been proposed to explain the behaviour of complex models (such as ensembles). However, these approaches tend to suffer from at least one of the following limitations: an overly complex representation, only local applicability, restriction to particular feature types (i.e., categorical only), or restriction to particular algorithms. For our third contribution, a novel model-agnostic method for interpreting complex black-box machine learning models is proposed. The method is based on strongly-typed genetic programming and overcomes the aforementioned limitations. Multi-objective optimisation is used to generate a Pareto frontier of simple, explainable models which approximate the behaviour of much more complex methods. We found the resulting representations far simpler than those of existing approaches (an important consideration for interpretability) while providing equivalent reconstruction performance.

Overall, this thesis addresses two of the major limitations of existing ensemble learning: the complex construction process and the black-box models that are often difficult to interpret. A novel application of ensemble learning in the field of automated machine learning is also proposed. All three methods have shown at least equivalent or improved performance compared with existing methods.</p>
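The member-weighting question raised above can be illustrated with the simplest possible scheme, weighted majority voting (a toy sketch with illustrative names; the thesis learns far richer ensemble structure via genetic programming):

```python
# A minimal weighted-voting ensemble: each member votes for a class label
# and votes are weighted, e.g. by each member's validation accuracy.

def weighted_vote(predictions, weights):
    """predictions: one class label per ensemble member;
    weights: one non-negative weight per member."""
    scores = {}
    for label, w in zip(predictions, weights):
        scores[label] = scores.get(label, 0.0) + w
    return max(scores, key=scores.get)

# Three members; the two weaker ones agree and outvote the strong one:
print(weighted_vote(["cat", "dog", "dog"], [0.9, 0.6, 0.6]))  # dog
```

Even in this toy form the design questions are visible: the outcome depends on ensemble size, membership, and the weight vector, which is exactly the search space the first contribution explores automatically.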


2014 ◽  
Vol 14 (4) ◽  
pp. 257-267 ◽  
Author(s):  
Pandu R. Vundavilli ◽  
B. Surekha ◽  
Mahesh B. Parappagoudar

The resin bonded sand system is an emerging area that can be used to produce dimensionally accurate castings with good surface finish. In the present paper, experimental investigations are carried out on resin bonded cores to develop a non-linear mathematical model using the concept of design of experiments. Subsequently, an artificial neural network (ANN) with four neurons each on the input and output layers has been used to model the resin bonded sand system. The process parameters, namely the percentage of resin, percentage of hardener, number of strokes, and curing time, are considered as inputs, and the mechanical properties of the core, namely compression strength, tensile strength, shear strength, and permeability, are treated as the outputs of the network. The performance of the developed ANN depends on several factors of the network, such as the type of transfer functions, the coefficients of the transfer functions, the number of neurons in the hidden layer, and the connecting weights between layers. In the present study, two population-based search and optimization algorithms, namely the genetic algorithm (GA) and artificial bee colony (ABC), are used to optimize the parameters of the ANN. Both GA- and ABC-trained neural networks (that is, GA-NN and ABC-NN) are found to be in good agreement with the experimental data and can be used effectively to model the resin bonded core sand system.
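The idea of tuning a network's weights with a population-based search rather than gradient descent can be sketched as follows (a toy one-input network fitting y = x², not the paper's four-input/four-output model; all settings here are illustrative):

```python
import math
import random

random.seed(0)

HIDDEN = 3
N_W = 2 * HIDDEN + HIDDEN + 1   # 1 input -> HIDDEN -> 1 output, with biases

def predict(w, x):
    """Tiny tanh network: weights and biases packed into one flat vector."""
    h = [math.tanh(w[i] * x + w[HIDDEN + i]) for i in range(HIDDEN)]
    return sum(w[2 * HIDDEN + i] * h[i] for i in range(HIDDEN)) + w[-1]

def mse(w, data):
    return sum((predict(w, x) - y) ** 2 for x, y in data) / len(data)

data = [(x / 10, (x / 10) ** 2) for x in range(-10, 11)]  # target: y = x^2

# Simple GA: elitism + uniform crossover + Gaussian mutation on one gene.
pop = [[random.uniform(-1, 1) for _ in range(N_W)] for _ in range(40)]
for gen in range(200):
    pop.sort(key=lambda w: mse(w, data))
    elite = pop[:10]
    children = []
    while len(children) < 30:
        p, q = random.sample(elite, 2)
        child = [(a if random.random() < 0.5 else b) for a, b in zip(p, q)]
        child[random.randrange(N_W)] += random.gauss(0, 0.2)
        children.append(child)
    pop = elite + children

best_err = mse(min(pop, key=lambda w: mse(w, data)), data)
print(best_err)
```

In the paper's setting the chromosome would also encode transfer-function types and hidden-layer size, and ABC would replace the crossover/mutation loop with its employed/onlooker/scout-bee updates.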


Author(s):  
Cecília Reis ◽  
J. A. Tenreiro Machado

This paper is devoted to the synthesis of combinational logic circuits through computational intelligence or, more precisely, using evolutionary computation techniques. Two evolutionary algorithms are studied, the Genetic Algorithm and the Memetic Algorithm (GA, MA), along with one swarm intelligence algorithm, Particle Swarm Optimization (PSO). GAs are optimization and search techniques based on the principles of genetics and natural selection. MAs are evolutionary algorithms that include a stage of individual optimization as part of their search strategy, the individual optimization taking the form of a local search. PSO is a population-based search algorithm that starts with a population of random solutions called particles. This paper presents the results of digital circuit design using the three algorithms above. The results show the statistical characteristics of these algorithms with respect to the number of generations required to reach a solution. The article also analyzes a new fitness function that includes an error discontinuity measure, which was shown to significantly improve the performance of the algorithm.
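A typical fitness function for this kind of circuit synthesis counts how many truth-table rows a candidate circuit reproduces. The sketch below uses an illustrative gate-list encoding (not the authors' exact representation) and evaluates a hand-built XOR:

```python
# Fitness of a candidate circuit = number of truth-table rows it gets right.
TARGET = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}   # XOR truth table

def evaluate(circuit, a, b):
    """circuit: list of (gate, in1, in2) tuples; each gate appends its
    output to the signal list, which starts as [a, b]."""
    sig = [a, b]
    for gate, i, j in circuit:
        if gate == "AND":
            sig.append(sig[i] & sig[j])
        elif gate == "OR":
            sig.append(sig[i] | sig[j])
        elif gate == "NOT":
            sig.append(1 - sig[i])
    return sig[-1]

def fitness(circuit):
    return sum(evaluate(circuit, a, b) == y for (a, b), y in TARGET.items())

# XOR as (a OR b) AND NOT(a AND b):
xor = [("OR", 0, 1), ("AND", 0, 1), ("NOT", 3, 3), ("AND", 2, 4)]
print(fitness(xor))  # 4: all four truth-table rows correct
```

A GA, MA, or PSO would evolve the gate list toward maximum fitness; the error-discontinuity idea adds a term that distinguishes candidates with the same row count, smoothing the otherwise flat fitness landscape.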

