MIGSOM: A SOM Algorithm for Large Scale Hyperlinked Documents Inspired by Neuronal Migration

Author(s):
Kotaro Nakayama
Yutaka Matsuo


Author(s):
Kanta Tachibana
Takeshi Furuhashi

Kohonen’s Self-Organizing Feature Map (SOM) is used to obtain a topology-preserving mapping from a high-dimensional feature space to a visible space of two or fewer dimensions. The SOM algorithm uses a fixed structure of neurons in visible space and learns a dataset by updating reference points in feature space. The mapping result depends on the fixed mapping parameters, namely the number and visible positions of the neurons, and on the learning parameters, namely the learning rate, the total number of iterations, and the schedule of neighborhood radii. To obtain a satisfactory result, the user usually must try many combinations of parameters. It is wasteful, however, to set up every possible combination and to rerun the algorithm from the beginning each time, because the computation cost of learning is large, especially for a large-scale dataset. These problems arise from fixing the two mapping parameters, i.e., the number and visible positions of neurons. The high computation cost lies mainly in calculating the distances from each sample to all reference points. At the beginning of learning, the reference points should be adjusted globally to preserve the topology well, because they are initially placed far from their optimal positions in feature space, e.g., at random. At this stage, a large number of reference points subdivides feature space into unnecessarily fine Voronoi regions. To avoid this computational waste, it is natural to start learning with a small number of neurons and to increase the number of neurons during learning. We propose a new SOM method that varies the number and visible positions of neurons and is therefore also applicable to visible torus and sphere spaces. We apply our proposal to a spherical visible space. We use central Voronoi tessellation to move the visible positions, for two reasons: to tessellate visible space evenly for easy visualization, and to equalize the number of neighboring neurons and thereby better preserve topology. We demonstrate the effect of generating neurons on reducing computation cost and the effect of moving visible positions on visualization and topology preservation.
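
The sketch below is a minimal illustration of the incremental-growth idea described above, not the authors' method: it uses a 2-D square visible space and greedy splitting of the worst neuron in place of the paper's spherical visible space and central Voronoi tessellation. The function and parameter names (train_growing_som, n_start, n_final) are illustrative assumptions.

import numpy as np

def train_growing_som(data, n_start=4, n_final=24, epochs=30, seed=0):
    rng = np.random.default_rng(seed)
    d = data.shape[1]
    pos = rng.random((n_start, 2))                      # visible positions (paper: sphere + CVT)
    ref = data[rng.choice(len(data), n_start, replace=False)]  # reference points in feature space
    for epoch in range(epochs):
        sigma = max(0.5 * (1.0 - epoch / epochs), 0.2)  # shrinking neighborhood radius
        # costly step: distances from every sample to every reference point
        dists = ((data[:, None, :] - ref[None, :, :]) ** 2).sum(-1)
        bmu = dists.argmin(1)                           # best-matching unit per sample
        # batch update: neighborhood-weighted mean of the samples
        vis_d2 = ((pos[:, None, :] - pos[bmu][None, :, :]) ** 2).sum(-1)
        h = np.exp(-vis_d2 / (2 * sigma ** 2))          # (n_neurons, n_samples)
        ref = (h @ data) / h.sum(1, keepdims=True)
        # grow: split the neuron with the largest summed quantization error
        if len(ref) < n_final:
            q_err = np.bincount(bmu, weights=dists[np.arange(len(data)), bmu],
                                minlength=len(ref))
            k = q_err.argmax()
            ref = np.vstack([ref, ref[k] + 1e-3 * rng.standard_normal(d)])
            pos = np.vstack([pos, pos[k] + 1e-2 * rng.standard_normal(2)])
    bmu = ((data[:, None, :] - ref[None, :, :]) ** 2).sum(-1).argmin(1)
    return pos, ref, bmu

pos, ref, bmu = train_growing_som(np.random.default_rng(1).random((500, 10)))

Because the early epochs run with only a handful of reference points, the dominant distance computation stays cheap during the global-ordering phase and grows only as the map is refined.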


2020
Vol 498 (2)
pp. 2984-2999
Author(s):
Carles Sánchez
Marco Raveri
Alex Alarcon
Gary M Bernstein

ABSTRACT Cosmological analyses of galaxy surveys rely on knowledge of the redshift distribution of their galaxy sample. This is usually derived from a spectroscopic and/or many-band photometric calibrator survey of a small patch of sky. The uncertainties in the redshift distribution of the calibrator sample include a contribution from shot noise, or Poisson sampling errors, but, given the small volume they probe, they are dominated by sample variance introduced by large-scale structures. Redshift uncertainties have been shown to constitute one of the leading contributions to systematic uncertainties in cosmological inferences from weak lensing and galaxy clustering, and hence they must be propagated through the analyses. In this work, we study the effects of sample variance on small-area redshift surveys, from theory to simulations to the COSMOS2015 data set. We present a three-step Dirichlet method of resampling a given survey-based redshift calibration distribution to enable the propagation of both shot noise and sample variance uncertainties. The method can accommodate different levels of prior confidence on different redshift sources. This method can be applied to any calibration sample with known redshifts and phenotypes (i.e. cells in a self-organizing map, or some other way of discretizing photometric space), and provides a simple way of propagating prior redshift uncertainties into cosmological analyses. As a worked example, we apply the full scheme to the COSMOS2015 data set, for which we also present a new, principled SOM algorithm designed to handle noisy photometric data. We make available a catalogue of the resulting resamplings of the COSMOS2015 galaxies.
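
The following is a hedged illustration, not the paper's exact three-step scheme: realizations of a binned n(z) are drawn from a Dirichlet distribution whose concentration parameters come from the calibration counts, optionally rescaled by an effective-count factor to broaden the scatter beyond pure shot noise as a crude stand-in for sample variance. The factor f_eff and the toy counts are assumptions.

import numpy as np

def resample_nz(counts, n_draws=1000, f_eff=1.0, seed=0):
    """counts: calibration galaxies per redshift bin (or per phenotype cell)."""
    rng = np.random.default_rng(seed)
    alpha = f_eff * np.asarray(counts, dtype=float)   # Dirichlet concentration parameters
    return rng.dirichlet(alpha, size=n_draws)         # each row is one normalized n(z) realization

counts = [12, 55, 140, 210, 160, 80, 25, 6]           # toy calibration histogram
draws = resample_nz(counts, n_draws=500, f_eff=0.3)   # f_eff < 1 broadens the scatter
mean_z = draws @ np.linspace(0.1, 1.5, len(counts))   # mean redshift of each realization
print(mean_z.std())                                   # scatter to propagate into the analysis

Feeding each resampled n(z) through the downstream lensing or clustering pipeline then propagates the redshift-calibration uncertainty into the cosmological constraints.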


Polymers
2021
Vol 13 (8)
pp. 1289
Author(s):
José C. C. Santana
Poliana F. Almeida
Nykael Costa
Isabella Vasconcelos
Flavio Guerhardt
...

With the increasing global population, it has become necessary to explore new alternative food sources to meet the growing demand. However, these alternative sources should not only be nutritive and suitable for large-scale production at low cost, but also present good sensory characteristics. This situation has driven some industries to develop new food sources with competitive advantages, which requires continuous innovation by generating and utilising new technologies and tools to create opportunities for new products, services, and industrial processes. Thus, this study aimed to optimise the production of gelatin-based gels from chicken feet by response surface methodology (RSM) and to facilitate their sensorial classification by Kohonen’s self-organising maps (SOM). Herein, a 2² experimental design was developed by varying sugar and powdered collagen contents to obtain grape-flavoured gelatin from chicken feet. The colour, flavour, aroma, and texture attributes of the gelatins were evaluated by consumers according to a hedonic scale of 1–9 points. The least squares method was used to develop models relating the gelatin attributes to the sugar content and collagen mass, and their sensorial qualities were analysed and classified using the SOM algorithm. Results showed that all gelatin samples had an average above six hedonic points, implying that they had good consumer acceptance and can be marketed. Furthermore, gelatin D, with 3.65–3.80% (w/w) powdered collagen and 26.5–28.6% (w/w) sugar, was determined to be the best. Thus, the SOM algorithm proved to be a useful computational tool for comparing sensory samples and identifying the best gelatin product.
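
For illustration only (not the authors' code or data), the response-surface step could look like the sketch below: a least-squares fit of mean hedonic score as a first-order-with-interaction model in sugar content and collagen mass over a 2² design with a centre point. All numerical values are made-up placeholders.

import numpy as np

# design points: (sugar % w/w, collagen % w/w) and mean hedonic score (1-9), all hypothetical
X = np.array([[22.0, 3.2], [22.0, 4.0], [29.0, 3.2], [29.0, 4.0], [25.5, 3.6]])
y = np.array([6.1, 6.4, 6.8, 7.3, 7.0])

s, c = X[:, 0], X[:, 1]
# RSM model: y = b0 + b1*s + b2*c + b3*s*c, fitted by least squares
A = np.column_stack([np.ones_like(s), s, c, s * c])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(sugar, collagen):
    return coef @ [1.0, sugar, collagen, sugar * collagen]

print(predict(27.5, 3.7))   # predicted acceptance at a candidate formulation

The fitted surface can then be searched for the formulation that maximizes predicted acceptance, while the SOM handles the classification of the sensory profiles themselves.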


2012
Vol 2012
pp. 1-14
Author(s):
Tonny J. Oyana
Luke E. K. Achenie
Joon Heo

The objective of this paper is to introduce an efficient algorithm, namely the mathematically improved learning self-organizing map (MIL-SOM) algorithm, which speeds up the self-organizing map (SOM) training process. In the proposed MIL-SOM algorithm, the updating of the weights of Kohonen’s SOM is based on a proportional-integral-derivative (PID) controller. In a typical SOM learning setting, this improvement translates to faster convergence. The basic idea is primarily motivated by the need for algorithms that converge faster and more efficiently than conventional techniques. The MIL-SOM algorithm is tested on four geographic training datasets representing biomedical and disease informatics application domains. Experimental results show that the MIL-SOM algorithm provides a competitive updating procedure, better performance, and good robustness, and that it runs faster than Kohonen’s SOM.
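
The exact MIL-SOM update rule is not reproduced here; the sketch below only illustrates, under stated assumptions, what a PID-flavoured SOM step can look like: the usual correction x − w serves as the proportional term and is augmented with a leaky accumulated-error integral term and an error-difference derivative term. The gains kp, ki, kd and the decay factor are illustrative choices, not values from the paper.

import numpy as np

def pid_som_step(x, W, pos, integ, prev_err, sigma, kp=0.5, ki=0.05, kd=0.1):
    """One online update of the weight matrix W (n_neurons x dim)."""
    err = x - W                                   # per-neuron error: proportional term
    integ = 0.9 * integ + err                     # leaky accumulated error: integral term
    deriv = err - prev_err                        # change in error: derivative term
    bmu = np.argmin((err ** 2).sum(1))            # best-matching unit
    h = np.exp(-((pos - pos[bmu]) ** 2).sum(1) / (2 * sigma ** 2))[:, None]
    W = W + h * (kp * err + ki * integ + kd * deriv)
    return W, integ, err

rng = np.random.default_rng(0)
n_side, dim = 5, 3
W = rng.random((n_side * n_side, dim))            # initial weights
pos = np.stack(np.meshgrid(np.arange(n_side), np.arange(n_side)),
               -1).reshape(-1, 2).astype(float)   # 5 x 5 grid of visible positions
integ = np.zeros_like(W)
prev_err = np.zeros_like(W)
for x in rng.random((200, dim)):                  # toy data stream
    W, integ, prev_err = pid_som_step(x, W, pos, integ, prev_err, sigma=1.0)

The intuition is that the integral and derivative terms react to persistent and rapidly changing quantization errors, respectively, which is one way a PID-style correction can accelerate convergence relative to the purely proportional update of the standard SOM.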


1999
Vol 173
pp. 243-248
Author(s):
D. Kubáček
A. Galád
A. Pravda

The unusual short-period comet 29P/Schwassmann-Wachmann 1 has inspired many observers to try to explain its unpredictable outbursts. In this paper, large-scale structures and features in the inner part of the coma during periods around outbursts are studied. CCD images were taken at Whipple Observatory, Mt. Hopkins, in 1989 and at the Astronomical Observatory, Modra, from 1995 to 1998. Photographic plates of the comet were taken at Harvard College Observatory, Oak Ridge, from 1974 to 1982. The latter were digitized first so that the same image-processing techniques could be applied to optimize the visibility of features in the coma during outbursts. The outbursts and coma structures show various shapes.


1994
Vol 144
pp. 29-33
Author(s):
P. Ambrož

The large-scale coronal structures observed during the sporadically visible solar eclipses were compared with numerically extrapolated field-line structures of the coronal magnetic field. A characteristic relationship between the observed structures of the coronal plasma and the magnetic field line configurations was determined. The long-term evolution of large-scale coronal structures, inferred from photospheric magnetic observations over the course of the 11- and 22-year solar cycles, is described. Some known parameters, such as the source surface radius or the coronal rotation rate, are discussed and interpreted. A relation between the evolution of the large-scale photospheric magnetic field and the rearrangement of the coronal structure is demonstrated.


2000
Vol 179
pp. 205-208
Author(s):
Pavel Ambrož
Alfred Schroll

Precise measurements of the heliographic positions of solar filaments were used to determine the proper motions of solar filaments on a time scale of days. The filaments tend to show a shaking or waving of their external structure and a general movement of the whole filament body, coinciding with the transport of magnetic flux in the photosphere. The velocity scatter of the individual measured points is about one order of magnitude higher than the measurement accuracy.


Author(s):  
Simon Thomas

Trends in the technology development of very large scale integrated circuits (VLSI) have been in the direction of higher component density with smaller dimensions. Device dimensions have been scaled down not only laterally but also in depth. Such efforts in miniaturization bring with them new developments in materials and processing. Successful implementation of these efforts depends, to a large extent, on a proper understanding of material properties, process technologies, and reliability issues, gained through adequate analytical studies. Analytical instrumentation technology has, fortunately, kept pace with the basic requirements of devices with lateral dimensions in the micron/submicron range and depths on the order of nanometers. Often, newer analytical techniques have emerged, or the more conventional techniques have been adapted, to meet the more stringent requirements. As such, a variety of analytical techniques are available today to aid an analyst in VLSI process evaluation. Generally, such analytical efforts are divided into the characterization of materials, the evaluation of processing steps, and the analysis of failures.

