Weak Coverage Area Detection Algorithms for Intelligent Networks Based on Large Data

Author(s):  
Ying-jian Kang ◽  
Lei Ma ◽  
Ge-cui Gong


2019 ◽
Vol 11 (19) ◽  
pp. 5508
Author(s):  
Huang ◽  
Kelly ◽  
Lu ◽  
Lv ◽  
Shi ◽  
...  

With China’s commitment to peak its emissions by 2030, sectoral emissions are under the spotlight due to the rollout of the national emission trading scheme (ETS). However, current sector policies focus either on the production side or the consumption side, while the majority of sectors along the transmission chain are overlooked. This research combines input–output modelling and network analysis to track the embodied carbon emissions among thirty sectors of thirty provinces in China. Based on this large, high-resolution network, a two-step network reduction algorithm is used to extract the backbone of the network. In addition, network centrality metrics and community detection algorithms are used to assess each individual sector’s role and to reveal the carbon communities in which sectors have intensive emission links. The results suggest that sectors with high out-degree, in-degree or betweenness can act as leverage points for carbon emissions mitigation. In addition to the electricity sector, which is already included in the national ETS, the study found that the metallurgy and construction sectors should be prioritized for emissions reduction at national and local levels. However, the hotspots differ across provinces, so province-specific targeted policies should be formulated. Moreover, there are nineteen carbon communities in China with different features, which provides direction for provincial governments’ external collaboration to achieve synergistic effects.
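As a hedged illustration of the network-analysis step described in this abstract, the sketch below computes weighted out-degree and in-degree, betweenness, and modularity-based communities for a toy inter-sectoral emission-flow graph. The sector names and edge weights are hypothetical, and networkx's greedy modularity routine stands in for whichever community detection algorithm the authors actually used.

```python
# Hypothetical inter-sectoral emission-flow network; edge weight = embodied CO2 flow.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

flows = [
    ("Electricity_Hebei", "Metallurgy_Hebei", 120.0),
    ("Metallurgy_Hebei", "Construction_Beijing", 95.0),
    ("Electricity_Shanxi", "Construction_Beijing", 60.0),
    ("Mining_Shanxi", "Electricity_Shanxi", 80.0),
]
G = nx.DiGraph()
G.add_weighted_edges_from(flows)

# Sector roles: high out-degree = major emission suppliers, high in-degree = major
# demanders, high betweenness = transmission hubs along supply chains.
out_deg = dict(G.out_degree(weight="weight"))
in_deg = dict(G.in_degree(weight="weight"))
# Unweighted betweenness for simplicity; flow weights would first have to be
# converted into distances to be used here.
betweenness = nx.betweenness_centrality(G)

# Carbon communities: groups of sectors with intensive mutual emission links,
# found by modularity maximisation on the undirected projection.
communities = greedy_modularity_communities(G.to_undirected(), weight="weight")

print("out-degree:", out_deg)
print("in-degree:", in_deg)
print("betweenness:", betweenness)
for i, community in enumerate(communities, 1):
    print(f"community {i}:", sorted(community))
```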


Molecules ◽  
2019 ◽  
Vol 24 (20) ◽  
pp. 3757 ◽  
Author(s):  
Joshua Morimoto ◽  
Marta Cialiè Rosso ◽  
Nicole Kfoury ◽  
Carlo Bicchi ◽  
Chiara Cordero ◽  
...  

Identifying all analytes in a natural product is a daunting challenge, even when the sample is fractionated by volatility. In this study, comprehensive two-dimensional gas chromatography/mass spectrometry (GC×GC-MS) was used to investigate the relative distribution of volatiles in green pu-erh tea from leaves collected at two different elevations (1162 m and 1651 m). A total of 317 compounds in the high-elevation and 280 in the low-elevation samples were detected, many of them known to have sensory and health-beneficial properties. The samples were evaluated with two different software packages. The first, GC Image, used feature-based detection algorithms to identify spectral patterns and peak regions, leading to tentative identification of 107 compounds. The software produced a composite map illustrating differences between the samples. The second, Ion Analytics, employed spectral deconvolution algorithms to detect target compounds, then subtracted their spectra from the total ion current chromatogram to reveal untargeted compounds. Compound identities were more easily assigned, since chromatogram complexity was reduced. Of the 317 compounds, for example, 34% were positively identified and 42% were tentatively identified, leaving 24% as unknowns. This study demonstrated that the targeted/untargeted approach taken simplifies the analysis of large data sets, leading to a better understanding of the chemistry behind biological phenomena.
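The target-subtraction idea can be illustrated with a small, hedged sketch: modelled target-compound peaks are removed from a synthetic total ion chromatogram so that residual, untargeted peaks stand out. All signals below are simulated placeholders, not Ion Analytics output.

```python
# Synthetic TIC with two "target" peaks (already identified) and one unknown peak.
import numpy as np

rng = np.random.default_rng(0)
n_scans = 1000
tic = rng.normal(100.0, 5.0, n_scans)            # baseline plus noise

def gaussian_peak(center, width, height, n=n_scans):
    x = np.arange(n)
    return height * np.exp(-0.5 * ((x - center) / width) ** 2)

target_peaks = [gaussian_peak(200, 8, 500.0), gaussian_peak(620, 10, 300.0)]
unknown_peak = gaussian_peak(410, 6, 150.0)
tic += sum(target_peaks) + unknown_peak

# Subtract the modelled target contributions from the TIC; what remains above the
# baseline corresponds to untargeted (potentially unknown) compounds.
residual = tic - sum(target_peaks)
threshold = residual.mean() + 3 * residual.std()
untargeted_scans = np.where(residual > threshold)[0]
print("scans flagged as untargeted peaks:", untargeted_scans)
```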


Author(s):  
John A. Hunt

Spectrum-imaging is a useful technique for comparing different processing methods on very large data sets that are identical for each method. This paper is concerned with comparing methods of electron energy-loss spectroscopy (EELS) quantitative analysis on the Al-Li system. The spectrum-image analyzed here was obtained from an Al-10 at% Li foil aged to produce δ' precipitates that can span the foil thickness. Two 1024-channel EELS spectra offset in energy by 1 eV were recorded and stored at each pixel in the 80×80 spectrum-image (25 Mbytes). An energy range of 39-89 eV (20 channels/eV) is represented. During processing the spectra are either subtracted to create an artifact-corrected difference spectrum, or the energy offset is numerically removed and the spectra are added to create a normal spectrum. The spectrum-images are processed into 2D floating-point images using methods and software described in [1].
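The two per-pixel operations described here (subtracting the pair of spectra, or removing the 1 eV offset, i.e. 20 channels at 20 channels/eV, and adding them) can be sketched as follows. This is a minimal reconstruction with placeholder data, not the software cited in [1].

```python
# Placeholder spectrum-image data: 80x80 pixels, two 1024-channel spectra per pixel.
import numpy as np

channels_per_ev = 20
offset_channels = 1 * channels_per_ev            # the 1 eV inter-spectrum offset
spec_a = np.random.rand(80, 80, 1024)            # first acquisition
spec_b = np.random.rand(80, 80, 1024)            # second acquisition, shifted by 1 eV

# Artifact-corrected difference spectrum: direct subtraction of the two acquisitions.
difference = spec_a - spec_b

# Normal spectrum: numerically remove the energy offset, then add. np.roll wraps the
# edge channels, which a real implementation would trim or pad instead.
spec_b_aligned = np.roll(spec_b, -offset_channels, axis=-1)
normal = spec_a + spec_b_aligned

print(difference.shape, normal.shape)            # both (80, 80, 1024)
```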


Author(s):  
Thomas W. Shattuck ◽  
James R. Anderson ◽  
Neil W. Tindale ◽  
Peter R. Buseck

Individual particle analysis involves the study of tens of thousands of particles using automated scanning electron microscopy and elemental analysis by energy-dispersive X-ray emission spectroscopy (EDS). EDS produces large data sets that must be analyzed using multivariate statistical techniques. A complete study uses cluster analysis, discriminant analysis, and factor or principal components analysis (PCA). The three techniques are used in the study of particles sampled during the FeLine cruise to the mid-Pacific Ocean in the summer of 1990. The mid-Pacific aerosol provides information on long-range particle transport, iron deposition, sea salt ageing, and halogen chemistry. Aerosol particle data sets suffer from a number of difficulties for pattern recognition using cluster analysis. There is a great disparity in the number of observations per cluster and in the range of the variables in each cluster. The variables are not normally distributed, they are subject to considerable experimental error, and many values are zero because of finite detection limits. Many of the clusters show considerable overlap because of natural variability, agglomeration, and chemical reactivity.
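A hedged sketch of the multivariate workflow named above, combining cluster analysis, PCA, and discriminant analysis on a synthetic particle-by-element intensity table; the element list and data are illustrative only and are not the FeLine cruise data set.

```python
# Rows = particles, columns = relative X-ray intensities for a few elements.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(42)
elements = ["Na", "Mg", "Al", "Si", "S", "Cl", "Fe"]
X = np.abs(rng.normal(size=(5000, len(elements))))   # real data has many zeros

X_scaled = StandardScaler().fit_transform(X)

# PCA to summarise correlated elemental signals into a few components.
scores = PCA(n_components=3).fit_transform(X_scaled)

# Cluster analysis to group particles into candidate particle types.
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(scores)

# Discriminant analysis to test how well the clusters separate in the original space.
lda = LinearDiscriminantAnalysis().fit(X_scaled, labels)
print("cluster sizes:", np.bincount(labels))
print("LDA training accuracy:", lda.score(X_scaled, labels))
```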


Author(s):  
Hakan Ancin

This paper presents methods for performing detailed quantitative automated three-dimensional (3-D) analysis of cell populations in thick tissue sections while preserving the relative 3-D locations of cells. Specifically, the method disambiguates overlapping clusters of cells and accurately measures the volume, 3-D location, and shape parameters for each cell. Finally, the entire population of cells is analyzed to detect patterns and groupings with respect to various combinations of cell properties. All of the above is accomplished without subjective bias. In this method, a laser-scanning confocal light microscope (LSCM) is used to collect optical sections through the entire thickness (100-500 μm) of fluorescently labelled tissue slices. The acquired stack of optical slices is first subjected to axial deblurring using the expectation maximization (EM) algorithm. The resulting isotropic 3-D image is segmented using a spatially adaptive, Poisson-based image segmentation algorithm with region-dependent smoothing parameters. Extracting the voxels labelled as "foreground" into an active voxel data structure results in a large data reduction.
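A minimal sketch of the pipeline's later steps, assuming synthetic data: Richardson-Lucy deconvolution (an EM algorithm for Poisson noise) stands in for the axial deblurring, a global threshold stands in for the spatially adaptive Poisson-based segmentation, and the foreground voxels are then gathered into a compact active-voxel structure.

```python
# Synthetic confocal stack (Poisson-noise counts) and a toy 3-D blur kernel.
import numpy as np
from skimage.restoration import richardson_lucy

stack = np.random.poisson(5.0, size=(32, 128, 128)).astype(float)
stack /= stack.max()                              # normalise for richardson_lucy
psf = np.ones((5, 5, 5)) / 125.0                  # placeholder point-spread function

# EM-style deblurring (Richardson-Lucy is the EM solution under Poisson noise).
deblurred = richardson_lucy(stack, psf, num_iter=10)

# Placeholder segmentation; the paper uses a spatially adaptive, Poisson-based
# algorithm with region-dependent smoothing rather than this global threshold.
foreground = deblurred > deblurred.mean() + 2 * deblurred.std()

# Active voxel data structure: keep only foreground coordinates and intensities,
# a large data reduction relative to the dense 3-D array.
coords = np.argwhere(foreground)
values = deblurred[foreground]
print(f"{coords.shape[0]} active voxels out of {foreground.size}")
```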


1980 ◽  
Vol 19 (04) ◽  
pp. 187-194
Author(s):  
J.-Ph. Berney ◽  
R. Baud ◽  
J.-R. Scherrer

It is well known that Frame Selection Systems (FSS) have proved both popular and effective in physician-machine and patient-machine dialogue. A formal algorithm for defining a Frame Selection System for handling man-machine dialogue is presented here. In addition, it is shown how natural medical language can be handled using a tree-branching logic. This logic is based upon ordered series of selections that enclose a syntactic structure. The external specifications are discussed with regard to convenience and efficiency. Because all communication between the user and the application programs is handled solely by the FSS software, the FSS contributes to achieving modularity, and therefore maintainability, in a transaction-oriented system with a large database and concurrent accesses.
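A hedged, modern reconstruction of the tree-branching selection logic (not the original 1980 FSS implementation): each frame offers an ordered series of selections, each selection leads to a child frame, and the chosen path yields the syntactic structure of the dialogue. All frame and selection names are hypothetical.

```python
# Toy frame tree for a physician-machine dialogue; names are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Frame:
    prompt: str
    # Ordered series of selections: label -> child frame (None marks a terminal choice).
    selections: dict = field(default_factory=dict)

chest_pain = Frame("Character of pain?", {"crushing": None, "burning": None})
symptom = Frame("Main symptom?", {"chest pain": chest_pain, "dyspnoea": None})
root = Frame("Reason for encounter?", {"new complaint": symptom, "follow-up": None})

def run_dialogue(frame, choices):
    """Walk the frame tree with the user's choices; return the selected path."""
    path = []
    current = frame
    for choice in choices:
        if current is None or choice not in current.selections:
            break
        path.append(choice)
        current = current.selections[choice]
    return path

print(run_dialogue(root, ["new complaint", "chest pain", "crushing"]))
# -> ['new complaint', 'chest pain', 'crushing']
```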


2020 ◽  
Vol 39 (5) ◽  
pp. 6419-6430
Author(s):  
Dusan Marcek

To forecast time series data, two methodological frameworks, statistical modelling and computational intelligence modelling, are considered. The statistical approach is based on the theory of invertible ARIMA (Auto-Regressive Integrated Moving Average) models with maximum likelihood (ML) estimation. As a competitive tool to statistical forecasting models, we use the popular classic neural network (NN) of perceptron type. To train the NN, the back-propagation (BP) algorithm and heuristics such as the genetic and micro-genetic algorithms (GA and MGA) are applied to the large data set. A comparative analysis of the selected learning methods is performed and evaluated. From the experiments performed we find that the optimal population size is likely 20, giving the lowest training time of all NNs trained by the evolutionary algorithms, while the prediction accuracy is lower but still acceptable to managers.
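A minimal sketch of the two competing frameworks on a synthetic series: an invertible ARIMA model estimated by maximum likelihood versus a perceptron-type network on lagged inputs trained by a backpropagation-based solver. The GA/MGA training heuristics and the paper's actual data set are not reproduced here.

```python
# Synthetic non-stationary series; ARIMA vs. a small perceptron-type network.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n = 300
y = np.cumsum(rng.normal(0.1, 1.0, n))

# Statistical approach: ARIMA(1,1,1) estimated by maximum likelihood.
arima_fit = ARIMA(y, order=(1, 1, 1)).fit()
arima_forecast = arima_fit.forecast(steps=5)

# Computational-intelligence approach: perceptron-type NN on lagged inputs,
# trained with a backpropagation-based solver.
lags = 5
X = np.column_stack([y[i:n - lags + i] for i in range(lags)])
target = y[lags:]
nn = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
nn.fit(X[:-5], target[:-5])
nn_forecast = nn.predict(X[-5:])

print("ARIMA forecast:", arima_forecast[:3])
print("NN forecast:   ", nn_forecast[:3])
```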


2017 ◽  
Vol 5 (1) ◽  
pp. 63
Author(s):  
Maria Ulfah