Distributed Intelligence for Constructing Economic Models

2012 ◽  
pp. 1538-1550
Author(s):  
Ting Yu

This paper presents an integrated and distributed intelligent system capable of automatically estimating and updating large-scale economic models. The input-output model of economics uses a matrix representation of a nation's (or a region's) economy to predict how changes in one industry affect others, and how consumers, government, and foreign suppliers affect the economy (Miller & Blair, 1985). To construct a model that faithfully reflects the underlying industry structure, multiple sources of data are collected and integrated. The system in this paper facilitates this estimation process by integrating a series of components for data retrieval, data integration, machine learning, and quality checking. More importantly, the complexity of a national economy leads to extremely large models that represent every detail of the economy, which requires the system to process large amounts of data. This paper demonstrates that the major bottleneck is memory allocation; to access more memory, the machine learning component is built on a distributed platform and constructs the matrix by analyzing historical and spatial data simultaneously. This system is the first distributed matrix estimation package for an economic matrix of this size.
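For background, the core of the input-output framework referenced above is the Leontief demand-driven model, x = (I - A)^{-1} f, where A is the matrix of technical coefficients and f is final demand (Miller & Blair, 1985). The sketch below is a minimal illustration of that relationship in NumPy; the matrix values are invented for demonstration and are not from the paper.

```python
import numpy as np

# Hypothetical 3-industry technical-coefficients matrix A:
# A[i, j] = input from industry i required per unit of output of industry j.
A = np.array([
    [0.10, 0.20, 0.05],
    [0.15, 0.05, 0.10],
    [0.05, 0.10, 0.15],
])

# Final demand vector f (illustrative values only).
f = np.array([100.0, 50.0, 75.0])

# Leontief model: total output x satisfies x = A @ x + f,
# so x = (I - A)^{-1} @ f.
x = np.linalg.solve(np.eye(3) - A, f)
print("Total output by industry:", x)

# A change in demand for one industry propagates to all industries:
dx = np.linalg.solve(np.eye(3) - A, np.array([1.0, 0.0, 0.0]))
print("Output multipliers for one extra unit of demand in industry 0:", dx)
```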


Author(s):  
Ting Yu ◽  
Manfred Lenzen ◽  
Christopher Dey

The input-output table plays a central role in the Economic Input-Output Life Cycle Assessment (EIO-LCA) method. This chapter presents an integrated and distributed computational modeling system capable of estimating and updating large input-output tables. The complexity of a national economy leads to extremely large models that represent every detail of the economy. To construct a table that faithfully reflects the underlying industry structure, multiple sources of data are integrated and analyzed together. The major bottleneck of matrix estimation is memory allocation. To access more memory, this unique distributed matrix estimation system runs on a parallel supercomputer, enabling it to estimate a matrix larger than 1,000-by-1,000 with relatively high accuracy. This system is the first distributed matrix estimation package for an economic matrix of this size. This chapter presents a comprehensive example of facilitating this estimation process by integrating a series of components for data retrieval, data integration, distributed machine learning, and quality checking.
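A standard serial technique for updating an input-output table so that it matches new row and column totals is biproportional (RAS) scaling (Miller & Blair, 1985). The chapter's distributed estimator is more elaborate, but the sketch below shows the basic idea; the seed matrix and target margins are invented for illustration.

```python
import numpy as np

def ras_update(seed, row_targets, col_targets, iters=100, tol=1e-9):
    """Biproportional (RAS) scaling: alternately rescale rows and columns
    of a non-negative seed matrix until its margins match the targets."""
    T = seed.astype(float).copy()
    for _ in range(iters):
        # Scale rows to hit the target row sums.
        r = row_targets / T.sum(axis=1)
        T *= r[:, None]
        # Scale columns to hit the target column sums.
        s = col_targets / T.sum(axis=0)
        T *= s[None, :]
        if np.allclose(T.sum(axis=1), row_targets, atol=tol):
            break
    return T

# Illustrative 3x3 example: an outdated table and new margins
# (row and column targets must share the same grand total).
seed = np.array([[20.0, 10.0,  5.0],
                 [ 8.0, 25.0, 12.0],
                 [ 6.0,  9.0, 30.0]])
updated = ras_update(seed,
                     row_targets=np.array([40.0, 50.0, 45.0]),
                     col_targets=np.array([38.0, 47.0, 50.0]))
print(updated.round(2))
```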


2018 ◽  
Author(s):  
Daniel Cañueto ◽  
Miriam Navarro ◽  
Mónica Bulló ◽  
Xavier Correig ◽  
Nicolau Cañellas

The quality of automatic metabolite profiling in NMR datasets of complex matrices can be compromised by the multiple sources of variability in the samples. These sources cause uncertainty in the metabolite signal parameters and the presence of multiple low-intensity signals. Lineshape fitting approaches may produce suboptimal resolutions or distort the fitted signals to adapt them to the complex spectrum lineshape. As a result, tools tend to restrict their use to specific matrices and strict protocols to reduce this uncertainty. However, analyzing and modelling the signal parameters collected during a first profiling iteration can further reduce the uncertainty by generating narrow and accurate predictions of the expected signal parameters. In this study, we show that, thanks to the predictions generated, better profiling quality indicators can be produced and the performance of automatic profiling can be maximized. Thanks to the ability of our workflow to learn and model the sample properties, restrictions on the matrix or protocol and limitations of lineshape fitting approaches can be overcome.
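To make the idea concrete: in lineshape fitting, each metabolite signal is typically modelled as a Lorentzian or (pseudo-)Voigt peak, and predictions from a first profiling pass can be used to narrow the parameter bounds of a second pass. The sketch below is a minimal illustration with SciPy, not the authors' actual workflow; it fits a single Lorentzian whose centre and width are constrained to a predicted window, with synthetic data.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(ppm, height, center, hwhm):
    """Single Lorentzian peak as a function of chemical shift (ppm)."""
    return height * hwhm**2 / ((ppm - center)**2 + hwhm**2)

# Synthetic spectrum segment: one peak at 1.33 ppm plus noise.
rng = np.random.default_rng(0)
ppm = np.linspace(1.2, 1.5, 300)
spectrum = lorentzian(ppm, 1.0, 1.33, 0.005) + rng.normal(0, 0.02, ppm.size)

# Suppose a first profiling pass predicted center ~1.33 ppm and
# hwhm ~0.005 ppm; narrow bounds around those predictions keep the
# fit from drifting onto neighbouring signals.
p0 = [1.0, 1.33, 0.005]
lower = [0.0, 1.325, 0.002]
upper = [2.0, 1.335, 0.010]
popt, _ = curve_fit(lorentzian, ppm, spectrum, p0=p0, bounds=(lower, upper))
print("fitted height, center, hwhm:", popt)
```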


Author(s):  
R. A. Ricks ◽  
Angus J. Porter

During a recent investigation of the growth of γ' precipitates in nickel-base superalloys, it was observed that the sign of the lattice mismatch between the coherent particles and the matrix (γ) was important in determining the ease with which matrix dislocations could be incorporated into the interface to relieve coherency strains. Thus alloys with a negative misfit (i.e., the γ' lattice parameter was smaller than that of the matrix) could lose coherency easily, and γ/γ' interfaces would exhibit regularly spaced networks of dislocations, as shown in figure 1 for the case of Nimonic 115 (misfit = -0.15%). In contrast, γ' particles in alloys with a positive misfit could grow to a large size without showing any such dislocation arrangements in the interface, indicating that coherency had not been lost. Figure 2 depicts a large γ' precipitate in Nimonic 80A (misfit = +0.32%) showing few interfacial dislocations.
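For reference, the lattice misfit δ quoted above is conventionally computed from the two lattice parameters; a common symmetric convention is δ = 2(a_γ' - a_γ)/(a_γ' + a_γ), though some authors use a_γ alone in the denominator. A trivial sketch with hypothetical lattice parameters (not values from the abstract):

```python
def lattice_misfit(a_gamma_prime, a_gamma):
    """Lattice misfit between precipitate (gamma') and matrix (gamma),
    using the symmetric convention delta = 2(a_p - a_m)/(a_p + a_m).
    Negative misfit means the gamma' parameter is the smaller one."""
    return 2.0 * (a_gamma_prime - a_gamma) / (a_gamma_prime + a_gamma)

# Illustrative (hypothetical) lattice parameters in nanometres:
print(f"{lattice_misfit(0.3560, 0.3565) * 100:+.2f}%")  # negative misfit
print(f"{lattice_misfit(0.3576, 0.3565) * 100:+.2f}%")  # positive misfit
```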


2020 ◽  
pp. 1-11
Author(s):  
Jie Liu ◽  
Lin Lin ◽  
Xiufang Liang

The online English teaching system imposes certain requirements on intelligent scoring, and the most difficult stage of intelligent scoring in English tests is scoring English compositions with an intelligent model. To improve the intelligence of English composition scoring, this study builds on machine learning algorithms, combines them with intelligent image recognition technology, and proposes an improved MSER-based character candidate region extraction algorithm and a convolutional neural network-based pseudo-character region filtering algorithm. In addition, to verify that the proposed algorithm model meets the requirements, that is, to verify its feasibility, the performance of the model is analyzed through designed experiments. Moreover, the basic conditions for composition scoring are input into the model as constraints. The research results show that the proposed algorithm has practical effect and can be applied to English assessment systems and online homework evaluation systems.
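As an illustration of the two-stage pipeline described above (MSER candidates, then a learned filter), the sketch below uses OpenCV's MSER detector with a placeholder scoring function standing in for the CNN; the threshold, patch size, and dummy heuristic are assumptions, not the paper's model.

```python
import cv2
import numpy as np

def candidate_regions(gray):
    """Stage 1: extract character candidate regions with MSER."""
    mser = cv2.MSER_create()
    _, bboxes = mser.detectRegions(gray)
    return bboxes  # array of (x, y, w, h)

def cnn_score(patch):
    """Stage 2 placeholder: a trained CNN would return the probability
    that the patch contains a real character. Here, a dummy heuristic
    (ink density) stands in so the sketch runs end to end."""
    return float((patch < 128).mean())

def filter_pseudo_characters(gray, bboxes, threshold=0.1):
    kept = []
    for (x, y, w, h) in bboxes:
        patch = cv2.resize(gray[y:y + h, x:x + w], (32, 32))
        if cnn_score(patch) >= threshold:
            kept.append((x, y, w, h))
    return kept

# Synthetic test image: white background with dark rendered "text".
img = np.full((100, 200), 255, np.uint8)
cv2.putText(img, "Essay", (10, 60), cv2.FONT_HERSHEY_SIMPLEX, 1.5, 0, 3)
boxes = candidate_regions(img)
print(f"{len(boxes)} candidates, {len(filter_pseudo_characters(img, boxes))} kept")
```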


2021 ◽  
Vol 10 (2) ◽  
pp. 79
Author(s):  
Ching-Yun Mu ◽  
Tien-Yin Chou ◽  
Thanh Van Hoang ◽  
Pin Kung ◽  
Yao-Min Fang ◽  
...  

Spatial information technology has been widely used for vehicles in general and for fleet management. Many studies have focused on improving vehicle positioning accuracy, although few have focused on efficiency improvements for managing large truck fleets on today's complex road networks. Therefore, this paper proposes a multilayer-based map matching algorithm with different spatial data structures to deal rapidly with large amounts of coordinate data. Using a dimension reduction technique, geodesic coordinates are transformed into plane coordinates. This study provides multiple layer grouping combinations to deal with complex road networks. We integrated these techniques and employed a puncture method to process the geometric computation with spatial data-mining approaches. We constructed a spatial division index and combined it with the puncture method, which improves the efficiency of the system and can enhance data retrieval efficiency for large truck fleet dispatching. This paper also evaluated a multilayer-based map matching algorithm with raster data structures. Comparing the results revealed that the look-up table method offers the best outcome. The proposed multilayer-based map matching algorithm using the look-up table method achieves competitive performance and delivers efficiency improvements for large truck fleet dispatching.
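To illustrate the look-up-table idea the comparison favours: snapping a GPS point to candidate road segments is fast if segments are pre-bucketed into a regular grid, so each query inspects only nearby cells instead of the whole network. The sketch below is a generic grid index, not the paper's multilayer implementation; the cell size and sample segments are invented.

```python
from collections import defaultdict

CELL = 0.001  # grid cell size in degrees (assumed, roughly 100 m)

def cell_of(x, y):
    return (int(x / CELL), int(y / CELL))

def build_index(segments):
    """Bucket each road segment into every grid cell its endpoints touch."""
    index = defaultdict(list)
    for seg_id, (x1, y1, x2, y2) in segments.items():
        for c in {cell_of(x1, y1), cell_of(x2, y2)}:
            index[c].append(seg_id)
    return index

def candidates(index, x, y):
    """Look up segments in the query point's cell and its 8 neighbours."""
    cx, cy = cell_of(x, y)
    found = set()
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            found.update(index.get((cx + dx, cy + dy), ()))
    return found

segments = {"s1": (121.5200, 25.0400, 121.5210, 25.0405),
            "s2": (121.5400, 25.0600, 121.5410, 25.0610)}
index = build_index(segments)
print(candidates(index, 121.5205, 25.0402))  # -> {'s1'}
```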


2021 ◽  
Vol 13 (5) ◽  
pp. 907
Author(s):  
Theodora Lendzioch ◽  
Jakub Langhammer ◽  
Lukáš Vlček ◽  
Robert Minařík

One of the most important preconditions for effective monitoring of peat bog ecosystems is the collection, processing, and analysis of unique spatial data to understand peat bog dynamics. Over two seasons, we sampled groundwater level (GWL) and soil moisture (SM) ground truth data at two diverse locations at the Rokytka Peat bog within the Sumava Mountains, Czechia. These data served as reference data and were modeled with a suite of potential variables derived from digital surface models (DSMs) and RGB, multispectral, and thermal orthoimages reflecting topomorphometry, vegetation, and surface temperature information generated from drone mapping. We used 34 predictors to feed the random forest (RF) algorithm. The predictor selection, hyperparameter tuning, and performance assessment were performed with the target-oriented leave-location-out (LLO) spatial cross-validation (CV) strategy combined with forward feature selection (FFS) to avoid overfitting and to predict on unknown locations. The spatial CV performance statistics showed low (R2 = 0.12) to high (R2 = 0.78) model predictions. The predictor importance was used for model interpretation, where temperature had a strong impact on GWL and SM, and we found significant contributions of other predictors, such as Normalized Difference Vegetation Index (NDVI), Normalized Difference Index (NDI), Enhanced Red-Green-Blue Vegetation Index (ERGBVE), Shape Index (SHP), Green Leaf Index (GLI), Brightness Index (BI), Coloration Index (CI), Redness Index (RI), Primary Colours Hue Index (HI), Overall Hue Index (HUE), SAGA Wetness Index (TWI), Plan Curvature (PlnCurv), Topographic Position Index (TPI), and Vector Ruggedness Measure (VRM). Additionally, we estimated the area of applicability (AOA) by presenting maps of where the prediction model yielded high-quality results and where predictions were highly uncertain, because machine learning (ML) models make predictions far beyond the sampling locations, in environments for which they have no training data. The AOA method is well suited and unique for planning and decision-making about the best sampling strategy, most notably with limited data.
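A minimal sketch of the target-oriented validation described above, assuming scikit-learn: sampling locations serve as CV groups (leave-location-out via GroupKFold), and a simple forward feature selection loop greedily adds the predictor that most improves the grouped CV score. The variable names and synthetic data are illustrative only.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(42)
n, preds = 200, ["NDVI", "NDI", "TWI", "TPI", "temp"]
X = rng.normal(size=(n, len(preds)))
y = 2 * X[:, 4] + X[:, 0] + rng.normal(0, 0.5, n)  # synthetic GWL/SM target
groups = rng.integers(0, 5, n)                     # 5 sampling "locations"

def llo_score(cols):
    """Mean R^2 under leave-location-out CV for a predictor subset."""
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    return cross_val_score(model, X[:, cols], y, groups=groups,
                           cv=GroupKFold(n_splits=5)).mean()

# Forward feature selection against the spatial CV score.
selected, remaining, best = [], list(range(len(preds))), -np.inf
while remaining:
    scores = {j: llo_score(selected + [j]) for j in remaining}
    j_best = max(scores, key=scores.get)
    if scores[j_best] <= best:
        break  # no remaining predictor improves the spatial CV score
    best = scores[j_best]
    selected.append(j_best)
    remaining.remove(j_best)
print("selected:", [preds[j] for j in selected], "R2:", round(best, 2))
```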


Author(s):  
Ernesto Dufrechou ◽  
Pablo Ezzatti ◽  
Enrique S Quintana-Ortí

More than 10 years of research on efficient GPU routines for the sparse matrix-vector product (SpMV) have led to several realizations, each with its own strengths and weaknesses. In this work, we review some of the most relevant efforts on the subject, evaluate a few prominent publicly available routines using more than 3,000 matrices from different applications, and apply machine learning techniques to anticipate which SpMV realization will perform best for each sparse matrix on a given parallel platform. Our numerical experiments confirm that the methods' behavior varies so much with matrix structure that identifying general rules to select the optimal method for a given matrix becomes extremely difficult, though some useful heuristics can be defined. Using a machine learning approach, we show that it is possible to obtain inexpensive classifiers that predict the best method for a given sparse matrix with over 80% accuracy, demonstrating that this approach can deliver important reductions in both execution time and energy consumption.
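The general recipe such studies follow can be sketched as: compute cheap structural features of each sparse matrix, label each matrix with its fastest measured SpMV routine, and train a classifier on those pairs. The sketch below, using SciPy and scikit-learn with synthetic labels, shows only the shape of that pipeline; the feature set and labels of the actual paper differ.

```python
import numpy as np
import scipy.sparse as sp
from sklearn.ensemble import RandomForestClassifier

def matrix_features(A):
    """Cheap structural features of a CSR matrix often used to
    characterize SpMV behavior: size, density, row-length statistics."""
    A = A.tocsr()
    row_nnz = np.diff(A.indptr)
    return [A.shape[0], A.nnz, A.nnz / (A.shape[0] * A.shape[1]),
            row_nnz.mean(), row_nnz.std(), row_nnz.max()]

rng = np.random.default_rng(1)
mats = [sp.random(500, 500, density=d, random_state=int(s), format="csr")
        for d, s in zip(rng.uniform(0.001, 0.05, 60),
                        rng.integers(0, 10**6, 60))]
X = np.array([matrix_features(A) for A in mats])

# In a real study the label is the fastest measured routine per matrix
# (e.g. scalar vs vector CSR vs ELL kernels); here it is synthetic.
y = (X[:, 2] > 0.02).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```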


Author(s):  
Zuoshan Li

With the continuous progress of society, science and technology have developed rapidly, and research across industries into new technologies continues to deepen, greatly promoting their adoption. At the same time, as social pressure increases, more and more people pursue mental relaxation, and leisure and entertainment activities have gradually become part of people's lives, with film playing an irreplaceable role. Against the background of the film industry's development towards intelligent production, this study uses machine learning technology to investigate its application to film animation production and the analysis of virtual film assets. Building on Internet of Things technology, it also develops methods of visual expression for film and introduces new expression modes to improve the expressive effect of the intelligent system. Finally, a comparison of machine learning algorithms shows that the random forest algorithm yields the most accurate intelligent expression results. The system is also applied to 3D animation production to observe the measurement error of 3D motion data and facial expression data.

