Interactive Visualization of 3D Segmented Data Sets Using Simplified LH Histogram Based Transfer Function

Author(s):  
Na Ding ◽  
Kuanquan Wang ◽  
Wangmeng Zuo ◽  
Fei Yang



Solar Physics ◽  
2021 ◽  
Vol 296 (1) ◽  
Author(s):  
V. Courtillot ◽  
F. Lopes ◽  
J. L. Le Mouël

This article deals with the prediction of the upcoming solar activity cycle, Solar Cycle 25. We propose that astronomical ephemeris, specifically taken from the catalogs of aphelia of the four Jovian planets, could be drivers of variations in solar activity, represented by the series of sunspot numbers (SSN) from 1749 to 2020. We use singular spectrum analysis (SSA) to associate components with similar periods in the ephemeris and SSN. We determine the transfer function between the two data sets. We improve the match in successive steps: first with Jupiter only, then with the four Jovian planets and finally including commensurable periods of pairs and pairs of pairs of the Jovian planets (following Mörth and Schlamminger in Planetary Motion, Sunspots and Climate, Solar-Terrestrial Influences on Weather and Climate, 193, 1979). The transfer function can be applied to the ephemeris to predict future cycles. We test this with success using the “hindcast prediction” of Solar Cycles 21 to 24, using only data preceding these cycles, and by analyzing separately two 130- and 140-year-long halves of the original series. We conclude with a prediction of Solar Cycle 25 that can be compared to a dozen predictions by other authors: the maximum would occur in 2026.2 (± 1 yr) and reach an amplitude of 97.6 (± 7.8), similar to that of Solar Cycle 24, therefore sketching a new “Modern minimum”, following the Dalton and Gleissberg minima.
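The decomposition step described above can be illustrated with a minimal sketch of basic singular spectrum analysis: embed the series in a trajectory (Hankel) matrix, take its SVD, and diagonal-average each rank-1 term back into a series. This is a generic SSA outline on a synthetic 11-year cycle, not the authors' code; the window length and component count are illustrative choices.

```python
import numpy as np

def ssa_components(series, window, n_components):
    """Basic SSA: embed the series into a trajectory (Hankel) matrix,
    take the SVD, and diagonal-average each rank-1 term back to a series."""
    n = len(series)
    k = n - window + 1
    # Trajectory matrix: columns are lagged windows of the series.
    X = np.column_stack([series[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    comps = []
    for j in range(n_components):
        Xj = s[j] * np.outer(U[:, j], Vt[j])
        # Diagonal averaging (Hankelization) recovers a series of length n:
        # anti-diagonal i of Xj collects all entries mapping to time index i.
        comp = np.array([np.mean(Xj[::-1].diagonal(i - window + 1))
                         for i in range(n)])
        comps.append(comp)
    return comps

# Toy stand-in for the sunspot series: a noisy cycle of period ~11 "years".
t = np.arange(300)
signal = np.sin(2 * np.pi * t / 11.0)
noisy = signal + 0.3 * np.random.default_rng(0).normal(size=t.size)
# A pure oscillation occupies two SSA components (sine/cosine pair).
leading = sum(ssa_components(noisy, window=66, n_components=2))
```

Summing the two leading components recovers the oscillatory part of the series, which is the sense in which SSA "associates components with similar periods" across two data sets.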



1998 ◽  
Vol 84 (1-2) ◽  
pp. 143-154 ◽  
Author(s):  
Klaudia Lohmann ◽  
Eckart D Gundelfinger ◽  
Henning Scheich ◽  
Rita Grimm ◽  
Wolfgang Tischmeyer ◽  
...  




2018 ◽  
Vol 7 (3.12) ◽  
pp. 239 ◽  
Author(s):  
Chitransh Rajesh ◽  
Yash Jain ◽  
J Jayapradha

Data analytics is the process of analyzing unprocessed data to draw conclusions by studying and inspecting patterns in the data. Several algorithms and conceptual methods are followed to derive valid and accurate results. Efficient data handling is important for interactive visualization of data sets. Building on recent research and analytical theories on column-oriented database management systems, we are developing a new data engine using R and Tableau to predict airport trends. The engine uses univariate data sets (for example, the Perth Airport Passenger Movement data set and the Newark Airport Cargo Stats data set) to analyze and predict trends. Data analysis and prediction are performed using time series analysis and ARIMA models fitted for the respective modules. The modules are developed in RStudio, whereas Tableau is used for interactive visualization and end-user report generation. The Airport Trends Analytics Engine integrates R with Tableau 10.4 and is optimized for use in desktop and server environments.
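The forecasting step can be sketched in miniature. The engine itself is built in R; the following is a hedged Python illustration of one simple ARIMA special case, an ARIMA(1,1,0) fitted by hand: first-difference the series, fit an AR(1) by least squares, forecast the differences, and re-integrate. The linear "passenger" series is synthetic, not one of the paper's data sets.

```python
import numpy as np

def arima_110_forecast(y, steps):
    """Minimal ARIMA(1,1,0) by hand: difference the series, fit an AR(1)
    with intercept by least squares on the differences, forecast the
    differences, and cumulatively re-integrate back to the original scale."""
    d = np.diff(y)
    X = np.column_stack([np.ones(len(d) - 1), d[:-1]])
    beta, *_ = np.linalg.lstsq(X, d[1:], rcond=None)  # [intercept, phi]
    c, phi = beta
    last_d, last_y = d[-1], y[-1]
    out = []
    for _ in range(steps):
        last_d = c + phi * last_d  # next forecast difference
        last_y = last_y + last_d   # integrate back to levels
        out.append(last_y)
    return np.array(out)

# Hypothetical stand-in for a monthly airport passenger-movement series.
t = np.arange(120)
y = 100.0 + 0.8 * t
fc = arima_110_forecast(y, steps=12)  # 12-step-ahead trend prediction
```

On a purely linear series the differenced model learns the constant step, so the forecast simply extends the trend; real airport series would of course need seasonal terms and order selection, which full ARIMA tooling (e.g., R's `forecast` package) handles.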



2006 ◽  
Vol 18 (1) ◽  
pp. 1-9 ◽  
Author(s):  
Walter W. Focke

A modified version of the single hidden-layer perceptron architecture is proposed for modeling mixtures. A particularly flexible mixture model is obtained by implementing the Box-Cox transformation as the transfer function. In this case, the network response can be expressed in closed form as a weighted power mean. The quadratic Scheffé K-polynomial and the exponential Wilson equation turn out to be special forms of this general mixture model. Advantages of the proposed network architecture are that binary data sets suffice for “training” and that it is readily extended to incorporate additional mixture components while retaining all previously determined weights.
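The closed-form network response mentioned above can be written down directly: apply the Box-Cox transform to each input, mix with weights summing to one, and invert the transform, which yields the weighted power mean. A minimal sketch, with the λ → 0 limit (the weighted geometric mean) handled explicitly; the function name is ours, not the paper's.

```python
import numpy as np

def weighted_power_mean(x, w, lam):
    """Network response of a mixture perceptron whose transfer function is
    the Box-Cox transform f(x) = (x**lam - 1) / lam: the inverse transform
    of the weighted sum of transformed inputs is the weighted power mean
    (sum_i w_i * x_i**lam) ** (1/lam). As lam -> 0 this tends to the
    weighted geometric mean exp(sum_i w_i * log(x_i))."""
    x, w = np.asarray(x, float), np.asarray(w, float)
    if abs(lam) < 1e-12:
        return float(np.exp(np.sum(w * np.log(x))))
    return float(np.sum(w * x ** lam) ** (1.0 / lam))
```

Setting λ = 1, 0, and −1 recovers the arithmetic, geometric, and harmonic means respectively, which is what makes the family flexible enough to nest models such as the Wilson equation as special cases.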



1978 ◽  
Vol 9 (1) ◽  
pp. 87-112 ◽  
Author(s):  
William Halsey Hutson

The distribution and abundance of planktonic Foraminifera from the Indian Ocean are used to illustrate geographic variations in faunal assemblages in the plankton and on the seabed caused by sedimentary and postdepositional processes and to analyze the effect of these variations on paleoecological reconstruction. Principal components analysis of these data describes the composition and distribution of faunal assemblages in plankton-tow samples, low-dissolution core-top samples, and high-dissolution core-top samples. Factor-comparison analysis describes the relationships among these three sets of assemblages: The species composition of low-dissolution faunal assemblages may be accurately described as a simple linear mixing of plankton assemblages. The geographical distributions of the faunal assemblages in the sediments, however, are often displaced equatorward of their counterparts in the plankton. Dissolution causes complex changes in the composition of faunal assemblages and produces an equatorward displacement of several high-dissolution assemblages relative to their counterparts in low-dissolution sediments. Three transfer functions, or equations, are derived using plankton, low-dissolution, and high-dissolution data. Numerical experiments indicate that transfer functions lose accuracy when applied to discordant data sets: The plankton transfer function often underestimates temperatures in core-top sediments, and the low-dissolution transfer function underestimates temperatures in high-dissolution sediments. These systematic differences in temperature estimates are illustrated by applying the three transfer functions to downcore samples representing conditions 18,000 years ago. Other experiments indicate that these distortions can be reduced by using larger size fractions and calibrating transfer functions with both low- and high-dissolution core-top samples.
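The calibration of a faunal transfer function of this kind can be sketched generically: reduce species abundances to principal components, then fit a regression from component scores to an observed environmental variable. This is a hedged illustration on synthetic data, not the paper's Indian Ocean data or its exact method; the sample counts, noise level, and temperature relation are all invented for the example.

```python
import numpy as np

# Synthetic "core-top" calibration set: species abundances driven by two
# latent factors, and a temperature that is linear in those factors.
rng = np.random.default_rng(0)
n_samples, n_species = 60, 8
factors = rng.normal(size=(n_samples, 2))
loadings = rng.normal(size=(2, n_species))
abundances = factors @ loadings + 0.05 * rng.normal(size=(n_samples, n_species))
temps = 15.0 + 3.0 * factors[:, 0] - 2.0 * factors[:, 1]

# Principal components via SVD of the centered abundance matrix.
Xc = abundances - abundances.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
pc_scores = Xc @ Vt[:2].T  # scores on the two leading components

# Transfer function: least-squares fit of temperature on the scores.
A = np.column_stack([np.ones(n_samples), pc_scores])
coef, *_ = np.linalg.lstsq(A, temps, rcond=None)
predicted = A @ coef
```

Applying the fitted `coef` to component scores of downcore samples is the step that goes wrong when the calibration and application data sets are discordant, which is exactly the dissolution bias the abstract documents.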



Author(s):  
Xin Yan ◽  
Mu Qiao ◽  
Timothy W. Simpson ◽  
Jia Li ◽  
Xiaolong Luke Zhang

During the process of trade space exploration, information overload has become a notable problem. To find the best design, designers need more efficient tools to analyze the data, explore possible hidden patterns, and identify preferable solutions. When dealing with large-scale, multi-dimensional, continuous data sets (e.g., design alternatives and potential solutions), designers can be easily overwhelmed by the volume and complexity of the data. Traditional information visualization tools have limits in supporting the analysis and knowledge exploration of such data, largely because they usually emphasize the visual presentation of and user interaction with data sets, and lack the capacity to identify hidden data patterns that are critical to in-depth analysis. There is a need for the integration of user-centered visualization designs and data-oriented analysis algorithms in support of complex data analysis. In this paper, we present a work-centered approach to support visual analytics of multi-dimensional engineering design data by combining visualization, user interaction, and computational algorithms. We describe a system, Learning-based Interactive Visualization for Engineering design (LIVE), that allows designers to interactively examine large design input data and performance output data simultaneously through visualization. We expect that our approach can help designers analyze complex design data more efficiently and effectively. We report our preliminary evaluation on the use of our system in analyzing a design problem related to aircraft wing sizing.



2009 ◽  
Vol 28 (8) ◽  
pp. 2165-2175 ◽  
Author(s):  
N. Cuntz ◽  
A. Pritzkau ◽  
A. Kolb

