VizAI: Selecting Accurate Visualizations of Numerical Data

2022 ◽  
Author(s):  
Ritvik Vij ◽  
Rohit Raj ◽  
Madhur Singhal ◽  
Manish Tanwar ◽  
Srikanta Bedathur

Author(s):  
W.M. Stobbs

I do not have access to the abstracts of the first meeting of EMSA, but at this, the 50th Anniversary meeting of the Electron Microscopy Society of America, I have an excuse to consider the historical origins of the approaches we take to the use of electron microscopy for the characterisation of materials. I have myself been actively involved in the use of TEM for the characterisation of heterogeneities for little more than half of that period. My own view is that it was between the 3rd International Meeting at London and the 1956 Stockholm meeting, the first of the European series, that the foundations of the approaches we now take to the characterisation of a material using the TEM were laid down. (This was 10 years before I took dynamical theory to be etched in stone.) It was at the 1956 meeting that Menter showed lattice resolution images of sodium faujasite and Hirsch, Horne and Whelan showed images of dislocations in the XIVth session on “metallography and other industrial applications”. I have always, incidentally, been delighted by the way the latter authors misinterpreted astonishingly clear thickness fringes in a beaten foil of Al as being contrast due to “large strains”, an error which they corrected with admirable rapidity as the theory developed. At the London meeting the research described covered a broad range of approaches, including many that are only now being rediscovered as worth further effort; however, such is the power of “the image” to persuade that the above two papers set trends which influence, perhaps too strongly, the approaches we take now. Menter was clear that the way the planes in his image tended to be curved was associated with the imaging conditions rather than with lattice strains, and yet it now seems to be common practice to assume that the dots in an “atomic resolution image” can faithfully represent the variations in atomic spacing at a localised defect. Even when the more reasonable approach is taken of matching the image details with a computed simulation for an assumed model, the non-uniqueness of the interpreted fit seems to be rather rarely appreciated. Hirsch et al., on the other hand, made a point of using their images to get numerical data on characteristics of the specimen they examined, such as its dislocation density, which would not be expected to be influenced by uncertainties in the contrast. Nonetheless the trends were set, with microscope manufacturers producing higher and higher resolution microscopes, while the blind faith of the users in the image produced as a near directly interpretable representation of reality seems to have increased rather than been generally questioned. But if we want to test structural models we need numbers, and it is the analogue-to-digital conversion of the information in the image which is required.


Author(s):  
B. Lencova ◽  
G. Wisselink

Recent progress in computer technology enables the calculation of lens fields and focal properties on commonly available computers such as IBM ATs. If we add to this the use of graphics, we greatly increase the applicability of design programs for electron lenses. Most programs for field computation are based on the finite element method (FEM). They are written in Fortran 77, so that they are easily transferred from PCs to larger machines. The design process has recently been made significantly more user-friendly by adding input programs written in Turbo Pascal, which allows a flexible implementation of computer graphics. The input programs have not only menu-driven input and modification of numerical data, but also graphics editing of the data. The input programs create files which are subsequently read by the Fortran programs. From the main menu of our magnetic lens design program, further options are chosen by using function keys or numbers. Some options (lens initialization and setting, fine mesh, current densities, etc.) open other menus where computation parameters can be set or numerical data can be entered with the help of a simple line editor. The “draw lens” option enables graphical editing of the mesh; see fig. 1. The geometry of the electron lens is specified in terms of coordinates and indices of a coarse quadrilateral mesh. In this mesh, the fine mesh with smoothly changing step size is calculated by an automeshing procedure. The options shown in fig. 1 allow modification of the number of coarse mesh lines, change of coordinates of mesh points or lines, and specification of lens parts. Interactive and graphical modification of the fine mesh can be called from the fine mesh menu. Finally, the lens computation can be called. Our FEM program allows up to 8000 mesh points on an AT computer. Another menu allows the display of computed results stored in output files and graphical display of axial flux density, flux density in magnetic parts, and the flux lines in magnetic lenses; see fig. 2. A series of several lens excitations with user-specified or default magnetization curves can be calculated and displayed in one session.
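
To make the automeshing step concrete: the abstract says only that a fine mesh with smoothly changing step size is generated within the coarse quadrilateral mesh. The sketch below assumes a geometric grading law along one coordinate; the function names, grading ratios, and example coordinates are illustrative and are not taken from the authors' Fortran/Turbo Pascal code.

```python
import numpy as np

def grade_segment(x0, x1, n, ratio):
    """Subdivide [x0, x1] into n steps whose sizes follow a geometric
    progression with the given ratio, so the step size changes smoothly."""
    if abs(ratio - 1.0) < 1e-12:
        return np.linspace(x0, x1, n + 1)
    # First step h chosen so that h * (1 + r + ... + r^(n-1)) = x1 - x0
    h = (x1 - x0) * (ratio - 1.0) / (ratio**n - 1.0)
    steps = h * ratio**np.arange(n)
    return x0 + np.concatenate(([0.0], np.cumsum(steps)))

def automesh(coarse_lines, n_fine, ratios):
    """Build fine mesh coordinates from coarse mesh lines (1-D here).
    coarse_lines: coordinates of the coarse mesh lines
    n_fine:       number of fine steps per coarse interval
    ratios:       grading ratio per coarse interval (1.0 = uniform)"""
    pieces = [grade_segment(a, b, n, r)[:-1]
              for a, b, n, r in zip(coarse_lines, coarse_lines[1:],
                                    n_fine, ratios)]
    return np.concatenate(pieces + [[coarse_lines[-1]]])

# Example: three coarse intervals, refined toward a feature at x = 2.0
print(automesh([0.0, 2.0, 2.5, 5.0], n_fine=[8, 10, 8], ratios=[0.8, 1.0, 1.25]))
```

A 2-D version would apply the same grading independently to both coordinates of the quadrilateral mesh.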


2020 ◽  
Vol 22 (4) ◽  
pp. 1439-1452
Author(s):  
Mohamed L. Benlekkam ◽  
Driss Nehari ◽  
Habib Y. Madani

The temperature rise of photovoltaic cells deteriorates their conversion efficiency. The use of a phase change material (PCM) layer linked to a curved photovoltaic (PV) panel, the so-called PV-mirror, to control its temperature elevation has been studied numerically. This numerical study was carried out to explore the effect of inner fin length on the thermal and electrical improvement of the curved PV panel. A numerical model of heat transfer with solid-liquid phase change was developed to solve the Navier–Stokes and energy equations. The predicted results are validated against available experimental and numerical data. Results show that the use of fins improves the thermal load distribution on the upper front of the PV/PCM system, keeps it below 42°C compared with a system without fins, and enhances PV cell efficiency by more than 2%.
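
The authors' full model couples Navier–Stokes flow in the melt with the energy equation; as a much simpler illustration of how solid-liquid phase change can be handled numerically, here is a conduction-only 1-D enthalpy-method sketch. All material properties, dimensions, and boundary temperatures are placeholder values, not the paper's.

```python
import numpy as np

# Illustrative 1-D enthalpy method for conduction-driven melting of a PCM slab.
L, N = 0.02, 100                 # slab thickness (m), grid points
dx = L / (N - 1)
rho, cp, k = 800.0, 2000.0, 0.2  # density, heat capacity, conductivity
Tm, Lf = 305.0, 180e3            # melting point (K), latent heat (J/kg)
T_hot, T_init = 330.0, 300.0     # heated face and initial temperatures

H = rho * cp * T_init * np.ones(N)   # volumetric enthalpy field (J/m^3)
dt = 0.4 * dx**2 * rho * cp / k      # stable explicit time step

def temperature(H):
    """Invert the enthalpy-temperature relation (isothermal phase change)."""
    Hs = rho * cp * Tm                # enthalpy at the start of melting
    T = H / (rho * cp)                # solid branch
    T = np.where(H > Hs, Tm, T)       # melting plateau
    liquid = H > Hs + rho * Lf
    T = np.where(liquid, Tm + (H - Hs - rho * Lf) / (rho * cp), T)
    return T

for step in range(20000):
    T = temperature(H)
    T[0] = T_hot                          # Dirichlet condition, heated face
    lap = np.zeros(N)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    lap[-1] = 2.0 * (T[-2] - T[-1]) / dx**2   # insulated back face
    H += dt * k * lap
    H[0] = rho * (cp * T_hot + Lf)            # boundary node fully molten

melt_fraction = np.clip((H - rho * cp * Tm) / (rho * Lf), 0.0, 1.0).mean()
print(f"mean melt fraction: {melt_fraction:.2f}")
```

Tracking enthalpy rather than temperature lets the latent heat be absorbed at the melting plateau without explicitly tracking the melt front; the paper's 2-D model with convection and fins is substantially more involved.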


2018 ◽  
Author(s):  
Glyn Kennell ◽  
Richard Evitts

The presented simulation data compare concentration gradients and electric fields with experimental and numerical data from other studies, for cases involving liquid junctions and electrolytic transport. The objective of presenting these data is to support a model and theory that demonstrate the incompatibility between the conventional electrostatics inherent in Maxwell's equations and conventional transport equations.
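
For readers unfamiliar with the quantities being compared, the sketch below sets up the conventional 1-D Nernst-Planck-Poisson description of a liquid junction, i.e. the standard electrostatics/transport coupling whose compatibility the authors question. It is not the authors' model; all parameters are illustrative.

```python
import numpy as np

# Minimal 1-D Nernst-Planck-Poisson sketch of a liquid junction between two
# concentrations of a binary (1:-1) electrolyte with unequal diffusivities.
F, R, T = 96485.0, 8.314, 298.0     # Faraday, gas constant, temperature
eps = 78.5 * 8.854e-12              # permittivity of water (F/m)
z = np.array([1.0, -1.0])           # ion valences
D = np.array([9.3e-9, 2.0e-9])      # ion diffusivities (m^2/s)

N, Lx = 200, 1e-6                   # grid points, domain length (m)
dx = Lx / (N - 1)
dt = 0.2 * dx**2 / D.max()          # explicit diffusion stability limit

# Concentration step across the junction (mol/m^3), initially electroneutral
c = np.where(np.arange(N) < N // 2, 1.0, 0.1) * np.ones((2, 1))

# Poisson matrix with phi = 0 at both ends (built once)
A = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1)) / dx**2
A[0, :], A[-1, :] = 0.0, 0.0
A[0, 0] = A[-1, -1] = 1.0

for step in range(5000):
    rho_e = F * (z[:, None] * c).sum(axis=0)      # charge density
    b = -rho_e / eps
    b[0] = b[-1] = 0.0
    phi = np.linalg.solve(A, b)
    Emid = -(phi[1:] - phi[:-1]) / dx             # field at cell faces
    for i in range(2):
        grad = (c[i, 1:] - c[i, :-1]) / dx
        cmid = 0.5 * (c[i, 1:] + c[i, :-1])
        # Nernst-Planck flux: diffusion + electromigration
        J = -D[i] * (grad - z[i] * F * cmid * Emid / (R * T))
        c[i, 1:-1] -= dt * (J[1:] - J[:-1]) / dx  # ends held at reservoir values

print("peak junction field (V/m):", np.abs(Emid).max())
```

The faster cation initially outruns the anion, charge separates, and the resulting field retards the cation and accelerates the anion; these are the concentration gradients and electric fields against which the paper compares its own model.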


2020 ◽  
Vol 13 (5) ◽  
pp. 1020-1030
Author(s):  
Pradeep S. ◽  
Jagadish S. Kallimani

Background: With the advent of data analysis and machine learning, there is a growing impetus for analyzing historic data and generating models from it. The data comes in numerous forms and shapes, with an abundance of challenges. The form most amenable to analysis is numerical data: with the plethora of available algorithms and tools, it is quite manageable. Another form of data is categorical, which is subdivided into ordinal (ordered) and nominal (named) types. Such data can be broadly classified as sequential or non-sequential; sequential data is easier to preprocess using existing algorithms.

Objective: This paper deals with the challenge of applying machine learning algorithms to categorical data of a non-sequential nature.

Methods: Implementing many data analysis algorithms directly on such data yields biased results, which makes it impossible to generate a reliable predictive model. We address this problem by walking through a handful of techniques which, during our research, helped us deal with a large categorical dataset of a non-sequential nature. In subsequent sections, we discuss the possible implementable solutions and the shortfalls of these techniques.

Results: The methods were applied to sample datasets available in the public domain, and the classification accuracy obtained is satisfactory.

Conclusion: The best preprocessing technique we observed in our research is one-hot encoding, which breaks categorical features down into binary indicators that can be fed to an algorithm to predict the outcome. The example we used is not abstract: it is a real-time production services dataset with many complex variations of categorical features. Our future work includes building a robust model on such data and deploying it in industry-standard applications.
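
A minimal sketch of the one-hot encoding workflow the conclusion describes, using scikit-learn. The dataset here is a made-up stand-in with hypothetical column names, since the production services dataset is not public.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Toy stand-in for a production-services dataset: nominal, non-sequential
# categorical features plus a binary label.
df = pd.DataFrame({
    "service_type": ["db", "web", "cache", "web", "db", "cache"],
    "region":       ["eu", "us", "us", "apac", "eu", "apac"],
    "failed":       [0, 1, 0, 1, 0, 1],
})

# One-hot encode the nominal columns and feed the binary indicators to a
# classifier; handle_unknown="ignore" keeps unseen categories from crashing.
model = Pipeline([
    ("encode", ColumnTransformer(
        [("onehot", OneHotEncoder(handle_unknown="ignore"),
          ["service_type", "region"])])),
    ("clf", LogisticRegression()),
])
model.fit(df[["service_type", "region"]], df["failed"])
print(model.predict(pd.DataFrame({"service_type": ["web"], "region": ["eu"]})))
```

Because the features are nominal rather than ordinal, one-hot encoding avoids imposing the spurious order that a plain integer label encoding would introduce.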


1996 ◽  
Vol 61 (9) ◽  
pp. 1267-1284
Author(s):  
Ondřej Wein

Response of an electrodiffusion friction sensor to a finite step of the wall shear rate is studied by numerically solving the relevant mass-transfer problem. The resulting numerical data on transient currents are treated further to provide reasonably accurate analytical representations. Existing approximations to the general response operator are checked by using the obtained exact solution.
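
A rough illustration of the kind of computation involved: the dimensionless sketch below advances the standard convection-diffusion equation for a wall electrode (c = 0 on the probe, shear flow u = s·y) through a step in the wall shear rate s and tracks the limiting current. Grid sizes, shear values, and run lengths are illustrative; this is not Wein's formulation or code.

```python
import numpy as np

# Dimensionless model:  dc/dt + s*y * dc/dx = d2c/dy2
# (diffusivity = 1, electrode length = 1, bulk concentration = 1)
Nx, Ny = 50, 60
Lx, Ly = 1.0, 0.3
dx, dy = Lx / Nx, Ly / Ny
y = (np.arange(Ny) + 0.5) * dy          # cell-centre heights above the wall
dt = 8e-6                               # below the explicit stability limit

def step_field(c, s):
    """One explicit step: upwind convection in x, central diffusion in y."""
    conv = np.zeros_like(c)
    conv[1:, :] = (c[1:, :] - c[:-1, :]) / dx      # upwind (flow in +x)
    conv[0, :] = (c[0, :] - 1.0) / dx              # inlet carries bulk fluid
    dif = np.zeros_like(c)
    dif[:, 1:-1] = (c[:, 2:] - 2 * c[:, 1:-1] + c[:, :-2]) / dy**2
    dif[:, 0] = (c[:, 1] - 3 * c[:, 0]) / dy**2    # c = 0 on the electrode
    dif[:, -1] = (c[:, -2] - 3 * c[:, -1] + 2.0) / dy**2  # c = 1 in the bulk
    return c + dt * (dif - s * y[None, :] * conv)

def current(c):
    """Limiting current ~ integral of the wall concentration gradient."""
    return np.sum(2.0 * c[:, 0] / dy) * dx

c = np.ones((Nx, Ny))
for _ in range(10000):                   # settle to steady state at s = 1000
    c = step_field(c, 1000.0)
I0 = current(c)
for n in range(5000):                    # step the shear rate to s = 2000
    c = step_field(c, 2000.0)
    if n % 1000 == 0:
        print(f"t = {n * dt:.3f}  I/I0 = {current(c) / I0:.3f}")
# Leveque scaling predicts a steady-state ratio of (2000/1000)**(1/3) ~ 1.26
```

The paper's contribution is precisely the further step this sketch omits: fitting such transient-current data with accurate analytical representations and checking existing approximations to the response operator against the exact solution.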


Author(s):  
Jacqueline M. Dewar

Chapter 4 provides an introduction to gathering data for scholarship of teaching and learning (SoTL) investigations, including the importance of triangulation, that is, collecting several different types of evidence. Examples are given of typical kinds of quantitative (numerical) and qualitative (non-numerical) data that might be used in a SoTL study, and the chapter discusses how quantitative and qualitative data are more closely related than they might at first seem. The taxonomy of SoTL questions (What works? What is? What could be?) provides a starting point for considering what type of data to collect. Suggestions are offered for ways to design assignments so that the coursework students produce can also serve as evidence, something that benefits both students and their instructor.

