Data Visualization Classification Using Simple Convolutional Neural Network Model

Author(s):  
Filip Bajić ◽  
Josip Job ◽  
Krešimir Nenadić

Data visualization developed from the need to display a vast quantity of information more transparently. Data visualizations often incorporate important information that is not listed anywhere else in the document and enable the reader to discover significant data and retain it in longer-term memory. On the other hand, Internet search engines have difficulty processing data visualizations and connecting a visualization to the query submitted by the user. Moreover, data visualizations leave out blind individuals and individuals with impaired vision. This article uses machine learning to classify data visualizations into 10 classes. The tested model is trained four times on a dataset preprocessed through four stages. The achieved accuracy of 89% is comparable to the results of other methods. The results show that image preprocessing matters: increasing or decreasing the level of detail in an image significantly impacts the average classification accuracy.
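As a rough illustration of the kind of model this abstract describes, the sketch below builds a simple convolutional classifier for chart images with 10 output classes. The input size, layer widths, and training settings are assumptions for illustration, not the authors' exact architecture or four-stage preprocessing pipeline.

```python
# Minimal sketch of a simple CNN chart-image classifier, assuming 128x128 RGB
# inputs and 10 visualization classes; layer sizes are illustrative only.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 10
IMG_SIZE = (128, 128)

def build_simple_cnn():
    model = models.Sequential([
        layers.Input(shape=(*IMG_SIZE, 3)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    build_simple_cnn().summary()
```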

Author(s):  
Ileana Baird

This introduction provides a brief survey of the evolution of data visualization from its eighteenth-century beginnings, when the Scottish engineer and political economist William Playfair created the first statistical graphs, to its present-day developments and use in period-related digital humanities projects. The author highlights the growing use of data visualization in major institutional projects, provides a literature review of representative works that employ data visualizations as a methodological tool, and highlights the contribution that this collection makes to digital humanities and Enlightenment studies. Addressing essential period-related themes (from issues of canonicity, intellectual history, and book trade practices to canonical authors and texts, gender roles, and public sphere dynamics), this collection also makes a broader argument about the necessity of expanding the very notion of "Enlightenment" not only spatially but also conceptually, by revisiting its tenets in light of new data. By translating the new findings afforded by the digital into suggestive visualizations, we can unveil unforeseen patterns, trends, connections, or networks of influence that could potentially revise existing master narratives about the period and the ideological structures at the core of the Enlightenment.


Author(s):  
Salla-Maaria Laaksonen ◽  
Juho Pääkkönen

This chapter explores the use of data visualizations in social media analytics companies. Drawing on a dataset of ethnographic field notes and thematic interviews in four Finnish social media analytics companies, we argue that data visualizations are crucially involved in how analytics-based knowledge claims become accepted by companies and their clients. Building on previous research on visualizations in organizations and as a representational practice, we explore their role in social media analytics. We identify three practices of using visualizations, which we have named simple-boxing, flatter-boxing, and pretty-boxing. We argue that these practices enable analysts to achieve the simultaneous aims of producing credible and valuable analytics in a context marked by high business promises.


2020 ◽  
Vol 25 (3) ◽  
pp. 554-566
Author(s):  
Amit Kumar ◽  
Poonam Gaur

Advancing technology is affecting every aspect of life, and journalism is no exception. Due to digitalization, a huge amount of data is being generated, and the continuous advancement of computer science has made it possible to extract meaningful information by storing and analysing this data. The term "data journalism" has become quite popular over the last decade. Data journalism is the practice of analysing data sets, extracting newsworthy information from them and passing it on to the public. Data visualization has a very important place in this process: it is used to communicate the information extracted from the data to users in a clear, interesting and engaging way. As the amount of data-based content in the news media has grown, the importance of data visualization has also increased. The use of data visualization improves readers' reading experience and helps them better understand data-based content. This preliminary study focuses on the use of data visualizations by English and Hindi newspapers in India. It presents a comparative study of various aspects of the use of data visualizations in English and Hindi newspapers, employing quantitative content analysis as the research method. The study reveals a substantial difference in every aspect of the use of data visualizations: the English newspapers used data visualizations more effectively than their Hindi counterparts.


Author(s):  
Richard Schaefer

This essay posits that now is a particularly propitious time for the development and use of data visualization as a means for communicating abstract baseline information about society's complex institutions, organizations, and social structures. It reviews recent developments in off-the-shelf visualization software and describes supporting literature and tutorials. Finally, it presents some of the ethical dilemmas and constraints confronting visualization producers.


Sensors ◽  
2021 ◽  
Vol 21 (1) ◽  
pp. 231
Author(s):  
Weiheng Jiang ◽  
Xiaogang Wu ◽  
Yimou Wang ◽  
Bolin Chen ◽  
Wenjiang Feng ◽  
...  

Blind modulation classification is an important step in implementing cognitive radio networks. The multiple-input multiple-output (MIMO) technique is widely used in military and civil communication systems. Due to the lack of prior information about channel parameters and the overlapping of signals in MIMO systems, the traditional likelihood-based and feature-based approaches cannot be applied directly in these scenarios. Hence, in this paper, to resolve the problem of blind modulation classification in MIMO systems, a time–frequency analysis method based on the windowed short-time Fourier transform is used to analyze the time–frequency characteristics of the time-domain modulated signals. The extracted time–frequency characteristics are then converted into red–green–blue (RGB) spectrogram images, and a convolutional neural network based on transfer learning is applied to classify the modulation types according to the RGB spectrogram images. Finally, a decision fusion module fuses the classification results of all the receiving antennas. Through simulations, we analyzed the classification performance at different signal-to-noise ratios (SNRs); the results indicate that, for the single-input single-output (SISO) network, our proposed scheme achieves 92.37% and 99.12% average classification accuracy at SNRs of −4 and 10 dB, respectively. For the MIMO network, our scheme achieves 80.42% and 87.92% average classification accuracy at −4 and 10 dB, respectively. The proposed method greatly improves the accuracy of modulation classification in MIMO networks.
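A minimal sketch of the per-antenna pipeline described above, under some assumptions: a windowed STFT spectrogram converted to an RGB image, an ImageNet-pretrained ResNet50 backbone for transfer learning, and soft fusion of the per-antenna softmax outputs. The window length, image size, backbone, and number of modulation classes are illustrative choices, not the authors' exact configuration.

```python
# Sketch: STFT spectrogram -> RGB image -> pretrained CNN -> decision fusion.
import numpy as np
from scipy.signal import stft
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 6          # candidate modulation types (assumed)
IMG_SIZE = (224, 224)

def signal_to_rgb_spectrogram(x, fs=1.0, nperseg=64):
    """Windowed STFT of one antenna's signal, normalized and mapped to RGB."""
    _, _, Z = stft(x, fs=fs, nperseg=nperseg, return_onesided=False)
    mag = np.abs(Z)
    mag = (mag - mag.min()) / (mag.max() - mag.min() + 1e-12)   # scale to [0, 1]
    img = tf.image.resize(mag[..., None], IMG_SIZE)             # H x W x 1
    return tf.image.grayscale_to_rgb(img).numpy()               # H x W x 3

def build_transfer_classifier():
    """ImageNet-pretrained backbone with a new classification head."""
    base = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                          input_shape=(*IMG_SIZE, 3), pooling="avg")
    base.trainable = False                                      # train the head only
    return models.Sequential([base, layers.Dense(NUM_CLASSES, activation="softmax")])

def fuse_decisions(per_antenna_probs):
    """Average the softmax outputs of all receive antennas and pick the winner."""
    return int(np.argmax(np.mean(per_antenna_probs, axis=0)))
```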


2021 ◽  
Vol 11 (10) ◽  
pp. 4614
Author(s):  
Xiaofei Chao ◽  
Xiao Hu ◽  
Jingze Feng ◽  
Zhao Zhang ◽  
Meili Wang ◽  
...  

The fast and accurate identification of apple leaf diseases is beneficial for disease control and management in apple orchards. In this paper, we designed an improved network for apple leaf disease classification and a lightweight model for use on mobile terminals. First, we proposed the SE-DEEP block to fuse the Squeeze-and-Excitation (SE) module with the Xception network, yielding the SE_Xception network, in which the SE module is inserted between the depth-wise convolution and the point-wise convolution of the depth-wise separable convolution layer. The feature channels from the lower layers can therefore be weighted directly, making the model more sensitive to the principal features of the classification task. Second, we designed a lightweight network, named SE_miniXception, by reducing the depth and width of SE_Xception. Experimental results show that the average classification accuracy of SE_Xception is 99.40%, which is 1.99% higher than that of Xception. The average classification accuracy of SE_miniXception is 97.01%, which is 1.60% and 1.22% higher than those of MobileNetV1 and ShuffleNet, respectively, while its number of parameters is smaller than those of MobileNet and ShuffleNet. The minimized network decreases memory usage and FLOPs, and accelerates the recognition speed from 15 to 7 milliseconds per image. Our proposed SE-DEEP block offers a way to improve network accuracy, and our network compression scheme provides ideas for making existing networks lightweight.
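A minimal sketch of the SE-DEEP idea, assuming a Keras-style separable block with a Squeeze-and-Excitation step inserted between the depth-wise and point-wise convolutions. The kernel size, filter counts, and reduction ratio are assumptions, not the exact SE_Xception configuration.

```python
# Sketch: depth-wise conv -> SE channel reweighting -> point-wise (1x1) conv.
import tensorflow as tf
from tensorflow.keras import layers

def se_module(x, reduction=16):
    """Squeeze-and-Excitation: learn per-channel weights and rescale the feature maps."""
    channels = x.shape[-1]
    s = layers.GlobalAveragePooling2D()(x)                      # squeeze
    s = layers.Dense(channels // reduction, activation="relu")(s)
    s = layers.Dense(channels, activation="sigmoid")(s)         # excitation
    s = layers.Reshape((1, 1, channels))(s)
    return layers.Multiply()([x, s])

def se_deep_block(x, filters):
    """Depth-wise separable convolution with the SE step between DW and PW."""
    x = layers.DepthwiseConv2D(3, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.Activation("relu")(x)
    x = se_module(x)                                            # SE between DW and PW
    x = layers.Conv2D(filters, 1, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    return layers.Activation("relu")(x)

if __name__ == "__main__":
    inp = layers.Input(shape=(224, 224, 32))
    tf.keras.Model(inp, se_deep_block(inp, filters=64)).summary()
```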


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Huiping Jiang ◽  
Demeng Wu ◽  
Rui Jiao ◽  
Zongnan Wang

Electroencephalography (EEG) is the measurement of neuronal activity in different areas of the brain through the use of electrodes. As EEG signal technology has matured over the years, it has been applied to EEG emotion recognition in various ways, most notably using convolutional neural networks (CNNs). However, these methods are still not ideal, and shortcomings have been found in the results of some EEG feature extraction and classification models. In this study, two CNN models were selected for the extraction and classification of preprocessed data: common spatial patterns (CSP)-CNN and wavelet transform (WT)-CNN. With CSP-CNN, we first used the common spatial pattern model to reduce dimensionality and then applied the CNN directly to extract and classify the EEG features; with WT-CNN, we used the wavelet transform to extract EEG features and thereafter applied the CNN for classification. The EEG classification results of these two models were then analyzed and compared: the average classification accuracy of the CSP-CNN model was 80.56%, and the average classification accuracy of the WT-CNN model was 86.90%. The findings of this study therefore show that the average classification accuracy of the WT-CNN model was 6.34% higher than that of the CSP-CNN.
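A minimal sketch of the WT-CNN pathway described above, assuming a discrete wavelet decomposition of each EEG channel followed by a small CNN classifier. The 'db4' wavelet, decomposition level, number of classes, and layer sizes are illustrative assumptions, not the authors' exact model.

```python
# Sketch: per-channel wavelet features -> stacked 2D input -> small CNN.
import numpy as np
import pywt
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 2   # e.g. two emotion categories (assumed)

def wavelet_features(eeg, wavelet="db4", level=4):
    """eeg: array of shape (channels, samples); returns concatenated DWT coefficients per channel."""
    feats = [np.concatenate(pywt.wavedec(ch, wavelet, level=level)) for ch in eeg]
    return np.stack(feats)                     # (channels, feature_length)

def build_wt_cnn(channels, feature_length):
    return models.Sequential([
        layers.Input(shape=(channels, feature_length, 1)),
        layers.Conv2D(16, (3, 5), activation="relu", padding="same"),
        layers.MaxPooling2D((1, 4)),
        layers.Conv2D(32, (3, 5), activation="relu", padding="same"),
        layers.MaxPooling2D((1, 4)),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
```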


Author(s):  
Cara Murphy ◽  
John Kerekes

The classification of trace chemical residues through active spectroscopic sensing is challenging due to the lack of physics-based models that can accurately predict spectra. To overcome this challenge, we leveraged the field of domain adaptation to translate data from the simulated to the measured domain for training a classifier. We developed the first 1D conditional generative adversarial network (GAN) to perform spectrum-to-spectrum translation of reflectance signatures. We applied the 1D conditional GAN to a library of simulated spectra and quantified the improvement in classification accuracy on real data using the translated spectra for training the classifier. Using the GAN-translated library, the average classification accuracy increased from 0.622 to 0.723 on real chemical reflectance data, including data from chemicals not included in the GAN training set.
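A minimal sketch of a 1D conditional GAN for spectrum-to-spectrum translation, with a generator that maps a simulated reflectance spectrum to a translated one and a discriminator conditioned on the simulated input. The spectrum length, layer widths, and kernel sizes are assumptions, not the authors' architecture or training procedure.

```python
# Sketch: 1D conditional GAN (generator + conditioned discriminator) for
# translating simulated reflectance spectra toward the measured domain.
import tensorflow as tf
from tensorflow.keras import layers, models

SPECTRUM_LEN = 256   # number of spectral bands (assumed)

def build_generator():
    """Maps a simulated spectrum to a translated spectrum."""
    inp = layers.Input(shape=(SPECTRUM_LEN, 1))
    x = layers.Conv1D(32, 9, padding="same", activation="relu")(inp)
    x = layers.Conv1D(64, 9, padding="same", activation="relu")(x)
    x = layers.Conv1D(32, 9, padding="same", activation="relu")(x)
    out = layers.Conv1D(1, 9, padding="same")(x)                 # linear output
    return models.Model(inp, out, name="generator")

def build_discriminator():
    """Scores a (simulated, candidate) spectrum pair: measured-like vs. generated."""
    sim = layers.Input(shape=(SPECTRUM_LEN, 1))
    cand = layers.Input(shape=(SPECTRUM_LEN, 1))
    x = layers.Concatenate()([sim, cand])                        # condition on the input
    x = layers.Conv1D(32, 9, strides=2, activation="relu")(x)
    x = layers.Conv1D(64, 9, strides=2, activation="relu")(x)
    x = layers.GlobalAveragePooling1D()(x)
    out = layers.Dense(1, activation="sigmoid")(x)
    return models.Model([sim, cand], out, name="discriminator")
```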


2019 ◽  
Author(s):  
Jean-Philippe Corbeil ◽  
Florent Daudens ◽  
Thomas Hurtut

This visual case study was conducted by Le Devoir, a Canadian French-language independent daily newspaper gathering around 50 journalists and one million readers every week. During the past twelve months, in collaboration with Polytechnique Montreal, we investigated a scrollytelling format relying strongly on combined series of data visualizations. This visual case study presents one of the news stories we published, which communicates electoral results the day after the last Quebec general election. It gathers the lessons that we learnt from this experience, the challenges that we tackled, and the perspectives for the future. Beyond the specific electoral context of this work, these conclusions might be useful for any practitioner willing to communicate data-visualization-based stories using a scrollytelling narrative format.

