Successive Refinement
Recently Published Documents


TOTAL DOCUMENTS: 133 (five years: 20)
H-INDEX: 18 (five years: 2)

2021 · Vol 14 (28) · pp. 67-78
Author(s): Alexandra Presser, Gilson Braviano, Eduardo Côrte-Real

There is a noticeable gap in academic studies between comic books and hypermedia. On the one hand, there are several publications on both printed and digital comic books; on the other, there are publications on media and technologies for content usability on small-screen devices. This study therefore focuses on the development of comic books for reading on small-screen devices. A parameter guide for so-called Webtoons was developed, based on a theoretical foundation, observation of webcomics of this style on content platforms, and three phases of qualitative field research. The research included interviews with comic artists and comic book professionals and, seeking successive refinement, the presentation of the guide to students as educational material.


2021 · pp. 000370282098784
Author(s): James Renwick Beattie, Francis Esmonde-White

Spectroscopy rapidly captures a large amount of data that is not directly interpretable. Principal Components Analysis (PCA) is widely used to simplify complex spectral datasets into comprehensible information by identifying recurring patterns in the data with minimal loss of information. The linear algebra underpinning PCA is not well understood by many applied analytical scientists and spectroscopists who use it, and the meaning of features identified through PCA is often unclear. This manuscript traces the journey of the spectra themselves through the operations behind PCA, with each step illustrated by simulated spectra. PCA relies solely on the information within the spectra; consequently, the mathematical model depends on the nature of the data itself. The direct links between model and spectra allow a concrete spectroscopic explanation of PCA, such as the scores representing ‘concentration’ or ‘weights’. The principal components (loadings) are by definition hidden, repeated, and uncorrelated spectral shapes that linearly combine to generate the observed spectra. They can be visualized as subtraction spectra between extreme differences within the dataset. Each PC is shown to be a successive refinement of the estimated spectra, improving the fit between the PC-reconstructed data and the original data. Understanding the data-led development of a PCA model shows how to interpret the application-specific chemical meaning of the PCA loadings and how to analyze scores. A critical benefit of PCA is its simplicity and the succinctness of its description of a dataset, making it powerful and flexible.
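The successive-refinement view of PCA described in this abstract is easy to demonstrate numerically. The sketch below is an editorial illustration, not code from the paper: it builds simulated mixture spectra from two hypothetical Gaussian pure components (all peak positions, widths, and noise levels are assumptions), performs PCA via the singular value decomposition, and reconstructs the data with an increasing number of components.

# Minimal sketch (not from the paper): PCA of simulated spectra via SVD,
# showing that each added principal component successively refines the
# reconstruction of the original data.
import numpy as np

rng = np.random.default_rng(0)
wavenumbers = np.linspace(400, 1800, 500)

def gaussian(center, width):
    return np.exp(-0.5 * ((wavenumbers - center) / width) ** 2)

# Two hidden "pure component" spectra, mixed in varying proportions
pure = np.stack([gaussian(800, 30) + 0.5 * gaussian(1450, 40),
                 gaussian(1100, 25) + 0.8 * gaussian(1600, 35)])
concentrations = rng.uniform(0, 1, size=(40, 2))          # 40 samples
spectra = concentrations @ pure + rng.normal(0, 0.01, (40, 500))

# PCA: mean-center, then SVD; rows of Vt are loadings, U*S are scores
mean = spectra.mean(axis=0)
U, S, Vt = np.linalg.svd(spectra - mean, full_matrices=False)
scores = U * S

# Successive refinement: reconstruct with k = 1, 2, 3, ... components
for k in range(1, 5):
    recon = mean + scores[:, :k] @ Vt[:k]
    rmse = np.sqrt(np.mean((spectra - recon) ** 2))
    print(f"{k} PC(s): reconstruction RMSE = {rmse:.5f}")

With two mixed components, the first PC captures most of the variance and each further PC only refines the residual: the reconstruction error drops sharply until the number of retained PCs matches the number of underlying components, then flattens at the noise floor, mirroring the paper's description of PCs as successive refinements of the estimated spectra.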


2020 · Vol 68 (12) · pp. 7927-7937
Author(s): Meng Cheng, Wensheng Lin, Tad Matsumoto

Entropy · 2020 · Vol 22 (10) · pp. 1179
Author(s): Nicola Catenacci Volpi, Daniel Polani

Goal-seeking by agents with any level of competency requires an “understanding” of the structure of their world. While abstract formal descriptions of a world's structure in terms of geometric axioms can be formulated in principle, it is unlikely that this is the representation actually employed by biological organisms, or that it should be used by biologically plausible models. Instead, we operate under the assumption that biological organisms are constrained in their information-processing capacities, an assumption that has in the past led to a number of insightful hypotheses and models of biologically plausible behaviour generation. Here we use this approach to study various types of spatial categorizations that emerge through such informational constraints imposed on embodied agents. We show that geometrically rich spatial representations emerge when agents employ a trade-off between the minimisation of the Shannon information used to describe locations within the environment and the reduction of the location error generated by the resulting approximate spatial description. In addition, agents do not always need to construct these representations from the ground up: they can obtain them by refining less precise spatial descriptions constructed previously. Importantly, we find that these can be optimal at both steps of refinement, as guaranteed by the successive refinement principle from information theory. Finally, clusters induced by these spatial representations via the information bottleneck method reflect the environment’s topology without relying on an explicit geometric description of the environment’s structure. Our findings suggest that the fundamental geometric notions possessed by natural agents need not be part of their a priori knowledge but could emerge as a byproduct of the pressure to process information parsimoniously.
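The rate/distortion trade-off and the two-stage refinement described in this abstract can be illustrated with a small numerical sketch. This is an editorial illustration under assumed parameters (uniform locations in a unit square, Lloyd-style quantizers, four cells per stage), not the authors' method: stage 1 describes an agent's location coarsely, and stage 2 spends extra bits refining each coarse cell, reducing the location error in the spirit of the successive refinement principle.

# Minimal sketch (editorial illustration, not the paper's code):
# two-stage successive refinement of a spatial description. Rate is the
# Shannon entropy of the cell labels, distortion the mean squared
# location error.
import numpy as np

rng = np.random.default_rng(1)
locations = rng.uniform(0, 1, size=(5000, 2))   # agent positions in a unit room

def lloyd(points, k, iters=50):
    """Plain Lloyd/k-means quantizer: returns centroids and labels."""
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((points[:, None] - centroids) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return centroids, labels

def rate_bits(labels):
    """Shannon entropy (bits) of the cell labels, i.e. description cost."""
    p = np.bincount(labels) / len(labels)
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

# Stage 1: coarse description of space
c1, lab1 = lloyd(locations, k=4)
d1 = ((locations - c1[lab1]) ** 2).sum(-1).mean()
print(f"coarse:  rate = {rate_bits(lab1):.2f} bits, distortion = {d1:.4f}")

# Stage 2: refine each coarse cell independently (extra bits, less error)
refined = np.empty_like(locations)
extra_bits = 0.0
for j in np.unique(lab1):
    pts = locations[lab1 == j]
    c2, lab2 = lloyd(pts, k=4)
    refined[lab1 == j] = c2[lab2]
    extra_bits += (len(pts) / len(locations)) * rate_bits(lab2)
d2 = ((locations - refined) ** 2).sum(-1).mean()
print(f"refined: rate = {rate_bits(lab1) + extra_bits:.2f} bits, distortion = {d2:.4f}")

Each stage is a reasonable quantizer on its own, and the refinement stage reuses the coarse labels rather than starting over; this is the sense in which the spatial description is successively refined, paying additional bits only for additional precision.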

