The time dimension: the input phase

2009 ◽  
pp. 121-128
Author(s):  
Graham Thornicroft ◽  
Michele Tansella


Author(s):  
S. Chef ◽  
C. T. Chua ◽  
C. L. Gan

Limited spatial resolution and a low signal-to-noise ratio are among the main challenges in optical signal observation, especially for photon emission microscopy. As dynamic emission signals are generated in a 3D space, using the time dimension in addition to space enables better localization of switching events: it can be used to infer information with a precision beyond the resolution limits of the acquired signals. Taking advantage of this property, we report a post-acquisition processing scheme that generates emission images with better resolution than the initial acquisition.
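The core idea of the abstract, using event timing to localize emission beyond the spatial resolution limit, can be illustrated with a minimal synthetic sketch (assumed function names and an idealized Gaussian spot; this is not the authors' actual pipeline). Frames are split at a known switching instant, and the emitter active in each temporal phase is localized separately by intensity-weighted centroiding, which yields sub-pixel positions even though the spots themselves are diffraction-blurred:

```python
import numpy as np

def gaussian_spot(shape, center, sigma=2.0):
    """Synthetic diffraction-limited emission spot (stand-in for a PSF)."""
    ys, xs = np.indices(shape)
    return np.exp(-((ys - center[0])**2 + (xs - center[1])**2) / (2 * sigma**2))

def centroid(image):
    """Intensity-weighted centroid: sub-pixel precision despite coarse pixels."""
    ys, xs = np.indices(image.shape)
    total = image.sum()
    return ((ys * image).sum() / total, (xs * image).sum() / total)

def localize_switching_events(stack, t_switch):
    """Split a time-resolved stack (T, H, W) at a switching instant and
    localize the emitter active in each temporal phase separately."""
    early = stack[:t_switch].mean(axis=0)
    late = stack[t_switch:].mean(axis=0)
    return centroid(early), centroid(late)
```

Two emitters that would overlap in a time-integrated image are separated cleanly once the time dimension is used to split the acquisition before localizing.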


2021 ◽  
Vol 11 (14) ◽  
pp. 6625
Author(s):  
Yan Su ◽  
Kailiang Weng ◽  
Chuan Lin ◽  
Zeqin Chen

An accurate dam deformation prediction model is vital to a dam safety monitoring system, as it helps assess and manage dam risks. Most traditional dam deformation prediction algorithms ignore the interpretation and evaluation of variables and lack qualitative measures. This paper proposes a data processing framework that uses a long short-term memory (LSTM) model coupled with an attention mechanism to predict the deformation response of a dam structure. First, a random forest (RF) model is introduced to assess the relative importance of impact factors and screen the input variables. Second, the density-based spatial clustering of applications with noise (DBSCAN) method is used to identify and filter equipment-based abnormal values, reducing random error in the measurements. Finally, the coupled model is used to focus on important factors in the time dimension in order to obtain more accurate nonlinear prediction results. The results of the case study show that, of all tested methods, the proposed coupled method performed best. In addition, it was found that temperature and water level both have significant impacts on dam deformation and can serve as reliable metrics for dam management.
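The DBSCAN screening step can be sketched with synthetic data (a generic illustration, not the paper's monitoring series; `eps` and `min_samples` are assumed values). Gross sensor faults appear as isolated points in the standardized (time, value) plane, and DBSCAN's noise criterion, a point that is neither a core point nor within `eps` of one, flags exactly those isolated points:

```python
import numpy as np

def dbscan_noise(points, eps=0.5, min_samples=5):
    """Boolean mask of DBSCAN 'noise' points: points that are neither
    core points nor within eps of any core point."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    neighbours = d <= eps                        # includes the point itself
    core = neighbours.sum(axis=1) >= min_samples
    reachable = (neighbours & core[None, :]).any(axis=1)
    return ~reachable

def filter_measurements(t, y, eps=0.5, min_samples=5):
    """Standardize (time, value) pairs, then drop DBSCAN noise points."""
    pts = np.column_stack([t, y])
    pts = (pts - pts.mean(axis=0)) / pts.std(axis=0)
    noise = dbscan_noise(pts, eps, min_samples)
    return t[~noise], y[~noise], np.where(noise)[0]
```

On a smooth deformation-like series with a few injected spikes, only the spikes are removed; the dense, slowly varying measurements all remain core points and are kept.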


Entropy ◽  
2021 ◽  
Vol 23 (5) ◽  
pp. 626
Author(s):  
Ramya Gupta ◽  
Abhishek Prasad ◽  
Suresh Babu ◽  
Gitanjali Yadav

A global event such as the COVID-19 crisis prompts new, often unexpected responses that are fascinating to investigate from both scientific and social standpoints. Despite several documented similarities, the coronavirus pandemic is clearly distinct from the 1918 flu pandemic in terms of our exponentially increased, almost instantaneous ability to access and share information, offering an unprecedented opportunity to visualise the rippling effects of global events across space and time. Personal devices provide “big data” on people’s movement, the environment and economic trends, while access to the unprecedented flurry of scientific publications and media posts provides a measure of the response of the educated world to the crisis. Most bibliometric (co-authorship, co-citation, or bibliographic coupling) analyses ignore the time dimension, but COVID-19 has made it possible to perform a detailed temporal investigation into the pandemic. Here, we report a comprehensive network analysis based on more than 20,000 published documents on viral epidemics, authored by over 75,000 individuals from 140 nations in the past year of the crisis. Unlike for the 1918 flu pandemic, access to published data over the past two decades enabled a comparison of publishing trends between the ongoing COVID-19 pandemic and the 2003 SARS epidemic, to study changes in thematic foci and societal pressures dictating research over the course of a crisis.
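The temporal-network idea, slicing co-authorship by time rather than aggregating it away, can be sketched with a toy example (synthetic records, not the study's 20,000-document corpus). Each time slice gets its own edge-weight table, so collaboration patterns can be compared across slices:

```python
from collections import Counter, defaultdict
from itertools import combinations

def coauthorship_by_year(papers):
    """papers: iterable of (year, author_list). Returns {year: Counter}
    mapping unordered author pairs to the number of co-authored papers,
    i.e. one weighted co-authorship network per time slice."""
    nets = defaultdict(Counter)
    for year, authors in papers:
        for pair in combinations(sorted(set(authors)), 2):
            nets[year][pair] += 1
    return dict(nets)
```

A standard aggregate bibliometric analysis would merge all years into one Counter; keeping the slices separate is what enables the before/after comparisons (e.g. SARS 2003 vs. COVID-19) described in the abstract.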


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Ruichao Zhu ◽  
Tianshuo Qiu ◽  
Jiafu Wang ◽  
Sai Sui ◽  
Chenglong Hao ◽  
...  

Metasurfaces have provided unprecedented freedom for manipulating electromagnetic waves. In metasurface design, massive numbers of meta-atoms have to be optimized to produce the desired phase profiles, which is time-consuming and sometimes prohibitive. In this paper, we propose a fast and accurate inverse method for designing functional metasurfaces based on transfer learning, which can generate metasurface patterns monolithically from input phase profiles for specific functions. A transfer learning network based on GoogLeNet-Inception-V3 can predict the phases of 28×8 meta-atoms with an accuracy of around 90%. This method is validated via functional metasurface design using the trained network. Metasurface patterns are generated monolithically for two typical functions, 2D focusing and anomalous reflection. Both simulation and experiment verify the high design accuracy. This method provides an inverse design paradigm for fast functional metasurface design and can readily be used to establish a meta-atom library with full phase span.
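The design task the network automates, mapping a target phase profile onto discrete meta-atoms, can be illustrated without any learning. Below is a minimal numpy sketch (the wavelength, pitch, focal length, and idealized 8-state phase library are all assumptions for illustration) that computes a 2D focusing profile over a 28×8 grid and quantizes each cell to the nearest available library phase; the paper's contribution is replacing this per-cell lookup-plus-optimization with a transfer-learned predictor:

```python
import numpy as np

def focusing_phase(nx, ny, pitch, wavelength, focal):
    """Hyperbolic phase profile for a lens focusing at distance `focal`."""
    x = (np.arange(nx) - (nx - 1) / 2) * pitch
    y = (np.arange(ny) - (ny - 1) / 2) * pitch
    X, Y = np.meshgrid(x, y, indexing="ij")
    phi = 2 * np.pi / wavelength * (np.sqrt(X**2 + Y**2 + focal**2) - focal)
    return np.mod(phi, 2 * np.pi)

def quantize_to_library(phi, n_states=8):
    """Map each cell to the nearest of n_states equally spaced meta-atom
    phases; returns the library index and the quantized phase."""
    step = 2 * np.pi / n_states
    idx = np.round(phi / step).astype(int) % n_states
    return idx, idx * step
```

With an 8-state library the worst-case quantization error is π/8 per cell, which is why a meta-atom library with full phase span matters for design accuracy.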


1976 ◽  
Vol 123 (10) ◽  
pp. 1017
Author(s):  
U.S. Hazra ◽  
S.K. Basu ◽  
S. Chowdhuri

Symmetry ◽  
2021 ◽  
Vol 13 (6) ◽  
pp. 956
Author(s):  
Dafne Carolina Arias-Perdomo ◽  
Adriano Cherchiglia ◽  
Brigitte Hiller ◽  
Marcos Sampaio

Quantum Field Theory, as the keystone of particle physics, has offered great insights into deciphering the core of Nature. Despite its striking success, by adhering to local interactions Quantum Field Theory suffers from the appearance of divergent quantities at intermediary steps of the calculation, which entails the need for some regularization/renormalization prescription. As an alternative to traditional methods based on the analytic extension of the space–time dimension, frameworks that stay in the physical dimension have emerged; Implicit Regularization is one of them. We briefly review the method, aiming to illustrate how Implicit Regularization complies with the BPHZ theorem, which implies that it respects unitarity and locality to arbitrary loop order. We also pedagogically discuss how the method complies with gauge symmetry, using one- and two-loop examples in QED and QCD.
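The separation at the heart of such physical-dimension schemes can be recalled via the standard propagator identity used, for instance, in Implicit Regularization to isolate the divergent, external-momentum-independent piece of a loop integrand (this is textbook algebra, not the review's full prescription):

```latex
\frac{1}{(k+p)^2 - m^2}
  = \frac{1}{k^2 - m^2}
  - \frac{p^2 + 2\,p\cdot k}{\left(k^2 - m^2\right)\left[(k+p)^2 - m^2\right]}
```

Iterating this identity inside a loop integral pushes all dependence on the external momentum $p$ into finite terms, while the divergences are collected in basic integrals of $1/(k^2 - m^2)^n$ that never require an analytic continuation of the space–time dimension.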


2019 ◽  
Vol 53 (1) ◽  
pp. 38-39
Author(s):  
Anjie Fang

Recently, political events such as elections have generated extensive discussion on social media networks, in particular Twitter. This brings new opportunities for social scientists to address social science tasks, such as understanding what communities said or identifying whether one community influences another. However, identifying these communities and extracting what they said from social media data are challenging, non-trivial tasks. We aim to make progress towards understanding 'who' (i.e. communities) said 'what' (i.e. discussed topics) and 'when' (i.e. time) during political events on Twitter. While identifying the 'who' can benefit from Twitter user community classification approaches, 'what' they said and 'when' can be effectively addressed by extracting their discussed topics using topic modelling approaches that also account for the importance of time on Twitter. To evaluate the quality of these topics, it is necessary to investigate how coherent they are to humans. Accordingly, we propose a series of approaches in this thesis. First, we investigate how to effectively evaluate the coherence of the topics generated by a topic modelling approach. A topic coherence metric evaluates topical coherence by examining the semantic similarity among the words in a topic. We argue that the semantic similarity of words in tweets can be effectively captured by word embeddings trained on a Twitter background dataset. Through a user study, we demonstrate that our proposed word embedding-based topic coherence metric can assess the coherence of topics as humans do [1, 2]. In addition, inspired by the precision-at-k metric, we propose to evaluate the coherence of a topic model (containing many topics) by averaging the coherence of its top-ranked topics [3]. Our proposed metrics not only evaluate the coherence of topics and topic models, but also help users choose the most coherent topics. 
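The embedding-based coherence idea can be sketched in a few lines (toy 2-D embeddings, not the thesis's Twitter-trained vectors; the averaging scheme shown is the common pairwise-cosine formulation): a topic's coherence is the mean cosine similarity over all pairs of its top words.

```python
import numpy as np
from itertools import combinations

def topic_coherence(top_words, embeddings):
    """Mean pairwise cosine similarity of the top words' embeddings;
    words missing from the embedding vocabulary are skipped."""
    vecs = [np.asarray(embeddings[w], float) for w in top_words if w in embeddings]
    sims = [np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
            for a, b in combinations(vecs, 2)]
    return float(np.mean(sims))
```

A model-level score, in the spirit of the thesis's precision-at-k-inspired metric, would then average this quantity over the k top-ranked topics of the model.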
Second, we aim to extract topics with high coherence from Twitter data. Such topics can be easily interpreted by humans and can assist in examining 'what' has been discussed and 'when'. Indeed, we argue that topics are discussed in different time periods (see [4]) and can therefore be effectively identified and distinguished by considering those time periods. Hence, we propose an effective time-sensitive topic modelling approach that integrates the time dimension of tweets (i.e. 'when') [5]. We show that the time dimension helps to generate topics with high coherence, and thus argue that 'what' has been discussed and 'when' can be effectively addressed by our proposed time-sensitive topic modelling approach. Next, to identify 'who' participated in the topic discussions, we propose approaches to identify the community affiliations of Twitter users, including automatic ground-truth generation approaches and a user community classification approach. We show that the hashtags and entities mentioned in users' tweets can indicate which community a Twitter user belongs to, and hence argue that they can be used to generate ground-truth data for classifying users into communities. On the other hand, we argue that different communities favour different topic discussions and that their community affiliations can be identified by leveraging the discussed topics. Accordingly, we propose a Topic-Based Naive Bayes (TBNB) classification approach to classify Twitter users based on their words and discussed topics [6]. We demonstrate that our TBNB classifier, together with the ground-truth generation approaches, can effectively identify the community affiliations of Twitter users. Finally, to show the generalisation of our approaches, we apply them to analyse 3.6 million tweets related to the US Election 2016 on Twitter [7]. We show that our TBNB approach can effectively identify the 'who', i.e. classify Twitter users into communities. 
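The classification step can be illustrated with a generic multinomial Naive Bayes over mixed word-and-topic features (a from-scratch sketch with toy data; the thesis's TBNB has its own formulation for combining words and topics, which this does not reproduce). Each user is a bag of features, and the community with the highest smoothed log-likelihood wins:

```python
import math
from collections import Counter, defaultdict

class MultinomialNB:
    """Plain multinomial Naive Bayes with Laplace smoothing."""

    def fit(self, docs, labels):
        self.counts = defaultdict(Counter)   # label -> feature counts
        self.priors = Counter(labels)
        self.vocab = set()
        for feats, y in zip(docs, labels):
            self.counts[y].update(feats)
            self.vocab.update(feats)
        return self

    def predict(self, feats):
        n = sum(self.priors.values())
        best, best_lp = None, -math.inf
        for y, prior in self.priors.items():
            total = sum(self.counts[y].values())
            lp = math.log(prior / n)
            for f in feats:                  # Laplace-smoothed likelihood
                lp += math.log((self.counts[y][f] + 1) /
                               (total + len(self.vocab)))
            if lp > best_lp:
                best, best_lp = y, lp
        return best
```

Treating topic assignments (here mocked as `topic_*` tokens) as extra features alongside words is the intuition behind topic-based community classification: communities that favour different topic discussions separate cleanly.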
To investigate 'what' these communities discussed, we apply our time-sensitive topic modelling approach to extract coherent topics. We finally analyse the community-related topics evaluated and selected using our proposed topic coherence metrics. Overall, we contribute effective approaches that assist social scientists in analysing political events on Twitter: topic coherence metrics, a time-sensitive topic modelling approach, and approaches for classifying the community affiliations of Twitter users. Together, they make progress towards studying and understanding the connections and dynamics among communities on Twitter. Supervisors: Iadh Ounis, Craig Macdonald, Philip Habel. The thesis is available at http://theses.gla.ac.uk/41135/


2020 ◽  
pp. 107754632095495
Author(s):  
Bing Wang ◽  
Xiong Hu ◽  
Tao X Mei ◽  
Sun D Jian ◽  
Wang Wei

To address the problems of rolling bearing degradation feature extraction and degradation condition clustering, a logistic chaotic map is introduced to analyze the advantages of C0 complexity, and a technique based on a multidimensional degradation feature and the Gath–Geva fuzzy clustering algorithm is proposed. The multidimensional degradation feature includes C0 complexity, root mean square, and a curved time parameter, which is more in line with the performance degradation process. Gath–Geva fuzzy clustering is introduced to divide the degradation process into different conditions. A rolling bearing lifetime vibration signal from the intelligent maintenance system bearing test center is used as a case study. The results show that C0 complexity is able to describe the degradation process and has advantages in sensitivity and calculation speed. The introduced degradation indicator, the curved time parameter, reflects the agglomeration character of the degradation condition in the time dimension, which is more in line with the performance degradation pattern of mechanical equipment. The Gath–Geva fuzzy clustering algorithm is able to accurately cluster the degradation conditions of mechanical equipment such as bearings.
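The C0 complexity named above has a common FFT-based formulation, sketched here with an assumed threshold factor `r` (this may not be the exact variant used in the paper): spectral components stronger than `r` times the mean power are treated as the regular part of the signal, and C0 is the energy fraction left in the irregular residual.

```python
import numpy as np

def c0_complexity(x, r=1.0):
    """C0 complexity: energy share of the 'irregular' residual after
    removing spectral components stronger than r times the mean power."""
    X = np.fft.fft(x)
    power = np.abs(X) ** 2
    regular = np.where(power > r * power.mean(), X, 0.0)
    residual = x - np.fft.ifft(regular).real
    return float(np.sum(residual ** 2) / np.sum(np.asarray(x, float) ** 2))
```

A clean periodic vibration baseline scores near 0, while broadband (noisier, degraded) signals score higher, which is what makes C0 usable as a degradation indicator alongside root mean square.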

