Stochastic Modeling of Stratospheric Temperature

Author(s):  
Mari Dahl Eggen ◽  
Kristina Rognlien Dahl ◽  
Sven Peter Näsholm ◽  
Steffen Mæland

Abstract This study suggests a stochastic model for time series of daily zonal (circumpolar) mean stratospheric temperature at a given pressure level. It can be seen as an extension of previous studies that have developed stochastic models for surface temperatures. The proposed model is a combination of a deterministic seasonality function and a Lévy-driven multidimensional Ornstein–Uhlenbeck process, which is a mean-reverting stochastic process. More specifically, the deseasonalized temperature model is an order 4 continuous-time autoregressive model, meaning that the stratospheric temperature is modeled to depend directly on the temperature over the four preceding days, while the model's longer-range memory stems from its recursive nature. This study is based on temperature data from the European Centre for Medium-Range Weather Forecasts ERA-Interim reanalysis product. The residuals of the autoregressive model are well represented by normal inverse Gaussian-distributed random variables scaled with a time-dependent volatility function. A monthly variability in the speed of mean reversion of stratospheric temperature is found, suggesting a generalization of the fourth-order continuous-time autoregressive model. A stochastic stratospheric temperature model, as proposed in this paper, can be used in geophysical analyses to improve the understanding of stratospheric dynamics. In particular, such characterizations of stratospheric temperature may be a step towards greater insight into the modeling and prediction of large-scale middle-atmospheric events, such as sudden stratospheric warmings. Through stratosphere–troposphere coupling, the stratosphere is thus a source of extended tropospheric predictability at weekly to monthly timescales, which is of great importance in several societal and industry sectors.
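In the state-space notation commonly used for continuous-time autoregressive (CAR) processes, the described model can be sketched as follows; the symbols Λ, σ, α₁…α₄ and the driving Lévy process L are generic placeholders for illustration, not the paper's fitted quantities:

```latex
% Stratospheric temperature = seasonality + CAR(4) component (sketch)
T(t) = \Lambda(t) + \mathbf{e}_1^{\top}\mathbf{X}(t), \qquad
\mathrm{d}\mathbf{X}(t) = A\,\mathbf{X}(t)\,\mathrm{d}t + \mathbf{e}_4\,\sigma(t)\,\mathrm{d}L(t),
\quad
A =
\begin{pmatrix}
0 & 1 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1\\
-\alpha_4 & -\alpha_3 & -\alpha_2 & -\alpha_1
\end{pmatrix},
```

where Λ(t) is the deterministic seasonality function, σ(t) a time-dependent volatility, L a normal inverse Gaussian Lévy process, and e₁, e₄ the first and fourth unit vectors in R⁴.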

2021 ◽  
Author(s):  
Mari Eggen ◽  
Kristina Rognlien Dahl ◽  
Sven Peter Näsholm ◽  
Steffen Mæland

A stochastic model for the daily spatial mean stratospheric temperature over a given area is suggested. The model is a sum of a deterministic seasonality function and a Lévy-driven vectorial Ornstein–Uhlenbeck process, which is a mean-reverting stochastic process. More specifically, the model is an order 4 continuous-time autoregressive (CAR(4)) process, derived from data analysis suggesting an order 4 autoregressive (AR(4)) process for the deseasonalized stochastic temperature data. In this analysis, temperature data as represented in ECMWF reanalysis model products are considered. The residuals of the AR(4) process turn out to be normal inverse Gaussian-distributed random variables scaled with a time-dependent volatility function. In general, it is possible to show that the discrete-time AR(p) process is closely related to the CAR(p) process, its continuous-time counterpart. An equivalent effort is made in deriving a dual stochastic model for stratospheric temperature, in the sense that the year is divided into summer and winter seasons. However, this seems to complicate the modelling further, rather than yielding a simplified analytic framework. A stochastic characterization of the stratospheric temperature representation in model products, such as the model proposed in this paper, can be used in geophysical analyses to improve our understanding of stratospheric dynamics. In particular, such characterizations of stratospheric temperature may be a step towards greater insight into the modelling and prediction of large-scale middle-atmospheric events like sudden stratospheric warmings. Through stratosphere–troposphere coupling, this is important in the work towards extended predictability of long-term tropospheric weather forecasting.
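As a rough illustration of the discrete-time side of this construction (fitting an AR(4) to a deseasonalized series and modelling the residuals as normal inverse Gaussian), a minimal Python sketch could look as follows; the synthetic input series, the plain least-squares fit, and the constant-volatility residual fit are assumptions for illustration, not the paper's estimation procedure:

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg
from scipy.stats import norminvgauss

# Hypothetical deseasonalized daily temperature anomalies (placeholder data).
rng = np.random.default_rng(0)
y = rng.standard_normal(3650).cumsum() * 0.01 + rng.standard_normal(3650)

# Fit an AR(4) model: today's anomaly regressed on the four preceding days.
ar4 = AutoReg(y, lags=4, trend="n").fit()
print("AR(4) coefficients:", ar4.params)

# Fit a normal inverse Gaussian distribution to the AR(4) residuals
# (constant volatility assumed here; the paper uses a time-dependent one).
a, b, loc, scale = norminvgauss.fit(ar4.resid)
print("NIG parameters:", a, b, loc, scale)
```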


Author(s):  
A. V. Ponomarev

Introduction: Large-scale human-computer systems involving people of various skills and motivation in information processing are currently used in a wide spectrum of applications. An acute problem in such systems is assessing the expected quality of each contributor, for example, in order to penalize incompetent or inaccurate contributors and to promote diligent ones. Purpose: To develop a method for assessing a contributor's expected quality in community tagging systems. The method should use only the generally unreliable and incomplete information provided by contributors (with ground-truth tags unknown). Results: A mathematical model is proposed for community image tagging (including a model of a contributor), along with a method for assessing a contributor's expected quality. The method is based on comparing the tag sets provided by different contributors for the same images; it is a modification of the pairwise comparison method with the preference relation replaced by a special domination characteristic. Contributors' expected quality is evaluated as a positive eigenvector of the pairwise domination characteristic matrix. Community tagging simulations have confirmed that the proposed method adequately estimates the expected quality of contributors in a community tagging system (provided that the contributors' behavior fits the proposed model). Practical relevance: The obtained results can be used in the development of systems based on coordinated community efforts (primarily, community tagging systems).
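A minimal sketch of the eigenvector step, assuming a toy domination characteristic based on tag-set agreement (the actual characteristic and its normalization are defined in the paper and are not reproduced here):

```python
import numpy as np

def domination(tags_i, tags_j):
    """Toy pairwise domination characteristic: how strongly contributor i's
    tags agree with contributor j's. This particular formula is an
    illustrative assumption, not the paper's definition."""
    common = set(tags_i) & set(tags_j)
    return len(common) / max(len(set(tags_j)), 1)

# Hypothetical tag sets from three contributors for the same image collection.
contributors = [
    ["cat", "animal", "pet"],
    ["cat", "animal"],
    ["dog", "car"],
]

n = len(contributors)
D = np.array([[domination(contributors[i], contributors[j]) if i != j else 0.0
               for j in range(n)] for i in range(n)])

# Expected quality scores: the positive (Perron) eigenvector of the
# domination matrix, i.e. the eigenvector of its largest eigenvalue.
eigvals, eigvecs = np.linalg.eig(D)
quality = np.abs(eigvecs[:, np.argmax(eigvals.real)].real)
quality /= quality.sum()
print(quality)
```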


2020 ◽  
Author(s):  
Anusha Ampavathi ◽  
Vijaya Saradhi T

Big data and its approaches are generally helpful to the healthcare and biomedical sectors for predicting disease. For trivial symptoms, it is difficult to consult a doctor at the hospital at any time; big data therefore provides essential information about diseases on the basis of a patient's symptoms. For many medical organizations, disease prediction is important for making the best feasible health care decisions. Conversely, the conventional medical care model offers structured input, which requires more accurate and consistent prediction. This paper develops multi-disease prediction using an improved deep learning concept. Here, different datasets pertaining to "diabetes, hepatitis, lung cancer, liver tumor, heart disease, Parkinson's disease, and Alzheimer's disease" are gathered from the benchmark UCI repository for conducting the experiments. The proposed model involves three phases: (a) data normalization, (b) weighted normalized feature extraction, and (c) prediction. Initially, the dataset is normalized so that the attributes fall within a common range. Then, weighted feature extraction is performed, in which a weight function is multiplied with each attribute value to amplify large-scale deviations. The weight function is optimized using a combination of two meta-heuristic algorithms, termed the Jaya Algorithm-based Multi-Verse Optimization algorithm (JA-MVO). The optimally extracted features are fed to hybrid deep learning algorithms, namely a Deep Belief Network (DBN) and a Recurrent Neural Network (RNN). As a modification to the hybrid deep learning architecture, the weights of both the DBN and the RNN are optimized using the same hybrid optimization algorithm. Finally, a comparative evaluation of the proposed prediction against existing models confirms its effectiveness through various performance measures.
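A minimal sketch of the normalization and weighted-feature steps described above (the min-max scaling and the fixed example weights are assumptions for illustration; in the paper the weights come from the JA-MVO optimizer):

```python
import numpy as np

# Hypothetical patient-attribute matrix: rows = patients, columns = attributes.
X = np.array([[120.0, 80.0, 5.1],
              [140.0, 95.0, 6.7],
              [110.0, 70.0, 4.9]])

# (a) Data normalization: min-max scale each attribute to a common [0, 1] range.
X_norm = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

# (b) Weighted normalized feature extraction: multiply each attribute by a
# weight; the weights here are fixed placeholders, whereas the paper
# optimizes them with the hybrid JA-MVO meta-heuristic.
weights = np.array([0.7, 1.2, 0.9])
X_weighted = X_norm * weights

print(X_weighted)  # features passed on to the DBN/RNN predictors
```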


Electronics ◽  
2021 ◽  
Vol 10 (14) ◽  
pp. 1670
Author(s):  
Waheeb Abu-Ulbeh ◽  
Maryam Altalhi ◽  
Laith Abualigah ◽  
Abdulwahab Ali Almazroi ◽  
Putra Sumari ◽  
...  

Cyberstalking is a growing anti-social problem that is spreading on a large scale and takes various forms. Cyberstalking detection has become increasingly popular in recent years and has been investigated by many researchers. However, cyberstalking victimization, an essential part of cyberstalking, has received less empirical attention from the research community. This paper attempts to address this gap and develops a model for understanding and estimating the prevalence of cyberstalking victimization. The model is built on routine activities and lifestyle exposure theories and includes eight hypotheses. The data were collected from 757 respondents at Jordanian universities. The paper adopts a quantitative approach and uses structural equation modeling for data analysis. The results revealed a modest prevalence range that depends largely on the type of cyberstalking. The results also indicated that proximity to motivated offenders, suitable targets, and digital guardians significantly influence cyberstalking victimization. The outcome of the moderation hypothesis testing demonstrated that age and residence have a significant effect on cyberstalking victimization. The proposed model is an essential element for assessing cyberstalking victimization within societies and provides a valuable understanding of its prevalence. This can assist researchers and practitioners in future research in the context of cyberstalking victimization.


Author(s):  
Junshu Wang ◽  
Guoming Zhang ◽  
Wei Wang ◽  
Ka Zhang ◽  
Yehua Sheng

Abstract With the rapid development of hospital informatization and Internet medical services in recent years, most hospitals have launched online appointment registration systems to remove patient queues and improve the efficiency of medical services. However, most patients lack professional medical knowledge and have no idea how to choose a department when registering. To guide patients in seeking medical care and registering effectively, we proposed CIDRS, an intelligent self-diagnosis and department recommendation framework based on Chinese medical Bidirectional Encoder Representations from Transformers (BERT) in the cloud computing environment. We also established a Chinese BERT model (CHMBERT) trained on a large-scale Chinese medical text corpus. This model was used to optimize the self-diagnosis and department recommendation tasks. To address the limited computing power of terminals, we deployed the proposed framework in a cloud computing environment based on container and micro-service technologies. Real-world medical datasets from hospitals were used in the experiments, and the results showed that the proposed model was superior to traditional deep learning models and other pre-trained language models in terms of performance.
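A minimal sketch of the department-recommendation step framed as text classification with a BERT encoder; the generic `bert-base-chinese` checkpoint and the department list here are placeholders, since the paper's CHMBERT weights and label set are not described in this abstract:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder department labels; the real system uses the hospital's own list.
departments = ["内科", "外科", "皮肤科", "耳鼻喉科", "眼科"]

# A generic Chinese BERT stands in for the medical-domain CHMBERT model.
tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-chinese", num_labels=len(departments))

def recommend_department(symptom_text: str) -> str:
    """Encode a patient's symptom description and return the top department."""
    inputs = tokenizer(symptom_text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return departments[int(logits.argmax(dim=-1))]

# Example (the untrained classification head gives arbitrary output here;
# in practice the head is fine-tuned on labelled symptom-department pairs).
print(recommend_department("最近咳嗽发烧，喉咙痛"))
```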


2010 ◽  
Vol 23 (12) ◽  
pp. 3157-3180 ◽  
Author(s):  
N. Eckert ◽  
H. Baya ◽  
M. Deschatres

Abstract Snow avalanches are natural hazards strongly controlled by the mountain winter climate, but their recent response to climate change has thus far been poorly documented. In this paper, hierarchical modeling is used to obtain robust indexes of the annual fluctuations of runout altitudes. The proposed model includes a possible level shift and distinguishes common large-scale signals in both mean- and high-magnitude events from the interannual variability. Application to the data available in France over the last 61 winters shows that the mean runout altitude is no different now from what it was 60 years ago, but that snow avalanches have been retreating since 1977. This trend is particularly notable for high-magnitude events, whose probability rates have halved, a crucial result in terms of hazard assessment. Avalanche control measures, observation errors, and model limitations are insufficient explanations for these trends. On the other hand, strong similarities between the behavior of the proposed runout indexes and several climate datasets are shown, as well as a consistent evolution of the preferred flow regime. The proposed runout indexes may therefore be usable as indicators of climate change at high altitudes.
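A generic hierarchical formulation of this kind of model might be sketched as follows; this is purely illustrative, and the paper's actual likelihood, priors, and separate treatment of high-magnitude events are not reproduced here:

```latex
% Runout altitude of avalanche j on path i in winter t (illustrative sketch)
y_{ijt} = \mu_i + a_t + \delta\,\mathbf{1}\{t \ge t_0\} + \varepsilon_{ijt},
\qquad a_t \sim \mathcal{N}(0, \tau^2), \quad \varepsilon_{ijt} \sim \mathcal{N}(0, \sigma^2),
```

where μ_i is a path-specific mean, a_t a common annual signal shared across paths, δ a possible level shift after a change point t₀, and ε the residual variability.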


Information ◽  
2020 ◽  
Vol 11 (2) ◽  
pp. 79 ◽  
Author(s):  
Xiaoyu Han ◽  
Yue Zhang ◽  
Wenkai Zhang ◽  
Tinglei Huang

Relation extraction is a vital task in natural language processing. It aims to identify the relationship between two specified entities in a sentence. Besides the information contained in the sentence itself, additional information about the entities has been shown to be helpful for relation extraction. However, additional information such as entity types obtained by named entity recognition (NER) and descriptions provided by a knowledge base both have limitations. Nevertheless, there exists another way to provide additional information that can overcome these limitations in Chinese relation extraction. Since Chinese characters usually have explicit meanings and can carry more information than English letters, we suggest that the characters that constitute the entities can provide additional information that helps the relation extraction task, especially on large-scale datasets. This assumption has never been verified before, the main obstacle being the lack of large-scale Chinese relation datasets. In this paper, we first generate a large-scale Chinese relation extraction dataset based on a Chinese encyclopedia. Second, we propose an attention-based model that uses the characters composing the entities. The results on the generated dataset show that these characters provide useful information for the Chinese relation extraction task. Using this information, the attention mechanism can recognize the crucial part of the sentence that expresses the relation. The proposed model outperforms other baseline models on our Chinese relation extraction dataset.
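A minimal sketch of character-informed attention over a sentence, assuming dot-product attention with an entity-character query; the dimensions, mean pooling, and single-vector fusion are illustrative assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EntityCharAttention(nn.Module):
    """Weight sentence positions by their relevance to the characters
    that make up the entities (illustrative sketch only)."""

    def __init__(self, vocab_size: int, dim: int = 128, n_relations: int = 10):
        super().__init__()
        self.char_emb = nn.Embedding(vocab_size, dim)   # shared character embeddings
        self.classifier = nn.Linear(dim, n_relations)

    def forward(self, sentence_ids, entity_char_ids):
        # sentence_ids: (batch, sent_len); entity_char_ids: (batch, n_chars)
        sent = self.char_emb(sentence_ids)                          # (B, L, D)
        query = self.char_emb(entity_char_ids).mean(dim=1)          # (B, D)
        scores = torch.bmm(sent, query.unsqueeze(-1)).squeeze(-1)   # (B, L)
        weights = F.softmax(scores, dim=-1)                         # attention over words
        context = torch.bmm(weights.unsqueeze(1), sent).squeeze(1)  # (B, D)
        return self.classifier(context)                             # relation logits

# Toy usage with random ids just to show the shapes.
model = EntityCharAttention(vocab_size=5000)
logits = model(torch.randint(0, 5000, (2, 20)), torch.randint(0, 5000, (2, 4)))
print(logits.shape)  # torch.Size([2, 10])
```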


VLSI Design ◽  
1998 ◽  
Vol 8 (1-4) ◽  
pp. 53-58
Author(s):  
Christopher M. Snowden

A fully coupled electro-thermal hydrodynamic model is described which is suitable for modelling active devices. The model is applied to the non-isothermal simulation of pseudomorphic high electron mobility transistors (pHEMTs). A large-scale surface temperature model is described which allows thermal modelling of semiconductor devices and monolithic circuits. An example of the application of thermal modelling to monolithic circuit characterization is given.


Author(s):  
Young Hyun Kim ◽  
Eun-Gyu Ha ◽  
Kug Jin Jeon ◽  
Chena Lee ◽  
Sang-Sun Han

Objectives: This study aimed to develop a fully automated human identification method based on a convolutional neural network (CNN) with a large-scale dental panoramic radiograph (DPR) dataset. Methods: In total, 2,760 DPRs from 746 subjects, each of whom had 2 to 17 DPRs with varying image characteristics due to various dental treatments (tooth extraction, oral surgery, prosthetics, orthodontics, or tooth development), were collected. The test dataset comprised the latest DPR of each subject (746 images), and the remaining DPRs (2,014 images) were used for model training. A modified VGG16 model with two fully connected layers was applied for human identification. The proposed model was evaluated with rank-1, rank-3, and rank-5 accuracies, running time, and images with gradient-weighted class activation mapping (Grad-CAM) applied. Results: The model achieved rank-1, rank-3, and rank-5 accuracies of 82.84%, 89.14%, and 92.23%, respectively. All rank-1 accuracy values of the proposed model were above 80% regardless of changes in image characteristics. The average running time to train the proposed model was 60.9 sec per epoch, and the prediction time for the 746 test DPRs was short (3.2 sec/image). The Grad-CAM technique verified that the model automatically identified humans by focusing on identifiable dental information. Conclusion: The proposed model showed good performance in fully automatic human identification despite the differing image characteristics of DPRs acquired from the same patients. Our model is expected to assist experts in fast and accurate identification by comparing large numbers of images and proposing identification candidates at high speed.
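A minimal sketch of a VGG16 backbone with a two-layer fully connected head for subject identification; the layer sizes, the use of torchvision, and the untrained weights are assumptions, since the abstract does not specify the implementation:

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_SUBJECTS = 746  # one class per subject in the DPR dataset

# VGG16 convolutional backbone with its classifier replaced by two
# fully connected layers, as an illustrative stand-in for the modified model.
model = models.vgg16(weights=None)
model.classifier = nn.Sequential(
    nn.Linear(512 * 7 * 7, 4096),
    nn.ReLU(inplace=True),
    nn.Dropout(0.5),
    nn.Linear(4096, NUM_SUBJECTS),
)

# Toy forward pass: one radiograph replicated to 3 channels at 224x224.
x = torch.randn(1, 3, 224, 224)
logits = model(x)
top5 = logits.topk(5).indices  # candidate subjects for rank-5 evaluation
print(logits.shape, top5)
```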


Author(s):  
Dongbo Xi ◽  
Fuzhen Zhuang ◽  
Yanchi Liu ◽  
Jingjing Gu ◽  
Hui Xiong ◽  
...  

Human mobility data accumulated from Point-of-Interest (POI) check-ins provide a great opportunity for understanding user behavior. However, data quality issues (e.g., missing geolocation information, unreal check-ins, data sparsity) in real-life mobility data limit the effectiveness of existing POI-oriented studies, e.g., POI recommendation and location prediction, when applied to real applications. To this end, in this paper, we develop a model, named Bi-STDDP, which integrates bi-directional spatio-temporal dependence and users' dynamic preferences to identify the missing POI check-in, i.e., where a user visited at a specific time. Specifically, we first utilize bi-directional global spatial and local temporal information of POIs to capture the complex dependence relationships. Then, the target temporal pattern, in combination with user and POI information, is fed into a multi-layer network to capture users' dynamic preferences. Moreover, the dynamic preferences are transformed into the same space as the dependence relationships to form the final model. Finally, the proposed model is evaluated on three large-scale real-world datasets, and the results demonstrate significant improvements of our model over state-of-the-art methods. It is also worth noting that the proposed model can be naturally extended to address POI recommendation and location prediction tasks with competitive performance.
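A minimal sketch of how the described ingredients (bi-directional dependence from neighbouring check-ins plus a user/time preference network) could be wired together; the embedding sizes, fusion by addition, and scoring against all POI embeddings are illustrative assumptions rather than the paper's architecture:

```python
import torch
import torch.nn as nn

class BiSTDDPSketch(nn.Module):
    """Illustrative sketch only: layer sizes and the way the pieces are
    combined are assumptions, not taken from the paper."""

    def __init__(self, n_users, n_pois, n_time_slots, dim=64):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.poi_emb = nn.Embedding(n_pois, dim)
        self.time_emb = nn.Embedding(n_time_slots, dim)
        self.dependence = nn.Linear(2 * dim, dim)    # fuse preceding/following check-ins
        self.preference = nn.Sequential(             # user dynamic preference network
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim)
        )

    def forward(self, user, time_slot, prev_poi, next_poi):
        # Bi-directional dependence from the check-ins before and after the gap.
        dep = self.dependence(torch.cat(
            [self.poi_emb(prev_poi), self.poi_emb(next_poi)], dim=-1))
        # Dynamic preference from the user and the target time pattern.
        pref = self.preference(torch.cat(
            [self.user_emb(user), self.time_emb(time_slot)], dim=-1))
        # Score every candidate POI for the missing check-in.
        return (dep + pref) @ self.poi_emb.weight.T

# Toy usage: score all 5000 POIs for one user and one weekly time slot.
model = BiSTDDPSketch(n_users=1000, n_pois=5000, n_time_slots=168)
scores = model(torch.tensor([3]), torch.tensor([42]),
               torch.tensor([17]), torch.tensor([18]))
print(scores.shape)  # torch.Size([1, 5000])
```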

