Economizing the Uneconomic: Markets for Reliable, Sustainable, and Price Efficient Electricity

2021 ◽  
Vol 13 (8) ◽  
pp. 4197
Author(s):  
Mohammad Rasouli ◽  
Demosthenis Teneketzis

Current electricity markets do not efficiently achieve policy targets, i.e., sustainability, reliability, and price efficiency. Thus, there are debates on how to achieve these targets by using either market mechanisms, e.g., carbon and capacity markets, or non-market mechanisms such as offer caps, price caps, and market monitoring. At the same time, major industry changes, including demand response management technologies and large-scale batteries, bring more elasticity to demand; such changes will affect the methodology needed to achieve the above-mentioned targets. This work provides market solutions that capture all three policy targets simultaneously and take into account the above-mentioned industry changes. The proposed solutions are based on: (i) a model of electricity markets that captures all of the above-mentioned electricity policy targets; (ii) mechanism design and the development of a framework for the design of efficient auctions with constraints (individual, joint homogeneous, and joint non-homogeneous). The results show that, within the context of the proposed model, all policy targets can be achieved efficiently by separate capacity and carbon markets in addition to efficient spot markets. The results also highlight that all three policy targets can be achieved without any offer cap, price cap, or market monitoring. Thus, within the context of the proposed model, they provide clear answers to the above-mentioned policy debates.
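As a simplified illustration of the kind of efficient spot-market clearing this line of work builds on (not the authors' actual auction mechanism), the sketch below clears a uniform-price market by merit order: offers are sorted by price and accepted until demand is met, with the marginal accepted offer setting the clearing price. The offer data and function names are hypothetical.

```python
# Minimal merit-order clearing sketch for a uniform-price spot market.
# Hypothetical data; not the auction mechanism proposed in the paper.

def clear_spot_market(offers, demand):
    """offers: list of (price_per_MWh, quantity_MW); demand: MW to serve."""
    accepted, remaining = [], demand
    for price, qty in sorted(offers):          # merit order: cheapest first
        take = min(qty, remaining)
        if take <= 0:
            break
        accepted.append((price, take))
        remaining -= take
    if remaining > 0:
        raise ValueError("demand exceeds total offered capacity")
    clearing_price = accepted[-1][0]           # marginal (last accepted) offer
    return clearing_price, accepted

price, dispatch = clear_spot_market(
    offers=[(20.0, 300), (35.0, 200), (50.0, 400)], demand=600)
print(price, dispatch)   # 50.0; cheapest units dispatched first
```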

Author(s):  
A. V. Ponomarev

Introduction: Large-scale human-computer systems involving people of various skills and motivation in information processing are currently used in a wide spectrum of applications. An acute problem in such systems is assessing the expected quality of each contributor, for example, in order to penalize incompetent or inaccurate contributors and to promote diligent ones. Purpose: To develop a method of assessing the expected contributor's quality in community tagging systems. This method should use only the generally unreliable and incomplete information provided by contributors (with ground-truth tags unknown). Results: A mathematical model is proposed for community image tagging (including a model of a contributor), along with a method of assessing the expected contributor's quality. The method is based on comparing tag sets provided by different contributors for the same images; it is a modification of the pairwise comparison method, with the preference relation replaced by a special domination characteristic. Expected contributor quality is evaluated as a positive eigenvector of the pairwise domination characteristic matrix. Community tagging simulation has confirmed that the proposed method adequately estimates the expected quality of community tagging system contributors (provided that the contributors' behavior fits the proposed model). Practical relevance: The obtained results can be used in the development of systems based on the coordinated efforts of a community (primarily, community tagging systems).
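A minimal sketch of the eigenvector step described above: given a pairwise domination matrix (here random, hypothetical data), power iteration recovers its positive principal eigenvector, which the method uses as the vector of expected contributor quality. This assumes the matrix is non-negative and irreducible so that a positive eigenvector exists (Perron-Frobenius); it is an illustration, not the author's implementation.

```python
import numpy as np

def principal_eigenvector(D, iters=200, tol=1e-10):
    """Power iteration for the positive principal eigenvector of a
    non-negative pairwise domination matrix D (Perron-Frobenius)."""
    q = np.ones(D.shape[0]) / D.shape[0]
    for _ in range(iters):
        q_next = D @ q
        q_next /= q_next.sum()          # normalize to keep values bounded
        if np.abs(q_next - q).max() < tol:
            break
        q = q_next
    return q

# Hypothetical 4-contributor domination matrix.
D = np.array([[1.0, 2.0, 0.5, 1.5],
              [0.5, 1.0, 0.3, 0.8],
              [2.0, 3.3, 1.0, 2.2],
              [0.7, 1.3, 0.5, 1.0]])
print(principal_eigenvector(D))   # higher value = higher expected quality
```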


2020 ◽  
Author(s):  
Anusha Ampavathi ◽  
Vijaya Saradhi T

Big data and its approaches are generally helpful for the healthcare and biomedical sectors in predicting disease. For trivial symptoms, it is difficult to meet a doctor in the hospital at any time. Thus, big data provides essential information about diseases on the basis of a patient's symptoms. For several medical organizations, disease prediction is important for making the best feasible health care decisions. Conversely, the conventional medical care model offers input as structured data, which requires more accurate and consistent prediction. This paper develops multi-disease prediction using an improved deep learning concept. Here, different datasets pertaining to "Diabetes, Hepatitis, lung cancer, liver tumor, heart disease, Parkinson's disease, and Alzheimer's disease" are gathered from the benchmark UCI repository for conducting the experiment. The proposed model involves three phases: (a) data normalization, (b) weighted normalized feature extraction, and (c) prediction. Initially, the dataset is normalized in order to bring the attributes' ranges to a common scale. Further, weighted feature extraction is performed, in which a weight function is multiplied with each attribute value to produce a large-scale deviation. Here, the weight function is optimized using a combination of two meta-heuristic algorithms, termed the Jaya Algorithm-based Multi-Verse Optimization algorithm (JA-MVO). The optimally extracted features are subjected to hybrid deep learning algorithms, namely the "Deep Belief Network (DBN) and Recurrent Neural Network (RNN)". As a modification to the hybrid deep learning architecture, the weights of both the DBN and the RNN are optimized using the same hybrid optimization algorithm. Further, the comparative evaluation of the proposed prediction over the existing models certifies its effectiveness through various performance measures.
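As a rough sketch of the first two phases (data normalization followed by weighted feature extraction), the snippet below applies min-max normalization and then multiplies each attribute by a weight vector. The weights here are fixed placeholders; in the paper they would be tuned by the JA-MVO meta-heuristic, which is not reproduced here.

```python
import numpy as np

def min_max_normalize(X):
    """Scale each attribute (column) of X into [0, 1]."""
    x_min, x_max = X.min(axis=0), X.max(axis=0)
    return (X - x_min) / np.where(x_max > x_min, x_max - x_min, 1.0)

def weighted_features(X_norm, weights):
    """Multiply every normalized attribute value by its (optimized) weight."""
    return X_norm * weights

# Hypothetical mini-dataset: 4 patients x 3 attributes.
X = np.array([[120., 80., 1.2],
              [150., 95., 2.4],
              [110., 70., 0.9],
              [140., 88., 1.8]])
weights = np.array([0.6, 1.4, 1.0])   # placeholder for JA-MVO-optimized weights
print(weighted_features(min_max_normalize(X), weights))
```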


Electronics ◽  
2021 ◽  
Vol 10 (14) ◽  
pp. 1670
Author(s):  
Waheeb Abu-Ulbeh ◽  
Maryam Altalhi ◽  
Laith Abualigah ◽  
Abdulwahab Ali Almazroi ◽  
Putra Sumari ◽  
...  

Cyberstalking is a growing anti-social problem that is being transformed on a large scale and in various forms. Cyberstalking detection has become increasingly popular in recent years and has been investigated technically by many researchers. However, cyberstalking victimization, an essential part of cyberstalking, has received less empirical attention from the research community. This paper attempts to address this gap and develops a model to understand and estimate the prevalence of cyberstalking victimization. The model is built on routine activities and lifestyle exposure theories and includes eight hypotheses. The data were collected from 757 respondents in Jordanian universities. The paper utilizes a quantitative approach and uses structural equation modeling for data analysis. The results revealed a modest prevalence range that depends on the type of cyberstalking. The results also indicated that proximity to motivated offenders, suitable targets, and digital guardians significantly influence cyberstalking victimization. The outcome of the moderation hypothesis testing demonstrated that age and residence have a significant effect on cyberstalking victimization. The proposed model is an essential element for assessing cyberstalking victimization in societies and provides a valuable understanding of its prevalence. It can assist researchers and practitioners in future research on cyberstalking victimization.
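For illustration only, the snippet below shows how one of the moderation hypotheses mentioned above could be checked with a regression interaction term. The variable names and data are hypothetical, and an ordinary least squares model is used as a simplified stand-in for the paper's full structural equation model.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey frame; columns are placeholders for the study's constructs.
df = pd.DataFrame({
    "victimization": [2.1, 3.4, 1.8, 4.0, 2.9, 3.6, 2.4, 3.1],
    "proximity":     [1.5, 3.0, 1.2, 3.8, 2.5, 3.1, 1.9, 2.7],
    "age":           [19, 22, 20, 24, 21, 23, 20, 22],
})

# Moderation check: does age moderate the proximity -> victimization effect?
model = smf.ols("victimization ~ proximity * age", data=df).fit()
print(model.summary())   # the proximity:age coefficient tests the moderation
```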


Author(s):  
Junshu Wang ◽  
Guoming Zhang ◽  
Wei Wang ◽  
Ka Zhang ◽  
Yehua Sheng

Abstract: With the rapid development of hospital informatization and Internet medical services in recent years, most hospitals have launched online appointment registration systems to remove patient queues and improve the efficiency of medical services. However, most patients lack professional medical knowledge and have no idea of how to choose a department when registering. To instruct patients on how to seek medical care and register effectively, we proposed CIDRS, an intelligent self-diagnosis and department recommendation framework based on Chinese medical Bidirectional Encoder Representations from Transformers (BERT) in the cloud computing environment. We also established a Chinese BERT model (CHMBERT) trained on a large-scale Chinese medical text corpus. This model was used to optimize the self-diagnosis and department recommendation tasks. To address the limited computing power of terminals, we deployed the proposed framework in a cloud computing environment based on container and micro-service technologies. Real-world medical datasets from hospitals were used in the experiments, and the results showed that the proposed model was superior to traditional deep learning models and other pre-trained language models in terms of performance.
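A minimal sketch of how a pre-trained Chinese BERT can serve department recommendation as a text classification task. CHMBERT is not publicly assumed here; the generic `bert-base-chinese` checkpoint from Hugging Face and the department labels are placeholders, and the classification head is untrained.

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

# Placeholder labels; the real system would use the hospital's department list.
DEPARTMENTS = ["内科", "外科", "儿科", "皮肤科"]

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-chinese", num_labels=len(DEPARTMENTS))  # fine-tuning needed in practice

def recommend_department(symptom_text: str) -> str:
    """Classify a free-text symptom description into a department."""
    inputs = tokenizer(symptom_text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return DEPARTMENTS[int(logits.argmax(dim=-1))]

print(recommend_department("最近咳嗽发烧，喉咙痛"))  # untrained head -> arbitrary label
```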


2010 ◽  
Vol 23 (12) ◽  
pp. 3157-3180 ◽  
Author(s):  
N. Eckert ◽  
H. Baya ◽  
M. Deschatres

Abstract: Snow avalanches are natural hazards strongly controlled by the mountain winter climate, but their recent response to climate change has thus far been poorly documented. In this paper, hierarchical modeling is used to obtain robust indexes of the annual fluctuations of runout altitudes. The proposed model includes a possible level shift and distinguishes common large-scale signals in both mean- and high-magnitude events from the interannual variability. Application to the data available in France over the last 61 winters shows that the mean runout altitude is no different now than it was 60 years ago, but that snow avalanches have been retreating since 1977. This trend is of particular note for high-magnitude events, which have seen their probability rates halved, a crucial result in terms of hazard assessment. Avalanche control measures, observation errors, and model limitations are insufficient explanations for these trends. On the other hand, strong similarities in the pattern of behavior of the proposed runout indexes and several climate datasets are shown, as well as a consistent evolution of the preferred flow regime. The proposed runout indexes may therefore be usable as indicators of climate change at high altitudes.
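The level-shift idea in the proposed model can be illustrated with a much simpler, non-Bayesian sketch: scan candidate change years and pick the one that minimizes the residual sum of squares of a piecewise-constant mean for a runout-altitude index. The synthetic series below is made up; the paper's actual hierarchical model is considerably richer.

```python
import numpy as np

def best_level_shift(series):
    """Return (change_index, rss) for the single level shift that best fits
    the series with two constant means (simple change-point scan)."""
    best = (None, np.inf)
    for k in range(2, len(series) - 2):          # keep a few points on each side
        left, right = series[:k], series[k:]
        rss = ((left - left.mean())**2).sum() + ((right - right.mean())**2).sum()
        if rss < best[1]:
            best = (k, rss)
    return best

rng = np.random.default_rng(0)
# Synthetic 61-winter index with a downward shift after "winter 30".
index = np.concatenate([rng.normal(0.0, 1.0, 30), rng.normal(-1.5, 1.0, 31)])
k, rss = best_level_shift(index)
print("estimated shift after winter", k)
```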


Information ◽  
2020 ◽  
Vol 11 (2) ◽  
pp. 79 ◽  
Author(s):  
Xiaoyu Han ◽  
Yue Zhang ◽  
Wenkai Zhang ◽  
Tinglei Huang

Relation extraction is a vital task in natural language processing. It aims to identify the relationship between two specified entities in a sentence. Besides the information contained in the sentence, additional information about the entities has been verified to be helpful in relation extraction. Additional information such as entity types obtained by NER (Named Entity Recognition) and descriptions provided by knowledge bases both have their limitations. Nevertheless, there exists another way to provide additional information that can overcome these limitations in Chinese relation extraction. Because Chinese characters usually have explicit meanings and can carry more information than English letters, we suggest that the characters that constitute the entities can provide additional information that is helpful for the relation extraction task, especially on large-scale datasets. This assumption has never been verified before. The main obstacle is the lack of large-scale Chinese relation datasets. In this paper, first, we generate a large-scale Chinese relation extraction dataset based on a Chinese encyclopedia. Second, we propose an attention-based model using the characters that compose the entities. The results on the generated dataset show that these characters can provide useful information for the Chinese relation extraction task. By using this information, the attention mechanism we used can recognize the crucial part of the sentence that expresses the relation. The proposed model outperforms other baseline models on our Chinese relation extraction dataset.
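A rough sketch of the kind of entity-character attention the abstract describes: characters of the two entities are embedded and used to score each token of the sentence, and the softmax-normalized scores weight the sentence representation. Dimensions, names, and the scoring form are assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class EntityCharAttention(nn.Module):
    """Weight sentence tokens by their relevance to the entity characters."""
    def __init__(self, vocab_size, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.score = nn.Linear(dim, 1)

    def forward(self, sent_ids, ent_char_ids):
        sent = self.embed(sent_ids)                      # (L, dim)
        ent = self.embed(ent_char_ids).mean(dim=0)       # (dim,) entity-character query
        weights = torch.softmax(self.score(sent * ent).squeeze(-1), dim=0)  # (L,)
        return (weights.unsqueeze(-1) * sent).sum(dim=0) # attended sentence vector

model = EntityCharAttention(vocab_size=5000)
sent_ids = torch.randint(0, 5000, (12,))     # hypothetical token ids of a sentence
ent_char_ids = torch.randint(0, 5000, (4,))  # characters of the two entities
print(model(sent_ids, ent_char_ids).shape)   # torch.Size([64])
```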


Author(s):  
Young Hyun Kim ◽  
Eun-Gyu Ha ◽  
Kug Jin Jeon ◽  
Chena Lee ◽  
Sang-Sun Han

Objectives: This study aimed to develop a fully automated human identification method based on a convolutional neural network (CNN) with a large-scale dental panoramic radiograph (DPR) dataset. Methods: In total, 2,760 DPRs from 746 subjects who had 2 to 17 DPRs with various changes in image characteristics due to various dental treatments (tooth extraction, oral surgery, prosthetics, orthodontics, or tooth development) were collected. The test dataset included the latest DPR of each subject (746 images), and the other DPRs (2,014 images) were used for model training. A modified VGG16 model with two fully connected layers was applied for human identification. The proposed model was evaluated with rank-1, rank-3, and rank-5 accuracies, running time, and gradient-weighted class activation mapping (Grad-CAM)-applied images. Results: The model achieved rank-1, rank-3, and rank-5 accuracies of 82.84%, 89.14%, and 92.23%, respectively. All rank-1 accuracy values of the proposed model were above 80% regardless of changes in image characteristics. The average running time to train the proposed model was 60.9 sec per epoch, and the prediction time for the 746 test DPRs was short (3.2 sec/image). The Grad-CAM technique verified that the model automatically identified humans by focusing on identifiable dental information. Conclusion: The proposed model showed good performance in fully automatic human identification despite the differing image characteristics of DPRs acquired from the same patients. Our model is expected to assist experts in fast and accurate identification by comparing large numbers of images and proposing identification candidates at high speed.
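A minimal sketch of a "modified VGG16 with two fully connected layers" in the sense described above, using torchvision. The 746-class output follows the abstract; the hidden width, dropout, and input size are assumptions rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_SUBJECTS = 746  # one class per subject, as in the abstract

# Start from a standard VGG16 backbone and replace its classifier head
# with two fully connected layers (hidden width chosen arbitrarily here).
model = models.vgg16(weights=None)
model.classifier = nn.Sequential(
    nn.Linear(512 * 7 * 7, 1024),
    nn.ReLU(inplace=True),
    nn.Dropout(0.5),
    nn.Linear(1024, NUM_SUBJECTS),
)

dummy_dpr = torch.randn(1, 3, 224, 224)   # placeholder for a preprocessed DPR
print(model(dummy_dpr).shape)             # torch.Size([1, 746])
```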


Author(s):  
Dongbo Xi ◽  
Fuzhen Zhuang ◽  
Yanchi Liu ◽  
Jingjing Gu ◽  
Hui Xiong ◽  
...  

Human mobility data accumulated from Point-of-Interest (POI) check-ins provide a great opportunity for understanding user behavior. However, data quality issues (e.g., missing geolocation information, unreal check-ins, data sparsity) in real-life mobility data limit the effectiveness of existing POI-oriented studies, e.g., POI recommendation and location prediction, when applied to real applications. To this end, in this paper, we develop a model, named Bi-STDDP, which integrates bi-directional spatio-temporal dependence and users' dynamic preferences to identify a missing POI check-in, i.e., where a user visited at a specific time. Specifically, we first utilize bi-directional global spatial and local temporal information of POIs to capture the complex dependence relationships. Then, the target temporal pattern, in combination with user and POI information, is fed into a multi-layer network to capture users' dynamic preferences. Moreover, the dynamic preferences are transformed into the same space as the dependence relationships to form the final model. Finally, the proposed model is evaluated on three large-scale real-world datasets, and the results demonstrate significant improvements of our model over state-of-the-art methods. It is also worth noting that the proposed model can be naturally extended to address POI recommendation and location prediction tasks with competitive performance.
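As a very rough sketch of the dynamic-preference branch described above (user and POI information combined with a target temporal pattern and passed through a multi-layer network), the snippet below concatenates embeddings and applies an MLP. All dimensions and names are placeholders; the actual Bi-STDDP architecture is more involved.

```python
import torch
import torch.nn as nn

class DynamicPreferenceNet(nn.Module):
    """Map (user, POI, target time slot) to a preference vector."""
    def __init__(self, n_users, n_pois, n_slots, dim=32):
        super().__init__()
        self.user = nn.Embedding(n_users, dim)
        self.poi = nn.Embedding(n_pois, dim)
        self.slot = nn.Embedding(n_slots, dim)   # e.g., hour-of-week pattern
        self.mlp = nn.Sequential(
            nn.Linear(3 * dim, 64), nn.ReLU(),
            nn.Linear(64, dim),                  # same space as the dependence part
        )

    def forward(self, user_id, poi_id, slot_id):
        x = torch.cat([self.user(user_id), self.poi(poi_id), self.slot(slot_id)], dim=-1)
        return self.mlp(x)

net = DynamicPreferenceNet(n_users=1000, n_pois=5000, n_slots=168)
pref = net(torch.tensor([3]), torch.tensor([42]), torch.tensor([100]))
print(pref.shape)   # torch.Size([1, 32])
```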


2011 ◽  
Vol 82 ◽  
pp. 722-727 ◽  
Author(s):  
Kristian Schellenberg ◽  
Norimitsu Kishi ◽  
Hisashi Kon-No

A system with multiple degrees of freedom, composed of three masses and three springs, was presented in 2008 for analyzing rockfall impacts on protective structures covered by a cushion layer. The model was then used for a blind prediction of a large-scale test carried out in Sapporo, Japan, in November 2009. The test results showed substantial deviations from the blind predictions, which led to a deeper evaluation of the model input parameters and revealed a significant influence of the modeled cushion-layer properties on the overall results. The cushion properties also include assumptions about the loading geometry, and the definition of these parameters can be challenging. This paper introduces the test setup and the parameters selected in the proposed model for the blind prediction. After comparison with the test results, adjustments of the input parameters needed to match the test results were evaluated. Conclusions for the application of the model as well as for further model improvements are drawn.
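For orientation only, the sketch below integrates a generic three-mass, three-spring chain (the type of lumped-mass model described above) under an initial impact velocity using SciPy. Masses, stiffnesses, and the damping-free assumption are arbitrary placeholders, not the calibrated parameters from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder parameters for a 3-mass, 3-spring chain (impactor / cushion / structure).
m = np.array([100.0, 500.0, 2000.0])      # kg
k = np.array([5e5, 2e6, 8e6])             # N/m; last spring connects to a rigid support

def rhs(t, y):
    """y = [x1, x2, x3, v1, v2, v3] for the undamped spring chain."""
    x, v = y[:3], y[3:]
    f1 = -k[0] * (x[0] - x[1])
    f2 = k[0] * (x[0] - x[1]) - k[1] * (x[1] - x[2])
    f3 = k[1] * (x[1] - x[2]) - k[2] * x[2]
    return np.concatenate([v, np.array([f1, f2, f3]) / m])

y0 = np.array([0, 0, 0, -15.0, 0, 0])     # impactor arrives at 15 m/s
sol = solve_ivp(rhs, (0.0, 0.2), y0, t_eval=np.linspace(0, 0.2, 201))
print("peak support force [kN]:", (k[2] * np.abs(sol.y[2])).max() / 1e3)
```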


2012 ◽  
Vol 505 ◽  
pp. 65-74
Author(s):  
Lin Lin Lu ◽  
Xin Ma ◽  
Ya Xuan Wang

In this paper, a job shop scheduling model combining MAS (Multi-Agent System) with GASA (Simulated Annealing-Genetic Algorithm) is presented. The proposed model is based on the E2GPGP (extended extended generalized partial global planning) mechanism and combines the advantages of static intelligence algorithms with dynamic MAS. A scheduling process from 'initialized macro-scheduling' to 'repeated micro-scheduling' is designed for large-scale complex problems, enabling the implementation of an effective and widely applicable prototype system for the job shop scheduling problem (JSSP). Based on a set of theoretical strategies in GPGP, which are summarized in detail, E2GPGP is further proposed. The GPGP cooperation mechanism is simulated for the JSSP using the simulation software DECAF. The results show that the proposed model based on E2GPGP-GASA not only improves effectiveness but also reduces resource cost.
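As a sketch of the simulated-annealing half of a GASA-style optimizer for the JSSP (the genetic operators and the MAS layer are omitted), the code below uses the standard operation-based encoding: a chromosome lists job ids, each occurrence meaning "schedule that job's next operation", and neighbors are generated by swapping two genes. The tiny 3x3 instance is made up for illustration.

```python
import math, random

# Hypothetical 3-job x 3-machine instance: JOBS[j] = [(machine, duration), ...]
JOBS = [[(0, 3), (1, 2), (2, 2)],
        [(0, 2), (2, 1), (1, 4)],
        [(1, 4), (2, 3), (0, 1)]]

def makespan(chromosome):
    """Decode an operation-based chromosome (job ids) into a schedule length."""
    next_op = [0] * len(JOBS)
    job_ready = [0] * len(JOBS)
    mach_ready = [0] * 3
    for j in chromosome:
        machine, dur = JOBS[j][next_op[j]]
        start = max(job_ready[j], mach_ready[machine])
        job_ready[j] = mach_ready[machine] = start + dur
        next_op[j] += 1
    return max(job_ready)

def anneal(iters=2000, t0=5.0, cooling=0.999):
    """Simulated-annealing component of GASA: swap-neighbourhood search."""
    current = [j for j in range(len(JOBS)) for _ in JOBS[j]]
    random.shuffle(current)
    best, temp = list(current), t0
    for _ in range(iters):
        a, b = random.sample(range(len(current)), 2)
        cand = list(current)
        cand[a], cand[b] = cand[b], cand[a]
        delta = makespan(cand) - makespan(current)
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            current = cand
            if makespan(current) < makespan(best):
                best = list(current)
        temp *= cooling
    return best, makespan(best)

print(anneal())   # (best chromosome, best makespan) for the toy instance
```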

