Deep Learning-Based Approaches to Optimize the Electricity Contract Capacity Problem for Commercial Customers

Energies, 2021, Vol 14 (8), pp. 2181
Author(s):  
Rafik Nafkha ◽  
Tomasz Ząbkowski ◽  
Krzysztof Gajowniczek

The electricity tariffs available to customers in Poland depend on the connection voltage level and the contracted capacity, which reflect the customer's demand profile. Therefore, before connecting to the power grid, each consumer declares a maximum power demand. This amount, referred to as the contracted capacity, is used by the electricity provider to assign the proper connection type to the power grid, including the size of the security breaker. The maximum power is also the basis for calculating fixed charges for electricity consumption, which is controlled and metered through peak meters. If the peak demand exceeds the contracted capacity, a penalty of up to ten times the basic rate is charged on the excess. In this article, we present several solutions for entrepreneurs based on two-stage and deep learning approaches to predict maximal load values and the moments of exceeding the contracted capacity in the short term, i.e., up to one month ahead. The forecast is then used to optimize the capacity volume to be contracted in the following month so as to minimize the network charges for exceeding the contracted level. As confirmed experimentally on two datasets, applying a multiple-output artificial neural network forecast model together with a genetic algorithm (the two-stage approach) for load optimization delivers significant benefits to customers. As an alternative, the same benefit is delivered by a deep learning architecture (the hybrid approach) that predicts the maximal capacity demands and simultaneously determines the optimal capacity contract.
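As a minimal sketch of the optimization stage described above: given a forecast of peak demand, the contracted capacity should minimize the fixed charge plus the penalty on exceedances. The rates, the forecast values, and the brute-force search below are illustrative placeholders, not the paper's tariff data or its genetic algorithm.

```python
import numpy as np

# Illustrative sketch of the capacity-selection objective, not the paper's
# implementation: rates, the forecast, and the search are all placeholders.
BASIC_RATE = 10.0                # fixed charge per kW of contracted capacity
PENALTY_RATE = 10 * BASIC_RATE   # excess billed at up to ten times the basic rate

def monthly_cost(contracted_kw, forecast_peaks_kw):
    """Fixed charge on the contracted capacity plus penalties on exceedances."""
    excess = np.maximum(forecast_peaks_kw - contracted_kw, 0.0)
    return BASIC_RATE * contracted_kw + PENALTY_RATE * excess.sum()

# Daily peak-demand forecast for the coming month (stand-in for the ANN output).
forecast = np.random.default_rng(0).normal(loc=95.0, scale=8.0, size=30)

# The decision is one-dimensional, so brute force suffices here; the paper
# uses a genetic algorithm, which scales to richer tariff structures.
candidates = np.linspace(forecast.min(), 1.2 * forecast.max(), 500)
best = min(candidates, key=lambda c: monthly_cost(c, forecast))
print(f"optimal contracted capacity: {best:.1f} kW, "
      f"monthly cost: {monthly_cost(best, forecast):.0f}")
```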

Author(s):  
Amin Vahedian ◽  
Xun Zhou ◽  
Ling Tong ◽  
W. Nick Street ◽  
Yanhua Li

Urban dispersal events are processes in which an unusually large number of people leave the same area within a short period. Early prediction of dispersal events is important for mitigating congestion and safety risks and for making better dispatching decisions for taxi and ride-sharing fleets. Existing work mostly focuses on predicting taxi demand in the near future by learning patterns from historical data. However, such methods fail in abnormal cases because dispersal events with abnormally high demand are non-repetitive and violate common assumptions such as smoothness of demand change over time. Instead, in this paper we argue that dispersal events follow a complex pattern of trips and other related features in the past, which can be used to predict such events. We therefore formulate the dispersal event prediction problem as a survival analysis problem and propose a two-stage framework (DILSA), in which a deep learning model combined with survival analysis predicts the probability of a dispersal event and its demand volume. We conduct extensive case studies and experiments on the NYC Yellow taxi dataset from 2014–2016. Results show that DILSA can predict events in the next 5 hours with an F1-score of 0.7 and an average time error of 18 minutes, orders of magnitude better than state-of-the-art deep learning approaches for taxi demand prediction.
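To make the survival-analysis formulation concrete, here is a minimal sketch of the discrete-time view: a model emits a per-hour hazard (the probability the event starts in that hour, given it has not yet started), from which the in-horizon event probability and expected onset time follow. The hazard values are placeholders standing in for a deep model's output; this is not the DILSA implementation.

```python
import numpy as np

# Placeholder hazards for the next H = 5 hours (stand-in for a deep model).
hazards = np.array([0.02, 0.05, 0.15, 0.30, 0.20])

# Survival function: probability no event has started by the end of hour t.
survival = np.cumprod(1.0 - hazards)

# Probability the event occurs within the 5-hour horizon.
p_event = 1.0 - survival[-1]

# P(onset in hour t) = h_t * P(survived through hour t-1).
onset_pmf = hazards * np.concatenate(([1.0], survival[:-1]))

# Expected onset time (hours), conditional on the event occurring in-horizon.
hours = np.arange(1, len(hazards) + 1)
expected_onset = (hours * onset_pmf).sum() / onset_pmf.sum()

print(f"P(event within 5h) = {p_event:.2f}, expected onset ≈ {expected_onset:.1f} h")
```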


2019, Vol 2019 (1), pp. 360-368
Author(s):  
Mekides Assefa Abebe ◽  
Jon Yngve Hardeberg

Different whiteboard image degradations greatly reduce the legibility of pen-stroke content as well as the overall quality of the images. Consequently, researchers have addressed the problem through different image enhancement techniques. Most state-of-the-art approaches apply common image processing techniques such as background-foreground segmentation, text extraction, contrast and color enhancement, and white balancing. However, such conventional enhancement methods are incapable of recovering severely degraded pen-stroke content and produce artifacts in the presence of complex pen-stroke illustrations. To surmount these problems, the authors propose a deep-learning-based solution. They contribute a new whiteboard image dataset and adopt two deep convolutional neural network architectures for whiteboard image quality enhancement. Their evaluations of the trained models demonstrate superior performance over the conventional methods.
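The abstract does not name the two adopted architectures. As an illustrative sketch only, a small encoder-decoder of the kind commonly used for image-to-image enhancement might look like the following (PyTorch; every layer size here is an assumption, not the authors' design):

```python
import torch
import torch.nn as nn

# Illustrative encoder-decoder for image-to-image enhancement; the paper's
# two architectures are not specified, so all choices below are assumptions.
class WhiteboardEnhancer(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Predict a cleaned image directly from the degraded input.
        return self.decode(self.encode(x))

model = WhiteboardEnhancer()
degraded = torch.rand(1, 3, 256, 256)   # placeholder whiteboard photo
enhanced = model(degraded)              # same spatial size as the input
print(enhanced.shape)                   # torch.Size([1, 3, 256, 256])
```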


2019
Author(s):  
Qian Wu ◽  
Weiling Zhao ◽  
Xiaobo Yang ◽  
Hua Tan ◽  
Lei You ◽  
...  

2020
Author(s):  
Priyanka Meel ◽  
Farhin Bano ◽  
Dinesh K. Vishwakarma

2019, Vol 277, pp. 02024
Author(s):  
Lincan Li ◽  
Tong Jia ◽  
Tianqi Meng ◽  
Yizhe Liu

In this paper, an accurate two-stage deep learning method is proposed to detect vulnerable plaques in intravascular optical coherence tomography (IVOCT) images of the cardiovascular system. First, a fully convolutional network (FCN), U-Net, is used to segment the original IVOCT images; we experiment with different threshold values to find the best threshold for removing noise and background from the original images. Second, a modified Faster R-CNN is adopted for precise detection. The modified Faster R-CNN utilizes six anchor scales (12², 16², 32², 64², 128², 256²) instead of the conventional one-scale or three-scale approaches. We first present three problems in cardiovascular vulnerable plaque diagnosis and then demonstrate how our method solves them, applying deep convolutional neural networks to the whole diagnostic procedure. Test results show that the recall, precision, IoU (intersection-over-union), and total score are 0.94, 0.885, 0.913, and 0.913 respectively, higher than the first-place team of the CCCV 2017 Cardiovascular OCT Vulnerable Plaque Detection Challenge. The AP of the designed Faster R-CNN is 83.4%, higher than conventional approaches that use one-scale or three-scale anchors. These results demonstrate the superior performance of our proposed method and the power of deep learning approaches in diagnosing cardiovascular vulnerable plaques.
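As a hedged sketch of the anchor modification, the fragment below configures six anchor scales in torchvision's Faster R-CNN rather than the authors' code; the MobileNetV2 backbone and the two-class setup are assumptions for illustration.

```python
import torch
import torchvision
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.anchor_utils import AnchorGenerator

# Sketch of the six-scale anchor setup with torchvision's Faster R-CNN; the
# backbone and class count are assumptions, not the paper's configuration.
backbone = torchvision.models.mobilenet_v2(weights=None).features
backbone.out_channels = 1280  # FasterRCNN needs the backbone's channel count

anchor_generator = AnchorGenerator(
    sizes=((12, 16, 32, 64, 128, 256),),   # six scales instead of the usual three
    aspect_ratios=((0.5, 1.0, 2.0),),
)
roi_pooler = torchvision.ops.MultiScaleRoIAlign(
    featmap_names=["0"], output_size=7, sampling_ratio=2)

model = FasterRCNN(backbone, num_classes=2,  # vulnerable plaque vs. background
                   rpn_anchor_generator=anchor_generator,
                   box_roi_pool=roi_pooler)
model.eval()
with torch.no_grad():
    detections = model([torch.rand(3, 512, 512)])  # placeholder IVOCT frame
print(detections[0]["boxes"].shape)
```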


2021, Vol 11 (1)
Author(s):  
Shan Guleria ◽  
Tilak U. Shah ◽  
J. Vincent Pulido ◽  
Matthew Fasullo ◽  
Lubaina Ehsan ◽  
...  

Probe-based confocal laser endomicroscopy (pCLE) allows for real-time diagnosis of dysplasia and cancer in Barrett’s esophagus (BE) but is limited by low sensitivity. Even the gold standard of histopathology is hindered by poor agreement between pathologists. We deployed deep-learning-based image and video analysis in order to improve diagnostic accuracy of pCLE videos and biopsy images. Blinded experts categorized biopsies and pCLE videos as squamous, non-dysplastic BE, or dysplasia/cancer, and deep learning models were trained to classify the data into these three categories. Biopsy classification was conducted using two distinct approaches: a patch-level model and a whole-slide-image-level model. Gradient-weighted class activation maps (Grad-CAMs) were extracted from pCLE and biopsy models in order to determine tissue structures deemed relevant by the models. In total, 1970 pCLE videos, 897,931 biopsy patches, and 387 whole-slide images were used to train, test, and validate the models. In pCLE analysis, models achieved a high sensitivity for dysplasia (71%) and an overall accuracy of 90% for all classes. For biopsies at the patch level, the model achieved a sensitivity of 72% for dysplasia and an overall accuracy of 90%. The whole-slide-image-level model achieved a sensitivity of 90% for dysplasia and 94% overall accuracy. Grad-CAMs for all models showed activation in medically relevant tissue regions. Our deep learning models achieved high diagnostic accuracy for both pCLE-based and histopathologic diagnosis of esophageal dysplasia and its precursors, similar to human accuracy in prior studies. These machine learning approaches may improve accuracy and efficiency of current screening protocols.
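For readers unfamiliar with the Grad-CAM step, here is a minimal, generic sketch of the extraction (an assumed ResNet-18 stand-in, not the authors' models): pooled gradients of the class score weight the last convolutional feature maps, highlighting the regions driving the prediction.

```python
import torch
import torchvision

# Generic Grad-CAM sketch; the model and input are placeholders, not the
# paper's trained classifiers or its biopsy/pCLE data.
model = torchvision.models.resnet18(weights=None).eval()
feats, grads = {}, {}
model.layer4.register_forward_hook(lambda m, i, o: feats.update(a=o))
model.layer4.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

x = torch.rand(1, 3, 224, 224)      # placeholder for a biopsy patch
score = model(x)[0].max()           # score of the predicted class
score.backward()                    # populates the gradient hook

with torch.no_grad():
    weights = grads["a"].mean(dim=(2, 3), keepdim=True)   # pooled gradients
    cam = torch.relu((weights * feats["a"]).sum(dim=1))   # class activation map
    cam /= cam.max()                                      # normalize to [0, 1]
print(cam.shape)  # (1, 7, 7) heat map, upsampled over the input in practice
```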


2020, pp. 1-10
Author(s):  
Colin J. McMahon ◽  
Justin T. Tretter ◽  
Theresa Faulkner ◽  
R. Krishna Kumar ◽  
Andrew N. Redington ◽  
...  

Objective: This study investigated the impact of the Webinar on deep human learning of CHD. Materials and methods: This cross-sectional survey design study used an open and closed-ended questionnaire to assess the impact of the Webinar on deep learning of topical areas within the management of the post-operative tetralogy of Fallot patients. This was a quantitative research methodology using descriptive statistical analyses with a sequential explanatory design. Results: In total, 1374 participants from 100 countries on 6 continents joined the Webinar, 557 (40%) of whom completed the questionnaire. Over 70% of participants reported that they “agreed” or “strongly agreed” that the Webinar format promoted deep learning for each of the topics compared to other standard learning methods (textbook and journal learning). Two-thirds expressed a preference for attending a Webinar rather than an international conference. Over 80% of participants highlighted significant barriers to attending conferences, including cost (79%), distance to travel (49%), time commitment (51%), and family commitments (35%). Strengths of the Webinar included expertise, concise high-quality presentations often discussing contentious issues, and the platform quality. The main weakness was the limited time for questions. Just over 53% expressed a concern for the carbon footprint involved in attending conferences and preferred to attend a Webinar. Conclusion: E-learning Webinars represent a disruptive innovation, which promotes deep learning, greater multidisciplinary participation, and greater attendee satisfaction with fewer barriers to participation. Although Webinars will never fully replace conferences, a hybrid approach may reduce the need for conferencing, reduce the carbon footprint, and promote a “sustainable academia”.


2021
Author(s):  
Isidro Lloret ◽  
José A. Troyano ◽  
Fernando Enríquez ◽  
Juan-José González-de-la-Rosa

2021, Vol 22 (15), pp. 7911
Author(s):  
Eugene Lin ◽  
Chieh-Hsin Lin ◽  
Hsien-Yuan Lane

A growing body of evidence suggests that deep learning approaches can serve as an essential cornerstone for the diagnosis and prediction of Alzheimer’s disease (AD). In light of the latest advancements in neuroimaging and genomics, numerous deep learning models are being exploited in recent research to distinguish AD from normal controls and/or from mild cognitive impairment. In this review, we focus on the latest developments in AD prediction using deep learning techniques in cooperation with the principles of neuroimaging and genomics. First, we survey investigations that use deep learning algorithms for AD prediction from genomics or neuroimaging data. In particular, we describe integrative neuroimaging-genomics investigations that leverage deep learning methods to forecast AD on the basis of both neuroimaging and genomics data. Moreover, we outline the limitations of recent AD investigations using deep learning with neuroimaging and genomics. Finally, we discuss challenges and directions for future research. The main novelty of this work is that we summarize the major points of these investigations and scrutinize their similarities and differences.

