Understanding the Effect of Baseline Modeling Implementation Choices on Analysis of Demand Response Performance

Author(s):  
Nathan Addy ◽  
Johanna L. Mathieu ◽  
Sila Kiliccote ◽  
Duncan S. Callaway

Accurate evaluation of the performance of buildings participating in Demand Response (DR) programs is critical to the adoption and improvement of these programs. Typically, load sheds during DR events are calculated by comparing observed electric demand against counterfactual predictions made using statistical baseline models. Many baseline models exist, and these models can produce different shed estimates. Moreover, modelers implementing the same baseline model can make different implementation choices, which may affect shed estimates. In this work, using real data, we analyze the effect of different modeling implementation choices on shed estimates. We focus on five issues: weather data source, resolution of data, methods for determining when buildings are occupied, methods for aligning building data with temperature data, and methods for power outage filtering. Results indicate sensitivity to the weather data source and data filtration methods, as well as an immediate potential for automation of methods to choose building occupied modes.


2014 ◽  
Vol 137 (2) ◽  
Author(s):  
Nathan J. Addy ◽  
Sila Kiliccote ◽  
Duncan S. Callaway ◽  
Johanna L. Mathieu

The performance of buildings participating in demand response (DR) programs is usually evaluated with baseline models, which predict what electric demand would have been if a DR event had not been called. Different baseline models produce different results. Moreover, modelers implementing the same baseline model often make different model implementation choices, producing different results. Using real data from a DR program in California and a regression-based baseline model, which relates building demand to time of week, outdoor air temperature, and building operational mode, we analyze the effect of model implementation choices on DR shed estimates. Results indicate strong sensitivities to the outdoor air temperature data source and bad data filtration methods, with standard deviations of differences in shed estimates of ≈20–30 kW, and weaker sensitivities to demand/temperature data resolution, data alignment, and methods for determining when buildings are occupied, with standard deviations of differences in shed estimates of ≈2–5 kW.
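A regression baseline of the kind described can be sketched as follows. The bin count, array shapes, and single linear temperature term are assumptions made for illustration; the authors' model also conditions on building operational mode, which is omitted here.

```python
import numpy as np

# Illustrative time-of-week-and-temperature baseline (a sketch, not the
# authors' implementation): demand is regressed on time-of-week indicator
# variables plus a linear outdoor-air-temperature term.

def fit_baseline(tow_idx, temp, demand, n_tow=336):
    """Least-squares fit of demand ~ time-of-week dummies + temperature.
    tow_idx: integer time-of-week bin per observation (e.g. 336 half-hours)."""
    n = len(demand)
    X = np.zeros((n, n_tow + 1))
    X[np.arange(n), tow_idx] = 1.0   # one indicator column per bin
    X[:, -1] = temp                   # outdoor-air-temperature regressor
    coef, *_ = np.linalg.lstsq(X, demand, rcond=None)
    return coef

def predict_baseline(coef, tow_idx, temp, n_tow=336):
    """Counterfactual demand prediction; a DR shed estimate is this
    prediction minus the observed demand during the event window."""
    n = len(tow_idx)
    X = np.zeros((n, n_tow + 1))
    X[np.arange(n), tow_idx] = 1.0
    X[:, -1] = temp
    return X @ coef
```

Implementation choices of the kind the paper studies (temperature source, data resolution, alignment, filtering) all change the `temp` and `demand` arrays fed to a model like this, which is why they propagate into the shed estimates.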


Author(s):  
Nada M. Alhakkak

BigGIS is a new product that resulted from developing GIS for the "Big Data" area; it is used to store and process big geographical data and helps to solve its issues. This chapter describes an optimized big-GIS framework in a MapReduce environment, M2BG. The suggested framework has been integrated into the MapReduce environment in order to solve storage issues and benefit from the Hadoop environment. M2BG includes two steps: a big-GIS warehouse and big-GIS MapReduce. The first step contains three main layers: the Data Source and Storage Layer (DSSL), the Data Processing Layer (DPL), and the Data Analysis Layer (DAL). The second step is responsible for clustering, using swarms as inputs for the Hadoop phase. Jobs are then scheduled in the map part with a preemptive priority scheduling algorithm, in which some data types are classified as critical and others as ordinary, while the reduce part uses a merge-sort algorithm. M2BG should address security and be implemented with real data, first in a simulated environment and later in the real world.
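The priority idea in the map phase can be sketched as follows: records tagged "critical" are dispatched before "ordinary" ones, first-in-first-out within each class. The record format and the two priority levels are assumptions for illustration, not the chapter's actual API.

```python
import heapq

# Sketch of priority dispatch for the map phase: critical records first,
# FIFO within a priority class (the sequence counter breaks ties stably).

CRITICAL, ORDINARY = 0, 1

def schedule(tagged_records):
    """Yield records in priority order.
    tagged_records: iterable of (priority, record) pairs."""
    heap = [(prio, seq, rec) for seq, (prio, rec) in enumerate(tagged_records)]
    heapq.heapify(heap)
    while heap:
        _, _, rec = heapq.heappop(heap)
        yield rec
```

A true preemptive scheduler would additionally interrupt a running ordinary job when a critical record arrives; the sketch only shows the ordering rule.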


2020 ◽  
Vol 73 (5) ◽  
pp. 1159-1178
Author(s):  
Lu Tao ◽  
Pan Zhang ◽  
Lixin Yan ◽  
Dunyao Zhu

The lane-level map, which contains lane-level information that widely used commercial navigation maps lack, has become an essential data source for autonomous driving systems. Linking relations between a lane-level map and a commercial navigation map can help an autonomous driving system map information between applications that use different maps. In this paper, an approach is proposed to build these linking relations automatically. The different topology networks are first reconstructed into similar structures. Then, to build the linking relations automatically, an adaptive multi-filter algorithm and a forward path exploring algorithm are proposed to detect corresponding junctions and paths, respectively. The approach is validated on two real data sets covering more than 150 km of roads, mainly highways. Linking relations for nearly 94% of the total road length were built successfully.
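As a generic illustration of the corresponding-junction step, the sketch below pairs junctions across the two networks by spatial proximity. The paper's adaptive multi-filter algorithm is more involved; the coordinate format and distance threshold here are assumptions.

```python
import math

# Generic nearest-neighbour junction matching between a lane-level map and
# a navigation map (planar coordinates assumed). Illustrative only.

def match_junctions(lane_junctions, nav_junctions, max_dist=25.0):
    """Greedily pair each lane-level-map junction with the nearest unused
    navigation-map junction within max_dist."""
    links, used = {}, set()
    for jid, (x, y) in lane_junctions.items():
        best, best_d = None, max_dist
        for nid, (nx, ny) in nav_junctions.items():
            if nid in used:
                continue
            d = math.hypot(x - nx, y - ny)
            if d < best_d:
                best, best_d = nid, d
        if best is not None:
            links[jid] = best
            used.add(best)
    return links
```

Proximity alone fails at dense interchanges, which is precisely why the paper filters candidates on multiple criteria before confirming a correspondence.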


Author(s):  
Peter J. Bosscher ◽  
Hussain U. Bahia ◽  
Suwitho Thomas ◽  
Jeffrey S. Russell

Six test sections were constructed on US-53 in Trempealeau County by using different performance-graded asphalt binders to validate the Superpave pavement temperature algorithm and the binder specification limits. Field instrumentation was installed in two of the test sections to monitor the thermal behavior of the pavement as affected by weather. The instrumentation was used specifically to monitor the temperature of the test sections as a function of time and depth from the pavement surface. A meteorological station was assembled at the test site to monitor weather conditions, including air temperature. Details of the instrumentation systems used and analysis of the data collected during the first 22 months of the project are presented. The analysis was focused on development of a statistical model for estimation of low and high pavement temperatures from meteorological data. The model was compared to the Superpave recommended model and to the more recent model recommended by the Long-Term Pavement Performance (LTPP) program. The temperature data analysis indicates a strong agreement between the new model and the LTPP model for the estimation of low pavement design temperature. However, the analysis indicates that the LTPP and Superpave models underestimate the high pavement design temperature at air temperatures higher than 30°C. The temperature data analyses also indicate that there are significant differences between the standard deviation of air temperatures and the standard deviation of the pavement temperatures. These differences raise some questions about the accuracy of the reliability estimates used in the current Superpave recommendations.


2017 ◽  
Author(s):  
Fakhereh Alidoost ◽  
Alfred Stein ◽  
Zhongbo Su ◽  
Ali Sharifi

Abstract. Data retrieved from global weather forecast systems are typically biased with respect to measurements at local weather stations. This paper presents three copula-based methods for bias correction of daily air temperature data derived from the European Centre for Medium-range Weather Forecasts (ECMWF). The aim is to predict conditional copula quantiles at different unvisited locations, assuming spatial stationarity of the underlying random field. The three new methods are: bivariate copula quantile mapping (types I and II), and a quantile search. These are compared with commonly applied methods, using data from an agricultural area in the Qazvin Plain in Iran containing five weather stations. Cross-validation is carried out to assess the performance. The study shows that the new methods are able to predict the conditional quantiles at unvisited locations, improve the higher order moments of marginal distributions, and take the spatial variabilities of the bias-corrected variable into account. It further illustrates how a choice of the bias correction method affects the bias-corrected variable and highlights both theoretical and practical issues of the methods. We conclude that the three new methods improve local refinement of weather data, in particular if a low number of observations is available.
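For contrast with the copula-based methods, plain empirical quantile mapping, one of the commonly applied methods the paper compares against, can be sketched as follows. This is not the paper's bivariate copula quantile mapping, and the array-based interface is an assumption.

```python
import numpy as np

# Empirical quantile mapping: map each forecast value to the observed
# reference climatology at the empirical quantile it occupies in the
# forecast reference climatology. A sketch of the baseline method only.

def quantile_map(forecast, obs_ref, fc_ref):
    fc_sorted = np.sort(fc_ref)
    obs_sorted = np.sort(obs_ref)
    # quantile of each forecast value within the forecast climatology
    q = np.searchsorted(fc_sorted, forecast, side="right") / len(fc_sorted)
    q = np.clip(q, 0.0, 1.0)
    return np.quantile(obs_sorted, q)
```

Unlike the copula methods, this univariate mapping carries no spatial information, so it cannot predict conditional quantiles at unvisited locations — the gap the paper's methods are designed to fill.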


2017 ◽  
Author(s):  
David Morris ◽  
John Pinnegar ◽  
David Maxwell ◽  
Stephen Dye ◽  
Liam Fernand ◽  
...  

Abstract. The datasets described here bring together quality-controlled seawater temperature measurements, from over 130 years of Departmental government-funded marine science investigations in the UK (United Kingdom). Since before the foundation of a Marine Biological Association fisheries laboratory in 1902 and through subsequent evolutions as the Directorate of Fisheries Research and the current Centre for Environment, Fisheries and Aquaculture Science, UK Government marine scientists and observers have been collecting seawater temperature data as part of oceanographic, chemical, biological, radiological, and other policy-driven research and observation programmes in UK waters. These datasets start with a few tens of records per year, rise to hundreds from the early 1900s, thousands by 1959, hundreds of thousands by the 1980s, peaking with > 1 million for some years from 2000 onwards. The data source systems vary from time series at coastal monitoring stations or offshore platforms (buoys), through repeated research cruises or opportunistic sampling from ferry routes, to temperature extracts from CTD (Conductivity Temperature Depth) profiles, oceanographic, fishery and plankton tows, and data collected from recreational scuba divers or electronic devices attached to marine animals. The datasets described have not been included in previous seawater temperature collation exercises (e.g. International Comprehensive Ocean-Atmosphere Data Set, Met Office Hadley Centre Sea Surface Temperature data set, Centennial in situ Observation-Based Estimate Sea Surface Temperature data), although some summary data reside in the British Oceanographic Data Centre (BODC) archive, the Marine Environment Monitoring and Assessment National (MERMAN) database and the International Council for the Exploration of the Seas (ICES) Data Centre.
We envisage the data primarily providing a biologically and ecosystem-relevant context for regional assessments of changing hydrological conditions around the British Isles, although cross matching with satellite derived data for surface temperatures at specific times and in specific areas is another area where the data could be of value (see e.g. Smit et al., 2013). Maps are provided indicating geographical coverage which is generally within and around UK Continental Shelf area, but occasionally extending north from Labrador and Greenland, to east of Svalbard, and southward to the Bay of Biscay. Example potential uses of the data are described using plots of data in four selected groups of 4 ICES Rectangles covering areas of particular fisheries interest. The full dataset enables extensive data synthesis, for example in the southern North Sea, where issues of spatial and numerical bias from a data source are explored. The full dataset also facilitates the construction of long-term temperature time series and an examination of changes in the phenology (seasonal timing) of ecosystem processes. This is done for a wide geographic area with an exploration of the limitations of data coverage over long periods. Throughout, we highlight and explore potential issues around the simple combination of data from the diverse and disparate sources collated here. The datasets are available on the Cefas Data Hub (https://www.cefas.co.uk/cefas-data-hub/).


Author(s):  
Takanobu Otsuka ◽  
Yuji Kitazawa ◽  
Takayuki Ito

Aquaculture is growing ever more important due to the decrease in natural marine resources and the increase in worldwide demand. To avoid losses due to aging and abnormal weather, it is important to predict seawater temperature in order to maintain a more stable supply, particularly for high-value-added products such as pearls and scallops. The increase in species extinction is also a prominent societal issue. Furthermore, in order to maintain a stable quality of farmed fishery products, water temperature should be measured daily and farming methods altered according to seasonal stresses. In this paper, we propose an algorithm to estimate seawater temperature in marine aquaculture by combining seawater temperature data and actual weather data.


2005 ◽  
Vol 15 (3) ◽  
pp. 572-576
Author(s):  
Matthew L. Richardson ◽  
Dewey M. Caron

Various instruments and contract services can be used to calculate degree-days. This study compared instruments and services to the Wescor Biophenometer, an instrument used by cooperators of the Southeast Pennsylvania IPM Research Group (SE PA IPM RG) throughout Delaware and southeastern Pennsylvania for 10 years. Instruments evaluated in the study were the Wescor Biophenometer Datalogger, Avatel HarvestGuard, Avatel Datascribe Junior, Davis Weather Monitor II, Accu-Trax, and the HOBO H8 Pro Temperature Data Logger. The services were SkyBit and national weather data. Different combinations of instruments and services were used at three locations in Pennsylvania and four locations in Delaware over a 2-year period. We checked the degree-day accumulation of each instrument and service weekly and made statistical comparisons among the instruments and services at each site. To further construct a comparison of the instruments, we noted distinctive qualities of each instrument, interviewed the manufacturers, and received feedback from SE PA IPM RG members who used the instruments. We evaluated the instruments' algorithms, durability, cost, temperature sampling interval, ease of use, time input required by the user, and other distinctive factors. Statistically, there were no significant differences in degree-day accumulations between the Biophenometer, HarvestGuard, Datascribe, Weather Monitor II, SkyBit, or national weather data. However, cost and time required to access/interpret data and personal preference should be major considerations in choosing an instrument or service to measure degree-days.
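To make the quantity being compared concrete, the textbook simple-average degree-day calculation is sketched below. The instruments above each use their own algorithms (some proprietary, some based on sine-wave integration); this is only the averaging variant, with an assumed base temperature.

```python
# Simple-average degree-day method: DD = max(0, (Tmax + Tmin)/2 - base).
# Base temperature of 10 degrees C is an illustrative assumption; the
# appropriate base depends on the pest or crop being tracked.

def degree_days(tmax, tmin, base=10.0):
    """Degree-days for one day from its high and low temperatures."""
    return max(0.0, (tmax + tmin) / 2.0 - base)

def accumulate(daily_highs_lows, base=10.0):
    """Running accumulation over a season of (Tmax, Tmin) pairs."""
    return sum(degree_days(hi, lo, base) for hi, lo in daily_highs_lows)
```

Differences between instruments arise partly from the sampling interval and the integration method chosen, which is why the study compares weekly accumulations rather than single daily values.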


Author(s):  
Ehsan Saeidpour Parizy ◽  
Ali Jahanbani Ardakani ◽  
Arash Mohammadi ◽  
Kenneth A. Loparo
