Benchmarking homogenization algorithms for monthly data

2012 ◽  
Vol 8 (1) ◽  
pp. 89-115 ◽  
Author(s):  
V. K. C. Venema ◽  
O. Mestre ◽  
E. Aguilar ◽  
I. Auer ◽  
J. A. Guijarro ◽  
...  

Abstract. The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random independent break-type inhomogeneities with normally distributed breakpoint sizes were added to the simulated datasets. To approximate real-world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study. After the deadline, at which the details of the imposed inhomogeneities were revealed, 22 additional solutions were submitted. These homogenized datasets were assessed by a number of performance metrics, including (i) the centered root mean square error relative to the true homogeneous values at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed using both the individual station series and the network-average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data. Training the users on homogenization software was found to be very important. Moreover, state-of-the-art relative homogenization algorithms developed to work with an inhomogeneous reference are shown to perform best. The study showed that automatic algorithms can perform as well as manual ones.
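To make metric (i) concrete, here is a minimal sketch of a centered root mean square error, written in Python under the assumption that the homogenized and true series are aligned NumPy arrays; the function and variable names are illustrative, not taken from the study:

```python
import numpy as np

def centered_rmse(homogenized, truth):
    # Remove each series' mean so that only errors in the variability
    # around the mean are scored, not a constant offset.
    a = homogenized - homogenized.mean()
    b = truth - truth.mean()
    return np.sqrt(np.mean((a - b) ** 2))

# Toy usage with synthetic monthly anomalies (100 years of monthly values).
rng = np.random.default_rng(1)
truth = rng.normal(0.0, 1.0, 1200)
homogenized = truth + rng.normal(0.0, 0.3, 1200)
print(round(centered_rmse(homogenized, truth), 3))
```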

2011 ◽  
Vol 7 (4) ◽  
pp. 2655-2718 ◽  
Author(s):  
V. K. C. Venema ◽  
O. Mestre ◽  
E. Aguilar ◽  
I. Auer ◽  
J. A. Guijarro ◽  
...  

Abstract. The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities, modeled as a Poisson process with normally distributed breakpoint sizes, were added to the simulated datasets. To approximate real-world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study, as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics, including (i) the centered root mean square error relative to the true homogeneous values at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed using both the individual station series and the network-average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data. Training was found to be very important. Moreover, state-of-the-art relative homogenization algorithms developed to work with an inhomogeneous reference are shown to perform best. The study showed that automatic algorithms can currently perform as well as manual ones.
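The benchmark construction described here can be illustrated with a short sketch: the number of breaks is drawn from a Poisson distribution, breakpoints are placed at random, and break sizes are drawn from a normal distribution. The rate and standard deviation below are illustrative placeholders, not the values used in the study:

```python
import numpy as np

rng = np.random.default_rng(0)

def insert_breaks(series, mean_breaks=5.0, size_sd=0.8):
    # Break count ~ Poisson; breakpoint positions uniform over the record;
    # break sizes ~ Normal(0, size_sd). Each break shifts everything after it.
    out = np.asarray(series, dtype=float).copy()
    k = rng.poisson(mean_breaks)
    for pos in sorted(rng.integers(1, len(out), size=k)):
        out[pos:] += rng.normal(0.0, size_sd)
    return out
```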


1982 ◽  
Vol 63 (1) ◽  
pp. 23-28 ◽  
Author(s):  
Robert P. Harnack ◽  
William R. Sammler

The 1976 version of the University of Wisconsin model's ultra-long-range forecasts of monthly mean temperature and precipitation were verified for selected United States stations over the period 1976–80. In an overall sense, neither the pentad category forecasts for four months nor the individual-year forecasts for two months showed significant skill relative to random chance expectation. Slight positive skill was found for the July precipitation forecasts. Considerable variability in skill scores was seen from one month type to another and from year to year. The lack of demonstrated significant skill overall for the 1976–80 period contrasts with the positive results reported by the modelers for independent-sample forecasts made for the period 1961–75.
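Skill "relative to random chance expectation" for categorical forecasts is commonly measured with a contingency-table score such as the Heidke skill score. The abstract does not name the exact score used, so the sketch below is only one plausible reading:

```python
import numpy as np

def heidke_skill(forecast, observed, n_categories=3):
    # Fraction of correct category forecasts beyond what the
    # marginal frequencies would produce by chance.
    table = np.zeros((n_categories, n_categories))
    for f, o in zip(forecast, observed):
        table[f, o] += 1
    n = table.sum()
    hits = np.trace(table)
    expected = (table.sum(axis=1) * table.sum(axis=0)).sum() / n
    return (hits - expected) / (n - expected)

# Toy example: categories 0 = below, 1 = near, 2 = above normal.
print(round(heidke_skill([0, 1, 2, 1, 0, 2, 2, 1], [0, 1, 1, 1, 0, 2, 0, 1]), 3))
```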


2020 ◽  
Vol 12 (11) ◽  
pp. 4460 ◽  
Author(s):  
Mohammadsoroush Tafazzoli ◽  
Ehsan Mousavi ◽  
Sharareh Kermanshachi

Although the two concepts of lean and sustainable construction developed from different incentives and do not pursue exactly the same goals, there is considerable commonality between them. This paper discusses the potential for integrating the two approaches and their practices, and how the resulting synergy can lead to better fulfillment of the individual goals of each. Some limitations and challenges to implementing the integrated approach are also discussed. Based on a comprehensive review of existing papers on sustainable and lean construction topics, the commonality between the two approaches is discussed and grouped into five categories: (1) cost savings, (2) waste minimization, (3) jobsite safety improvement, (4) reduced energy consumption, and (5) improved customer satisfaction. The challenges of this integration are similarly identified and discussed in four main categories: (1) additional initial costs to the project, (2) difficulty of providing specialized expertise, (3) contractors' unwillingness to adopt the additional requirements, and (4) challenges of establishing a high level of teamwork. Industry professionals were then interviewed to rank the elements in each of the two categories of opportunities and challenges. The results of the study highlight how future research can pursue the development of a new Green-Lean approach by building on the commonalities and meeting the challenges of this integration.


2021 ◽  
Vol 11 (14) ◽  
pp. 6594
Author(s):  
Yu-Chia Hsu

The interdisciplinary nature of sports and the presence of various systemic and non-systemic factors make predicting sports match outcomes with a single disciplinary approach challenging. In contrast to previous studies that use sports performance metrics and statistical models, this study is the first to apply a deep learning approach from financial time series modeling to predict sports match outcomes. The proposed approach has two main components: a convolutional neural network (CNN) classifier for implicit pattern recognition and a logistic regression model for match outcome judgment. First, the raw data used in the prediction are derived from the betting market odds and actual scores of each game, which are transformed into sports candlesticks. Second, the CNN is used to classify the candlestick time series on a graphical basis. To this end, the original 1D time series are encoded into 2D matrix images using the Gramian angular field and are then fed into the CNN classifier. In this way, the winning probability of each matchup team can be derived from historically implied behavioral patterns. Third, to further account for the differences between strong and weak teams, the winning probability produced by the CNN classifier is adjusted by the logistic regression model, which then makes the final judgment regarding the match outcome. We empirically test this approach using data from 18,944 National Football League games spanning 32 years and find that using the individual historical data of each team in the CNN classifier for pattern recognition is better than using the data of all teams. The CNN in conjunction with the logistic regression judgment model outperforms the CNN in conjunction with SVM, Naïve Bayes, AdaBoost, J48, and random forest, and its accuracy surpasses that of the betting market prediction.
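The Gramian angular field encoding step can be made concrete with a short sketch. This implements the standard GASF (summation) definition from scratch; it is not the authors' code, and the input is assumed to be a plain 1D array of candlestick values:

```python
import numpy as np

def gramian_angular_field(series):
    # Rescale the series to [-1, 1], interpret each value as the cosine
    # of an angle, and form the matrix of pairwise angular sums:
    # GASF[i, j] = cos(phi_i + phi_j) = x_i*x_j - sqrt(1-x_i^2)*sqrt(1-x_j^2).
    x = np.asarray(series, dtype=float)
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1
    comp = np.sqrt(np.clip(1 - x**2, 0, 1))
    return np.outer(x, x) - np.outer(comp, comp)

image = gramian_angular_field(np.sin(np.linspace(0, 6, 32)))
print(image.shape)  # (32, 32) matrix image, ready to feed a CNN classifier
```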


2020 ◽  
Vol 13 (2) ◽  
pp. 407-442
Author(s):  
Nadia Naim

Abstract. The purpose of this article is to assess how Islamic finance can act as a vehicle to enhance the current intellectual property rights regime in the Gulf Cooperation Council (GCC). Islamic finance has developed within the constraints of sharia law and has been a growth sector for the GCC. This article will identify the main principles that contribute to the success of Islamic finance and that can enhance intellectual property protection in the GCC. The main sharia-compliant areas to be considered are musharaka, mudaraba, murabaha, takaful, istisna, ijara, salam and sukuk. The article will outline the founding principles of Islamic finance, the governance of sharia boards, the development of Islamic finance in the individual GCC states, the different frameworks of sharia-compliant investment products and the impact of intellectual property rights on the various Islamic finance investment tools. Furthermore, the article will discuss an integrated approach to intellectual property rights that draws lessons from the Islamic finance sector in relation to infrastructure, regulation and sharia compliance. The lessons learnt from Islamic finance will inform the overall framework of recommendations for an Islamic intellectual property model. The use of Islamic finance as a vehicle to promote better intellectual property rights, in terms of defining a new intellectual property approach, is novel. It is aimed at spearheading further research in this area, and it will form part of the overall integrated-approach proposals for intellectual property protection in the GCC and beyond.


2019 ◽  
Vol 2 ◽  
pp. 205920431984735
Author(s):  
Roger T. Dean ◽  
Andrew J. Milne ◽  
Freya Bailes

Spectral pitch similarity (SPS) is a measure of the similarity between spectra of any pair of sounds. It has proved powerful in predicting perceived stability and fit of notes and chords in various tonal and microtonal instrumental contexts, that is, with discrete tones whose spectra are harmonic or close to harmonic. Here we assess the possible contribution of SPS to listeners’ continuous perceptions of change in music with fewer discrete events and with noisy or profoundly inharmonic sounds, such as electroacoustic music. Previous studies have shown that time series of perception of change in a range of music can be reasonably represented by time series models, whose predictors comprise autoregression together with series representing acoustic intensity and, usually, the timbral parameter spectral flatness. Here, we study possible roles for SPS in such models of continuous perceptions of change in a range of both instrumental (note-based) and sound-based music (generally containing more noise and fewer discrete events). In the first analysis, perceived change in three pieces of electroacoustic and one of piano music is modeled, to assess the possible contribution of (de-noised) SPS in cooperation with acoustic intensity and spectral flatness series. In the second analysis, a broad range of nine pieces is studied in relation to the wider range of distinctive spectral predictors useful in previous perceptual work, together with intensity and SPS. The second analysis uses cross-sectional (mixed-effects) time series analysis to take advantage of all the individual response series in the dataset, and to assess the possible generality of a predictive role for SPS. SPS proves to be a useful feature, making a predictive contribution distinct from other spectral parameters. Because SPS is a psychoacoustic “bottom up” feature, it may have wide applicability across both the familiar and the unfamiliar in the music to which we are exposed.
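A minimal sketch of the kind of model described, an autoregression with exogenous acoustic predictors (intensity, spectral flatness, and SPS), fitted by ordinary least squares. The lag order, variable names, and single-series setup are illustrative assumptions; the study itself also uses cross-sectional (mixed-effects) variants:

```python
import numpy as np

def fit_arx(y, predictors, lags=2):
    # Regress the perceived-change series on its own past values plus
    # exogenous acoustic series (e.g., intensity, flatness, SPS).
    n = len(y)
    cols = [np.ones(n - lags)]                                   # intercept
    cols += [y[lags - k - 1 : n - k - 1] for k in range(lags)]   # AR terms
    cols += [p[lags:] for p in predictors]                       # exogenous terms
    X = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(X, y[lags:], rcond=None)
    return coef

# Toy usage: a slowly drifting response driven partly by intensity.
rng = np.random.default_rng(0)
intensity, flatness = rng.normal(size=500), rng.normal(size=500)
y = np.cumsum(rng.normal(size=500)) * 0.01 + 0.5 * intensity
print(fit_arx(y, [intensity, flatness], lags=2))
```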


2018 ◽  
Vol 25 (2) ◽  
pp. 291-300 ◽  
Author(s):  
Berenice Rojo-Garibaldi ◽  
David Alberto Salas-de-León ◽  
María Adela Monreal-Gómez ◽  
Norma Leticia Sánchez-Santillán ◽  
David Salas-Monreal

Abstract. Hurricanes are complex systems that carry large amounts of energy. Their impact often produces natural disasters involving the loss of human lives and of materials and infrastructure valued at billions of US dollars. However, not everything about hurricanes is negative, as hurricanes are the main source of rainwater for the regions where they develop. This study presents a nonlinear analysis of the time series of hurricane occurrence in the Gulf of Mexico and the Caribbean Sea from 1749 to 2012. The hurricane time series was constructed from the North Atlantic basin hurricane database (HURDAT) and published historical information, and it provides a unique historical record of ocean–atmosphere interactions. The Lyapunov exponent indicated that the system presents chaotic dynamics, and the spectral and nonlinear analyses of the hurricane time series showed behavior at the edge of chaos. One possible explanation for this edge-of-chaos behavior is the chaotic behavior of individual hurricanes, whether considered by category or individually regardless of category, combined with their quasi-regular behavior as a whole.
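A common way to estimate the largest Lyapunov exponent from a scalar record such as a yearly hurricane-count series is Rosenstein's method. The sketch below implements it under illustrative embedding parameters; the abstract does not state the exact method or settings the authors used:

```python
import numpy as np

def largest_lyapunov(x, dim=5, tau=1, min_sep=10, max_t=30):
    x = np.asarray(x, dtype=float)
    # Time-delay embedding of the scalar series.
    n = len(x) - (dim - 1) * tau
    emb = np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])
    # Nearest neighbour of each embedded point, excluding temporal neighbours.
    dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=2)
    for i in range(n):
        dists[i, max(0, i - min_sep) : i + min_sep + 1] = np.inf
    nn = dists.argmin(axis=1)
    # Mean log-divergence of neighbour pairs as they evolve forward in time.
    idx = np.arange(n)
    div = []
    for k in range(1, max_t):
        valid = (idx + k < n) & (nn + k < n)
        sep = np.linalg.norm(emb[idx[valid] + k] - emb[nn[valid] + k], axis=1)
        div.append(np.mean(np.log(sep[sep > 0])))
    # The slope of the divergence curve estimates the largest exponent;
    # a positive slope is the usual signature of chaotic dynamics.
    return np.polyfit(np.arange(1, max_t), div, 1)[0]
```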


2021 ◽  
Author(s):  
Malte Oeljeklaus

This thesis investigates methods for traffic scene perception with monocular cameras for a basic environment model in the context of automated vehicles. The developed approach is designed with special attention to the computational limitations present in practical systems. For this purpose, three different scene representations are investigated: the prevalent road topology as the global scene context, the drivable road area, and the detection and spatial reconstruction of other road users. An approach is developed that allows for the simultaneous perception of all environment representations based on a multi-task convolutional neural network. The obtained results demonstrate the efficiency of the multi-task approach. In particular, sharing image features across the perception of the individual scene representations was found to improve computational performance.
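The shared-feature idea can be illustrated with a small multi-task network: one convolutional backbone computed once, with a lightweight head per scene representation. This is a generic PyTorch sketch, not the thesis architecture; layer sizes and head designs are placeholders:

```python
import torch
import torch.nn as nn

class MultiTaskPerception(nn.Module):
    # Shared convolutional backbone; expensive features are computed once
    # and reused by every task-specific head.
    def __init__(self, num_topologies=4):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.topology_head = nn.Sequential(          # global scene context
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_topologies)
        )
        self.area_head = nn.Conv2d(64, 1, 1)         # drivable-area logits
        # A detection/reconstruction head for road users would attach here too.

    def forward(self, image):
        feats = self.backbone(image)                 # shared by all heads
        return self.topology_head(feats), self.area_head(feats)

model = MultiTaskPerception()
topology_logits, area_logits = model(torch.randn(1, 3, 128, 128))
print(topology_logits.shape, area_logits.shape)     # (1, 4) and (1, 1, 32, 32)
```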


2018 ◽  
Vol 10 (11) ◽  
pp. 4112 ◽  
Author(s):  
Alessandra Durazzo ◽  
Johannes Kiefer ◽  
Massimo Lucarini ◽  
Emanuela Camilli ◽  
Stefania Marconi ◽  
...  

Italian cuisine and its traditional recipes enjoy ever-increasing popularity around the world. The "Integrated Approach" is the key to modern food research and the innovative challenge for analyzing and modeling agro-food systems in their totality. The present study aims at applying and evaluating Fourier Transform Infrared (FTIR) spectroscopy for the analysis of complex food matrices and food preparations. Nine traditional Italian recipes, including first courses, one-dish meals, side courses, and desserts, were selected and experimentally prepared. Prior to their analysis via FTIR spectroscopy, the samples were homogenized and lyophilized. The IR spectroscopic characterization and the assignment of the main bands were carried out. Numerous peaks, corresponding to functional groups and vibrational modes of the individual components, were highlighted. The spectra are affected by the preparation procedures, the cooking methods, and the cooking time. The qualitative analysis of the major functional groups can serve as a basis for discriminating the products and investigating fraud. For this purpose, the FTIR spectra were evaluated using Principal Component Analysis (PCA). Our results show how vibrational spectroscopy combined with a well-established chemometric data analysis method represents a potentially powerful tool in research linked to the food sector and beyond. This study is a first step towards the development of new indicators of food quality.
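A minimal sketch of the PCA evaluation step, assuming the preprocessed FTIR spectra are stacked into a matrix with one row per recipe sample and one column per wavenumber channel; the matrix shape and the placeholder data are illustrative:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Placeholder standing in for measured absorbances:
# one row per recipe sample, one column per wavenumber channel.
rng = np.random.default_rng(0)
spectra = rng.random((9, 1800))

# Standardize each wavenumber channel, then project the samples
# onto the two leading principal components.
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(spectra))
print(scores)  # sample coordinates in PC space, used to group/discriminate recipes
```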


2019 ◽  
Vol 11 (1) ◽  
pp. 81-97 ◽  
Author(s):  
Syed Awais Ahmad Tipu

Purpose This paper aims to review the academic literature on business plan competitions in developed and emerging economies to assess the contribution to knowledge so far and to identify research gaps. Design/methodology/approach A variety of databases (such as ABI/Inform Global, Academic Search Complete, Business Source Premier and Emerald Full Text) were used to find peer-reviewed journal articles. With no restriction on the time period, different search terms were used to find relevant journal articles, such as business plan competitions, business plan contests, business plan teams, business plan judges, business plan development and business plan scores. After a careful review of the identified articles, a total of 22 articles were included in the final review. The articles in the final set were manually coded using thematic codes. Findings Despite the popularity of business plan competitions, limited academic literature exists, particularly in the context of emerging economies. A total of 16 of the 22 studies were conducted in developed economies. The findings suggest that the literature on business plan competitions is largely centered on the structure of the competitions, the characteristics of the participating teams and the benefits of the competitions. The individual-level benefits of business plan competitions include the development of entrepreneurial skills, opportunities for networking and access to mentors. Business plan competitions can be better aligned with public policy, particularly in the case of emerging economies. Therefore, a more focused and integrated approach among industry, academia and government in encouraging business plan competitions could potentially have a far-reaching impact in establishing an enterprising society. While much is known about the structure and the benefits of business plan competitions, various research gaps remain to be addressed. Originality/value The current paper is the first identifiable review of the literature on business plan competitions. The proposed questions for future research will potentially help in addressing the identified research gaps.

