Oceanic Ecosystem Time-Series Programs: Ten Lessons Learned

Oceanography ◽  
2010 ◽  
Vol 23 (3) ◽  
pp. 104-125 ◽  
Author(s):  
David Karl


Author(s):  
Shadi Aljawarneh ◽  
Aurea Anguera ◽  
John William Atwood ◽  
Juan A. Lara ◽  
David Lizcano

Abstract: Nowadays, large amounts of data are generated in the medical domain. Various physiological signals generated by different organs can be recorded to extract interesting information about patients’ health. The analysis of physiological signals is a hard task that requires specific approaches such as the Knowledge Discovery in Databases (KDD) process. Applying this process in the domain of medicine entails a series of implications and difficulties, especially regarding the application of data mining techniques to data, mainly time series, gathered from medical examinations of patients. The goal of this paper is to describe the lessons learned and the experience gathered by the authors while applying data mining techniques to real medical patient data, including time series. In this research, we carried out an exhaustive case study working on data from two medical fields: stabilometry (15 professional basketball players, 18 elite ice skaters) and electroencephalography (100 healthy patients, 100 epileptic patients). We applied a previously proposed knowledge discovery framework for classification purposes, obtaining good results in terms of classification accuracy (greater than 99% in both fields). These results are the groundwork for the lessons learned and recommendations made in this position paper, which is intended as a guide for experts facing similar medical data mining projects.
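As a rough illustration of the kind of pipeline described (classification of physiological time series), the sketch below extracts simple summary features per recording and cross-validates a standard classifier. The feature set, data shapes, and classifier are assumptions for illustration, not the authors' knowledge discovery framework.

```python
# Minimal, illustrative sketch of a time-series classification pipeline
# (not the authors' framework): summary features per recording, then a
# standard classifier with cross-validated accuracy.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def extract_features(series: np.ndarray) -> np.ndarray:
    """Simple summary statistics for one physiological time series."""
    diffs = np.diff(series)
    return np.array([
        series.mean(), series.std(),
        series.min(), series.max(),
        diffs.mean(), diffs.std(),
    ])

# Hypothetical data: 200 recordings of 1024 samples each, binary labels
rng = np.random.default_rng(0)
recordings = rng.normal(size=(200, 1024))
labels = rng.integers(0, 2, size=200)

X = np.vstack([extract_features(r) for r in recordings])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, labels, cv=5)
print(f"mean CV accuracy: {scores.mean():.3f}")
```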


Land ◽  
2020 ◽  
Vol 9 (12) ◽  
pp. 516
Author(s):  
Marcus V. F. Silveira ◽  
Caio A. Petri ◽  
Igor S. Broggio ◽  
Gabriel O. Chagas ◽  
Mateus S. Macul ◽  
...  

The 2019 fire crisis in Amazonia dominated global news and triggered fundamental questions about its possible causes. Here we performed an in-depth investigation of the drivers of active fire anomalies in the Brazilian Amazon biome. We assessed a 2003–2019 time series of active fires, deforestation, and water deficit and evaluated potential drivers of active fire occurrence in 2019 at the biome, state, and local levels. Our results revealed abnormally high monthly fire counts in 2019 for the states of Acre, Amazonas, and Roraima. These states also differed from the others by exhibiting extreme levels of deforestation in that year. Areas in 2019 with active fire occurrence significantly greater than the biome-wide average had, on average, three times more active fires in the three previous years, six times more deforestation in 2019, and five times more deforestation in the five previous years. Approximately one third of yearly active fires from 2003 to 2019 occurred within 1 km of areas deforested in the same year, and one third of the areas deforested in a given year were located within 500 m of areas deforested in the previous year. These findings provide critical information to support strategic decisions for fire prevention policies and fire combat actions.
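A minimal sketch of the monthly anomaly logic the abstract describes, assuming hypothetical monthly active-fire counts: months in 2019 are flagged when their counts exceed the 2003–2018 mean for that month by more than two standard deviations. The data, threshold, and layout are illustrative only, not the authors' analysis.

```python
# Illustrative sketch (not the authors' analysis): flag months in 2019 whose
# active-fire counts exceed the 2003-2018 mean for that month by more than
# two standard deviations.
import numpy as np
import pandas as pd

# Hypothetical monthly active-fire counts indexed by (year, month)
rng = np.random.default_rng(1)
index = pd.MultiIndex.from_product([range(2003, 2020), range(1, 13)],
                                   names=["year", "month"])
fires = pd.Series(rng.poisson(500, size=len(index)), index=index, name="fires")

baseline = fires.loc[2003:2018].groupby(level="month").agg(["mean", "std"])
observed_2019 = fires.loc[2019]

z = (observed_2019 - baseline["mean"]) / baseline["std"]
anomalous_months = z[z > 2].index.tolist()
print("anomalous months in 2019:", anomalous_months)
```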


2021 ◽  
Vol 12 (2) ◽  
pp. 1-21
Author(s):  
Zijian Li ◽  
Ruichu Cai ◽  
Hong Wei Ng ◽  
Marianne Winslett ◽  
Tom Z. J. Fu ◽  
...  

Data-driven models are becoming essential parts of modern mechanical systems, commonly used to capture the behavior of various equipment and varying environmental characteristics. Despite their excellent adaptivity to highly dynamic and aging equipment, these data-driven models are usually hungry for massive amounts of labels, mostly contributed by human engineers at a high cost. Fortunately, domain adaptation enhances model generalization by utilizing labeled source data together with unlabeled target data. However, mainstream domain adaptation methods cannot achieve ideal performance on time series data, since they assume that the conditional distributions are equal. This assumption works well for static data but is inapplicable to time series data, where even a first-order Markov dependence assumption implies dependence between any two consecutive time steps. In this article, we assume that the causal mechanism is invariant and present our Causal Mechanism Transfer Network (CMTN) for time series domain adaptation. By capturing the causal mechanisms of time series data, CMTN allows data-driven models to exploit existing data and labels from similar systems, such that the resulting model on a new system is highly reliable even with limited data. We report our empirical results and lessons learned from two real-world case studies, on chiller plant energy optimization and boiler fault detection, in which CMTN outperforms the existing state-of-the-art methods.
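The sketch below illustrates generic feature-level domain adaptation for time series, not CMTN itself: an LSTM encoder is trained with a supervised loss on labeled source sequences plus a simple distribution-matching penalty between source and target features. The architecture, loss weighting, and data shapes are assumptions for illustration.

```python
# Illustrative sketch of feature-level domain adaptation for time series
# (a generic alignment approach, not the authors' CMTN): an LSTM encoder is
# trained with a supervised loss on labeled source sequences plus a
# distribution-matching penalty between source and target features.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, n_features: int, hidden: int = 32, n_classes: int = 2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):
        _, (h, _) = self.lstm(x)       # h: (1, batch, hidden)
        feat = h.squeeze(0)            # last hidden state as sequence feature
        return feat, self.head(feat)

def mmd(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Simple linear-kernel MMD between two feature batches."""
    return ((a.mean(dim=0) - b.mean(dim=0)) ** 2).sum()

# Hypothetical data: source is labeled, target is not
src_x = torch.randn(64, 50, 8)          # (batch, time, features)
src_y = torch.randint(0, 2, (64,))
tgt_x = torch.randn(64, 50, 8)

model = Encoder(n_features=8)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

for step in range(100):
    src_feat, src_logits = model(src_x)
    tgt_feat, _ = model(tgt_x)
    loss = ce(src_logits, src_y) + 0.5 * mmd(src_feat, tgt_feat)
    opt.zero_grad()
    loss.backward()
    opt.step()
```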


2016 ◽  
Vol 50 (3) ◽  
pp. 109-113
Author(s):  
Michael G. Morley ◽  
Marlene A. Jeffries ◽  
Steven F. Mihály ◽  
Reyna Jenkyns ◽  
Ben R. Biffard

Abstract: Ocean Networks Canada (ONC) operates the NEPTUNE and VENUS cabled ocean observatories to collect continuous data on physical, chemical, biological, and geological ocean conditions over multiyear time periods. Researchers can download real-time and historical data from a large variety of instruments to study complex earth and ocean processes from their home laboratories. Ensuring that the users are receiving the most accurate data is a high priority at ONC, requiring QAQC (quality assurance and quality control) procedures to be developed for a variety of data types (Abeysirigunawardena et al., 2015). Acquiring long-term time series of oceanographic data from remote locations on the seafloor presents significant challenges from a QAQC perspective. In order to identify and study important scientific events and trends, data consolidated from multiple deployments and instruments need to be self-consistent and free of biases due to changes to instrument configurations, calibrations, metadata, biofouling, or a degradation in instrument performance. As a case study, this paper describes efforts at ONC to identify and correct systematic biases in ocean current directions measured by ADCPs (acoustic Doppler current profilers), as well as the lessons learned to improve future data quality.
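As a minimal sketch of one kind of systematic correction such a QAQC effort might apply (not ONC's actual procedure), the snippet below removes a constant heading bias from measured current directions and wraps the result back into [0, 360) degrees; the bias value and measurements are hypothetical.

```python
# Illustrative sketch (not ONC's QAQC procedure): correcting a systematic
# heading bias in ADCP current directions by applying a constant offset and
# wrapping back into [0, 360) degrees.
import numpy as np

def correct_direction(direction_deg: np.ndarray, heading_bias_deg: float) -> np.ndarray:
    """Remove a known heading bias from measured current directions."""
    return (direction_deg - heading_bias_deg) % 360.0

# Hypothetical deployment: directions biased by +12.5 degrees
measured = np.array([10.0, 95.0, 181.0, 359.0])
corrected = correct_direction(measured, heading_bias_deg=12.5)
print(corrected)   # [357.5  82.5 168.5 346.5]
```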


Author(s):  
Witold Kinsner

Teaching digital signal processing at the graduate and undergraduate levels has a long tradition at universities. The signals include time series, still images, videos, and volumetric data such as radar and Doppler radar. The traditional topics such as spectral techniques in single-scale analysis and synthesis are now being expanded to include wavelet bases for multiscale analysis and synthesis [2]. The course described in this paper [1] expands the analysis to polyscale analysis and synthesis as it relates to self-affine processes and dynamical systems [3-4]. This course presents the foundations of fractal (polyscale) and chaos theory, with applications to engineering. A unified approach to fractal dimensions provides tools for multiscale and polyscale analysis of time series, images, video, and other objects. Other topics include analysis and synthesis of mono- and multifractal coloured noise for research purposes, as well as stability analysis of dynamical systems, characterization of chaos using Lyapunov exponents, and reconstruction of strange attractors from experimental data. The course also provides a unified description of 19 different fractal dimensions grouped in four classes based on set morphology, entropy, spectrum, and variance. Special attention is given to (a) probability and pair-correlation algorithms for E-dimensional images and strange attractors, (b) batch and real-time computation of the variance fractal dimension, and (c) the Rényi dimension spectrum formulation for fractals and multifractals. The objective is to learn how to characterize multifractals through multi- and poly-scale analyses, and how to extract features for their classification. This paper describes the structure of the course, the set of topics covered, the set of course projects, and the lessons learned from the extensive experience with the course.
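A minimal sketch of a batch variance fractal dimension estimate for a one-dimensional time series, in the spirit of the course topics but not Kinsner's exact algorithm: the Hurst exponent H is estimated from the log-log slope of increment variance versus lag, and the dimension follows as D_sigma = E + 1 - H with embedding dimension E = 1. The lag set and test signal are assumptions.

```python
# Illustrative sketch of a batch variance fractal dimension estimate for a
# 1-D time series (a simplified version of the idea, not the exact course
# algorithm): Var[x(t+k) - x(t)] ~ k^(2H), so H is half the log-log slope,
# and D_sigma = 2 - H for a time series (embedding dimension E = 1).
import numpy as np

def variance_fractal_dimension(x: np.ndarray, lags=(1, 2, 4, 8, 16, 32)) -> float:
    log_lag, log_var = [], []
    for k in lags:
        increments = x[k:] - x[:-k]
        log_lag.append(np.log(k))
        log_var.append(np.log(np.var(increments)))
    slope, _ = np.polyfit(log_lag, log_var, 1)
    hurst = slope / 2.0
    return 2.0 - hurst

# Hypothetical test signal: Brownian motion should give D_sigma close to 1.5
rng = np.random.default_rng(2)
brownian = np.cumsum(rng.normal(size=100_000))
print(f"D_sigma ~ {variance_fractal_dimension(brownian):.2f}")
```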


2021 ◽  
pp. 111341
Author(s):  
Jerrald L. Rector ◽  
Sanne M.W. Gijzel ◽  
Ingrid A. van de Leemput ◽  
Fokke B. van Meulen ◽  
Marcel G.M. Olde Rikkert ◽  
...  

2019 ◽  
Vol 11 (13) ◽  
pp. 3546 ◽  
Author(s):  
Wei He ◽  
Yuan Fang ◽  
Reza Malekian ◽  
Zhixiong Li

With the rapid penetration of Internet applications, online media have become an important carrier of public opinion. The opinions and comments expressed on the Internet by young college students, one of the most active groups of netizens, have become an essential part of online public opinion in colleges and universities. However, existing systems generally employ simple statistical methods to analyze the effect of online public opinion on the image and reputation of colleges and universities, without taking into account other factors such as the hotness characteristics of online public opinion and semantic information. Therefore, on the basis of a Public Opinion Hotness Index and time series-based trend analysis, together with topics extracted using the latent Dirichlet allocation (LDA) topic model, this study aims to improve the analysis of online public opinion in colleges and universities using short-term trend prediction results. The experience and lessons learned from a real case may provide strong data support and feasible suggestions for colleges and universities in analyzing and guiding online public opinion.
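As an illustration of the topic-extraction step the abstract mentions (not the paper's system), the sketch below fits scikit-learn's LDA implementation to a tiny hypothetical corpus of comments and prints the top terms per topic; the corpus, topic count, and preprocessing are assumptions.

```python
# Illustrative sketch (not the paper's system): extracting topics from online
# comments with scikit-learn's LDA, the kind of topic-modelling step that
# precedes hotness-index and trend analysis.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Hypothetical corpus of student comments
comments = [
    "tuition fees increase announcement discussion",
    "campus food quality complaints cafeteria",
    "exam schedule change stress students",
    "tuition fees protest campus",
    "cafeteria food poisoning report",
    "exam results delay complaints",
]

vectorizer = CountVectorizer()
doc_term = vectorizer.fit_transform(comments)

lda = LatentDirichletAllocation(n_components=3, random_state=0)
lda.fit(doc_term)

terms = vectorizer.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top = [terms[j] for j in weights.argsort()[-4:][::-1]]
    print(f"topic {i}: {', '.join(top)}")
```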


2021 ◽  
Vol 11 (24) ◽  
pp. 11932
Author(s):  
Dieter De Paepe ◽  
Sander Vanden Hautte ◽  
Bram Steenwinckel ◽  
Pieter Moens ◽  
Jasper Vaneessen ◽  
...  

Companies are increasingly gathering and analyzing time-series data, driven by the rising number of IoT devices. Many works in the literature describe analysis systems built using either data-driven or semantic (knowledge-driven) techniques; however, few describe hybrid combinations of the two. Dyversify, a collaborative project between industry and academia, investigated how event and anomaly detection can be performed on time-series data in such a hybrid setting. We built a proof-of-concept analysis platform, using a microservice architecture to ensure scalability and fault tolerance. The platform comprises time-series ingestion, long-term storage, data semantification, event detection using data-driven and semantic techniques, dynamic visualization, and user feedback. In this work, we describe the system architecture of this hybrid analysis platform and give an overview of the different components and their interactions. As such, the main contribution of this work is an experience report with challenges faced and lessons learned.
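A minimal sketch of the kind of data-driven anomaly detector such a platform might run per time series (not a component of the Dyversify platform): a rolling z-score over a sliding window flags points that deviate strongly from the recent baseline. The window size, threshold, and input stream are assumptions.

```python
# Illustrative sketch (not the Dyversify platform): a simple data-driven
# anomaly detector per time series, using a rolling z-score over a sliding
# window of recent values.
from collections import deque
import random
import statistics

class RollingZScoreDetector:
    """Flags points that deviate strongly from a sliding-window baseline."""
    def __init__(self, window: int = 100, threshold: float = 3.0):
        self.values = deque(maxlen=window)
        self.threshold = threshold

    def update(self, value: float) -> bool:
        is_anomaly = False
        if len(self.values) >= 10:
            mean = statistics.fmean(self.values)
            stdev = statistics.pstdev(self.values)
            if stdev > 0 and abs(value - mean) / stdev > self.threshold:
                is_anomaly = True
        self.values.append(value)
        return is_anomaly

# Hypothetical sensor stream with one obvious spike
random.seed(0)
stream = [random.gauss(1.0, 0.1) for _ in range(60)] + [12.0, 1.0]
detector = RollingZScoreDetector(window=50, threshold=3.0)
anomalies = [i for i, v in enumerate(stream) if detector.update(v)]
print("anomalous indices:", anomalies)
```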

