Real-Scenario Testing of an Active Phasor Data Concentrator

Author(s):  
Paolo Castello ◽  
Carlo Muscas ◽  
Paolo Attilio Pegoraro ◽  
Asja Derviskadic ◽  
Guglielmo Frigo ◽  
...  


Smart Cities ◽  
2021 ◽  
Vol 4 (3) ◽  
pp. 1058-1086
Author(s):  
Franklin Oliveira ◽  
Daniel G. Costa ◽  
Luciana Lima ◽  
Ivanovitch Silva

The rapid transformation of urban centers, driven by the impacts of climate change and the dramatic events of the COVID-19 pandemic, will profoundly influence our daily mobility. The resulting scenario is expected to favor the adoption of cleaner and more flexible modal solutions centered on bicycles and scooters, especially as last-mile options. However, as the use of bicycles has rapidly increased, cyclists have been exposed to adverse conditions that may affect their health and safety when cycling in urban areas. Therefore, while cities should implement mechanisms to monitor and evaluate adverse conditions on cycling paths, cyclists should have an effective means of visualizing the quality of those paths, supporting the choice of more appropriate routes. This article thus proposes a comprehensive multi-parameter system based on multiple independent subsystems, covering all phases of data collection, formatting, transmission, and processing related to monitoring, evaluating, and visualizing the quality of cycling paths with respect to the adverse conditions that affect cyclists. The interactions of all modules are carefully described, as are implementation and deployment details. Additionally, a case study of a large city in Brazil demonstrates how the proposed system can be adopted in a real scenario.
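A minimal sketch of how such a multi-parameter evaluation might combine sensed adverse-condition readings into a single path-quality score. The parameter names, value ranges, and weights below are illustrative assumptions, not those of the proposed system:

```python
# Hypothetical sketch: fusing adverse-condition parameters into one
# cycling-path quality score in [0, 1]. Parameters and weights are
# assumptions for illustration only.

def normalize(value, worst, best):
    """Map a raw reading onto [0, 1], where 1 is the most favourable end."""
    score = (value - worst) / (best - worst)
    return max(0.0, min(1.0, score))

def path_quality(scores, weights):
    """Weighted average of normalized parameter scores for one path segment."""
    total_w = sum(weights.values())
    return sum(weights[p] * s for p, s in scores.items()) / total_w

# Example segment with three assumed environmental parameters.
scores = {
    "air_quality": normalize(35, worst=150, best=0),   # PM2.5 in ug/m3
    "noise":       normalize(70, worst=100, best=40),  # dB(A)
    "temperature": normalize(28, worst=45, best=20),   # deg C
}
weights = {"air_quality": 0.5, "noise": 0.3, "temperature": 0.2}
print(round(path_quality(scores, weights), 3))
```

A visualization layer could then colour each segment of a route by this score, which is the kind of indirect quality cue the abstract describes for route choice.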


Aerospace ◽  
2021 ◽  
Vol 8 (4) ◽  
pp. 113
Author(s):  
Pedro Andrade ◽  
Catarina Silva ◽  
Bernardete Ribeiro ◽  
Bruno F. Santos

This paper presents a Reinforcement Learning (RL) approach to optimize the long-term scheduling of maintenance for an aircraft fleet. The problem considers fleet status, maintenance capacity, and other maintenance constraints to schedule hangar checks over a specified time horizon. The checks are scheduled within an interval, and the goal is to schedule them as close as possible to their due dates. In doing so, the number of checks is reduced and fleet availability increases. A Deep Q-learning algorithm is used to optimize the scheduling policy. The model is validated in a real scenario using maintenance data from 45 aircraft. The maintenance plan generated with our approach is compared with a previous study, which presented a Dynamic Programming (DP) based approach, and with airline estimations for the same period. The results show a reduction in the number of scheduled checks, which indicates the potential of RL in solving this problem. The adaptability of RL is also tested by introducing small disturbances into the initial conditions. After training the model on these simulated scenarios, the results show the robustness of the RL approach and its ability to generate efficient maintenance plans in only a few seconds.
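The core incentive ("schedule each check as close as possible to its due date") can be sketched with a toy tabular Q-learning agent. This is a deliberately simplified stand-in: the paper uses Deep Q-learning over a full fleet state, while the one-check environment, reward shape, and hyperparameters here are assumptions for illustration:

```python
import random

# Toy scheduling environment: one maintenance check must be performed on or
# before its due date DUE. Reward grows the closer to DUE it is scheduled;
# waiting past DUE is penalized. Tabular Q-learning stands in for the
# paper's Deep Q-learning (illustrative assumption).

DUE = 5            # due-date day index
ACTIONS = (0, 1)   # 0 = wait one day, 1 = schedule the check today

def step(day, action):
    """Return (next_day, reward, done)."""
    if action == 1:              # scheduling today: reward = how late we are
        return day, float(day), True
    if day + 1 > DUE:            # waited past the due date: penalty
        return day, -10.0, True
    return day + 1, 0.0, False

def train(episodes=2000, alpha=0.2, gamma=0.95, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = {(d, a): 0.0 for d in range(DUE + 1) for a in ACTIONS}
    for _ in range(episodes):
        day, done = 0, False
        while not done:
            if rng.random() < eps:                          # explore
                a = rng.choice(ACTIONS)
            else:                                           # exploit
                a = max(ACTIONS, key=lambda x: q[(day, x)])
            nxt, r, done = step(day, a)
            target = r if done else r + gamma * max(q[(nxt, b)] for b in ACTIONS)
            q[(day, a)] += alpha * (target - q[(day, a)])
            day = nxt
    return q

q = train()
policy = [max(ACTIONS, key=lambda a: q[(d, a)]) for d in range(DUE + 1)]
print(policy)  # greedy policy: wait until the due date, then schedule
```

Even in this toy, the learned policy delays the check to its due date, which is the mechanism that reduces the total number of checks over a horizon.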


Sensors ◽  
2018 ◽  
Vol 18 (11) ◽  
pp. 3953 ◽  
Author(s):  
Bruno Abade ◽  
David Perez Abreu ◽  
Marilia Curado

Smart Environments try to adapt their conditions, focusing on the detection, localisation, and identification of people in order to improve their comfort. It is common to use different sensors, actuators, and analytic techniques in such environments to process data from the surroundings and actuate accordingly. In this research, a solution to improve the user’s experience in Smart Environments, based on information obtained from indoor areas and following a non-intrusive approach, is proposed. We used Machine Learning techniques to detect occupants and estimate the number of people in a specific indoor space. The proposed solution was tested in a real scenario using a prototype system, composed of nodes and sensors, specifically designed and developed to gather the environmental data of interest. The results obtained demonstrate that with the developed system it is possible to obtain, process, and store environmental information. Additionally, the analysis performed on the gathered data using Machine Learning and pattern-recognition mechanisms shows that it is possible to determine the occupancy of indoor environments.
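The non-intrusive occupancy-detection idea can be illustrated with a tiny classifier over environmental readings. The abstract does not specify the model or features, so the CO2/temperature features, synthetic samples, and k-NN classifier below are all assumptions:

```python
import math

# Illustrative sketch: classify whether an indoor space is occupied from
# environmental readings. Feature choice (CO2, temperature), data, and the
# k-NN classifier are assumptions, not the paper's exact pipeline.

def knn_predict(train, query, k=3):
    """Majority vote among the k nearest labelled samples (Euclidean)."""
    nearest = sorted(train, key=lambda s: math.dist(s[0], query))[:k]
    votes = [label for _, label in nearest]
    return max(set(votes), key=votes.count)

# Synthetic samples: (CO2 ppm, temperature C) -> occupancy label.
train = [
    ((420, 20.5), "empty"),    ((450, 21.0), "empty"),    ((480, 20.8), "empty"),
    ((900, 23.5), "occupied"), ((1100, 24.0), "occupied"), ((950, 23.0), "occupied"),
]
print(knn_predict(train, (1000, 23.8)))  # high CO2 + warm -> "occupied"
```

Estimating the *number* of occupants, as in the paper, would replace the binary label with a count and a regression or multi-class model, but the sensing-to-inference flow is the same.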


2019 ◽  
Vol 31 (1) ◽  
Author(s):  
Suzana Margareth Lobo ◽  
Ederlon Rezende ◽  
Ciro Leite Mendes ◽  
Mirella Cristinne de Oliveira

2021 ◽  
Author(s):  
Vítor Alcácer ◽  
Carolina Rodrigues ◽  
Helena Carvalho ◽  
Virgilio Cruz-Machado

Abstract In order to track Industry 4.0 status, readiness models can be used to analyze the state of implementation of Industry 4.0 technologies, allowing the quantification and qualification of the readiness level across different dimensions. In this regard, some companies are unable to relate Industry 4.0 to their business models, leading to a lack of correct self-assessment and of understanding of the readiness level reached. Not all companies are adopting these new technologies with the same ease or at the same pace. For this purpose, it is important to understand how Industry 4.0 readiness has been assessed so far and what the barriers to the adoption of these enabling technologies by industry are. This paper aims to assess the Industry 4.0 readiness level of companies, to understand companies’ perception of the barriers to adopting Industry 4.0 enabling technologies, and to bring new barriers to the discussion in the academic community. To this end, empirical data were collected from a sample of 15 companies belonging to an important industrial cluster in Portugal.


Author(s):  
Y. Xu ◽  
S. Tuttas ◽  
L. Hoegner ◽  
U. Stilla

This paper presents an approach for the classification of photogrammetric point clouds of scaffolding components on a construction site, as a preparation for the automatic monitoring of the site through the reconstruction of an as-built Building Information Model (as-built BIM). Points belonging to the tubes and toeboards of scaffolds are distinguished via a subspace clustering process and a principal component analysis (PCA) algorithm. The overall workflow includes four essential processing steps. Initially, a spherical support region is selected for each point. In the second step, the normalized cut algorithm, based on spectral clustering theory, is introduced for the subspace clustering, so as to select suitable subspace clusters of points and avoid outliers. In the third step, the feature of each point is calculated by measuring distances between points and the plane of the local reference frame defined by PCA within the cluster. Finally, point types are distinguished and labelled through a supervised classification method using a random forest algorithm. The effectiveness and applicability of the proposed steps are investigated on both simulated test data and a real scenario. The results of the two experiments reveal that the proposed approach is suited to classifying points belonging to linear objects with differently shaped cross-sections. For the tests using a synthetic point cloud, the classification accuracy reaches 80%, even under contamination by noise and outliers. In the real scenario, our method achieves a classification accuracy of better than 63% without using any information about the normal vector of the local surface.
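The PCA step that underpins this workflow can be sketched compactly: the covariance matrix of a point neighbourhood has a strongly dominant eigenvector when the points lie on a linear object such as a scaffold tube. The synthetic "tube" below and the power-iteration eigensolver are assumptions for illustration, not the paper's implementation:

```python
import random

# Sketch of the local-PCA idea: the dominant eigenvector of a point
# neighbourhood's covariance matrix gives its principal direction. For a
# linear object (e.g. a scaffold tube) that direction is the tube axis.
# Synthetic data and the power-iteration solver are illustrative assumptions.

def covariance(points):
    n = len(points)
    mean = [sum(p[i] for p in points) / n for i in range(3)]
    c = [[0.0] * 3 for _ in range(3)]
    for p in points:
        d = [p[i] - mean[i] for i in range(3)]
        for i in range(3):
            for j in range(3):
                c[i][j] += d[i] * d[j] / n
    return c

def dominant_direction(c, iters=100):
    """Power iteration: converges to the eigenvector of the largest eigenvalue."""
    v = [1.0, 1.0, 1.0]
    for _ in range(iters):
        w = [sum(c[i][j] * v[j] for j in range(3)) for i in range(3)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Synthetic "tube": points along the x axis with small lateral noise.
rng = random.Random(1)
tube = [(i / 50, rng.gauss(0, 0.02), rng.gauss(0, 0.02)) for i in range(100)]
axis = dominant_direction(covariance(tube))
print([round(abs(x), 2) for x in axis])  # close to [1.0, 0.0, 0.0]
```

The per-point feature described in the abstract (distance to the plane of the PCA-defined local frame) follows directly: project each point onto the recovered frame and take its offset from the plane spanned by the two dominant axes.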


2020 ◽  
Author(s):  
Jose Francisco Meneses-Echavez ◽  
Sarah Rosenbaum ◽  
Gabriel Rada ◽  
Signe Flottorp ◽  
Jenny Moberg ◽  
...  

Abstract Background: Evidence to Decision (EtD) frameworks bring clarity, structure, and transparency to health care decision making. The interactive Evidence to Decision (iEtD) tool, developed in the context of the DECIDE project and published by Epistemonikos, is a stand-alone online solution for producing and using EtD frameworks. Since its development, little is known about how organizations have been using the iEtD tool and what characterizes users’ experiences with it. Methods: This study aimed to describe users’ experiences with the iEtD and to identify the main barriers and facilitators related to its use. We contacted all users registered in the iEtD via email and invited those who identified themselves as having used the solution to a semi-structured interview. Audio recordings were transcribed, and one researcher conducted a content analysis of the interviews guided by a user-experience framework. Two researchers independently checked the content for accuracy. Results: Of 860 people contacted, 81 replied to our introductory email (response rate 9.4%). Twenty of these had used the tool in a real scenario and were invited to an interview. We interviewed all eight users who accepted this invitation (from six countries, four continents). ‘Guideline development’ was the iEtD use scenario they most commonly identified. Most participants reported an overall positive experience, without major difficulties navigating or using the different sections, and reported having used most of the EtD framework criteria. Participants reported tailoring their frameworks, for instance by adding or deleting criteria, translating to another language, or rewording headings. Several preferred to produce a Word version rather than work online, due to the burden of completing the framework or a lack of experience with the tool. Some reported difficulties working with the exportable formats, as these needed considerable editing. Conclusion: A very limited number of guideline developers have used the iEtD tool published by Epistemonikos since its development. Although users’ general experiences are positive, our work has identified some aspects of the tool that need improvement. Our findings could also be applied to the development or improvement of other solutions for producing or using EtD frameworks.


Author(s):  
Indira Lanza-Cruz ◽  
Rafael Berlanga ◽  
María José Aramburu

Social Business Intelligence (SBI) enables companies to capture strategic information from public social networks. In contrast to traditional Business Intelligence (BI), SBI has to face the high dynamicity of both the social network contents and the company’s analytical requests, as well as an enormous amount of noisy data. Effective exploitation of these continuous data sources requires efficient processing of the streamed data so that it can be semantically shaped into insightful facts. In this paper, we propose a multidimensional formalism to represent and evaluate social indicators directly from fact streams derived, in turn, from social network data. This formalism relies on two main aspects: the semantic representation of facts via Linked Open Data and the support of OLAP-like multidimensional analysis models. Contrary to traditional BI formalisms, we start the process by modeling the required social indicators according to the strategic goals of the company. From these specifications, all the required fact streams are modeled and deployed to trace the indicators. The main advantages of this approach are the easy definition of on-demand social indicators and the treatment of changing dimensions and metrics through streamed facts. We demonstrate its usefulness through a real-scenario use case in the automotive sector.
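The "fact stream to indicator" idea can be sketched as an OLAP-style roll-up over streamed facts: each fact carries dimension values and a measure, and an indicator is an aggregation grouped by chosen dimensions. The dimension and measure names below are illustrative assumptions; the paper's formalism is based on Linked Open Data rather than plain dictionaries:

```python
from collections import defaultdict

# Hypothetical sketch of an OLAP-like roll-up over a stream of social facts.
# Each fact is a dict of dimension values plus a numeric measure; an
# indicator averages the measure grouped by the requested dimensions.
# Dimension/measure names are illustrative, not the paper's formalism.

def indicator(stream, dims, measure):
    """Average of `measure` over the stream, grouped by the `dims` tuple."""
    totals, counts = defaultdict(float), defaultdict(int)
    for fact in stream:
        key = tuple(fact[d] for d in dims)
        totals[key] += fact[measure]
        counts[key] += 1
    return {k: totals[k] / counts[k] for k in totals}

# Facts derived from imaginary social posts about car brands.
facts = [
    {"brand": "A", "country": "ES", "sentiment": 0.8},
    {"brand": "A", "country": "ES", "sentiment": 0.4},
    {"brand": "B", "country": "ES", "sentiment": -0.2},
]
result = indicator(facts, dims=("brand",), measure="sentiment")
print({k: round(v, 2) for k, v in result.items()})
```

Because grouping keys are built per fact, new dimension values arriving in the stream simply create new groups, which is one way to accommodate the changing dimensions the abstract mentions.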

