The Design and Development of a Ship Trajectory Data Management and Analysis System Based on AIS

Sensors ◽  
2021 ◽  
Vol 22 (1) ◽  
pp. 310
Author(s):  
Chengxu Feng ◽  
Bing Fu ◽  
Yasong Luo ◽  
Houpu Li

To address the data storage, management, analysis, and mining of ship targets, the object-oriented method was employed to design the overall structure and functional modules of a ship trajectory data management and analysis system (STDMAS). This paper elaborates the detailed design and technical details of the system's logical structure, module composition, physical deployment, and main functional modules such as database management, trajectory analysis, trajectory mining, and situation analysis. A ship identification method based on motion features was also put forward: a ship trajectory is first partitioned into sub-trajectories corresponding to different behavioral patterns, and effective motion features are then extracted from them. Machine learning algorithms were used for training and testing to identify multiple types of ships. STDMAS implements functions such as database management, trajectory analysis, historical situation review, and ship identification and outlier detection based on trajectory classification. Because it is easy to apply, maintain, and expand, STDMAS can satisfy practical needs for the data management, analysis, and mining of maritime targets.
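The abstract does not spell out the extracted features or the classifier. As an illustration only, the Python sketch below assumes simple speed- and course-based features per sub-trajectory and uses a scikit-learn random forest standing in for the unspecified machine learning algorithms.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def motion_features(sub_traj):
    # sub_traj: array of shape (n, 3) with columns (timestamp_s, speed_kn, course_deg)
    t, speed, course = sub_traj[:, 0], sub_traj[:, 1], sub_traj[:, 2]
    dt = np.diff(t)
    accel = np.diff(speed) / dt                              # change of speed between fixes
    turn_rate = np.diff(np.unwrap(np.radians(course))) / dt  # rate of turn in rad/s
    return np.array([speed.mean(), speed.std(), speed.max(),
                     np.abs(accel).mean(),
                     np.abs(turn_rate).mean(), np.abs(turn_rate).max()])

def train_ship_classifier(sub_trajectories, labels):
    # labels: ship types (e.g. cargo, tanker, fishing), one per sub-trajectory
    X = np.vstack([motion_features(s) for s in sub_trajectories])
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    return clf.fit(X, labels)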

2019 ◽  
Vol 11 (1) ◽  
pp. 10 ◽  
Author(s):  
Jiwei Qin ◽  
Liangli Ma ◽  
Jinghua Niu

The rapid development of distributed technology has made it possible to store and query massive trajectory data, and a variety of schemes for big trajectory data management have been proposed. However, most of them do not consider the cost of data transmission, which affects query efficiency. In view of this, we present THBase, a coprocessor-based scheme for big trajectory data management in HBase. THBase introduces a segment-based data model and a moving-object-based partition model to handle massive trajectory data storage, and exploits a hybrid local secondary index structure based on the Observer coprocessor to accelerate spatiotemporal queries. Furthermore, it adopts maintenance strategies to ensure the colocation of related data. On this basis, THBase designs node-locality-based parallel query algorithms using the Endpoint coprocessor to reduce the overhead caused by data transmission, thus ensuring efficient query performance. Experiments on ship trajectory datasets show that our scheme significantly outperforms alternative schemes.
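THBase's exact schema is not given in the abstract. The sketch below illustrates, under an assumed key layout and bucket size, how a moving-object-based partition can be expressed as a composite row key so that one object's segments stay together and a trajectory query becomes a contiguous scan.

import struct

TIME_BUCKET_S = 3600  # assumed granularity: one segment bucket per hour

def segment_row_key(object_id: str, timestamp_s: int) -> bytes:
    # Group all segments of one moving object under a common prefix and
    # order them by time bucket, so a trajectory query becomes a short scan.
    bucket = timestamp_s // TIME_BUCKET_S
    return object_id.encode("utf-8") + b"#" + struct.pack(">Q", bucket)

def scan_range_for_object(object_id: str, t_start: int, t_end: int):
    # Start/stop keys for a time-ranged scan over one object's segments.
    return (segment_row_key(object_id, t_start),
            segment_row_key(object_id, t_end + TIME_BUCKET_S))

Keeping one object's segments contiguous is what allows coprocessor-side query logic to operate on node-local data instead of shipping rows across the network.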


2015 ◽  
Vol 2015 ◽  
pp. 1-11 ◽  
Author(s):  
Galip Aydin ◽  
Ibrahim Riza Hallac ◽  
Betul Karakus

Sensors are becoming ubiquitous. From almost any type of industrial application to intelligent vehicles, smart-city applications, and healthcare applications, we see steady growth in the use of various types of sensors. The volume of data these sensors produce grows even more dramatically, since sensors typically generate data continuously. It therefore becomes crucial to store these data for future reference and to analyze them for valuable information, such as fault diagnosis information. In this paper we describe a scalable and distributed architecture for sensor data collection, storage, and analysis. The system uses several open-source technologies and runs on a cluster of virtual servers. We use GPS sensors as the data source and run machine learning algorithms for data analysis.
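The concrete stack is not named here beyond open-source technologies. As a hypothetical end-of-pipeline step, the sketch below loads GPS fixes and clusters them with scikit-learn's DBSCAN; the record format and the choice of algorithm are assumptions, not the paper's exact setup.

import csv
import numpy as np
from sklearn.cluster import DBSCAN

def load_gps_points(path):
    # Read (lat, lon) pairs from a CSV file of GPS fixes.
    with open(path, newline="") as f:
        return np.array([(float(r["lat"]), float(r["lon"]))
                         for r in csv.DictReader(f)])

def cluster_stops(points, eps_deg=0.001, min_samples=10):
    # Group nearby fixes into clusters, e.g. to find frequently visited locations.
    return DBSCAN(eps=eps_deg, min_samples=min_samples).fit_predict(points)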


2014 ◽  
Vol 687-691 ◽  
pp. 2698-2701
Author(s):  
Ying Qun Zhao ◽  
Ying Liu

Databases are currently the main platform for data management and processing; they have changed traditional data management practice and improved its efficiency. However, as the amount of data to be processed grows, databases face many problems, such as expanding storage capacity and retrieving data for analysis efficiently. In the information age, massive data volumes expose the shortcomings of database management. This paper therefore analyzes database data management from the perspectives of capacity expansion and efficient retrieval and puts forward effective solutions.


Author(s):  
J. Li ◽  
J. Q. Liu ◽  
X. L. Mei ◽  
W. T. Sun ◽  
Q. Huang ◽  
...  

Abstract. The trajectory data generated by position-aware devices is widely used in many fields of society, but its conventional vector representation, and the analysis algorithms built on it, have high computational complexity. This makes it difficult to meet the requirements of real-time or near-real-time management and analysis of large-scale trajectory data. To address these challenges, this paper proposes a trajectory data management and analysis technology framework based on the Spatiotemporal Grid Model (STGM). First, trajectory data is represented by spatiotemporal grid encoding instead of vector coordinates, which achieves dimensionality reduction and integrated management of high-dimensional, heterogeneous trajectory data. Second, trajectory computing and analysis methods based on the STGM are introduced, which reduce the computational complexity of the algorithms. Furthermore, various types of trajectory mining and applications are realized on the basis of high-performance computing technologies. Finally, a prototype trajectory data management and analysis system based on the STGM is developed, and experimental results verify the reliability and effectiveness of the proposed technology framework.
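The STGM encoding itself is not specified in the abstract. The following sketch shows the general idea of replacing vector coordinates with discrete spatiotemporal grid codes; the cell size and time step are chosen arbitrarily for illustration.

def grid_code(lat, lon, t_s, cell_deg=0.01, t_step_s=60):
    # Map a (lat, lon, time) point to discrete grid indices. Points falling in
    # the same cell share a code, so joins and equality tests replace
    # floating-point distance computations.
    row = int((lat + 90.0) / cell_deg)
    col = int((lon + 180.0) / cell_deg)
    return (row, col, t_s // t_step_s)

def encode_trajectory(points, **kw):
    # Encode a sequence of (lat, lon, t) points, dropping consecutive duplicates.
    codes = []
    for lat, lon, t in points:
        c = grid_code(lat, lon, t, **kw)
        if not codes or codes[-1] != c:
            codes.append(c)
    return codes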


2020 ◽  
Vol 39 (4) ◽  
pp. 5905-5914
Author(s):  
Chen Gong

Most research on stressors has been carried out in the medical field, and there are few analyses of the stressors athletes face, so existing work provides little reference for this problem. On this basis, this study applies machine learning algorithms to analyze the stressors athletes experience in the stadium. The data were collected mainly through questionnaires and interviews and were used as experimental data after validity testing. To improve algorithm performance, this paper combines the classical K-Means algorithm with a hierarchical clustering algorithm to form an improved hierarchical K-Means algorithm, and evaluates its performance through experimental comparison of the clustering results. In addition, an analysis system corresponding to the algorithm is built for the practical setting, the algorithm is applied in practice, and a user preference model is constructed. Finally, this work helps athletes identify stressors and find ways to reduce them through personalized recommendations. The research shows that the proposed algorithm is reliable, has practical effects, and can provide a theoretical reference for subsequent related research.
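The abstract does not describe how the hierarchical and K-Means steps are combined. One common construction, shown below as a sketch rather than the paper's actual algorithm, uses an agglomerative pass to seed the K-Means centroids.

import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans

def hierarchical_kmeans(X, n_clusters):
    # Step 1: a hierarchical (agglomerative) pass produces stable initial groups.
    agg_labels = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(X)
    seeds = np.vstack([X[agg_labels == k].mean(axis=0) for k in range(n_clusters)])
    # Step 2: K-Means refines the partition starting from those seed centroids.
    km = KMeans(n_clusters=n_clusters, init=seeds, n_init=1, random_state=0)
    return km.fit_predict(X)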


2021 ◽  
pp. 1-15
Author(s):  
O. Basturk ◽  
C. Cetek

ABSTRACT In this study, prediction of aircraft Estimated Time of Arrival (ETA) using machine learning algorithms is proposed. Accurate prediction of ETA is important for delay management and air traffic flow, runway assignment, gate assignment, collaborative decision making (CDM), coordination of ground personnel and equipment, and optimisation of arrival sequences. Machine learning is able to learn from experience and make predictions with weak assumptions or no assumptions at all. In the proposed approach, general flight information, trajectory data, and weather data were obtained from different sources in various formats. Raw data were converted to tidy data and inserted into a relational database. To obtain the features for training the machine learning models, the data were explored, cleaned, and transformed into convenient features, and new features were derived from the available data. Random forests and deep neural networks were used to train the models. Both models can predict the ETA with a mean absolute error (MAE) of less than 6 min after departure and less than 3 min after terminal manoeuvring area (TMA) entrance. Additionally, a web application was developed to dynamically predict the ETA using the proposed models.
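The feature definitions are not listed in the abstract. The sketch below shows the general shape of the random forest variant with MAE evaluation, using hypothetical feature names rather than the study's actual feature set.

from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

def train_eta_model(X, y_minutes):
    # X: one row of features per flight (e.g. distance to go, ground speed,
    # wind component, traffic count); y_minutes: observed time to arrival.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y_minutes, test_size=0.2,
                                              random_state=0)
    model = RandomForestRegressor(n_estimators=300, random_state=0)
    model.fit(X_tr, y_tr)
    mae = mean_absolute_error(y_te, model.predict(X_te))
    return model, mae  # compare MAE for features taken at departure vs. TMA entry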


GigaScience ◽  
2020 ◽  
Vol 9 (10) ◽  
Author(s):  
Daniel Arend ◽  
Patrick König ◽  
Astrid Junker ◽  
Uwe Scholz ◽  
Matthias Lange

Abstract Background The FAIR data principle as a commitment to support long-term research data management is widely accepted in the scientific community. Although the ELIXIR Core Data Resources and other established infrastructures provide comprehensive and long-term stable services and platforms for FAIR data management, a large quantity of research data is still hidden or at risk of getting lost. Currently, high-throughput plant genomics and phenomics technologies are producing research data in abundance, the storage of which is not covered by established core databases. This concerns the data volume, e.g., time series of images or high-resolution hyper-spectral data; the quality of data formatting and annotation, e.g., with regard to structure and annotation specifications of core databases; uncovered data domains; or organizational constraints prohibiting primary data storage outside institutional boundaries. Results To share these potentially dark data in a FAIR way and to master these challenges, the ELIXIR Germany/de.NBI service Plant Genomics and Phenomics Research Data Repository (PGP) implements a “bring the infrastructure to the data” approach, which allows research data to be kept in place and wrapped in a FAIR-aware software infrastructure. This article presents new features of the e!DAL infrastructure software and the PGP repository as a best practice on how to easily set up FAIR-compliant and intuitive research data services. Furthermore, the integration of the ELIXIR Authentication and Authorization Infrastructure (AAI) and data discovery services is introduced as a means to lower technical barriers and increase the visibility of research data. Conclusion The e!DAL software has matured into a powerful and FAIR-compliant infrastructure, while keeping the focus on flexible setup and integration into existing infrastructures and the daily research process.


Author(s):  
Ruiyuan Li ◽  
Huajun He ◽  
Rubin Wang ◽  
Sijie Ruan ◽  
Tianfu He ◽  
...  

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Joël L. Lavanchy ◽  
Joel Zindel ◽  
Kadir Kirtac ◽  
Isabell Twick ◽  
Enes Hosgor ◽  
...  

Abstract Surgical skills are associated with clinical outcomes. To improve surgical skills and thereby reduce adverse outcomes, continuous surgical training and feedback are required. Currently, assessment of surgical skills is a manual and time-consuming process that is prone to subjective interpretation. This study aims to automate surgical skill assessment in laparoscopic cholecystectomy videos using machine learning algorithms. To address this, a three-stage machine learning method is proposed: first, a convolutional neural network was trained to identify and localize surgical instruments. Second, motion features were extracted from the detected instrument localizations over time. Third, a linear regression model was trained on the extracted motion features to predict surgical skill. This three-stage modeling approach achieved an accuracy of 87 ± 0.2% in distinguishing good versus poor surgical skill. While the technique cannot yet reliably quantify the degree of surgical skill, it represents an important advance towards the automation of surgical skill assessment.
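As an illustration of stages two and three, the sketch below derives simple motion features from detected instrument positions and fits a linear model; the specific features and the 0.5 decision threshold are assumptions, not the study's exact formulation.

import numpy as np
from sklearn.linear_model import LinearRegression

def instrument_motion_features(track):
    # track: array of shape (n, 2) holding an instrument's (x, y) position per frame.
    steps = np.diff(track, axis=0)
    speed = np.linalg.norm(steps, axis=1)
    jitter = np.abs(np.diff(speed)).mean() if len(speed) > 1 else 0.0
    return np.array([speed.sum(), speed.mean(), speed.std(), jitter])

def fit_skill_model(tracks, skill_labels):
    # skill_labels: 1 for good, 0 for poor surgical skill.
    X = np.vstack([instrument_motion_features(t) for t in tracks])
    model = LinearRegression().fit(X, skill_labels)
    return model  # thresholding model.predict(...) at 0.5 gives a good/poor label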


2021 ◽  
pp. 1-10
Author(s):  
Wan Hongmei ◽  
Tang Songlin

In order to improve the efficiency of sentiment analysis of students in ideological and political classrooms, and guided by artificial intelligence concepts, this paper combines data mining and machine learning algorithms to propose an improved method for quantifying the semantic ambiguity of sentiment words. It designs different quantitative calculation methods for sentiment polarity intensity and constructs video image, text, and speech sentiment recognition modules to obtain a combined sentiment recognition model. In addition, this article studies student emotions in ideological and political classrooms from the perspective of multimodal transfer learning, and optimizes the deep representations of images and texts and their corresponding deep networks through single-depth discriminative correlation analysis. Finally, experiments verify the model from the two perspectives of single-factor and multi-factor sentiment analysis. The results show that comprehensive analysis of multiple factors can effectively improve sentiment analysis of students in ideological and political classrooms and enhance the effect of ideological and political classroom teaching.
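The structure of the combined sentiment recognition model is not detailed in the abstract. The sketch below shows a simple late-fusion interpretation in which per-modality probability vectors are averaged with assumed weights; it is an illustration, not the paper's model.

import numpy as np

def fuse_sentiment(p_video, p_text, p_speech, weights=(0.3, 0.4, 0.3)):
    # Each p_* is a probability vector over sentiment classes
    # (e.g. negative, neutral, positive) from one modality's recognizer.
    probs = np.vstack([p_video, p_text, p_speech])
    fused = np.average(probs, axis=0, weights=weights)
    return fused / fused.sum()  # renormalize; argmax gives the combined label

# Example: fuse three modality outputs over (negative, neutral, positive)
fused = fuse_sentiment([0.2, 0.3, 0.5], [0.1, 0.2, 0.7], [0.3, 0.4, 0.3])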

