Distributed data collection and storage algorithms for collaborative learning vision sensor devices with applications to pilgrimage

2012 ◽  
Vol 12 (3) ◽  
pp. 137 ◽  
Author(s):  
Salah A. Aly
2019 ◽  
Vol 214 ◽  
pp. 04010
Author(s):  
Álvaro Fernández Casaní ◽  
Dario Barberis ◽  
Javier Sánchez ◽  
Carlos García Montoro ◽  
Santiago González de la Hoz ◽  
...  

The ATLAS EventIndex currently runs in production to build a complete catalogue of events for experiments with large amounts of data. The current approach is to index all final produced data files at the CERN Tier-0 and at hundreds of grid sites, with a distributed data collection architecture that uses Object Stores to temporarily hold the conveyed information while references to it are sent through a Messaging System. The final backend of all the indexed data is a central Hadoop infrastructure at CERN; an Oracle relational database is used for faster access to a subset of this information. In the future of ATLAS, the event, rather than the file, should be the atomic unit of information for metadata, in order to accommodate future data processing and storage technologies. Files will no longer be static quantities: data may be aggregated dynamically, allowing event-level granularity processing in heavily parallel computing environments and simplifying the handling of data loss and/or extension. In this sense the EventIndex may evolve towards a generalized whiteboard, with the ability to build collections and virtual datasets for end users. These proceedings describe the current Distributed Data Collection Architecture of the ATLAS EventIndex project, with details of the Producer, Consumer and Supervisor entities, and of the protocol and information temporarily stored in the Object Store. They also show the data flow rates and performance achieved since this approach, with the Object Store as temporary storage, was put in production in July 2017. We review the challenges imposed by the expected increasing rates, which will reach 35 billion new real events per year in Run 3 and 100 billion new real events per year in Run 4. For simulated events the numbers are even higher, with 100 billion events/year in Run 3 and 300 billion events/year in Run 4. We also outline the challenges we face in accommodating future use cases in the EventIndex.
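The Producer/Consumer split around a temporary Object Store and a messaging system can be illustrated with a minimal sketch. The queue, store, and record layout below are illustrative stand-ins under assumed names, not the actual ATLAS EventIndex interfaces or protocol.

```python
# Minimal sketch of the distributed data-collection pattern described above:
# Producers index event metadata into a temporary object store and publish a
# reference through a message queue; a Consumer resolves the references and
# commits the payloads to the permanent backend. All names are illustrative
# stand-ins, not the real EventIndex components.
import json
import queue
import uuid

message_bus = queue.Queue()     # stand-in for the Messaging System
object_store = {}               # stand-in for the temporary Object Store
permanent_backend = []          # stand-in for the central Hadoop catalogue


def produce(dataset, events):
    """Index one file's events and announce the stored block by reference."""
    key = f"{dataset}/{uuid.uuid4()}"
    object_store[key] = json.dumps(events)        # temporary storage
    message_bus.put({"dataset": dataset, "ref": key, "n_events": len(events)})


def consume():
    """Drain references, fetch the payloads, and commit them to the backend."""
    while not message_bus.empty():
        msg = message_bus.get()
        payload = json.loads(object_store.pop(msg["ref"]))  # frees temporary space
        permanent_backend.extend(payload)


produce("example_dataset", [{"run": 1, "event": i} for i in range(3)])
consume()
print(len(permanent_backend), "events catalogued")
```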


Author(s):  
Cristina G. Wilson ◽  
Feifei Qian ◽  
Douglas J. Jerolmack ◽  
Sonia Roberts ◽  
Jonathan Ham ◽  
...  

Abstract. How do scientists generate and weight candidate queries for hypothesis testing, and how does learning from observations or experimental data impact query selection? Field sciences offer a compelling context to ask these questions because query selection and adaptation involves consideration of the spatiotemporal arrangement of data, and therefore closely parallels classic search and foraging behavior. Here we conduct a novel simulated data foraging study—and a complementary real-world case study—to determine how spatiotemporal data collection decisions are made in field sciences, and how search is adapted in response to in-situ data. Expert geoscientists evaluated a hypothesis by collecting environmental data using a mobile robot. At any point, participants were able to stop the robot and change their search strategy or make a conclusion about the hypothesis. We identified spatiotemporal reasoning heuristics, to which scientists strongly anchored, displaying limited adaptation to new data. We analyzed two key decision factors: variable-space coverage, and fitting error to the hypothesis. We found that, despite varied search strategies, the majority of scientists made a conclusion as the fitting error converged. Scientists who made premature conclusions, due to insufficient variable-space coverage or before the fitting error stabilized, were more prone to incorrect conclusions. We found that novice undergraduates used the same heuristics as expert geoscientists in a simplified version of the scenario. We believe the findings from this study could be used to improve field science training in data foraging, and aid in the development of technologies to support data collection decisions.
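The "conclude when the fitting error converges" decision rule reported above can be sketched as follows. The linear hypothesis model, the convergence tolerance, and the window size are assumptions made for illustration, not the study's actual analysis code.

```python
# Hedged sketch of a fitting-error convergence stopping rule, assuming a simple
# linear hypothesis between an environmental variable x and a response y.
# Tolerance, window size, and the synthetic data are illustrative only.
import numpy as np


def fitting_error(x, y):
    """RMS residual of a least-squares linear fit (the hypothesised relation)."""
    slope, intercept = np.polyfit(x, y, 1)
    return float(np.sqrt(np.mean((y - (slope * x + intercept)) ** 2)))


def should_stop(errors, window=3, tol=0.01):
    """Stop once the error has changed by less than `tol` over the last `window` samples."""
    if len(errors) < window + 1:
        return False
    recent = errors[-(window + 1):]
    return max(recent) - min(recent) < tol


rng = np.random.default_rng(0)
x_obs, y_obs, history = [], [], []
for step in range(50):                              # one sample collected per step
    x = rng.uniform(0, 10)
    x_obs.append(x)
    y_obs.append(2.0 * x + rng.normal(scale=0.5))   # synthetic "environment"
    if len(x_obs) >= 3:
        history.append(fitting_error(np.array(x_obs), np.array(y_obs)))
        if should_stop(history):
            print(f"conclude after {step + 1} samples, error={history[-1]:.3f}")
            break
```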


Author(s):  
Г.В. Петрухнова ◽  
И.Р. Болдырев

We present a set of technical means for creating a data collection system and formalize the processes that implement the monitoring functions for a technical object. The data collection system under consideration consists of functionally complete devices, each performing a specific function within the system. This system can, on the one hand, serve as one of the nodes of a distributed data collection system and, on the other hand, be used autonomously. The relevance of creating such a system is shown. The development is based on the STM32H743VIT6 RISC microcontroller (ARM Cortex-M7 family), operating at frequencies up to 400 MHz. The main modules of the system are a 20-input voltage distributor; a power supply and configuration module; a digital control module; and a module for analyzing, storing, and transmitting data to a control computer. The composition and purpose of these modules are described. Data collection in the system is handled by a chain of devices: sensor - matching circuit - ADC - microcontroller. Since the system includes not only ADCs but also DACs, an object control system can be implemented on its basis. The choice of sensors is determined by the characteristics of the monitored object. Electrical parameters of the interface circuits can also be measured manually, including checking the power supply of IDE and SATA devices. The presented data collection system is a tool that can be used to automate monitoring of the condition of technical objects.
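The host-side end of the described chain (sensor - matching circuit - ADC - microcontroller - control computer) can be sketched as a reader that receives framed ADC samples over a serial link and logs them. The frame layout, port name, baud rate, and full-scale voltage below are assumptions for illustration, not the actual firmware protocol of this system.

```python
# Hedged sketch of the host-side half of the acquisition chain: the
# microcontroller is assumed to stream 2-byte unsigned ADC samples over a
# serial link, and the control computer converts and stores them.
# Frame format, port name, and the 3.3 V full-scale reference are assumptions.
import csv
import struct
import time

import serial  # pyserial


def acquire(port="/dev/ttyUSB0", n_samples=1000, out_path="samples.csv"):
    """Read unsigned 16-bit ADC samples, convert to volts, and log them with timestamps."""
    with serial.Serial(port, baudrate=115200, timeout=1.0) as link, \
            open(out_path, "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["timestamp_s", "raw", "volts"])
        for _ in range(n_samples):
            frame = link.read(2)                 # one sample per 2-byte frame
            if len(frame) < 2:
                continue                         # read timed out, skip
            raw = struct.unpack("<H", frame)[0]  # little-endian unsigned short
            volts = raw * 3.3 / 65535            # assumed full-scale reference
            writer.writerow([time.time(), raw, volts])


if __name__ == "__main__":
    acquire()
```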


2014 ◽  
Vol 14 (4) ◽  
pp. 901-916 ◽  
Author(s):  
D. Molinari ◽  
S. Menoni ◽  
G. T. Aronica ◽  
F. Ballio ◽  
N. Berni ◽  
...  

Abstract. In recent years, awareness of a need for more effective disaster data collection, storage, and sharing of analyses has developed in many parts of the world. In line with this advance, Italian local authorities have expressed the need for enhanced methods and procedures for post-event damage assessment in order to obtain data that can serve numerous purposes: to create a reliable and consistent database on the basis of which damage models can be defined or validated; and to supply a comprehensive scenario of flooding impacts according to which priorities can be identified during the emergency and recovery phase, and the compensation due to citizens from insurers or local authorities can be established. This paper studies this context, and describes ongoing activities in the Umbria and Sicily regions of Italy aimed at identifying new tools and procedures for flood damage data surveys and storage in the aftermath of floods. In the first part of the paper, the current procedures for data gathering in Italy are analysed. The analysis shows that the available knowledge does not enable the definition or validation of damage curves, as information is poor, fragmented, and inconsistent. A new procedure for data collection and storage is therefore proposed. The entire analysis was carried out at a local level for the residential and commercial sectors only. The next steps of the research in the short term will be (i) to extend the procedure to other types of damage, and (ii) to make the procedure operational with the Italian Civil Protection system. The long-term aim is to develop specific depth–damage curves for Italian contexts.
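A consistent record layout is the prerequisite for the reliable damage database the authors call for. A minimal sketch of such a record is shown below; the field names, units, and sample values are illustrative assumptions, not the fields of the actual Umbria/Sicily survey forms.

```python
# Minimal sketch of a structured post-flood damage record of the kind a
# consistent collection-and-storage procedure would produce. All field names,
# units, and the sample values are illustrative assumptions.
from dataclasses import dataclass, asdict
from enum import Enum


class Sector(Enum):
    RESIDENTIAL = "residential"
    COMMERCIAL = "commercial"


@dataclass
class DamageRecord:
    event_id: str          # flood event identifier
    municipality: str      # where the surveyed building is located
    sector: Sector         # residential or commercial (the scope of the paper)
    water_depth_m: float   # observed water depth at the building, in metres
    damage_eur: float      # assessed monetary damage, in euros
    insured: bool          # whether the damage is covered by insurance


record = DamageRecord("event_001", "ExampleTown", Sector.RESIDENTIAL, 1.2, 18500.0, False)
print(asdict(record))      # ready to be stored in a shared database
```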


Author(s):  
Ana Nobre ◽  
Vasco Nobre

Technologies cannot be analyzed as instruments per se, nor can they be reduced to their relation with science: there is a social and even an individual dimension that affects our own way of relating to society. It is within open education that we have been developing our educational practices. This chapter presents a collaborative learning activity from the curricular unit Materiais e Recursos para eLearning, part of an online Master in Pedagogy of eLearning at Universidade Aberta, Portugal. In the present work, the authors focus on co-learning and co-research as processes, exemplified through the a-REAeduca. Data collection was supported essentially by the content analysis technique.


2019 ◽  
Vol 4 (1) ◽  
Author(s):  
Deka Anggawira ◽  
Tamara Adriani Salim

This study discusses the implementation of local wisdom in the preservation of manuscripts at Universitas Indonesia’s library. The purpose of this study is to identify how local wisdom is implemented in the preservation of manuscripts in that library. This research uses a qualitative approach coupled with direct observation and structured interviews as data collection methods. The results of this study indicate that Universitas Indonesia Library has implemented local wisdom in preserving manuscripts. This can be seen in the storage process, including the design of the rooms and storage facilities and the patterns of behavior involved in storage. Local wisdom in the maintenance process includes the control of the environment using traditional approaches and the use of traditional materials in the maintenance of manuscripts. Another finding is that knowledge is captured and inherited from previous manuscript managers by the current staff and manifested in their preservation behavior. Therefore, it can be understood that the implementation of local wisdom in the preservation of manuscripts in the UI Library is based on the preservation of knowledge from previous manuscript managers or librarians.


2021 ◽  
Vol 6 (1(34)) ◽  
pp. 27-29
Author(s):  
D.A. Oskin ◽  
A.A. Gorshkov ◽  
S.A. Klimenko

The principles of construction and operation of the data collection and transmission system (DCTS) of an unmanned vessel are considered. A two-stage formulation of the problem of constructing a DCTS is proposed: first, the choice of methods and means for transmitting data from the sensors to the data collection system, and the implementation of the data transmission channels; second, the organization of data collection and storage for use by the autopilot device, including the implementation of systems for receiving and synchronizing the data and placing them in the storage system.
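The receive/synchronize/store stage described above can be sketched as follows. The message layout, sensor names, and timestamp-based alignment are assumptions made for illustration, not the system actually proposed in the paper.

```python
# Hedged sketch of the "receive, synchronize, store" stage of a DCTS:
# asynchronous sensor readings are buffered and a time-aligned snapshot is
# exposed to the autopilot. Sensor names and message layout are assumptions.
import time
from collections import defaultdict


class SensorStore:
    def __init__(self):
        self.history = defaultdict(list)   # sensor name -> [(timestamp, value), ...]

    def receive(self, sensor, value, timestamp=None):
        """Store one reading, timestamping on arrival if the sensor provides no clock."""
        self.history[sensor].append((timestamp if timestamp is not None else time.time(), value))

    def snapshot(self, at_time):
        """Return, per sensor, the latest reading not newer than `at_time`."""
        snap = {}
        for sensor, readings in self.history.items():
            older = [(t, v) for t, v in readings if t <= at_time]
            if older:
                snap[sensor] = max(older)[1]   # most recent admissible reading
        return snap


store = SensorStore()
store.receive("gyrocompass_deg", 87.4)
store.receive("gps_speed_kn", 11.2)
print(store.snapshot(time.time()))   # synchronized view handed to the autopilot
```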

