Intersectoral information and intellectual model of integration of information flows of interregional level subjects

Author(s):  
G. A. Galkin ◽  
V. V. Zubkov ◽  
N. F. Sirina ◽  
...  

Modern challenges call for transforming management and interaction mechanisms and for tools that determine and support optimal managerial decisions on the basis of quota-based (distributed) information and intellectual content; together these create the conditions for a systematic approach to building a large-scale, multifunctional complex of information and intellectual management and interaction systems. In this paper, the principles of building an intersectoral information and intellectual model for integrating the information flows of interregional-level subjects are considered. The model is built on the principles of information and intellectual interaction within the information-economy segment of social and marketing cooperation. The components of the model-formation methodology are presented. The component methodology assumes concentrating, within a single perimeter, the information flows coming from the integrating information systems of intersectoral and interregional subjects, federal regulators, and business associations, using big data technology. The resulting model accumulates and concentrates information knowledge in a single transport and information space and constitutes a quota-based information and intellectual system whose structure reflects the coordinated interactions of virtual agents. The subjects that accumulate and concentrate information knowledge are the virtual agents (integrating information and intellectual systems) of regulators, subjects, and business associations. These virtual agents are created on the multi-agent construction principle. Virtual agents are a new category of software products that operate in an information and intellectual environment on behalf of the user. The principles of virtual-agent modeling and the model-construction methodology were applied in creating the software products «Electronic Service of Integrated Transport Services» and «Multiagent Intellectual System for Management of Transport Services in Rail-Marine Traffic», both of which have received state registration.

Author(s):  
О. Кravchuk ◽  
V. Symonenkov ◽  
I. Symonenkova ◽  
O. Hryhorev

Today, more than forty countries are engaged in the development of military-purpose robots. A number of unique mobile robots with a wide range of capabilities are already used by combat and intelligence units of the armed forces of developed countries to conduct battlefield reconnaissance and support tactical groups. The use of the latest information technology in military robotics is now being thoroughly investigated, and the creation of highly effective information management systems in land mobile robotic complexes has entered a new phase associated with the use of distributed information and sensor systems: a transition from separate sensors and devices to modular information subsystems that provide access to diverse data sources and complex methods of information processing. The purpose of the article is to investigate ways to increase the autonomy of land mobile robotic complexes operating in the non-deterministic conditions of modern combat. The relevance of the research stems from the need to create highly effective information and control systems in prospective robotic means for the needs of the Land Forces of Ukraine. Developing the management system of the Armed Forces of Ukraine based on the criteria adopted by EU and NATO member states is one of the main directions for increasing the effectiveness of the use of forces, and it involves achieving the principles and standards required for Ukraine to become a member of the EU and NATO. Inherent features of meeting these criteria will be a reduction in the tasks assigned to combined-arms units and the large-scale use of high-precision weapons and remotely controlled land robotic devices. According to leading specialists in the field of robotics, automating the information subsystems and components of land mobile robotic complexes can increase the safety, reliability, error tolerance, and effectiveness of robotic means by standardizing the necessary actions with minimal human intervention, that is, by significantly increasing the autonomy of land mobile robotic complexes for the needs of the Land Forces of Ukraine.


2021 ◽  
pp. 089443932110068
Author(s):  
Aleksandra Urman ◽  
Mykola Makhortykh ◽  
Roberto Ulloa

We examine how six search engines filter and rank information in relation to queries on the U.S. 2020 presidential primary elections under default, that is, nonpersonalized, conditions. For this, we utilize an algorithmic auditing methodology that uses virtual agents to conduct large-scale analysis of algorithmic information curation in a controlled environment. Specifically, we look at the text search results for the queries “us elections,” “donald trump,” “joe biden,” and “bernie sanders” on Google, Baidu, Bing, DuckDuckGo, Yahoo, and Yandex during the 2020 primaries. Our findings indicate substantial differences in the search results between search engines and multiple discrepancies within the results generated for different agents using the same search engine. This highlights that whether users see certain information is decided by chance due to the inherent randomization of search results. We also find that some search engines prioritize different categories of information sources with respect to specific candidates. These observations demonstrate that algorithmic curation of political information can create information inequalities between search engine users even under nonpersonalized conditions. Such inequalities are particularly troubling considering that search results are highly trusted by the public and can shift the opinions of undecided voters, as demonstrated by previous research.
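
As a rough illustration of how cross-agent discrepancies of this kind can be quantified, the sketch below (Python, with hypothetical helper names and toy data; not the authors' auditing code) computes the mean pairwise overlap of the top-N results returned to different virtual agents for the same query and engine.

```python
from itertools import combinations

def jaccard_top_n(results_a, results_b, n=10):
    """Jaccard overlap of the top-n result URLs seen by two agents."""
    a, b = set(results_a[:n]), set(results_b[:n])
    return len(a & b) / len(a | b) if (a | b) else 1.0

def pairwise_agreement(agent_results, n=10):
    """Mean pairwise top-n overlap across all agents for one query/engine."""
    pairs = list(combinations(agent_results, 2))
    if not pairs:
        return 1.0
    return sum(jaccard_top_n(a, b, n) for a, b in pairs) / len(pairs)

# Toy example: three agents issuing the same query to the same engine
agents = [
    ["cnn.com/a", "nytimes.com/b", "foxnews.com/c"],
    ["nytimes.com/b", "cnn.com/a", "wsj.com/d"],
    ["cnn.com/a", "wsj.com/d", "breitbart.com/e"],
]
print(pairwise_agreement(agents, n=3))  # values below 1.0 indicate cross-agent discrepancy
```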


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
David March ◽  
Kristian Metcalfe ◽  
Joaquin Tintoré ◽  
Brendan J. Godley

The COVID-19 pandemic has resulted in unparalleled global impacts on human mobility. In the ocean, ship-based activities are thought to have been impacted due to severe restrictions on human movements and changes in consumption. Here, we quantify and map global change in marine traffic during the first half of 2020. There were decreases in 70.2% of Exclusive Economic Zones but changes varied spatially and temporally in alignment with confinement measures. Global declines peaked in April, with a reduction in traffic occupancy of 1.4% and decreases found across 54.8% of the sampling units. Passenger vessels presented more marked and longer lasting decreases. A regional assessment in the Western Mediterranean Sea gave further insights regarding the pace of recovery and long-term changes. Our approach provides guidance for large-scale monitoring of the progress and potential effects of COVID-19 on vessel traffic that may subsequently influence the blue economy and ocean health.
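
A minimal numerical sketch of the kind of computation behind such figures, assuming a table of AIS-derived monthly occupancy per Exclusive Economic Zone (column names and values are invented; this is not the authors' pipeline):

```python
import pandas as pd

# Hypothetical AIS-derived monthly traffic occupancy per EEZ:
# occupancy_2020 vs. a pre-pandemic baseline (e.g., 2019 monthly mean)
df = pd.DataFrame({
    "eez": ["ESP", "ESP", "ITA", "ITA"],
    "month": [3, 4, 3, 4],
    "occupancy_2020": [0.82, 0.61, 0.75, 0.58],
    "occupancy_baseline": [0.85, 0.84, 0.80, 0.79],
})

# Relative change in traffic occupancy versus the baseline
df["rel_change_pct"] = 100 * (df["occupancy_2020"] - df["occupancy_baseline"]) / df["occupancy_baseline"]

# Share of EEZ-months showing a decrease, and the month with the deepest mean decline
share_decreasing = (df["rel_change_pct"] < 0).mean()
worst_month = df.groupby("month")["rel_change_pct"].mean().idxmin()
print(share_decreasing, worst_month)
```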


2021 ◽  
Author(s):  
Michele Allegra ◽  
Chiara Favaretto ◽  
Nicholas Metcalf ◽  
Maurizio Corbetta ◽  
Andrea Brovelli

Neuroimaging and neurological studies suggest that stroke is a brain network syndrome. While causing local ischemia and cell damage at the site of injury, stroke strongly perturbs the functional organization of brain networks at large. Critically, functional connectivity abnormalities parallel both behavioral deficits and functional recovery across different cognitive domains. However, the reasons for such relations remain poorly understood. Here, we tested the hypothesis that alterations in inter-areal communication underlie stroke-related modulations in functional connectivity (FC). To this aim, we used resting-state fMRI and Granger causality analysis to quantify information transfer between brain areas and its alteration in stroke. Two main large-scale anomalies were observed in stroke patients. First, inter-hemispheric information transfer was strongly decreased with respect to healthy controls. Second, information transfer within the affected hemisphere, and from the affected to the intact hemisphere, was reduced. Both anomalies were more prominent in resting-state networks related to attention and language, and they were correlated with impaired performance in several behavioral domains. Overall, our results support the hypothesis that stroke perturbs inter-areal communication within and across hemispheres, and suggest novel therapeutic approaches aimed at restoring normal information flow.

Significance Statement: A thorough understanding of how stroke perturbs brain function is needed to improve recovery from the severe neurological syndromes affecting stroke patients. Previous resting-state neuroimaging studies suggested that interaction between hemispheres decreases after stroke, while interaction between areas of the same hemisphere increases. Here, we used Granger causality to reconstruct information flows in the brain at rest, and analyzed how stroke perturbs them. We showed that stroke causes a global reduction of inter-hemispheric communication, and an imbalance between the intact and the affected hemisphere: information flows within and from the latter are impaired. Our results may inform the design of stimulation therapies to restore the functional balance lost after stroke.
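
For readers unfamiliar with the approach, the following is a minimal sketch of a pairwise Granger causality test between two simulated time series using statsmodels; it illustrates the general technique only, not the authors' estimator, preprocessing, or fMRI data.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)

# Two synthetic resting-state signals (e.g., BOLD time courses from two parcels).
# y is partially driven by the past of x, so x should "Granger-cause" y.
n = 500
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * x[t - 1] + 0.3 * y[t - 1] + rng.standard_normal()

# grangercausalitytests expects a 2-column array and tests whether
# the SECOND column helps predict the FIRST one.
data = np.column_stack([y, x])
results = grangercausalitytests(data, maxlag=3, verbose=False)

# p-value of the F-test at lag 1: small values indicate information transfer x -> y
print(results[1][0]["ssr_ftest"][1])
```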


This chapter discusses the application of lifecycle models to software development and compares the benefits and shortcomings of different models. The authors argue that there is no universal lifecycle model. For agility, the chapter recommends combining prototyping with the other models in order to reach a common understanding of the key product features and to reduce project risks. The choice of lifecycle model determines project economics and time to market, and it also influences product quality and overall project success; however, product success essentially depends on human factors. The authors analyze the applicability of the lifecycle models to large-scale, mission-critical software systems. Finally, the chapter introduces a methodology that includes a spiral-like lifecycle and a set of formal models and visual tools for software product development. This methodology helps optimize the software product lifecycle and fits large-scale, complex, heterogeneous software products.


Author(s):  
Yigal Rosen ◽  
Maryam Mosharraf

Often in our daily lives we learn and work in groups. In recognition of the importance of collaborative problem solving skills, educators are realizing the need for effective and scalable learning and assessment solutions to promote this skillset in educational systems. In the setting of a comprehensive collaborative problem solving assessment, each student should be matched with various types of group members and must apply the skills in varied contexts and tasks. One solution to these assessment demands is to use computer-based (virtual) agents to serve as the collaborators in interactions with students. The chapter presents the premises and challenges of using computer agents in the assessment of collaborative problem solving. Directions for future research are discussed in terms of their implications for large-scale assessment programs.


Author(s):  
Ruey-Shiang Shaw ◽  
Sheng-Pao Shih ◽  
Ta-Yu Fu ◽  
Chia-Wen Tsai

The software industry faces drastic changes in technology and business operations. The research structure of this study is based on the business model for software industries proposed by Rajala in 2003. The researchers employed an ex post facto research design to conduct a case study of Galaxy Software Service Co., a company that is representative of the software industry in Taiwan. The main research goal is to explore how this particular company developed, over a period of 25 years, into a large software company within the Taiwanese software sector, which is characterized by a prevalence of small and medium-sized businesses. The study relies on in-depth participation and interviews to acquire a complete data set on the company's internal operations. The evolution of the business model from the company's inception until the present day is divided into four phases: the entrepreneur phase, the growth phase, the stable phase, and the innovative breakthrough phase. The company developed into a major player in the software industry for three reasons: it has always insisted on a product differentiation strategy based solely on software products; it started out as a software products dealer and gradually developed its own research and development capability; and it built a large-scale project management capability and received CMMI certification. These factors make the company stand out from other system integration businesses in the Taiwanese software sector that offer both hardware and software products.


Sensors ◽  
2019 ◽  
Vol 19 (5) ◽  
pp. 1006 ◽  
Author(s):  
Charikleia Papatsimpa ◽  
Jean-Paul Linnartz

Smart buildings with connected lighting and sensors are likely to become one of the first large-scale applications of the Internet of Things (IoT). However, as the number of interconnected IoT devices is expected to rise exponentially, the amount of collected data will be enormous but highly redundant. Devices will be required to pre-process data locally or at least in their vicinity. Thus, local data fusion subject to communication constraints will become necessary, and distributed architectures will become increasingly unavoidable. Anticipating this trend, this paper addresses the problem of presence detection in a building as distributed sensing of a hidden Markov model (DS-HMM) with limited communication. The key idea in our work is the use of a posteriori probabilities or likelihood ratios (LR) as an appropriate “interface” between heterogeneous sensors with different error profiles. We propose an efficient transmission policy, jointly with a fusion algorithm, to merge data from various HMMs running separately on all sensor nodes, with all the models observing the same Markovian process. To test the feasibility of our DS-HMM concept, a simple proof-of-concept prototype was used in a typical office environment. The experimental results show full functionality and validate the benefits: our proposed scheme achieved high accuracy while reducing the communication requirements. The concept of DS-HMM with a posteriori probabilities as an interface is suitable for many other applications of distributed information fusion in wireless sensor networks.
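
A simplified sketch of the likelihood-ratio interface idea, assuming two binary-observation sensors with different error profiles that share the same two-state Markov model (a toy illustration, not the authors' transmission policy or fusion algorithm):

```python
import numpy as np

# Presence detection as a two-state hidden Markov model: 0 = absent, 1 = present.
# Each sensor runs its own forward filter locally and only ships a
# log-likelihood ratio (LLR) to the fusion node, which keeps bandwidth low.

A = np.array([[0.95, 0.05],   # state transition matrix, shared by all models
              [0.10, 0.90]])

def forward_llr(observations, emission):
    """Run a forward filter for one sensor and return log P(present)/P(absent)."""
    belief = np.array([0.5, 0.5])
    for obs in observations:
        belief = A.T @ belief                  # predict
        belief = belief * emission[:, obs]     # update with the sensor's error profile
        belief /= belief.sum()
    return np.log(belief[1] / belief[0])

# Two heterogeneous sensors with different error profiles (rows: state, cols: observation)
emission_pir    = np.array([[0.9, 0.1], [0.3, 0.7]])   # e.g., passive infrared
emission_camera = np.array([[0.8, 0.2], [0.1, 0.9]])   # e.g., camera-based detector

# Naive fusion: summing LLRs assumes the sensors are conditionally independent
# given the state; the paper's fusion algorithm is more refined.
llr_total = (forward_llr([0, 1, 1, 1], emission_pir)
             + forward_llr([1, 1, 0, 1], emission_camera))
print("presence detected" if llr_total > 0 else "no presence")
```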


2014 ◽  
Vol 2014 ◽  
pp. 1-7 ◽  
Author(s):  
Hongli Dong ◽  
Zidong Wang ◽  
Steven X. Ding ◽  
Huijun Gao

In recent years, theoretical and practical research on large-scale networked systems has gained increasing attention from multiple disciplines, including engineering, computer science, and mathematics. At the core of this area are the distributed estimation and fault detection problems, which have recently been attracting growing research interest. In particular, an urgent need has arisen to understand the effects of distributed information structures on filtering and fault detection in sensor networks. In this paper, a bibliographical review is provided on distributed filtering and fault detection problems over sensor networks. The algorithms employed to study the distributed filtering and detection problems are categorised and then discussed. In addition, some recent advances on distributed detection problems for faulty sensors and fault events are summarized in detail. Finally, we conclude the paper by outlining future research challenges for distributed filtering and fault detection over sensor networks.


2019 ◽  
Vol 9 (11) ◽  
pp. 2212 ◽  
Author(s):  
Fazal Qudus Khan ◽  
Shahrulniza Musa ◽  
Georgios Tsaramirsis ◽  
Seyed M. Buhari

Software Product Lines (SPLs) can aid modern software ecosystems by enabling the rapid development of large-scale applications. SPLs produce new software products by combining existing components that are treated as features. Feature selection is challenging due to the large number of competing candidate features to choose from, each with different properties and contributing towards different objectives. It is also a critical part of SPLs, as the selected features have a direct impact on the properties of the product. There have been a number of attempts to automate the selection of features; however, they offer limited flexibility in specifying objectives and in quantifying datasets based on these objectives so that they can be used by various selection algorithms. In this research we introduce a novel feature selection approach that supports multiple, multi-level, user-defined objectives. We present a novel feature quantification method using twenty operators, capable of treating text-based and numeric values, along with three selection algorithms called Falcon, Jaguar, and Snail. Falcon and Jaguar are based on a greedy algorithm, while Snail is a variation of exhaustive search. At the cost of a 4% increase in execution time, Jaguar performed 6% and 8% better than Falcon in terms of added value and the number of features selected, respectively.
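
The abstract names but does not specify Falcon, Jaguar, and Snail, so the sketch below shows only a generic greedy, value-per-cost feature selection under a budget to illustrate the family of algorithms involved (feature names, values, and costs are invented):

```python
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    value: float   # contribution towards the user-defined objectives
    cost: float    # e.g., implementation effort or footprint

def greedy_select(features, budget):
    """Pick features by value-per-cost ratio until the budget is exhausted."""
    chosen, spent = [], 0.0
    for f in sorted(features, key=lambda f: f.value / f.cost, reverse=True):
        if spent + f.cost <= budget:
            chosen.append(f)
            spent += f.cost
    return chosen

catalog = [
    Feature("encryption", value=8.0, cost=3.0),
    Feature("offline-mode", value=5.0, cost=4.0),
    Feature("dark-theme", value=2.0, cost=1.0),
]
print([f.name for f in greedy_select(catalog, budget=5.0)])
```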

