A Survey of P2P Data-Driven Live Streaming Systems

Author(s):  
Fabio Pianese

Data-driven peer-to-peer live streaming systems challenge and extend the traditional concept of overlay for application-layer multicast data distribution. In such systems, software nodes propagate individually-named, ordered segments of the stream (called chunks) by independently conducting exchanges with their neighboring peers. Chunk exchanges are solely based on information that is available locally, such as the status of a node’s receive buffer and an approximate knowledge of the buffer contents of its neighbors. In this Chapter, we motivate and retrace the emergence of P2P data-driven live streaming systems, describe their internal data structures and fundamental mechanisms, and provide references to a number of known analytical bounds on the rate and delay that can be achieved using many relevant chunk distribution strategies. We then conclude this survey by reviewing the deployment status of the most popular commercial systems, the results from large-scale Internet measurement studies, and the open research problems.
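The chunk-exchange mechanism described above can be illustrated with a minimal Python sketch. All names here are hypothetical (not drawn from any specific system surveyed); it shows a rarest-first chunk selection driven purely by locally available state: the peer's own receive buffer and the advertised buffer maps of its neighbors.

```python
def select_chunk(own_buffer, neighbor_maps):
    """Rarest-first selection among chunks the peer is missing.

    own_buffer:     set of chunk IDs already held locally
    neighbor_maps:  dict peer_id -> set of chunk IDs that peer advertises
    Returns (chunk_id, peer_id) to request, or None if nothing is missing.
    """
    # Count, for each missing chunk, which neighbors can supply it.
    availability = {}
    for peer, chunks in neighbor_maps.items():
        for c in chunks - own_buffer:
            availability.setdefault(c, []).append(peer)
    if not availability:
        return None
    # Rarest chunk first; break ties by lowest (oldest) chunk ID.
    chunk = min(availability, key=lambda c: (len(availability[c]), c))
    return chunk, availability[chunk][0]
```

Rarest-first is only one of the chunk distribution strategies the chapter covers; swapping the `key` function yields, e.g., latest-useful or random selection.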

2014 ◽  
Vol 2014 ◽  
pp. 1-10 ◽  
Author(s):  
Laizhong Cui ◽  
Nan Lu ◽  
Fu Chen

Most large-scale peer-to-peer (P2P) live streaming systems use a mesh to organize peers and leverage pull scheduling to transmit packets, which provides robustness in dynamic environments. However, pull scheduling incurs large packet delays. Network coding makes push scheduling feasible in mesh-based P2P live streaming and improves efficiency, but it may also introduce extra delays and coding computational overhead. To improve packet delay, streaming quality, and coding overhead, we propose a QoS-driven push scheduling approach in this paper. The main contributions of this paper are: (i) we introduce a new network coding method to increase content diversity and reduce the complexity of scheduling; (ii) we formulate push scheduling as an optimization problem and transform it into a min-cost flow problem, which can be solved in polynomial time; (iii) we propose a push scheduling algorithm to reduce the coding overhead and conduct extensive experiments to validate the effectiveness of our approach. Compared with previous approaches, the simulation results demonstrate that the packet delay, continuity index, and coding ratio of our system can be significantly improved, especially in dynamic environments.
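The benefit of network coding for push scheduling can be shown with a toy sketch over GF(2) (a deliberate simplification; the paper's actual coding method is more elaborate): a pushing peer sends XOR combinations of chunks rather than raw chunks, and a neighbor that receives any two linearly independent combinations can recover both originals, which increases content diversity in the mesh.

```python
def encode(chunks, coeffs):
    """XOR together the chunks selected by a 0/1 coefficient vector (GF(2))."""
    out = 0
    for c, k in zip(chunks, coeffs):
        if k:
            out ^= c
    return out

def decode2(p1, c1, p2, c2):
    """Recover two original chunks from two independent GF(2) combinations
    (p1, p2), with coefficient vectors (c1, c2), by Gaussian elimination."""
    if c1[0] == 0:           # ensure a pivot in the first column
        p1, c1, p2, c2 = p2, c2, p1, c1
    if c2[0]:                # eliminate the first column from row 2
        p2 ^= p1
        c2 = [x ^ y for x, y in zip(c2, c1)]
    b = p2                   # row 2 is now [0, 1] (independence assumed)
    a = p1 ^ (b if c1[1] else 0)
    return a, b
```

Practical systems code over larger fields (e.g., GF(256)) and many chunks per generation, but the decoding idea is the same elimination shown here.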


1980 ◽  
Vol 46 (1) ◽  
pp. 63-85 ◽  
Author(s):  
Fred Heilizer

It is the contention of this paper that personality psychology and social psychology have developed different orientations to theory. Pronouncements of crisis emanating from each area are presumed to reflect these divergent developments. The orientation of social psychology, by means of situationism in social learning theory, is toward data-driven, empirical constructs and theories with a major cognitive content. The data-driven, empirical nature of constructs and theories in situationism emphasizes the primacy of the data in developing the constructs and of asking limited, focussed questions. The orientation of personality psychology, by means of person-situation interactionism, is towards the more traditional concept-driven constructs and theories which emphasize the importance of extensive conceptual systems and broad semantic descriptions. These two orientations are seen as representing Kuhnian paradigms—herein called psychodigms—of different degrees of development. Situationism has developed from and within the Lewinian tradition and has achieved the status of a fully developed psychodigm for social psychology. Interactionism has developed more recently as a result of attacks by situationists on the psychoanalytically relevant constructs of motivation and trait and functions to conserve these constructs as concept-driven and as part of the person in the interaction. The newness of interactionism as the major orientation for personality psychology has produced, at most, a partially developed psychodigm. It is expected that the energizing and conformity-producing effect of a fully developed psychodigm is overwhelming as compared to the undetermined, and incompletely formed, power of a partially developed psychodigm. Judgments about the state of theory in, and future of, personality and social psychology may require consideration of the divergent psychodigms of theory.


2021 ◽  
Vol 13 (6) ◽  
pp. 3571
Author(s):  
Bogusz Wiśnicki ◽  
Dorota Dybkowska-Stefek ◽  
Justyna Relisko-Rybak ◽  
Łukasz Kolanda

The paper responds to research problems related to the implementation of large-scale investment projects in waterways in Europe. As part of design and construction works, it is necessary to indicate river ports that play a major role within the European transport network as intermodal nodes. This entails a number of challenges, the cardinal one being the optimal selection of port locations, taking into account the new transport, economic, and geopolitical situation that will be brought about by modernized waterways. The aim of the paper was to present an original methodology for determining port locations for modernized waterways based on non-cost criteria, using an extended multicriteria decision-making (MCDM) method and employing GIS (Geographic Information System)-based tools for spatial analysis. The methodology was designed to be applicable to the varying conditions of a river’s hydroengineering structures (free-flowing river, canalized river, and canals) and adjustable to the requirements posed by intermodal supply chains. The method was applied to study the Odra River Waterway, which allowed the formulation of recommendations regarding the application of the method in the case of different river sections at every stage of the research process.
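The core MCDM ranking step can be sketched as a simple weighted sum; the criteria names, weights, and scores below are invented placeholders, not the paper's actual data or its extended method.

```python
def rank_locations(scores, weights):
    """Weighted-sum MCDM ranking.

    scores:  dict location -> {criterion: normalized value in [0, 1]}
    weights: dict criterion -> relative weight
    Returns the locations ordered best-first by weighted utility.
    """
    total_w = sum(weights.values())
    def utility(loc):
        return sum(scores[loc].get(c, 0.0) * w for c, w in weights.items()) / total_w
    return sorted(scores, key=utility, reverse=True)
```

In a GIS-backed workflow, the per-criterion scores would themselves come from spatial analysis layers (e.g., hinterland accessibility) rather than hand-entered values.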


2021 ◽  
Vol 1 (1) ◽  
Author(s):  
Simon Elias Bibri

A new era is presently unfolding wherein both smart urbanism and sustainable urbanism processes and practices are becoming highly responsive to a form of data-driven urbanism under what has to be identified as data-driven smart sustainable urbanism. This flourishing field of research is profoundly interdisciplinary and transdisciplinary in nature. It operates out of the understanding that advances in knowledge necessitate pursuing multifaceted questions that can only be resolved from the vantage point of interdisciplinarity and transdisciplinarity. This implies that the research problems within the field of data-driven smart sustainable urbanism are inherently too complex and dynamic to be addressed by single disciplines. As this field is not a specific direction of research, it does not have a unitary disciplinary framework in terms of a uniform set of the academic and scientific disciplines from which the underlying theories can be drawn. These theories constitute a unified foundation for the practice of data-driven smart sustainable urbanism. Therefore, it is of significant importance to develop an interdisciplinary and transdisciplinary framework. In this regard, this paper identifies, describes, discusses, evaluates, and thematically organizes the core academic and scientific disciplines underlying the field of data-driven smart sustainable urbanism. This work provides an important lens through which to understand the set of established and emerging disciplines that have high integration, fusion, and application potential for informing the processes and practices of data-driven smart sustainable urbanism. As such, it provides fertile insights into the core foundational principles of data-driven smart sustainable urbanism as an applied domain in terms of its scientific, technological, and computational strands.
The novelty of the proposed framework lies in its original contribution to the body of foundational knowledge of an emerging field of urban planning and development.


Algorithms ◽  
2021 ◽  
Vol 14 (5) ◽  
pp. 154
Author(s):  
Marcus Walldén ◽  
Masao Okita ◽  
Fumihiko Ino ◽  
Dimitris Drikakis ◽  
Ioannis Kokkinakis

Increasing processing capabilities and input/output constraints of supercomputers have increased the use of co-processing approaches, i.e., visualizing and analyzing data sets of simulations on the fly. We present a method that evaluates the importance of different regions of simulation data and a data-driven approach that uses the proposed method to accelerate in-transit co-processing of large-scale simulations. We use the importance metrics to simultaneously employ multiple compression methods on different data regions to accelerate the in-transit co-processing. Our approach strives to adaptively compress data on the fly and uses load balancing to counteract memory imbalances. We demonstrate the method’s efficiency through a fluid mechanics application, a Richtmyer–Meshkov instability simulation, showing how to accelerate the in-transit co-processing of simulations. The results show that the proposed method can expeditiously identify regions of interest, even when using multiple metrics. Our approach achieved a speedup of 1.29× in a lossless scenario. The data decompression time was sped up by 2× compared to using a single compression method uniformly.
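The idea of scoring regions and assigning compressors per region can be sketched as follows; the importance metric (value range) and the threshold are illustrative assumptions, not the metrics or compressors used in the paper.

```python
def region_importance(region):
    """Score a region by its value range -- a simple importance metric;
    regions with large variation are assumed to be of interest."""
    return max(region) - min(region)

def assign_compressors(regions, threshold):
    """Map each region index to a compressor label: lossless compression
    for important regions, a cheaper lossy scheme elsewhere."""
    return {
        i: ("lossless" if region_importance(r) >= threshold else "lossy")
        for i, r in enumerate(regions)
    }
```

In an in-transit pipeline, this assignment would be recomputed per timestep on the fly, with load balancing redistributing regions whose compression cost dominates.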


Author(s):  
Ekaterina Kochmar ◽  
Dung Do Vu ◽  
Robert Belfer ◽  
Varun Gupta ◽  
Iulian Vlad Serban ◽  
...  

Intelligent tutoring systems (ITS) have been shown to be highly effective at promoting learning as compared to other computer-based instructional approaches. However, many ITS rely heavily on expert design and hand-crafted rules. This makes them difficult to build and transfer across domains and limits their potential efficacy. In this paper, we investigate how feedback in a large-scale ITS can be automatically generated in a data-driven way, and more specifically how personalization of feedback can lead to improvements in student performance outcomes. First, we propose a machine learning approach to generate personalized feedback in an automated way, which takes the individual needs of students into account while alleviating the need for expert intervention and the design of hand-crafted rules. We leverage state-of-the-art machine learning and natural language processing techniques to provide students with personalized feedback using hints and Wikipedia-based explanations. Second, we demonstrate that personalized feedback leads to improved success rates at solving exercises in practice: our personalized feedback model is used in a large-scale dialogue-based ITS with around 20,000 students, launched in 2019. We present the results of experiments with students and show that the automated, data-driven, personalized feedback leads to a significant overall improvement of 22.95% in student performance outcomes and substantial improvements in the subjective evaluation of the feedback.
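At its core, data-driven personalization of feedback can be reduced to a selection step like the one below; the function names and the scoring model are entirely hypothetical stand-ins for the paper's learned models.

```python
def pick_hint(candidates, success_model, student):
    """Choose, for a given student, the candidate hint with the highest
    predicted probability that the student solves the exercise afterwards.

    candidates:    list of hint strings
    success_model: callable (student, hint) -> probability in [0, 1]
    """
    return max(candidates, key=lambda h: success_model(student, h))
```

In practice the scoring model would be trained on logged student interactions, so the same exercise can yield different hints for different students.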


2021 ◽  
Vol 10 (1) ◽  
pp. e001087
Author(s):  
Tarek F Radwan ◽  
Yvette Agyako ◽  
Alireza Ettefaghian ◽  
Tahira Kamran ◽  
Omar Din ◽  
...  

A quality improvement (QI) scheme was launched in 2017, covering a large group of 25 general practices working with a deprived registered population. The aim was to improve the measurable quality of care in a population where type 2 diabetes (T2D) care had previously proved challenging. A complex set of QI interventions was co-designed by a team of primary care clinicians, educationalists, and managers. These interventions included organisation-wide goal setting, using a data-driven approach, ensuring staff engagement, implementing an educational programme for pharmacists, facilitating web-based QI learning at scale, and using methods which ensured sustainability. This programme was used to optimise the management of T2D through improving the eight care processes and three treatment targets which form part of the annual national diabetes audit for patients with T2D. With the implemented improvement interventions, there was significant improvement in all care processes and all treatment targets for patients with diabetes. Achievement of all eight care processes improved by 46.0% (p<0.001), while achievement of all three treatment targets improved by 13.5% (p<0.001). The QI programme provides an example of a data-driven large-scale multicomponent intervention delivered in primary care in ethnically diverse and socially deprived areas.


1988 ◽  
Vol 32 (17) ◽  
pp. 1179-1182 ◽  
Author(s):  
P. Jay Merkle ◽  
Douglas B. Beaudet ◽  
Robert C. Williges ◽  
David W. Herlong ◽  
Beverly H. Williges

This paper describes a systematic methodology for selecting independent variables to be considered in large-scale research problems. Five specific procedures including brainstorming, prototype interface representation, feasibility/relevance analyses, structured literature reviews, and user subjective ratings are evaluated and incorporated into an integrated strategy. This methodology is demonstrated in the context of designing the user interface for a telephone-based information inquiry system. The procedure was successful in reducing an initial set of 95 independent variables to a subset of 19 factors that warrant subsequent detailed analysis. These results are discussed in terms of a comprehensive sequential research methodology useful for investigating human factors problems.
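The reduction from 95 candidate variables to 19 rests on a screening step that the following sketch approximates; the filter criteria, cutoff, and example variable names are illustrative assumptions, not the paper's actual procedure or data.

```python
def screen_variables(candidates, feasible, ratings, min_rating):
    """Retain only candidate variables that pass a feasibility/relevance
    analysis and meet a minimum mean subjective user rating.

    candidates: list of variable names
    feasible:   set of names that passed the feasibility/relevance analysis
    ratings:    dict name -> list of subjective ratings
    """
    def mean(xs):
        return sum(xs) / len(xs)
    return [v for v in candidates
            if v in feasible and mean(ratings.get(v, [0])) >= min_rating]
```

Each of the five procedures (brainstorming, prototyping, feasibility analysis, literature review, user ratings) would contribute one such filter, applied in sequence.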

