Inter-destination Multimedia Synchronization: A Contemporary Survey

2019 ◽  
pp. 10-21
Author(s):  
Dimitris Kanellopoulos

The advent of social networking applications, media streaming technologies, and synchronous communications has created an evolution towards dynamic shared media experiences. In this new model, geographically distributed groups of users can be immersed in a common virtual networked environment in which they can interact and collaborate in real time within the context of simultaneous media content consumption. In this environment, intra-stream and inter-stream synchronization techniques are used inside the consumers' playout devices, while synchronization of media streams across multiple separated locations is also required. This synchronization is known as multipoint, group, or Inter-Destination Multimedia Synchronization (IDMS) and is needed in many applications such as social TV and synchronous e-learning. This survey paper discusses intra- and inter-stream synchronization issues, but it mainly focuses on the most well-known IDMS techniques that can be used in emerging distributed multimedia applications. In addition, it provides some research directions for future work.
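The group-synchronization idea behind IDMS can be sketched in a few lines: each receiver reports its current playout point to a synchronization manager, which picks a reference point and tells each receiver how far to adjust. This is a minimal illustrative sketch, not the scheme of any specific IDMS protocol in the survey; the function name, the two reference policies, and the receiver IDs are all assumptions for illustration.

```python
def idms_adjustments(playout_points, policy="slowest"):
    """Given each receiver's current playout point (seconds into the
    stream), return the playout adjustment each receiver needs to
    reach a common reference point.  Illustrative sketch only."""
    if policy == "slowest":          # wait for the most lagged receiver
        ref = min(playout_points.values())
    elif policy == "mean":           # converge on the group average
        ref = sum(playout_points.values()) / len(playout_points)
    else:
        raise ValueError(policy)
    # Positive delta: receiver is ahead and should pause or slow down;
    # negative delta: it is behind and should skip or speed up.
    return {rid: round(p - ref, 3) for rid, p in playout_points.items()}

# Three hypothetical receivers watching the same stream
points = {"alice": 12.40, "bob": 12.10, "carol": 12.55}
print(idms_adjustments(points))  # bob is the reference under "slowest"
```

The "slowest" policy never requires any receiver to skip content, at the cost of dragging the whole group to the most lagged member; a mean-based policy spreads the adjustment across receivers.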

Author(s):  
Yonatan Belinkov ◽  
James Glass

The field of natural language processing has seen impressive progress in recent years, with neural network models replacing many of the traditional systems. A plethora of new models have been proposed, many of which are thought to be opaque compared to their feature-rich counterparts. This has led researchers to analyze, interpret, and evaluate neural networks in novel and more fine-grained ways. In this survey paper, we review analysis methods in neural language processing, categorize them according to prominent research trends, highlight existing limitations, and point to potential directions for future work.


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Aolin Che ◽  
Yalin Liu ◽  
Hong Xiao ◽  
Hao Wang ◽  
Ke Zhang ◽  
...  

In the past decades, due to their low design cost and easy maintenance, text-based CAPTCHAs have been extensively used in constructing security mechanisms for user authentication. With recent advances in machine/deep learning for recognizing CAPTCHA images, a growing number of attack methods have been presented to break text-based CAPTCHAs. These machine learning/deep learning-based attacks often rely on training models on massive volumes of training data, and poorly constructed CAPTCHA data leads to low attack accuracy. To investigate this issue, we propose a simple, generic, and effective preprocessing approach to filter and enhance the original CAPTCHA data set so as to improve the accuracy of previous attack methods. In particular, the proposed preprocessing approach consists of a data selector and a data augmentor. The data selector automatically filters out a training data set with training significance, while the data augmentor uses four different image noises to generate different CAPTCHA images. The well-constructed CAPTCHA data set can better train deep learning models and further improve the accuracy rate. Extensive experiments demonstrate that the accuracy rates of five commonly used attack methods after combining our preprocessing approach are 2.62% to 8.31% higher than those without it. Moreover, we also discuss potential research directions for future work.
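To make the data augmentor concrete: the abstract does not say which four image noises the paper uses, but noise-based CAPTCHA augmentation commonly includes variants like salt-and-pepper and Gaussian noise. The sketch below shows two such noises on a grayscale image represented as a list of pixel rows; the function names and parameters are illustrative assumptions, not the paper's implementation.

```python
import random

def salt_and_pepper(img, amount=0.05, rng=None):
    """Flip a fraction of pixels to pure black (0) or white (255).
    `img` is a grayscale image as a list of rows of ints in 0-255."""
    rng = rng or random.Random(0)
    out = [row[:] for row in img]          # leave the input untouched
    h, w = len(img), len(img[0])
    for _ in range(int(amount * h * w)):
        y, x = rng.randrange(h), rng.randrange(w)
        out[y][x] = rng.choice((0, 255))
    return out

def gaussian_noise(img, sigma=10.0, rng=None):
    """Add zero-mean Gaussian noise, clamped to the 0-255 range."""
    rng = rng or random.Random(0)
    return [[min(255, max(0, int(p + rng.gauss(0, sigma)))) for p in row]
            for row in img]

clean = [[128] * 8 for _ in range(8)]      # a flat 8x8 stand-in CAPTCHA
augmented = [f(clean) for f in (salt_and_pepper, gaussian_noise)]
```

Each noise function produces a new training image from the same labeled CAPTCHA, which is how an augmentor multiplies a small, well-selected data set.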


Author(s):  
Shubham Dubey ◽  
Biro Piroska ◽  
Manjulata Gautam

The world is changing rapidly, and so is academia. E-learning has altered the field of academics and education. ICT-enabled learning provides ideal services to students by delivering any type of content on demand, proportional to students' performance. Learners' concentration has been found to waver; there is thus a need to keep the mind engaged with the course in its entirety until the course objectives are achieved. Several e-learning platforms are available, such as edX, Udacity, Khan Academy, and Alison, with large numbers of learners registered for various courses. Studies suggest that these platforms suffer from the common problem of learners dropping out. Investigations also claim that the early-leaving rate is increasing due to lack of content quality, distraction factors, learners changing their minds, outdated and succinct information, and other detraction factors. These issues have been observed on the basis of early-leaving rates in various MOOCs. Thus there is considerable scope for minimizing the impact of these factors on learners, which can be achieved by identifying the factors affecting learners' motivation during a course. This study aims to identify these factors. The approach is to search for certain keywords in previous literature (41 works in total) and then calculate their frequencies and the co-factors associated with them. Both grouped-factor and individual-factor contributions are considered. The study gives a direction for future work towards overcoming these factors and engaging learners in ICT-enabled learning.
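The keyword-frequency-and-co-factor approach described above can be sketched with a simple counting pass over the surveyed texts. This is a minimal sketch under assumptions: the keyword list below is hypothetical (the study's actual keyword set is not given in the abstract), and "co-factor" is interpreted here as co-occurrence of two keywords in the same work.

```python
from collections import Counter
from itertools import combinations

# Hypothetical dropout-factor keywords; the study's own set is larger.
KEYWORDS = {"distraction", "motivation", "content quality", "outdated"}

def factor_frequencies(papers):
    """Count how often each keyword appears across the papers, and how
    often keyword pairs co-occur in the same paper (co-factors)."""
    freq, cooc = Counter(), Counter()
    for text in papers:
        found = {k for k in KEYWORDS if k in text.lower()}
        freq.update(found)
        cooc.update(frozenset(p) for p in combinations(sorted(found), 2))
    return freq, cooc

papers = [
    "Distraction and motivation issues drive MOOC dropout.",
    "Outdated material reduces motivation.",
]
freq, cooc = factor_frequencies(papers)
print(freq["motivation"])  # → 2, since it appears in both papers
```

Ranking `freq` gives the individual-factor contributions; ranking `cooc` surfaces which detraction factors tend to be reported together.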


2015 ◽  
Vol 154 (1) ◽  
pp. 89-100 ◽  
Author(s):  
Jonathon Hutchinson

The public service media (PSM) remit requires the Australian Broadcasting Corporation (ABC) to provide for minorities while fostering national culture and the public sphere. Social media platforms and projects – specifically 'social TV' – have enabled greater participation in ABC content consumption and creation; they provide opportunities for social participation in collaborative cultural production. However, it can be argued that, instead of deconstructing boundaries, social media platforms may in fact reconstruct participation barriers within PSM production processes. This article explores ABC co-creation between Twitter and the #7DaysLater television program, a narrative-based comedy program that engaged its audience through social media to produce its weekly program. The article demonstrates why the ABC should engage with social media platforms to collaboratively produce content, with #7DaysLater providing an innovative example, but suggests that skilled cultural intermediaries with experience in community facilitation should carry out the process.


2021 ◽  
Vol 15 ◽  
Author(s):  
Jianwei Zhang ◽  
Xubin Zhang ◽  
Lei Lv ◽  
Yining Di ◽  
Wei Chen

Background: Learning discriminative representations from large-scale data sets has seen breakthroughs in recent decades. However, it is still a thorny problem to generate representative embeddings from limited examples, for example, a class containing only one image. Recently, deep learning-based Few-Shot Learning (FSL) has been proposed, which tackles this problem by leveraging prior knowledge in various ways. Objective: In this work, we review recent advances in FSL from the perspective of high-dimensional representation learning. The results of the analysis provide insights and directions for future work. Methods: We first present the definition of general FSL. Then we propose a general framework for the FSL problem and give a taxonomy under the framework. We survey two FSL directions: learning policy and meta-learning. Results: We review advanced applications of FSL, including image classification, object detection, image segmentation, and other tasks, as well as the corresponding benchmarks, to provide an overview of recent progress. Conclusion: In future work, FSL needs to be further studied in medical images, language models, and reinforcement learning. In addition, cross-domain FSL, successive FSL, and associated FSL are more challenging and valuable research directions.
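A common metric-based baseline in the FSL literature (in the spirit of prototypical networks, though not necessarily any method this survey covers) averages each class's few support embeddings into a prototype and assigns queries to the nearest one. The toy 2-D embeddings and class labels below are illustrative assumptions; in practice the embeddings come from a learned encoder.

```python
def prototypes(support):
    """Average each class's few support embeddings into one prototype.
    `support` maps a class label to a list of embedding vectors."""
    return {label: [sum(dim) / len(vecs) for dim in zip(*vecs)]
            for label, vecs in support.items()}

def classify(query, protos):
    """Assign the query embedding to the nearest prototype by squared
    Euclidean distance, as in metric-based few-shot learners."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(protos, key=lambda label: dist(query, protos[label]))

# A 2-way 2-shot episode over toy 2-D embeddings
support = {"cat": [[0.9, 0.1], [1.1, 0.0]],
           "dog": [[0.0, 1.0], [0.2, 0.8]]}
print(classify([1.0, 0.2], prototypes(support)))  # → cat
```

The appeal for the one-image-per-class setting is that a prototype degrades gracefully: with a single support example, it is simply that example's embedding.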


Author(s):  
Azrah Anparasan ◽  
Miguel Lejeune

Purpose The purpose of this paper is to propose a novel evidence-based Haddon matrix that identifies intervention options for organizations and governments responding to an epidemic in a developing economy. Design/methodology/approach The paper reviews literature published within a year of the cholera outbreak in Haiti, drawing on two separate types of sources – academic and non-academic – to apprehend the value and role of interventions implemented and/or identified. Findings The Haddon matrix helps break down the challenges involved in containing an epidemic into smaller, manageable components. This research shows that the matrix enables visualization of past evidence, helps dissect various informational sources, and increases collaboration across humanitarian organizations. It will also serve as a building block for academics to identify new research directions for responding to epidemic outbreaks. Research limitations/implications The analysis focuses on the cholera epidemic in Haiti. Future work will be directed to generalizing the identified recommendations and insights to a broader context. Originality/value This paper presents an evidence-based Haddon matrix that infers recommendations and insights based on past evidence for each phase (pre-event, response, and post-event) and factor (agent, host, physical environment, and socio-cultural environment) of an epidemic and for various stakeholders (humanitarian organizations, governments, and academics). The matrix provides a structured framework to identify interventions and best practices to address challenges during an epidemic outbreak.


2014 ◽  
Vol 69 (5) ◽  
Author(s):  
Arda Yunianta ◽  
Norazah Yusof ◽  
Mohd Shahizan Othman ◽  
Abdul Aziz ◽  
Nataniel Dengen ◽  
...  

Distribution and heterogeneity of data are current issues at the data level of implementation. Different data representations between applications make the integration problem increasingly complex. Data stored by different applications sometimes have similar meaning, but because of the differences in data representation, the applications cannot be integrated with one another. Many researchers have found that semantic technology is the best way to resolve current data integration issues: it can handle heterogeneity of data, i.e., data with different representations and sources, and it enables data mapping across different databases and data formats whose data have the same meaning. This paper focuses on semantic data mapping using a semantic ontology approach. In the first-level process, the semantic data mapping engine produces a data mapping language in the Turtle (.ttl) file format that can be used by a local Java application via the Jena library and a triple store. In the second-level process, a D2R Server accessible from the outside environment is provided over HTTP, serving SPARQL clients, Linked Data clients (RDF formats), and HTML browsers. Future work will continue on this topic, focusing on the E-Learning Usage Index Tool (IPEL), an application able to integrate with other systems such as the Moodle e-learning system.
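The core of the first-level process, producing Turtle from relational data, can be illustrated with a minimal sketch: one subject per row, one predicate per column. This is not the paper's mapping engine (which targets Jena and D2R); the `ex:` namespace, table name, and column names below are illustrative assumptions.

```python
def rows_to_turtle(table, rows, base="http://example.org/"):
    """Map relational rows to Turtle triples: one subject per row,
    one predicate per column.  The `ex:` namespace is illustrative."""
    lines = [f"@prefix ex: <{base}> ."]
    for pk, row in rows.items():
        # Each column becomes a predicate with a string-literal object.
        props = " ;\n    ".join(f'ex:{col} "{val}"'
                                for col, val in row.items())
        lines.append(f"ex:{table}_{pk}\n    {props} .")
    return "\n".join(lines)

# A hypothetical course table from an e-learning database
courses = {"c101": {"title": "Databases", "platform": "Moodle"}}
print(rows_to_turtle("course", courses))
```

Once two applications' tables are expressed as triples under a shared ontology, columns with the same meaning but different names can be reconciled by mapping both predicates to one ontology property.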


2014 ◽  
Vol 2014 ◽  
pp. 1-15 ◽  
Author(s):  
Na Lin ◽  
Changfu Zong ◽  
Masayoshi Tomizuka ◽  
Pan Song ◽  
Zexing Zhang ◽  
...  

Driver characteristics have been a research focus in automotive control. This paper studies the identification of driver characteristics in terms of its relevant research directions and the key technologies involved, and discusses driver characteristics based on the driver's operation behavior, i.e., driver behavior characteristics. Following a presentation of the fundamentals of driver behavior characteristics, the key technologies are reviewed in detail, including classification and identification methods, experimental design and data acquisition, and model adaptation. Moreover, this paper discusses applications of driver behavior characteristic identification in the intelligent driver advisory system, the driver safety warning system, and the vehicle dynamics control system. Finally, some ideas about future work are presented.

