A large-scale real-life crowd steering experiment via arrow-like stimuli

2020 ◽  
Vol 5 ◽  
Author(s):  
Alessandro Corbetta ◽  
Werner Kroneman ◽  
Maurice Donners ◽  
Antal Haans ◽  
Philip Ross ◽  
...  

We introduce “Moving Light”: an unprecedented real-life crowd steering experiment that involved about 140,000 participants among the visitors of the Glow 2017 Light Festival (Eindhoven, NL). Moving Light targets an outstanding question of paramount societal and technological importance: “can we seamlessly and systematically influence routing decisions in pedestrian crowds?” Establishing effective crowd steering methods is extremely relevant in the context of crowd management, e.g. when it comes to keeping floor usage within safety limits (e.g. during public events with high attendance) or at designated comfort levels (e.g. in leisure areas). In the Moving Light setup, visitors walking in a corridor face a choice between two symmetric exits defined by a large central obstacle. Stimuli, such as arrows, alternate at random and perturb the symmetry of the environment to bias choices. As visitors move through the experiment, they are tracked at high spatial and temporal resolution, such that the efficiency of each stimulus at steering individual routing decisions can be accurately evaluated a posteriori. In this contribution, we first describe the measurement concept of the Moving Light experiment and then quantitatively investigate the steering capability of arrow indications.
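The abstract does not specify the estimator behind the a-posteriori evaluation; as a minimal sketch, assuming each tracked visitor can be reduced to a (stimulus shown, exit chosen) pair, the per-stimulus steering efficiency could be computed as the fraction of visitors who followed the indicated direction (records and field values below are illustrative, not the authors' data format):

```python
from collections import Counter

# Hypothetical per-visitor records: the stimulus shown and the exit actually chosen.
# These values are illustrative assumptions, not the authors' data format.
observations = [
    ("arrow_left", "left"), ("arrow_left", "right"), ("arrow_left", "left"),
    ("arrow_right", "right"), ("arrow_right", "right"), ("neutral", "left"),
]

def steering_efficiency(records):
    """Fraction of visitors who followed the direction indicated by each stimulus."""
    followed, shown = Counter(), Counter()
    for stimulus, exit_chosen in records:
        if stimulus == "neutral":           # baseline trials carry no indication
            continue
        shown[stimulus] += 1
        if stimulus.endswith(exit_chosen):  # e.g. "arrow_left" followed into the "left" exit
            followed[stimulus] += 1
    return {s: followed[s] / shown[s] for s in shown}

print(steering_efficiency(observations))   # per-stimulus follow rates, e.g. arrow_left -> 2/3
```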


Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5073
Author(s):  
Khalil Khan ◽  
Waleed Albattah ◽  
Rehan Ullah Khan ◽  
Ali Mustafa Qamar ◽  
Durre Nayab

Real-time crowd analysis represents an active area of research within the computer vision community in general and scene analysis in particular. Over the last 10 years, various methods for crowd management in real-time scenarios have received immense attention due to large-scale applications in people counting, public event management, disaster management, safety monitoring, and so on. Although many sophisticated algorithms have been developed to address the task, crowd management in real-time conditions is still a challenging problem that is far from completely solved, particularly in wild and unconstrained conditions. In this paper, we present a detailed review of crowd analysis and management, focusing on state-of-the-art methods for both controlled and unconstrained conditions. The paper illustrates both the advantages and disadvantages of state-of-the-art methods. The methods presented range from the seminal research works on crowd management and monitoring to the recently introduced deep learning methods that define the current state of the art. A comparison of these methods is presented, together with a detailed discussion of directions for future research. We believe this review article will contribute to various application domains and will also augment the knowledge of crowd analysis within the research community.



2021 ◽  
Vol 55 (1) ◽  
pp. 1-2
Author(s):  
Bhaskar Mitra

Neural networks with deep architectures have demonstrated significant performance improvements in computer vision, speech recognition, and natural language processing. The challenges in information retrieval (IR), however, are different from these other application areas. A common form of IR involves ranking of documents---or short passages---in response to keyword-based queries. Effective IR systems must deal with the query-document vocabulary mismatch problem by modeling relationships between different query and document terms and how they indicate relevance. Models should also consider lexical matches when the query contains rare terms---such as a person's name or a product model number---not seen during training, and avoid retrieving semantically related but irrelevant results. In many real-life IR tasks, the retrieval involves extremely large collections---such as the document index of a commercial Web search engine---containing billions of documents. Efficient IR methods should take advantage of specialized IR data structures, such as the inverted index, to efficiently retrieve from large collections. Given an information need, the IR system also mediates how much exposure an information artifact receives by deciding whether it should be displayed, and where it should be positioned, among other results. Exposure-aware IR systems may optimize for additional objectives, besides relevance, such as parity of exposure for retrieved items and content publishers. In this thesis, we present novel neural architectures and methods motivated by the specific needs and challenges of IR tasks. We ground our contributions with a detailed survey of the growing body of neural IR literature [Mitra and Craswell, 2018]. Our key contribution towards improving the effectiveness of deep ranking models is the Duet principle [Mitra et al., 2017], which emphasizes the importance of incorporating evidence based on both patterns of exact term matches and similarities between learned latent representations of query and document. To efficiently retrieve from large collections, we develop a framework that incorporates query term independence [Mitra et al., 2019] into any arbitrary deep model, enabling large-scale precomputation and the use of an inverted index for fast retrieval. In the context of stochastic ranking, we further develop optimization strategies for exposure-based objectives [Diaz et al., 2020]. Finally, this dissertation also summarizes our contributions towards benchmarking neural IR models in the presence of large training datasets [Craswell et al., 2019] and explores the application of neural methods to other IR tasks, such as query auto-completion.
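As a hedged illustration of the Duet principle's core idea (not the actual architecture of Mitra et al. [2017]), the sketch below scores a query-document pair by combining exact term-match evidence with similarity between latent representations; the toy vocabulary, random embeddings, and linear combination are assumptions made purely for illustration:

```python
import numpy as np

# Toy stand-ins: in a trained Duet-style model these would be learned representations.
vocab = {"neural": 0, "ranking": 1, "model": 2, "espresso": 3, "machine": 4}
embeddings = np.random.default_rng(0).normal(size=(len(vocab), 8))

def exact_match_score(query, doc):
    """Local evidence: number of query terms appearing verbatim in the document."""
    return sum(term in doc for term in query)

def latent_score(query, doc):
    """Distributed evidence: cosine similarity of mean query/document embeddings."""
    q = embeddings[[vocab[t] for t in query]].mean(axis=0)
    d = embeddings[[vocab[t] for t in doc]].mean(axis=0)
    return float(q @ d / (np.linalg.norm(q) * np.linalg.norm(d)))

def duet_like_score(query, doc, w_exact=1.0, w_latent=1.0):
    """Combine both sources of evidence; the weights are illustrative assumptions."""
    return w_exact * exact_match_score(query, doc) + w_latent * latent_score(query, doc)

query = ["neural", "ranking"]
print(duet_like_score(query, ["neural", "ranking", "model"]))  # exact + latent evidence
print(duet_like_score(query, ["espresso", "machine"]))         # neither source helps
```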



Author(s):  
Krzysztof Jurczuk ◽  
Marcin Czajkowski ◽  
Marek Kretowski

This paper concerns the evolutionary induction of decision trees (DT) for large-scale data. Such a global approach is one of the alternatives to top-down inducers. It searches for the tree structure and node tests simultaneously and thus, in many situations, yields improvements in the prediction accuracy and size of the resulting classifiers. However, this population-based, iterative approach can be too computationally demanding to apply directly to big data mining. The paper demonstrates that this barrier can be overcome by smart distributed/parallel processing. Moreover, we ask whether the global approach can truly compete with greedy systems for large-scale data. For this purpose, we propose a novel multi-GPU approach. It combines the knowledge of global DT induction and evolutionary algorithm parallelization with efficient utilization of GPU memory and computing resources. The search for the tree structure and node tests is performed on a CPU, while the fitness calculations are delegated to GPUs. A data-parallel decomposition strategy and the CUDA framework are applied. Experimental validation is performed on both artificial and real-life datasets. In both cases, the obtained acceleration is very satisfactory. The solution is able to process even billions of instances in a few hours on a single workstation equipped with 4 GPUs. The impact of data characteristics (size and dimension) on the convergence and speedup of the evolutionary search is also shown. When the number of GPUs grows, nearly linear scalability is observed, which suggests that data size boundaries for evolutionary DT mining are fading.
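A minimal sketch of the data-parallel decomposition described above, under stated assumptions: the dataset is split into one shard per GPU, each shard returns a partial fitness for a candidate tree, and the partials are reduced on the CPU. Plain NumPy stands in for the CUDA kernels, and a one-node decision stump stands in for a full evolved tree:

```python
import numpy as np

# Synthetic dataset: instances x features, with a noisy linear decision boundary.
rng = np.random.default_rng(42)
X = rng.normal(size=(1_000_000, 4))
y = (X[:, 0] + 0.1 * rng.normal(size=len(X)) > 0).astype(int)

N_GPUS = 4
shards = np.array_split(np.arange(len(X)), N_GPUS)   # one index shard per (simulated) GPU

def partial_fitness(idx, feature, threshold):
    """Misclassification count of a one-node tree on a single data shard.
    In the actual approach this work would run as a CUDA kernel on one GPU."""
    pred = (X[idx, feature] > threshold).astype(int)
    return int((pred != y[idx]).sum())

def fitness(feature, threshold):
    """CPU-side reduction of the per-shard partial results."""
    return sum(partial_fitness(idx, feature, threshold) for idx in shards)

print(fitness(feature=0, threshold=0.0))   # candidate proposed by the evolutionary search
print(fitness(feature=1, threshold=0.0))   # a worse candidate, for comparison
```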



Author(s):  
Gianluca Bardaro ◽  
Alessio Antonini ◽  
Enrico Motta

Over the last two decades, several deployments of robots for in-house assistance of older adults have been trialled. However, these solutions are mostly prototypes and remain unused in real-life scenarios. In this work, we review the historical and current landscape of the field to understand why robots have yet to succeed as personal assistants in daily life. Our analysis focuses on two complementary aspects: the capabilities of the physical platform and the logic of the deployment. The former analysis shows regularities in hardware configurations and functionalities, leading to the definition of a set of six application-level capabilities (exploration, identification, remote control, communication, manipulation, and digital situatedness). The latter focuses on the impact of robots on the daily life of users and categorises the deployment of robots for healthcare interventions using three types of services: support, mitigation, and response. Our investigation reveals that the value of healthcare interventions is limited by a stagnation of functionalities and a disconnection between the robotic platform and the design of the intervention. To address this issue, we propose a novel co-design toolkit, which uses an ecological framework for robot interventions in the healthcare domain. Our approach connects robot capabilities with known geriatric factors to create a holistic view encompassing both the physical platform and the logic of the deployment. As a case-study-based validation, we discuss the use of the toolkit in the pre-design of the robotic platform for a pilot intervention, part of the large-scale pilot of the EU H2020 GATEKEEPER project.



2021 ◽  
Vol 5 (1) ◽  
pp. 14
Author(s):  
Christos Makris ◽  
Georgios Pispirigos

Nowadays, due to the extensive use of information networks in a broad range of fields, e.g., bio-informatics, sociology, digital marketing, computer science, etc., graph theory applications have attracted significant scientific interest. Due to its apparent abstraction, community detection has become one of the most thoroughly studied graph partitioning problems. However, the existing algorithms principally propose iterative solutions of high polynomial order that repetitively require exhaustive analysis. These methods can undoubtedly be considered resource-wise overdemanding, unscalable, and inapplicable to big data graphs, such as today’s social networks. In this article, a novel, near-linear, and highly scalable community prediction methodology is introduced. Specifically, using a distributed, stacking-based model built on plain network topology characteristics of bootstrap-sampled subgraphs, the underlying community hierarchy of any given social network is efficiently extracted regardless of its size and density. The effectiveness of the proposed methodology has been diligently examined on numerous real-life social networks and proven superior to various similar approaches in terms of performance, stability, and accuracy.
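The paper's stacking meta-model is not reproduced here; as a strongly simplified sketch of the underlying idea, the code below votes over bootstrap-sampled subgraphs, using neighbourhood Jaccard similarity as the "plain topology characteristic" for deciding whether two nodes belong to the same community (the toy graph, threshold, and voting rule are assumptions for illustration):

```python
import random

# Toy social network as an adjacency dict with two obvious communities: {1,2,3} and {4,5,6}.
graph = {
    1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4},
    4: {3, 5, 6}, 5: {4, 6}, 6: {4, 5},
}

def jaccard(u, v, g):
    """Neighbourhood overlap of two nodes, a simple topology feature."""
    nu, nv = g.get(u, set()), g.get(v, set())
    return len(nu & nv) / len(nu | nv) if nu | nv else 0.0

def bootstrap_subgraph(g, rng, keep=0.9):
    """Sample a subgraph by randomly keeping nodes and restricting edges to them."""
    nodes = {n for n in g if rng.random() < keep}
    return {n: g[n] & nodes for n in nodes}

def same_community(u, v, g, n_samples=25, threshold=0.2, seed=0):
    """Majority vote over bootstrap samples; stands in for the stacking meta-model."""
    rng = random.Random(seed)
    votes = 0
    for _ in range(n_samples):
        sub = bootstrap_subgraph(g, rng)
        if u in sub and v in sub:
            votes += jaccard(u, v, sub) >= threshold
    return votes > n_samples / 2

print(same_community(1, 2, graph))  # expected True: same dense cluster
print(same_community(1, 6, graph))  # expected False: different clusters
```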



2008 ◽  
Vol 2008 ◽  
pp. 1-9 ◽  
Author(s):  
Peter Quax ◽  
Jeroen Dierckx ◽  
Bart Cornelissen ◽  
Wim Lamotte

The explosive growth in the number of applications based on networked virtual environment technology, both games and virtual communities, shows that these types of applications have become commonplace in a short period of time. However, from a research point of view, the inherent weaknesses in their architectures are quickly exposed. The Architecture for Large-Scale Virtual Interactive Communities (ALVIC) was originally developed to serve as a generic framework for deploying networked virtual environment applications on the Internet. While it has been shown to scale effectively to the numbers originally put forward, our findings have shown that, on a real-life network such as the Internet, several drawbacks will not be overcome in the near future. We have therefore recently started the development of ALVIC-NG, which, while incorporating the findings from our previous research, makes several improvements on the original version, making it suitable for deployment on the Internet as it exists today.



2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Thomas Grinda ◽  
Natacha Joyon ◽  
Amélie Lusque ◽  
Sarah Lefèvre ◽  
Laurent Arnould ◽  
...  

Expression of hormone receptors (HR) for estrogen (ER) and progesterone (PR) and of HER2 remains the cornerstone for defining the therapeutic strategy for breast cancer patients. We aimed to compare phenotypic profiles between matched primary and metastatic breast cancer (MBC) in the ESME database, a national real-life multicenter cohort of MBC patients. Patients with results available on both the primary tumour and metastatic disease within 6 months of MBC diagnosis and before any tumour progression were eligible for the main analysis. Among the 16,703 patients included in the database, 1677 (10.0%) had available biopsy results at MBC diagnosis and on the matched primary tumour. The change rate of either HR or HER2 was 27.0%. Global HR status changed (from positive = either ER or PR positive, to negative = both negative; and reverse) in 14.2% of the cases (expression loss in 72.5% and gain in 27.5%). HER2 status changed in 7.8% (amplification loss in 45.2%). The discordance rate appeared similar across different biopsy sites. Metastasis to bone, HER2+ and HR+/HER2- subtypes, and previous adjuvant endocrine therapy, but not relapse interval, were associated with HR discordance in multivariable analysis. Loss of HR status was significantly associated with a risk of death (adjusted HR = 1.51, p = 0.002), while gain of HR and HER2 discordance were not. In conclusion, discordance of HR and HER2 expression between primary and metastatic breast cancer cannot be neglected. In addition, HR loss is associated with worse survival. Sampling metastatic sites is essential for treatment adjustment.



2015 ◽  
Vol 2015 ◽  
pp. 1-12 ◽  
Author(s):  
Sai Kiranmayee Samudrala ◽  
Jaroslaw Zola ◽  
Srinivas Aluru ◽  
Baskar Ganapathysubramanian

Dimensionality reduction refers to a set of mathematical techniques used to reduce the complexity of original high-dimensional data while preserving its selected properties. Improvements in simulation strategies and experimental data collection methods are resulting in a deluge of heterogeneous and high-dimensional data, which often makes dimensionality reduction the only viable way to gain qualitative and quantitative understanding of the data. However, existing dimensionality reduction software often does not scale to datasets arising in real-life applications, which may consist of thousands of points with millions of dimensions. In this paper, we propose a parallel framework for dimensionality reduction of large-scale data. We identify key components underlying spectral dimensionality reduction techniques and propose their efficient parallel implementation. We show that the resulting framework can be used to process datasets consisting of millions of points when executed on a 16,000-core cluster, which is beyond the reach of currently available methods. To further demonstrate the applicability of our framework, we perform dimensionality reduction of 75,000 images representing morphology evolution during the manufacturing of organic solar cells, in order to identify how processing parameters affect morphology evolution.
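The abstract does not enumerate the components it parallelizes; as an illustrative serial sketch, the Laplacian-eigenmaps-style pipeline below shows the kind of computational kernels (pairwise distances, neighbourhood graph construction, eigendecomposition) that spectral dimensionality reduction frameworks typically have to scale up; the specific technique and parameters are assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 50))                        # 200 points in 50 dimensions

# 1. Pairwise squared Euclidean distances.
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)

# 2. Gaussian affinities restricted to a symmetric k-nearest-neighbour graph.
k, sigma = 10, np.median(sq)
W = np.exp(-sq / sigma)
idx = np.argsort(sq, axis=1)[:, 1:k + 1]              # k nearest neighbours (skip self)
mask = np.zeros_like(W, dtype=bool)
np.put_along_axis(mask, idx, True, axis=1)
W = W * (mask | mask.T)                               # keep only neighbour edges

# 3. Graph Laplacian; its smallest non-trivial eigenvectors give the low-dimensional embedding.
D = np.diag(W.sum(axis=1))
L = D - W
eigvals, eigvecs = np.linalg.eigh(L)
embedding = eigvecs[:, 1:3]                           # 2-D embedding (skip the constant vector)
print(embedding.shape)                                # (200, 2)
```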



Author(s):  
Dongbo Xi ◽  
Fuzhen Zhuang ◽  
Yanchi Liu ◽  
Jingjing Gu ◽  
Hui Xiong ◽  
...  

Human mobility data accumulated from Point-of-Interest (POI) check-ins provide a great opportunity for understanding user behavior. However, data quality issues (e.g., missing geolocation information, unreal check-ins, data sparsity) in real-life mobility data limit the effectiveness of existing POI-oriented studies, e.g., POI recommendation and location prediction, when applied to real applications. To this end, in this paper, we develop a model, named Bi-STDDP, which integrates bi-directional spatio-temporal dependence and users’ dynamic preferences to identify the missing POI check-in, i.e., which POI a user visited at a specific time. Specifically, we first utilize bi-directional global spatial and local temporal information of POIs to capture the complex dependence relationships. Then, the target temporal pattern, in combination with user and POI information, is fed into a multi-layer network to capture users’ dynamic preferences. Moreover, the dynamic preferences are transformed into the same space as the dependence relationships to form the final model. Finally, the proposed model is evaluated on three large-scale real-world datasets, and the results demonstrate significant improvements of our model over state-of-the-art methods. It is also worth noting that the proposed model can be naturally extended to address POI recommendation and location prediction tasks with competitive performance.
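The Bi-STDDP architecture itself is not detailed in the abstract; as a highly simplified sketch of the inference idea, the code below scores candidate POIs for a missing slot by combining bi-directional spatial proximity (to the check-ins just before and after the gap) with a user preference vector; all coordinates, weights, and the scoring form are illustrative assumptions rather than the authors' model:

```python
import numpy as np

# Hypothetical POI coordinates and a user's long-term preference over the same POIs.
pois = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0], [1.2, 0.8]])
user_pref = np.array([0.1, 0.3, 0.1, 0.5])            # assumed learned preference weights

prev_checkin, next_checkin = pois[0], pois[2]          # check-ins surrounding the missing slot

def score_candidates(prev_xy, next_xy, pref, alpha=1.0, beta=1.0):
    """Score each POI by bi-directional spatial proximity plus user preference."""
    d_prev = np.linalg.norm(pois - prev_xy, axis=1)    # distance to the preceding check-in
    d_next = np.linalg.norm(pois - next_xy, axis=1)    # distance to the following check-in
    return -alpha * (d_prev + d_next) + beta * pref

scores = score_candidates(prev_checkin, next_checkin, user_pref)
print(int(np.argmax(scores)))   # index of the most plausible missing check-in
```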



2014 ◽  
Vol 6 (2) ◽  
pp. 23-36
Author(s):  
Fatma Molu

Complex financial conversion projects with large budgets face many different challenges. For companies that want to survive in conditions of tough competition, legacy (old) systems must continue to provide the required service throughout the project life cycle and, in some circumstances, even partly after project completion. In this case, the term coexistence comes into prominence. During this period, the testing phase takes on a more critical role as the complexity and risk of the integrated systems increase. Determining the testing approach to use is essential to ensure that both the transformed and legacy systems provide service synchronously. In this paper, testing practices applied in long conversion processes are discussed. Primarily, the basic features of critical financial systems are addressed, and then the main adoption methods in the literature are summarized. A variety of testing methodologies are then presented, depending on those adoption methods. These examples are based on real-life experiences from a transformation project. The most extensive example of real-time online financial systems is core banking systems. This paper covers the testing life cycle of the large-scale core banking system transformation project of a bank in Turkey.


