Analysis of Operations upon Entry into Intermodal Freight Terminals

2019 ◽  
Vol 9 (12) ◽  
pp. 2558 ◽  
Author(s):  
Mariusz Kostrzewski ◽  
Arkadiusz Kostrzewski

The design of intermodal freight terminals requires extensive research and a thorough analysis of the technical, financial, and organizational aspects. This paper discusses the operation of repositioning large cargo containers (one type of intermodal transport unit, ITU) to their dedicated storage places. The analysis is carried out for a vehicle equipped with a telescopic arm, such as a reach stacker. The storage facility considered is reduced to a block with the spatial accumulation given in the paper. The paper describes the handling procedure from the arrival of a tractor-trailer with a container at the terminal, through setting the ITU aside in its dedicated place, to the departure of the unloaded truck. The activities are described in detail in order to present a descriptive model of the particular operations performed upon entry to the intermodal freight terminal. Moreover, the paper contains figures illustrating the various steps of realization and an analysis of activity durations supported by actual realizations. The durations of the individual activities are experimental, and the results have been validated at real-world intermodal freight terminals. The authors therefore believe that the obtained values may be used in analytical, simulation, and numerical models of intermodal freight terminals.
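To illustrate how such experimentally measured activity durations could feed a simulation model of terminal entry, the sketch below sums sampled durations for an entry sequence of the kind described above. The activity list and all timing parameters are hypothetical placeholders, not the values measured in the paper.

```python
import random

# Hypothetical entry-sequence activities with (min, mode, max) durations in
# seconds; these placeholders stand in for the experimentally measured values.
ACTIVITIES = [
    ("gate check-in of tractor-trailer", (60, 90, 180)),
    ("drive to transshipment lane", (90, 120, 240)),
    ("reach stacker approach and grip", (30, 45, 90)),
    ("lift and travel to storage block", (60, 100, 200)),
    ("set ITU down in dedicated place", (20, 30, 60)),
    ("truck departure without load", (60, 90, 150)),
]

def sample_entry_cycle(rng: random.Random) -> float:
    """Sample one full entry cycle, one triangular duration per activity."""
    return sum(rng.triangular(lo, hi, mode) for _, (lo, mode, hi) in ACTIVITIES)

rng = random.Random(42)
cycles = [sample_entry_cycle(rng) for _ in range(10_000)]
print(f"mean entry-cycle time: {sum(cycles) / len(cycles):.0f} s")
```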

Author(s):  
Joshua Simmons ◽  
Kristen Splinter

Physics-based numerical models play an important role in the estimation of storm erosion, particularly at beaches for which there is little historical data. However, the increasing availability of pre- and post-storm data for multiple events, and at a number of beaches around the world, has opened the possibility of using data-driven approaches for erosion prediction. Both physics-based and purely data-driven approaches have inherent strengths and weaknesses in their ability to predict storm-induced erosion. It is vital that coastal managers and modellers are aware of these trade-offs, as well as of methods to maximise the value of each modelling approach in an increasingly data-rich environment. In this study, data from approximately 40 years of coastal monitoring at Narrabeen-Collaroy Beach (SE Australia) has been used to evaluate the individual performance of the numerical erosion models SBEACH and XBeach and of a data-driven modelling technique. The models are then combined using a simple weighting technique to provide a hybrid estimate of erosion.

Recorded Presentation from the vICCE (YouTube Link): https://youtu.be/v53dZiO8Y60
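A minimal sketch of the simple weighting idea: combine the per-model erosion estimates with weights, for instance inversely proportional to each model's historical error. The function names, the inverse-RMSE weighting choice, and all numbers are illustrative assumptions, not the paper's exact scheme.

```python
from typing import Dict

def hybrid_erosion_estimate(predictions: Dict[str, float],
                            rmse: Dict[str, float]) -> float:
    """Weighted combination of per-model erosion estimates.

    Weights are inversely proportional to each model's validation RMSE,
    so better-performing models contribute more to the hybrid estimate.
    """
    weights = {name: 1.0 / rmse[name] for name in predictions}
    total = sum(weights.values())
    return sum(weights[n] / total * predictions[n] for n in predictions)

# Illustrative numbers only (m^3/m of storm-induced erosion).
preds = {"SBEACH": 42.0, "XBeach": 55.0, "data-driven": 48.0}
errors = {"SBEACH": 12.0, "XBeach": 9.0, "data-driven": 10.0}
print(f"hybrid estimate: {hybrid_erosion_estimate(preds, errors):.1f}")
```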


Author(s):  
Hao Zhang ◽  
Liangxiao Jiang ◽  
Wenqiang Xu

Crowdsourcing services provide a fast, efficient, and cost-effective means of obtaining large amounts of labeled data for supervised learning. Ground truth inference, also called label integration, designs proper aggregation strategies to infer the unknown true label of each instance from the multiple noisy label set provided by ordinary crowd workers. However, to the best of our knowledge, nearly all existing label integration methods focus solely on the multiple noisy label set of the individual instance itself while totally ignoring the intercorrelation among the multiple noisy label sets of different instances. To solve this problem, a multiple noisy label distribution propagation (MNLDP) method is proposed in this study. MNLDP first transforms the multiple noisy label set of each instance into its multiple noisy label distribution and then propagates this distribution to the instance's nearest neighbors. Consequently, each instance absorbs a fraction of the multiple noisy label distributions of its nearest neighbors yet simultaneously maintains a fraction of its own original multiple noisy label distribution. Promising experimental results on simulated and real-world datasets validate the effectiveness of our proposed method.
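The propagation step can be sketched in a few lines: each instance's noisy label set becomes a label distribution, and one propagation pass mixes it with the mean distribution of its k nearest neighbors. The mixing weight alpha and the use of Euclidean feature distance are illustrative assumptions, not the exact formulation of MNLDP.

```python
import numpy as np

def label_distributions(noisy_labels: list[list[int]], n_classes: int) -> np.ndarray:
    """Turn each instance's multiple noisy label set into a distribution."""
    dists = np.zeros((len(noisy_labels), n_classes))
    for i, labels in enumerate(noisy_labels):
        for lab in labels:
            dists[i, lab] += 1
        dists[i] /= len(labels)
    return dists

def propagate(X: np.ndarray, dists: np.ndarray, k: int = 5,
              alpha: float = 0.5) -> np.ndarray:
    """One pass: keep a fraction alpha of the instance's own distribution
    and absorb (1 - alpha) of the mean neighbor distribution."""
    diff = X[:, None, :] - X[None, :, :]
    d = np.sqrt((diff ** 2).sum(-1))     # pairwise Euclidean distances
    np.fill_diagonal(d, np.inf)          # an instance is not its own neighbor
    neighbors = np.argsort(d, axis=1)[:, :k]
    neighbor_mean = dists[neighbors].mean(axis=1)
    return alpha * dists + (1 - alpha) * neighbor_mean

# Tiny example: 4 instances, 2 features, 3 noisy labels each, 2 classes.
X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.1]])
noisy = [[0, 0, 1], [0, 1, 1], [1, 1, 1], [0, 1, 1]]
dists = label_distributions(noisy, n_classes=2)
print(propagate(X, dists, k=2).round(2))
```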


2021 ◽  
Author(s):  
Gert-Jan Steeneveld ◽  
Roosmarijn Knol

Fog is a critical weather phenomenon for safety and operations in aviation. Unfortunately, the forecasting of radiation fog remains challenging due to the numerous physical processes that play a role, their complex interactions, and the vertical and horizontal resolution of the numerical models. In this study we evaluate the performance of the Weather Research and Forecasting (WRF) model for a radiation fog event at Schiphol Amsterdam Airport (The Netherlands) and further develop the model towards a 100 m grid spacing. To this end we introduce high-resolution land-use and land-elevation data. In addition, we study the roles of gravitational droplet settling, advection of TKE, and top-down diffusion caused by strong radiative cooling at the fog top. Finally, the impact of heat released by the terminal areas on fog formation is studied. The model outcomes are evaluated against 1-min weather observations near multiple runways at the airport.

Overall we find that the WRF model shows a reasonable timing of the fog onset and is well able to reproduce the visibility and meteorological conditions observed during the case study. The model appears to be relatively insensitive to the activation of the individual physical processes. Increasing the spatial resolution to 100 m generally results in a better timing of the fog onset, with differences of up to three hours, though not for all runways. The effect of the refined land use dominates over the effect of the refined elevation data. The modelled fog dissipation systematically occurs 3-4 hours too early, regardless of physical processes or spatial resolution. Finally, the introduction of heat from the terminal buildings delays the fog onset by up to two hours, overestimates visibility by 100-200 m, and decreases the LWC by 0.10-0.15 g/kg compared to the reference.
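For context on the evaluation, fog onset and dissipation times can be extracted from 1-min visibility observations with a simple threshold rule; the conventional criterion of visibility below 1000 m is assumed here and is not necessarily the authors' exact definition, and the example series is invented.

```python
from datetime import datetime, timedelta

FOG_VIS_M = 1000.0  # conventional fog threshold; the paper's criterion may differ

def fog_onset_dissipation(times: list[datetime], vis_m: list[float]):
    """Return (onset, dissipation) of the first fog period in a visibility series."""
    onset = dissipation = None
    for t, v in zip(times, vis_m):
        if onset is None and v < FOG_VIS_M:
            onset = t
        elif onset is not None and v >= FOG_VIS_M:
            dissipation = t
            break
    return onset, dissipation

# Illustrative 1-min series: visibility drops below 1000 m, then recovers.
t0 = datetime(2020, 1, 1, 3, 0)
times = [t0 + timedelta(minutes=i) for i in range(6)]
vis = [1500.0, 1200.0, 800.0, 300.0, 900.0, 1100.0]
print(fog_onset_dissipation(times, vis))
```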


Author(s):  
Rahul Patel ◽  
Matthias Spitzmuller

In the real world, employees may be presented with difficult tasks that could be tackled in multiple ways with the available resources. On top of this, under the deadlines, scarce external resources, and competing tasks that employees typically face, thinking tends to narrow, and so do the actions that follow. This can produce persistence in a course of action that is heading for failure, a situation known as escalation of commitment. When we are stuck, having invested time and effort into a near-impossible task, and a coworker offers help, is it worth accepting that offer? Or would we rather risk more time and resources and persist in solving the near-impossible problem ourselves? In the latter case, the individual may experience burnout and stress; for the organization, deadlines are missed and objectives go unaccomplished. My research looks at these helping behaviours and whether they lead others astray in an escalation of commitment. Specifically, I predict that individuals who have invested in a failing course of action are less likely to abandon this path when they receive help from others. This intersection of escalation and helping behaviours is important because employees who attempt to help a coworker invested in an extremely difficult task may be doing more harm than good.


2019 ◽  
Vol 4 (1) ◽  
Author(s):  
Milos Kudelka ◽  
Eliska Ochodkova ◽  
Sarka Zehnalova ◽  
Jakub Plesnik

Abstract The existence of groups of nodes with common characteristics, and the relationships between these groups, are important factors influencing the structures of social, technological, biological, and other networks. Uncovering such groups and the relationships between them is therefore necessary for understanding these structures. Groups can either be found by detection algorithms based solely on structural analysis or identified on the basis of more in-depth knowledge of the processes taking place in networks. In the first case, these are mainly algorithms detecting non-overlapping communities or communities with small overlaps. The latter case is about identifying ground-truth communities, also on the basis of characteristics other than the network structure alone. Recent research into ground-truth communities shows that in real-world networks there are nested communities, or communities with large and dense overlaps, which we are not yet able to detect satisfactorily on the basis of structural network properties alone.

In our approach, we present a new perspective on the problem of group detection using only the structural properties of networks. Its main contribution is pointing out the existence of large and dense overlaps of detected groups. We use a non-symmetric structural similarity between pairs of nodes, which we refer to as dependency, to detect groups that we call zones. Unlike other approaches, thanks to this non-symmetry we are able to accurately describe the prominent nodes in the zones that are responsible for large zone overlaps, and the reasons why the overlaps occur. The individual zones that are detected provide new information associated in particular with the non-symmetric relationships within the group and the roles that individual nodes play in the zone. From the perspective of global network structure, the non-symmetric node-to-node relationships allow us to explore new properties of real-world networks that describe the differences between various types of networks.
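As a rough illustration of the non-symmetric idea, the sketch below uses a simple asymmetric neighborhood overlap as the dependency of node u on node v; the paper's actual dependency measure is more elaborate, so this exact formula and the membership threshold are assumptions for illustration only.

```python
# Undirected graph as adjacency sets.
graph = {
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b", "d"},
    "d": {"a", "c", "e"},
    "e": {"d"},
}

def dependency(u: str, v: str) -> float:
    """Asymmetric similarity: the share of u's closed neighborhood covered
    by v's. In general dependency(u, v) != dependency(v, u)."""
    nu = graph[u] | {u}
    nv = graph[v] | {v}
    return len(nu & nv) / len(nu)

def zone(seed: str, tau: float = 0.6) -> set[str]:
    """Zone of a seed node: nodes sufficiently dependent on the seed."""
    return {v for v in graph if v != seed and dependency(v, seed) >= tau} | {seed}

print(dependency("e", "d"), dependency("d", "e"))  # 1.0 vs 0.5: asymmetric
print(sorted(zone("a")), sorted(zone("c")))        # zones overlap densely
```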


2020 ◽  
Vol 34 (04) ◽  
pp. 6837-6844
Author(s):  
Xiaojin Zhang ◽  
Honglei Zhuang ◽  
Shengyu Zhang ◽  
Yuan Zhou

We study a variant of the thresholding bandit problem (TBP) in the context of outlier detection, where the objective is to identify the outliers whose rewards are above a threshold. Distinct from the traditional TBP, the threshold is defined as a function of the rewards of all the arms, which is motivated by the criterion for identifying outliers. The learner needs to explore the rewards of the arms as well as the threshold. We refer to this problem as "double exploration for outlier detection". We construct an adaptively updated confidence interval for the threshold, based on the estimated value of the threshold in the previous rounds. Furthermore, by automatically trading off exploring the individual arms and exploring the outlier threshold, we provide an efficient algorithm in terms of the sample complexity. Experimental results on both synthetic datasets and real-world datasets demonstrate the efficiency of our algorithm.
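A simplified sketch of the double-exploration loop: arms are sampled, the empirical means yield both per-arm confidence intervals and an estimated threshold (here, the mean of the arm means plus k standard deviations, a common outlier criterion assumed for illustration), and arms are flagged once their intervals clear the threshold. The round-robin sampling below deliberately omits the paper's adaptive trade-off.

```python
import math
import random

def double_exploration(pull, n_arms: int, k: float = 2.0,
                       budget: int = 20_000, delta: float = 0.05):
    """Flag arms whose mean reward is confidently above mean + k*std
    of all empirical arm means (the data-dependent threshold)."""
    counts = [0] * n_arms
    sums = [0.0] * n_arms
    for t in range(budget):
        arm = t % n_arms                   # round-robin sampling for simplicity
        sums[arm] += pull(arm)
        counts[arm] += 1
    means = [s / c for s, c in zip(sums, counts)]
    mu = sum(means) / n_arms
    sigma = math.sqrt(sum((m - mu) ** 2 for m in means) / n_arms)
    threshold = mu + k * sigma             # depends on the rewards of ALL arms
    radius = [math.sqrt(math.log(2 * n_arms / delta) / (2 * c)) for c in counts]
    return [i for i in range(n_arms) if means[i] - radius[i] > threshold], threshold

# Bernoulli arms; arm 5 is an outlier with a much higher mean.
probs = [0.30, 0.32, 0.28, 0.31, 0.29, 0.90]
rng = random.Random(1)
outliers, thr = double_exploration(lambda a: float(rng.random() < probs[a]), len(probs))
print(outliers, round(thr, 3))
```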


2020 ◽  
Vol 7 (10) ◽  
pp. 200766
Author(s):  
Bryony McKean ◽  
Jonathan C. Flavell ◽  
Harriet Over ◽  
Steven P. Tipper

Perceptual fluency and response inhibition are well-established techniques for unobtrusively manipulating preference: objects are devalued following association with disfluency or inhibition. These approaches to preference change are extensively studied individually, but there is less research examining the impact of combining the two techniques in a single intervention. In short (3 min) game-like tasks, we examine the preference and memory effects of perceptual fluency and inhibition individually, and then the cumulative effects of combining the two techniques. The first experiment confirmed that perceptual fluency and inhibition techniques influence immediate preference judgements but, somewhat surprisingly, combining these techniques did not lead to greater effects than either technique alone. The second experiment replicated the first, but with changes that much more closely imitate a real-world application: preference was measured after 20 min of unrelated intervening tasks, the retrieval context was modified via a room change, and generalization was tested from computer images of objects to real-world versions of those objects. Here, the individual effects of perceptual fluency and inhibition were no longer detected, whereas combining these techniques resulted in preference change. These results demonstrate the potential of short video games as a means of influencing behaviour, such as food choices to improve health and well-being.


2001 ◽  
Vol 21 (4) ◽  
pp. 399-413 ◽  
Author(s):  
Bart W. Wiegmans ◽  
Peter Nijkamp ◽  
Enno Masurel

2018 ◽  
Vol 37 (9/10) ◽  
pp. 711-720 ◽  
Author(s):  
Naghi Radi Afsouran ◽  
Morteza Charkhabi ◽  
Seyed Ali Siadat ◽  
Reza Hoveida ◽  
Hamid Reza Oreyzi ◽  
...  

Purpose: The purpose of this paper is to introduce case-method teaching (CMT) and its advantages and disadvantages for the process of organizational training within organizations, and to compare it with current training methods.

Design/methodology/approach: The authors applied a systematic literature review to define, identify, and compare CMT with current methods.

Findings: In CMT, participants get involved with real-world challenges from an action perspective instead of analyzing them from a distance. Also, the different reactions of participants to the same challenge help instructors identify the individual differences of participants toward the challenge. Although CMT is not yet a popular organizational training method, its advantages may encourage organizational instructors to apply it more widely. Improving long-term memory, enhancing the quality of decision making, and revealing individual differences are among the advantages of CMT.

Research limitations/implications: A lack of sufficient empirical research and the high cost of conducting this method may prevent practitioners from applying it.

Originality/value: The review suggests that CMT is able to bring dilemmas from the real world into training settings. It also helps organizations to identify individual reactions before decisions are made.


2019 ◽  
Vol 2019 ◽  
pp. 1-11 ◽  
Author(s):  
John Kennedy ◽  
Lara Flanagan ◽  
Luke Dowling ◽  
G. J. Bennett ◽  
Henry Rice ◽  
...  

Advancements in 3D print technology now allow the printing of structured acoustic absorbent materials at the appropriate microscopic scale and sample sizes. The repeatability of the fundamental cell unit of these metamaterials provides a pathway for the development of viable macro models to simulate built-up structures based on detailed models of the individual cell units; however, verification of such models on actual manufactured structures presents a challenge. In this paper, a design concept for an acoustic benchmark metamaterial consisting of an interlinked network of resonant chambers is considered. The form chosen is periodic with cubes incorporating spherical internal cavities connected through cylindrical openings on each face of the cube. This design is amenable to both numerical modelling and manufacture through additive techniques whilst yielding interesting acoustic behaviour. The paper reports on the design, manufacture, modelling, and experimental validation of these benchmark structures. The behaviour of the acoustic metamaterial manufactured through three different polymer-based printing technologies is investigated with reference to the numerical models and a metal powder-based print technology. At the scale of this microstructure, it can be seen that deviations in surface roughness and dimensional fidelity have a comparable impact on the experimentally measured values of the absorption coefficient.
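Each spherical cavity with its cylindrical openings acts, to first order, like a Helmholtz resonator, so a rough estimate of the absorption peak follows from the classic formula f = (c / 2π) · sqrt(A / (V · L_eff)). The dimensions below are illustrative, not those of the benchmark samples, and the end correction uses the usual flanged-opening approximation.

```python
import math

C_AIR = 343.0  # speed of sound in air at ~20 degC, m/s

def helmholtz_frequency(neck_radius: float, neck_length: float,
                        cavity_volume: float) -> float:
    """Resonance frequency of a Helmholtz resonator.

    f = (c / 2*pi) * sqrt(A / (V * L_eff)), with an end correction of
    roughly 0.85 * radius per flanged neck opening.
    """
    area = math.pi * neck_radius ** 2
    l_eff = neck_length + 2 * 0.85 * neck_radius
    return (C_AIR / (2 * math.pi)) * math.sqrt(area / (cavity_volume * l_eff))

# Illustrative dimensions: 1 mm neck radius, 2 mm neck, 4 mm cavity radius.
cavity_v = (4.0 / 3.0) * math.pi * (4e-3) ** 3
print(f"{helmholtz_frequency(1e-3, 2e-3, cavity_v):.0f} Hz")  # ~3 kHz
```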

