Traceable Air Baggage Handling System Based on RFID Tags in the Airport

2008, Vol 3 (1), pp. 106-115
Author(s): Ting Zhang, Yuanxin Ouyang, Yang He

RFID is not only a feasible, novel, and cost-effective candidate for everyday object identification; it is also considered a significant tool for providing traceable visibility across the different stages of the aviation supply chain. In air baggage handling, RFID tags are used to enhance baggage tracking, dispatching, and conveyance, thereby improving management efficiency and user satisfaction. We survey related work and introduce the IATA RP1740C protocol, the standard used to recognize baggage tags. A distributed, traceable aviation baggage application is designed on top of the RFID network. We describe an RFID-based baggage tracking experiment at Beijing Capital International Airport (BCIA), in which the tags are sealed inside the printed baggage labels and the RFID readers are fixed at selected positions of the baggage handling system (BHS) in Terminal 2. We measure the recognition rate and monitor each bag's real-time status on screen. Through analysis of two months of measurements, we highlight the advantages of adopting RFID tags in this noisy BHS environment. The economic benefits achievable through large-scale deployment of RFID in the baggage handling system are also outlined.
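As a rough illustration of the tracking logic such a distributed reader network implies, the Python sketch below aggregates read events from fixed readers into a last-seen position per bag and computes a read (recognition) rate. The class names, fields, and checkpoint identifiers are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' implementation): aggregating read events from
# fixed RFID readers in a baggage handling system (BHS) to track each bag's
# last-seen checkpoint and estimate the overall read (recognition) rate.
from dataclasses import dataclass
from datetime import datetime
from typing import Dict, List, Optional


@dataclass
class ReadEvent:
    tag_id: str          # tag identity encoded per the IATA RP1740C baggage-tag standard
    reader_id: str       # fixed reader position in the BHS (e.g. "T2-sorter-03", hypothetical)
    timestamp: datetime


class BaggageTracker:
    def __init__(self, expected_tags: List[str]):
        self.expected = set(expected_tags)         # bags known from the sortation plan
        self.last_seen: Dict[str, ReadEvent] = {}  # tag_id -> most recent read

    def ingest(self, event: ReadEvent) -> None:
        # Keep only the newest read per tag so position queries reflect the latest checkpoint.
        prev = self.last_seen.get(event.tag_id)
        if prev is None or event.timestamp > prev.timestamp:
            self.last_seen[event.tag_id] = event

    def position_of(self, tag_id: str) -> Optional[str]:
        event = self.last_seen.get(tag_id)
        return event.reader_id if event else None

    def recognition_rate(self) -> float:
        """Fraction of expected bags read at least once by any fixed reader."""
        if not self.expected:
            return 0.0
        seen = self.expected & self.last_seen.keys()
        return len(seen) / len(self.expected)
```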

Energies, 2020, Vol 13 (22), pp. 6104
Author(s): Bernardo Calabrese, Ramiro Velázquez, Carolina Del-Valle-Soto, Roberto de Fazio, Nicola Ivan Giannoccaro, ...

This paper introduces a novel low-cost, solar-powered wearable assistive technology (AT) device that provides continuous, real-time object recognition to ease object finding for visually impaired (VI) people in daily life. The system consists of three major components: a miniature low-cost camera, a system on module (SoM) computing unit, and an ultrasonic sensor. The first is worn on the user's eyeglasses and acquires real-time video of the nearby space. The second is worn as a belt and runs deep learning-based methods and spatial algorithms that process the camera video to perform object detection and recognition. The third assists in positioning the detected objects in the surrounding space. The device provides audible descriptive sentences as feedback to the user, covering the recognized objects and their position relative to the user's gaze. Following a power consumption analysis, a wearable solar harvesting system integrated with the AT device was designed and tested to extend energy autonomy across the different operating modes and scenarios. Experimental results obtained with the developed low-cost AT device demonstrate accurate and reliable real-time object identification, with an 86% correct recognition rate and a 215 ms average image-processing interval in the high-speed SoM operating mode. The proposed system recognizes the 91 object classes of the Microsoft Common Objects in Context (COCO) dataset plus several custom objects and human faces. In addition, a simple and scalable methodology for assembling image datasets and training Convolutional Neural Networks (CNNs) is introduced to add objects to the system and expand its repertory. It is also shown that comprehensive training with 100 images per target object achieves an 89% recognition rate, while fast training with only 12 images still reaches an acceptable 55%.
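For illustration, the sketch below shows how per-frame detections and an ultrasonic distance reading could be combined into the kind of audible descriptive sentence the paper mentions. The `Detection`, `direction`, and `describe` names, the thresholds, and the left/right mapping are assumptions, not the authors' code; the actual SoM inference and sensor drivers are not reproduced here.

```python
# Illustrative sketch: turning COCO-style detections plus an ultrasonic distance
# reading into a spoken-style description referenced to the user's gaze.
from dataclasses import dataclass
from typing import List


@dataclass
class Detection:
    label: str        # one of the 91 COCO classes or a custom-trained object
    confidence: float
    x_center: float   # normalized horizontal position in the frame, 0.0 (left) to 1.0 (right)


def direction(x_center: float) -> str:
    """Map the detection's horizontal position to a phrase relative to the user's gaze."""
    if x_center < 0.33:
        return "to your left"
    if x_center > 0.66:
        return "to your right"
    return "in front of you"


def describe(detections: List[Detection], distance_cm: float,
             min_confidence: float = 0.5) -> str:
    """Build one descriptive sentence for the confidently detected objects."""
    kept = [d for d in detections if d.confidence >= min_confidence]
    if not kept:
        return "No known objects detected."
    parts = [f"a {d.label} {direction(d.x_center)}" for d in kept]
    return f"I see {', '.join(parts)}, about {distance_cm / 100:.1f} meters away."


# Example: two detections and a 180 cm ultrasonic reading.
print(describe([Detection("chair", 0.91, 0.2), Detection("cup", 0.78, 0.5)], 180.0))
```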


With online shopping and logistics companies on the rise, a single accident can inflict heavy losses on the supply chain: it not only disrupts the flow of goods but can also cause injury to life and damage to property. Such accidents occur primarily when drivers are distracted or drowsy, so monitoring this behavior is paramount, especially for heavy-duty vehicles. It is therefore natural for logistics companies to invest in securing their goods and ensuring their safe transportation. The objective of this paper is to provide a novel, cost-effective, and efficient solution to these problems: the driver's facial features are analysed in real time to monitor performance, event-triggered data are stored in the cloud, and cloud services push mobile alerts to an application whenever the driver is drowsy or distracted.
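The abstract does not specify the detection algorithm; as one common approach to this kind of facial-feature monitoring, the sketch below computes the eye aspect ratio (EAR) over standard eye landmarks and raises an event after a sustained run of closed-eye frames, with `publish_alert` standing in for the cloud/mobile-notification path. All names and thresholds are assumptions.

```python
# Illustrative sketch only: EAR-based drowsiness detection with an event-triggered alert.
import math
from typing import List, Tuple

Point = Tuple[float, float]


def euclidean(p: Point, q: Point) -> float:
    return math.hypot(p[0] - q[0], p[1] - q[1])


def eye_aspect_ratio(eye: List[Point]) -> float:
    """EAR over the 6 standard eye landmarks; small values indicate a closed eye."""
    vertical = euclidean(eye[1], eye[5]) + euclidean(eye[2], eye[4])
    horizontal = euclidean(eye[0], eye[3])
    return vertical / (2.0 * horizontal)


def publish_alert(message: str) -> None:
    # Placeholder: push the event-triggered record to the cloud and notify the mobile app.
    print("ALERT:", message)


class DrowsinessMonitor:
    def __init__(self, ear_threshold: float = 0.25, closed_frames_limit: int = 20):
        self.ear_threshold = ear_threshold          # assumed closed-eye EAR cutoff
        self.closed_frames_limit = closed_frames_limit
        self.closed_frames = 0

    def update(self, left_eye: List[Point], right_eye: List[Point]) -> None:
        """Call once per video frame with the extracted eye landmarks."""
        ear = (eye_aspect_ratio(left_eye) + eye_aspect_ratio(right_eye)) / 2.0
        if ear < self.ear_threshold:
            self.closed_frames += 1
            if self.closed_frames == self.closed_frames_limit:
                publish_alert("Driver appears drowsy: eyes closed for a prolonged period.")
        else:
            self.closed_frames = 0
```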


Author(s): Muataz Hazza F. Al Hazza, Any Nabila Abu Bakar, Erry Yulian T. Adesta, Assem Hatem Taha
Keyword(s):

Author(s): Paul Oehlmann, Paul Osswald, Juan Camilo Blanco, Martin Friedrich, Dominik Rietzel, ...

With industries pushing towards digitalized production, adaptation to expectations and the increasing requirements of modern applications have brought additive manufacturing (AM) to the forefront of Industry 4.0. In fact, AM is a main accelerator for digital production, offering possibilities in structural design such as topology optimization, production flexibility, customization, and product development, to name a few. Fused Filament Fabrication (FFF) is a widespread and practical tool for rapid prototyping that also demonstrates the importance of AM technologies through its accessibility to the general public via cost-effective desktop solutions. The increasing integration of systems in an intelligent production environment also enables the generation of large-scale data for process monitoring and process control. Deep learning, a form of artificial intelligence (AI) and more specifically a machine learning (ML) method, is well suited to handling such big data. This study uses a trained artificial neural network (ANN) model as a digital shadow to predict the force within the nozzle of an FFF printer, using filament speed and nozzle temperature as input data. After the ANN model was validated against data from a theoretical model, it was applied to predict the behavior from real-time printer data. For this purpose, an FFF printer was equipped with sensors that collect real-time data during the printing process. The ANN model reflected the melting and flow kinematics predicted by currently available models across various printing speeds. The model allows a deeper understanding of the influencing process parameters, ultimately enabling determination of the optimum combination of process speed and print quality.
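A minimal sketch of such a digital-shadow setup is given below, assuming a small feedforward network (here scikit-learn's MLPRegressor) trained on synthetic data that stands in for the theoretical melt/flow model; the paper's actual architecture, training data, and force units are not reproduced.

```python
# Minimal sketch (assumed architecture, synthetic data): a small ANN mapping
# filament speed and nozzle temperature to a predicted nozzle force, later fed
# real-time printer sensor readings instead of values from the theoretical model.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic training set standing in for the theoretical model:
# inputs are filament speed (mm/s) and nozzle temperature (degrees C).
speed = rng.uniform(1.0, 8.0, size=500)
temperature = rng.uniform(190.0, 250.0, size=500)
X = np.column_stack([speed, temperature])
# Illustrative trend only: force rises with speed and falls with temperature.
force = 2.5 * speed - 0.05 * (temperature - 190.0) + rng.normal(0.0, 0.2, size=500)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0),
)
model.fit(X, force)

# At run time the same model would receive live sensor readings from the printer.
live_sample = np.array([[4.0, 215.0]])  # filament speed, nozzle temperature
print("Predicted nozzle force (arbitrary units):", model.predict(live_sample)[0])
```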


2021, Vol 13 (3), pp. 1081
Author(s): Yoon Kyung Lee

Technologies that are ready to use and adaptable in real time to customers' individual needs are influencing the supply chain of the future. This study proposes a supply chain framework for an innovative and sustainable real-time fashion system (RTFS) connecting enterprises, designers, and consumers in 3D clothing production systems, using information and communication technology, artificial intelligence (AI), and virtual environments. In particular, the RTFS targets customers actively involved in product purchasing, personalising, co-designing, and manufacturing planning. The fashion industry is moving towards 3D services as a service model, owing to the automation and democratisation of product customisation and personalisation processes. Furthermore, AI offers referral services to prosumers and/or customers and companies, proposing individual designs with suitable styles and accurate measurements based on new 3D computer-aided design and AI-based product design technologies for fashion and design companies and customers. Consequently, 3D fashion products in the RTFS supply chain are entirely digital: they save time and money through virtual sampling and tracking capabilities, and support secure, trusted, personalised service delivery.
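As a purely illustrative sketch of the digital, personalised order flow such a framework implies, the hypothetical data model below links a customer profile, an AI-proposed 3D garment design, and a traceable status history; none of the names or stages reflect the study's actual system.

```python
# Hypothetical RTFS order model: a fully digital garment specification that can
# be traced from co-design through virtual sampling to manufacturing planning.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class CustomerProfile:
    customer_id: str
    measurements_cm: Dict[str, float]  # e.g. {"bust": 88.0, "waist": 70.0}
    style_preferences: List[str]       # tags consumed by the AI referral step


@dataclass
class DigitalGarment:
    design_id: str
    base_style: str                    # proposed by the AI / 3D CAD step
    pattern_file: str                  # path to the 3D CAD asset (illustrative)
    status: str = "co-design"          # co-design -> virtual sampling -> production planning


@dataclass
class RTFSOrder:
    order_id: str
    customer: CustomerProfile
    garment: DigitalGarment
    history: List[str] = field(default_factory=list)

    def advance(self, new_status: str) -> None:
        """Record each supply-chain stage so the digital product stays traceable."""
        self.history.append(f"{self.garment.status} -> {new_status}")
        self.garment.status = new_status
```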

