Battery-free Internet-of-Things devices equipped with energy harvesting hold the promise of extended operational lifetime, reduced maintenance costs, and lower environmental impact. Despite this clear potential, it remains complex to develop applications that deliver sustainable operation in the face of variable energy availability and dynamic energy demands. This article aims to reduce this complexity by introducing AsTAR, an energy-aware task scheduler that automatically adapts task execution rates to match available environmental energy. AsTAR enables the developer to prioritize tasks based upon their importance, energy consumption, or a weighted combination thereof. In contrast to prior approaches, AsTAR is autonomous and self-adaptive, requiring no
modeling of the environment or hardware platforms. We evaluate AsTAR based on its capability to efficiently deliver sustainable operation for multiple tasks on heterogeneous platforms under dynamic environmental conditions. Our evaluation shows that (1) compared to conventional approaches, AsTAR guarantees sustainable operation by maintaining a user-defined optimum level of charge; (2) AsTAR reacts quickly to environmental and platform changes, and achieves efficiency by allocating all surplus resources according to the developer-specified task priorities; and (3) these benefits are achieved with minimal performance overhead in terms of memory, computation, and energy.
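The core idea of matching task rates to harvested energy can be sketched as a simple feedback rule that steers the battery toward a target level of charge. This is a hypothetical illustration of the general approach, not AsTAR's actual scheduling algorithm; the function name, gain, and bounds are assumptions.

```python
# Hypothetical sketch of energy-aware task-rate adaptation toward a
# target state of charge; not AsTAR's actual controller.

def adapt_rate(rate_hz, charge, target_charge, gain=0.1,
               min_rate=0.1, max_rate=100.0):
    """Scale a task's execution rate toward the target charge level.

    Charge above target -> surplus energy -> run faster;
    charge below target -> deficit -> slow down.
    """
    error = (charge - target_charge) / target_charge
    new_rate = rate_hz * (1.0 + gain * error)
    # Clamp to the task's allowed rate range.
    return max(min_rate, min(max_rate, new_rate))
```

Under this rule, a charge level above the optimum raises the rate proportionally, and a deficit lowers it, so the system settles near the user-defined optimum without a model of the harvester or platform.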
Realizing a large-scale quantum computer requires hardware platforms that can simultaneously achieve universality, scalability, and fault tolerance. As a viable pathway to meeting these requirements, quantum computation based on continuous-variable optical systems has recently gained increasing attention due to its unique advantages. This review introduces several topics of recent experimental and theoretical progress in optical continuous-variable quantum computation that we believe are promising. In particular, we focus on scaling-up technologies enabled by time multiplexing, bandwidth broadening, and integrated optics, as well as hardware-efficient and robust bosonic quantum error correction schemes.
This research focuses on a solution to assist elderly and limited-mobility people. It aims to improve the autonomy and, consequently, the quality of life of this target audience by automating daily tasks conducted at home, such as turning on the lights and operating electronic devices. However, it is important to consider the costs and quality attributes (e.g., usability) involved in designing solutions that automate a specific environment, which may include hardware platforms and physical adaptations. In this context, this chapter presents the discovery and elicitation of software requirements for a home automation app that considers the real needs of elderly and limited-mobility people. Additionally, we specify the requirements using the Unified Modeling Language (UML) to improve completeness, along with graphical user interface (GUI) prototypes. Finally, we present a mobile app prototype built on the Android and Arduino platforms to illustrate a usage scenario of the solution.
Thermal imaging has many applications, all of which leverage the heat map constructed from this type of imaging. It can be used in Internet of Things (IoT) applications to detect features of the surroundings. In such cases, Deep Neural Networks (DNNs) can carry out many visual analysis tasks, giving the system the capacity to make decisions. However, due to their huge computational cost, such networks should exploit custom hardware platforms to accelerate inference and reduce the overall energy consumption of the system. In this work, an energy-adaptive system is proposed that can intelligently configure itself based on the battery energy level. Besides achieving a maximum speedup of 6.38X, the proposed system reduces energy consumption by 97.81% compared to a conventional general-purpose CPU.
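Configuring a system by battery level typically amounts to mapping charge thresholds to execution profiles. The sketch below is a hypothetical illustration of that pattern; the thresholds and profile names are assumptions, not values from the paper.

```python
# Hypothetical sketch: select an inference configuration by battery level.
# Thresholds (fractions of full charge) and profile names are illustrative.

PROFILES = [
    (0.60, "accelerator_full"),    # plenty of energy: fastest configuration
    (0.30, "accelerator_lowpow"),  # medium energy: reduced-power mode
    (0.00, "cpu_minimal"),         # low energy: minimal CPU fallback
]

def select_profile(battery_level):
    """Return the first profile whose threshold the battery level meets.

    PROFILES is ordered from highest to lowest threshold, so the scan
    picks the most capable configuration the remaining energy allows.
    """
    for threshold, profile in PROFILES:
        if battery_level >= threshold:
            return profile
    return PROFILES[-1][1]
```

A real system would also add hysteresis around each threshold to avoid oscillating between profiles as the battery level fluctuates.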
In this paper, we tackle the problem of deploying face recognition (FR) solutions on heterogeneous Internet of Things (IoT) platforms. The main challenges are the optimal deployment of deep neural networks (DNNs) across the wide variety of IoT devices (e.g., robots, tablets, smartphones, etc.), the secure management of biometric data while respecting users' privacy, and the design of appropriate user interaction with facial verification mechanisms for all kinds of users. We analyze different approaches to solving these challenges and propose a knowledge-driven methodology for the automated deployment of DNN-based FR solutions on IoT devices, with secure management of biometric data and real-time feedback for improved interaction. We provide practical examples and experimental results with state-of-the-art DNNs for FR on Intel and NVIDIA hardware platforms used as IoT devices.
With the advancement of computer performance, deep learning plays a vital role on many hardware platforms. Indoor scene segmentation is a challenging deep learning task because indoor objects tend to occlude each other, and the dense layout increases the difficulty of segmentation. Nevertheless, current networks pursue accuracy improvements at the cost of speed and increased memory usage. To solve this problem and achieve a compromise between accuracy, speed, and model size, this paper proposes the Multichannel Fusion Network (MFNet) for indoor scene segmentation, which mainly consists of a Dense Residual Module (DRM) and a Multi-scale Feature Extraction Module (MFEM). MFEM uses depthwise separable convolution to cut the number of parameters and combines convolution kernels of different sizes with different dilation rates to achieve an optimal receptive field; DRM fuses feature maps at several resolution levels to refine segmentation details. Experimental results on the NYU V2 dataset show that the proposed method achieves very competitive results compared with other advanced algorithms, with a segmentation speed of 38.47 fps, nearly twice that of DeepLab v3+, while using only 1/5 of its parameters. Its segmentation results are close to those of advanced segmentation networks, making it well suited for real-time image processing.
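The parameter savings that MFEM obtains from depthwise separable convolution follow from a standard counting argument, sketched below. The formulas are the textbook ones; the layer sizes in the test are illustrative, not taken from MFNet.

```python
# Parameter counts (ignoring biases) for a standard convolution versus
# its depthwise separable factorization, which splits spatial filtering
# from channel mixing.

def standard_conv_params(c_in, c_out, k):
    """k x k convolution with c_in input and c_out output channels."""
    return c_in * c_out * k * k

def separable_conv_params(c_in, c_out, k):
    depthwise = c_in * k * k   # one k x k filter per input channel
    pointwise = c_in * c_out   # 1 x 1 convolution mixing channels
    return depthwise + pointwise
```

For example, with 128 input and output channels and 3 x 3 kernels, the standard convolution needs 147,456 weights while the separable version needs 17,536, roughly an 8x reduction.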
Due to the COVID-19 pandemic, there is an increasing demand for mobile robots to substitute for humans in disinfection tasks. New generations of disinfection robots could be developed to navigate high-risk, high-touch areas. Public spaces such as airports, schools, malls, hospitals, workplaces, and factories could benefit from robotic disinfection in terms of task accuracy, cost, and execution time. The aim of this work is to integrate and analyse the performance of the Particle Swarm Optimization (PSO) algorithm as a global path planner, coupled with the Dynamic Window Approach (DWA) for reactive collision avoidance, using a ROS-based software prototyping tool. This paper introduces our solution: a SLAM (Simultaneous Localization and Mapping) and optimal path planning-based approach for performing autonomous indoor disinfection work. This ROS-based solution can be easily transferred to different hardware platforms, replacing humans in disinfection work across different real contaminated environments.
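At its core, the PSO global planner relies on the standard particle swarm update: each particle's velocity is pulled toward its personal best and the swarm's global best. The minimal sketch below shows that core update on a toy cost function; it is a generic illustration, not the paper's ROS implementation, and all parameter values are assumptions.

```python
# Minimal particle swarm optimization core (generic sketch, not the
# paper's planner). Here it minimizes a user-supplied cost function;
# a path planner would instead score candidate waypoints against the map.
import random

def pso(cost, dim=2, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    # Random initial positions and zero velocities.
    pos = [[random.uniform(-5, 5) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                # personal bests
    gbest = min(pbest, key=cost)[:]            # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Inertia + cognitive pull + social pull.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
                if cost(pbest[i]) < cost(gbest):
                    gbest = pbest[i][:]
    return gbest
```

In the planning setting, `cost` would penalize path length and proximity to obstacles, and the returned optimum would feed waypoints to the DWA local planner for reactive collision avoidance.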
Digital twins offer a unique opportunity to design, test, deploy, monitor, and control real-world robotic processes. In this paper we present a novel, modular digital twinning framework developed for the investigation of safety within collaborative robotic manufacturing processes. The modular architecture supports scalable representations of user-defined cyber-physical environments, and tools for safety analysis and control. This versatile research tool facilitates the creation of mixed environments of Digital Models, Digital Shadows, and Digital Twins, whilst standardising communication and physical system representation across different hardware platforms. The framework is demonstrated as applied to an industrial case study focused on the safety assurance of a collaborative robotic manufacturing process. We describe the creation of a digital twin scenario, consisting of individual digital twins of entities in the manufacturing case study, and the application of a synthesised safety controller from our wider work. We show how the framework is able to provide adequate evidence to virtually assess safety claims made against the safety controller using a supporting validation module and testing strategy. The implementation, evidence, and safety investigation are presented and discussed, raising exciting possibilities for the use of digital twins in robotic safety assurance.
In recent years, Keyword Spotting (KWS) has become a crucial human–machine interface for mobile devices, allowing users to interact more naturally with their gadgets by leveraging their own voice. Due to privacy, latency, and energy requirements, the execution of KWS tasks on the embedded device itself, instead of in the cloud, has attracted significant attention from the research community. However, the constraints associated with embedded systems, including limited energy, memory, and computational capacity, represent a real challenge for the embedded deployment of such interfaces. In this article, we explore and guide the reader through the design of KWS systems. To support this overview, we extensively survey the different approaches taken by the recent state-of-the-art (SotA) at the algorithmic, architectural, and circuit levels to enable KWS tasks in edge devices. A quantitative and qualitative comparison between relevant SotA hardware platforms is carried out, highlighting current design trends and pointing out future research directions in the development of this technology.
Quantization of neural networks has been one of the most popular techniques to compress models for embedded (IoT) hardware platforms with highly constrained latency, storage, memory-bandwidth, and energy specifications. Limiting the number of bits per weight and activation has been the main focus in the literature. To avoid major degradation of accuracy, common quantization methods introduce additional scale factors to adapt the quantized values to the diverse data ranges present in full-precision (floating-point) neural networks. These scales are usually kept in high precision, requiring the target compute engine to support a few high-precision multiplications, which is not desirable due to the larger hardware cost. Little effort has yet been invested in trying to avoid high-precision multipliers altogether, especially in combination with 4-bit weights. This work proposes a new quantization scheme, based on power-of-two quantization scales, that performs on par with uniform per-channel quantization using full-precision 32-bit quantization scales, while using only 4-bit weights. This is done through the addition of a low-precision lookup table that translates stored 4-bit weights into nonuniformly distributed 8-bit weights for internal computation. All our quantized ImageNet CNNs achieved or even exceeded the Top-1 accuracy of their full-precision counterparts, with ResNet18 exceeding its full-precision model by 0.35%. Our MobileNetV2 model achieved state-of-the-art performance with only a slight drop in accuracy of 0.51%.
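The mechanism described above can be illustrated with a short sketch: stored 4-bit codes index a 16-entry lookup table of nonuniform 8-bit weights, and because the scale is a power of two, rescaling is a bit shift rather than a high-precision multiply. The table values below are illustrative, not the paper's learned entries.

```python
# Sketch of LUT-based dequantization with a power-of-two scale.
# 16-entry table mapping 4-bit codes -> nonuniform signed 8-bit weights.
LUT = [-128, -96, -64, -48, -32, -16, -8, -4,
       0, 4, 8, 16, 32, 48, 64, 96]

def dequantize(codes, shift):
    """Translate stored 4-bit codes through the LUT, then apply the
    power-of-two scale 2**-shift as an arithmetic right shift.
    (Python's >> on negative ints is an arithmetic shift, matching
    typical hardware behavior.)"""
    return [LUT[c] >> shift for c in codes]
```

The point of the design is that the compute engine only ever needs the small table and a shifter, so no high-precision multiplier is required on the datapath.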