Recently Published Documents


TOTAL DOCUMENTS: 192484 (FIVE YEARS: 82394)
H-INDEX: 267 (FIVE YEARS: 43)

2022 · Vol 22 (1) · pp. 1-22
Author(s): Juanan Pereira, Óscar Díaz

Capstone projects usually represent the most significant academic endeavor in which students have been involved. Time management tends to be one of the hurdles, and university students are prone to procrastinatory behavior; inexperience and procrastination combine to make students miss deadlines. Supervisors strive to help, yet heavy workloads frequently prevent tutors from continuous involvement. This article looks into the extent to which conversational agents (a.k.a. chatbots) can tackle procrastination in single-student capstone projects. Specifically, the chatbot enablers put in play include (1) alerts, (2) advice, (3) automatic rescheduling, (4) motivational messages, and (5) references to previous capstone projects. Informed by Cognitive Behavioural Theory, these enablers are framed within the three phases involved in self-regulation misalignment: pre-actional, actional, and post-actional. To motivate this research, we first analyzed 77 capstone-project reports. We found that students’ Gantt charts (1) fail to acknowledge review meetings (70%) and milestones (100%) and (2) deviate from the initially planned effort (16.28%). On these grounds, we developed GanttBot, a Telegram chatbot that is configured from the student’s Gantt diagram. GanttBot reminds students of approaching milestones, informs tutors when intervention might be required, and learns from previous projects about common pitfalls, advising students accordingly. For evaluation purposes, course 17/18 acts as the control group (N = 28) while course 18/19 acts as the treatment group (N = 25). Using “overdue days” as the proxy for procrastination, results indicate that course 17/18 accounted for an average of 19 days of delay (SD = 5), whereas the average drops to 10 days for the intervention group in course 18/19 (SD = 4). GanttBot is available for public use as a Telegram chatbot.
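To make the alert enabler concrete, here is a minimal sketch (not the authors' code; the task names, dates, and three-day alert window are illustrative assumptions) of how a reminder pass over a parsed Gantt chart might work before messages are pushed through the Telegram API:

```python
# Minimal sketch of a Gantt-driven reminder check (illustrative only).
from datetime import date, timedelta

# Hypothetical entries parsed from a student's Gantt chart.
tasks = [
    {"name": "Literature review", "due": date(2022, 3, 1), "done": False},
    {"name": "Prototype",         "due": date(2022, 4, 15), "done": False},
]

ALERT_WINDOW = timedelta(days=3)  # assumed: alert when a milestone is this close

def pending_alerts(tasks, today):
    """Return messages for tasks that are overdue or nearly due."""
    alerts = []
    for t in tasks:
        if t["done"]:
            continue
        if t["due"] < today:
            alerts.append(f"'{t['name']}' is {(today - t['due']).days} day(s) overdue.")
        elif t["due"] - today <= ALERT_WINDOW:
            alerts.append(f"'{t['name']}' is due on {t['due']:%d %b}.")
    return alerts

for msg in pending_alerts(tasks, date(2022, 2, 27)):
    print(msg)  # a bot like GanttBot would send these via the Telegram API
```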


2022 · Vol 11 (1) · pp. 1-27
Author(s): Frank Kaptein, Bernd Kiefer, Antoine Cully, Oya Celiktutan, Bert Bierman, et al.

Making the transition to long-term interaction with social-robot systems has been identified as one of the main challenges in human-robot interaction. This article identifies four design principles to address this challenge and applies them in a real-world implementation: cloud-based robot control, a modular design, one common knowledge base for all applications, and hybrid artificial intelligence for decision making and reasoning. The control architecture for this robot includes a common Knowledge Base (ontologies), Database, “Hybrid Artificial Brain” (dialogue manager, action selection, and explainable AI), Activities Centre (Timeline, Quiz, Break and Sort, Memory, Tip of the Day, etc.), Embodied Conversational Agent (ECA, i.e., robot and avatar), and Dashboards (for authoring and monitoring the interaction). Further, the ECA is integrated with an expandable set of (mobile) health applications. The resulting system is a Personal Assistant for a healthy Lifestyle (PAL), which supports diabetic children with self-management and educates them on health-related issues (48 children, aged 6–14, recruited via hospitals in the Netherlands and in Italy). It is capable of autonomous interaction “in the wild” for prolonged periods of time without the need for a “Wizard-of-Oz” (up to 6 months online). PAL is an exemplary system that provides personalised, stable and diverse, long-term human-robot interaction.
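As an illustration of the "one common knowledge base" principle, the following is a hedged sketch (our own toy classes, not the PAL codebase; the activity and fact names are invented) of how separate activity modules can share state through a single store:

```python
# Toy sketch: every activity reads and writes the same store, so what a
# child does in one activity is visible to all others (illustrative only).
class KnowledgeBase:
    """Stand-in for a shared, ontology-backed store."""
    def __init__(self):
        self._facts = {}

    def assert_fact(self, key, value):
        self._facts[key] = value

    def query(self, key, default=None):
        return self._facts.get(key, default)

class QuizActivity:
    def __init__(self, kb):
        self.kb = kb
    def run(self):
        # Record an outcome that all other modules can reason over.
        self.kb.assert_fact("quiz/last_topic", "carbohydrates")

class TipOfTheDayActivity:
    def __init__(self, kb):
        self.kb = kb
    def run(self):
        topic = self.kb.query("quiz/last_topic", "general")
        print(f"Tip of the day: more about {topic}.")

kb = KnowledgeBase()
QuizActivity(kb).run()
TipOfTheDayActivity(kb).run()  # -> Tip of the day: more about carbohydrates.
```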


2022 · Vol 11 (1) · pp. 1-27
Author(s): Luis F. C. Figueredo, Rafael De Castro Aguiar, Lipeng Chen, Thomas C. Richards, Samit Chakrabarty, et al.

This work addresses the problem of planning a robot configuration and grasp to position a shared object during forceful human-robot collaboration, such as a puncturing or a cutting task. Particularly, our goal is to find a robot configuration that positions the jointly manipulated object such that the muscular effort of the human, operating on the same object, is minimized while also ensuring the stability of the interaction for the robot. This raises three challenges. First, we predict the human muscular effort given a combined human-robot kinematic configuration and the interaction forces of a task. To do this, we perform task-space to muscle-space mapping for two different musculoskeletal models of the human arm. Second, we predict the human body kinematic configuration given a robot configuration and the resulting object pose in the workspace. To do this, we assume that the human prefers the body configuration that minimizes the muscular effort. Third, we ensure that, under the forces applied by the human, the robot grasp on the object is stable and the robot joint torques are within limits. Addressing these three challenges, we build a planner that, given a forceful task description, can output the robot grasp on an object and the robot configuration to position the shared object in space. We quantitatively analyze the performance of the planner and the validity of our assumptions. We conduct experiments with human subjects to measure their kinematic configurations, muscular activity, and force output during collaborative puncturing and cutting tasks. The results illustrate the effectiveness of our planner in reducing the human muscular load. For instance, for the puncturing task, our planner is able to reduce muscular load by 69.5% compared to a user-based selection of object poses.
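The search the abstract describes can be pictured as follows. This is a schematic sketch under our own simplifying assumptions (scalar stand-ins for joint configurations, placeholder effort and feasibility models, exhaustive candidate enumeration), not the paper's planner:

```python
# Schematic planner loop: minimise predicted human muscular effort
# subject to robot-side feasibility checks (illustrative placeholders).
def plan(candidate_configs, task_forces,
         predict_human_effort, grasp_is_stable, torques_within_limits):
    """Pick the robot configuration with the lowest predicted human effort."""
    best, best_effort = None, float("inf")
    for q in candidate_configs:
        if not grasp_is_stable(q, task_forces):
            continue                      # challenge 3: interaction stability
        if not torques_within_limits(q, task_forces):
            continue                      # robot joint-torque limits
        effort = predict_human_effort(q, task_forces)  # challenges 1 and 2
        if effort < best_effort:
            best, best_effort = q, effort
    return best, best_effort

# Toy usage with scalar "configurations" standing in for joint vectors.
configs = [0.2, 0.5, 0.8]
result = plan(configs, task_forces=10.0,
              predict_human_effort=lambda q, f: f * abs(q - 0.5),
              grasp_is_stable=lambda q, f: True,
              torques_within_limits=lambda q, f: q < 0.7)
print(result)  # (0.5, 0.0): feasible configuration with minimal effort
```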


2022 · Vol 15 (1) · pp. 1-21
Author(s): Chen Wu, Mingyu Wang, Xinyuan Chu, Kun Wang, Lei He

Low-precision data representation is important for reducing storage size and memory accesses in convolutional neural networks (CNNs). Yet existing methods have two major limitations: (1) requiring re-training to maintain accuracy for deep CNNs and (2) needing 16-bit floating-point or 8-bit fixed-point representations for good accuracy. In this article, we propose a low-precision (8-bit) floating-point (LPFP) quantization method for FPGA-based acceleration that overcomes the above limitations. Without any re-training, LPFP finds an optimal 8-bit data representation with negligible top-1/top-5 accuracy loss (within 0.5%/0.3% in our experiments, respectively, and significantly better than existing methods for deep CNNs). Furthermore, we implement one 8-bit LPFP multiplication with one 4-bit multiply-adder and one 3-bit adder, and can therefore implement four 8-bit LPFP multiplications using one DSP48E1 of the Xilinx Kintex-7 family or one DSP48E2 of the Xilinx UltraScale/UltraScale+ family, whereas one DSP can implement only two 8-bit fixed-point multiplications. Experiments on six typical CNNs for inference show that, on average, we improve throughput over existing FPGA accelerators. In particular, for VGG16 and YOLO, compared to six recent FPGA accelerators, we improve average throughput by 3.5× and 27.5× and average throughput per DSP by 4.1× and 5×, respectively.
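To illustrate what an 8-bit floating-point quantizer looks like, here is a toy sketch. The 1-sign/3-exponent/4-mantissa split, the bias, and the rounding policy are our assumptions for illustration (subnormals are ignored), not the representation that LPFP's search actually selects:

```python
# Toy 8-bit float encode/decode in the spirit of LPFP (assumed format).
import math

EXP_BITS, MAN_BITS = 3, 4
BIAS = 2 ** (EXP_BITS - 1) - 1  # = 3

def encode(x):
    """Quantize a Python float to (sign, exp_field, man_field)."""
    if x == 0.0:
        return (0, 0, 0)
    sign = int(x < 0)
    frac, e = math.frexp(abs(x))              # abs(x) = frac * 2**e, frac in [0.5, 1)
    significand, exponent = frac * 2, e - 1   # abs(x) = significand * 2**exponent
    man = round((significand - 1) * 2 ** MAN_BITS)
    if man == 2 ** MAN_BITS:                  # rounding overflowed into next binade
        man, exponent = 0, exponent + 1
    exp_field = max(0, min(2 ** EXP_BITS - 1, exponent + BIAS))  # clamp to range
    return (sign, exp_field, man)

def decode(sign, exp_field, man):
    if (sign, exp_field, man) == (0, 0, 0):
        return 0.0
    value = (1 + man / 2 ** MAN_BITS) * 2.0 ** (exp_field - BIAS)
    return -value if sign else value

x = 0.8125
print(encode(x), decode(*encode(x)))  # exactly representable here: 0.8125
```

With a 4-bit mantissa field, the product of two significands fits a small multiplier and the exponents need only a 3-bit add, which is the intuition behind packing several such multiplications into one DSP slice.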


2022 · Vol 11 (1) · pp. 1-42
Author(s): Ruisen Liu, Manisha Natarajan, Matthew C. Gombolay

As robots become ubiquitous in the workforce, it is essential that human-robot collaboration be both intuitive and adaptive. A robot’s ability to coordinate team activities improves with its ability to infer and reason about the dynamic (i.e., the “learning curve”) and stochastic task performance of its human counterparts. We introduce a novel resource coordination algorithm that enables robots to schedule team activities by (1) actively characterizing the task performance of their human teammates and (2) ensuring the schedule is robust to temporal constraints given this characterization. We first validate our modeling assumptions via a user study. From this user study, we create a data-driven prior distribution over human task performance for our virtual and physical evaluations of human-robot teaming. Second, we show that our methods are scalable and produce high-quality schedules. Third, we conduct a between-subjects experiment (n = 90) to assess the effects on a human-robot team of a robot scheduler actively exploring the humans’ task proficiency. Our results indicate that human-robot working alliance (p < 0.001) and human performance (p = 0.00359) are maximized when the robot dedicates more time to exploring the capabilities of human teammates.
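One hedged reading of "actively characterizing" teammates is to keep a per-person performance estimate and reward exploring uncertain workers. The update rule, prior, and exploration bonus below are illustrative placeholders, not the paper's algorithm:

```python
# Illustrative sketch: estimate each human's task duration and prefer
# assignments that explore poorly characterized workers early.
import statistics

class WorkerModel:
    def __init__(self):
        self.samples = []

    def observe(self, duration):
        self.samples.append(duration)

    def mean(self, prior=10.0):
        return statistics.fmean(self.samples) if self.samples else prior

    def uncertainty(self):
        # fewer observations -> more value in exploring this worker
        return 1.0 / (1 + len(self.samples))

def pick_worker(models, explore_weight=5.0):
    """Lower score = better: expected duration minus an exploration bonus."""
    return min(models, key=lambda w: models[w].mean()
               - explore_weight * models[w].uncertainty())

models = {"alice": WorkerModel(), "bob": WorkerModel()}
models["alice"].observe(8.0)
print(pick_worker(models))  # 'bob': unobserved, so exploration favours him
```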


2022 · Vol 15 (1) · pp. 1-35
Author(s): Vladimir Rybalkin, Jonas Ney, Menbere Kina Tekleyohannes, Norbert Wehn

The Multidimensional Long Short-Term Memory (MD-LSTM) neural network is an extension of the one-dimensional LSTM for data with more than one dimension. MD-LSTM achieves state-of-the-art results in various applications, including handwritten text recognition, medical imaging, and many more. However, its implementation suffers from inherently sequential execution that tremendously slows down both training and inference compared to other neural networks. The main goal of the current research is to accelerate MD-LSTM inference. We advocate that the Field-Programmable Gate Array (FPGA) is an alternative platform for deep learning that can offer a solution when the massive parallelism of GPUs does not provide the performance required by the application. In this article, we present the first hardware architecture for MD-LSTM. We conduct a systematic exploration to analyze the tradeoff between precision and accuracy. We use a challenging dataset for semantic segmentation, namely historical document image binarization from the DIBCO 2017 contest, and the well-known MNIST dataset for handwritten digit recognition. Based on our new architecture, we implement FPGA-based accelerators that outperform an Nvidia GeForce RTX 2080 Ti with respect to throughput by up to 9.9× and an Nvidia Jetson AGX Xavier with respect to energy efficiency by up to 48×. Our accelerators achieve higher throughput, energy efficiency, and resource efficiency than FPGA-based implementations of convolutional neural networks (CNNs) for semantic segmentation tasks. For the handwritten digit recognition task, our FPGA implementations provide higher accuracy and can be considered a solution when accuracy is a priority. Furthermore, they outperform earlier FPGA implementations of one-dimensional LSTMs with respect to throughput, energy efficiency, and resource efficiency.
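For readers unfamiliar with MD-LSTM, a compact NumPy sketch of one 2D cell update is shown below (weight shapes and initialisation are illustrative; biases are omitted). Each pixel fuses the hidden and cell states of its left and top neighbours through one forget gate per dimension, which is exactly the data dependence that makes execution inherently sequential:

```python
# One 2D MD-LSTM cell update (illustrative shapes, biases omitted).
import numpy as np

rng = np.random.default_rng(0)
D_IN, D_H = 4, 8
sigmoid = lambda z: 1 / (1 + np.exp(-z))

# One weight matrix per gate, acting on [x, h_left, h_top] concatenated.
W = {g: rng.standard_normal((D_H, D_IN + 2 * D_H)) * 0.1
     for g in ("i", "f1", "f2", "o", "g")}

def mdlstm_cell(x, h_left, c_left, h_top, c_top):
    z = np.concatenate([x, h_left, h_top])
    i  = sigmoid(W["i"]  @ z)   # input gate
    f1 = sigmoid(W["f1"] @ z)   # forget gate for the left neighbour
    f2 = sigmoid(W["f2"] @ z)   # forget gate for the top neighbour
    o  = sigmoid(W["o"]  @ z)   # output gate
    g  = np.tanh(W["g"]  @ z)   # candidate cell value
    c  = f1 * c_left + f2 * c_top + i * g
    return o * np.tanh(c), c    # (h, c) for this pixel

# Scanning an image makes the bottleneck visible: each pixel must wait
# for its left and top neighbours before it can be computed.
h = c = np.zeros(D_H)
h, c = mdlstm_cell(rng.standard_normal(D_IN), h, c, h, c)
print(h.shape)  # (8,)
```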


2022 · Vol 27 (2) · pp. 1-25
Author(s): Somesh Singh, Tejas Shah, Rupesh Nasre

Betweenness centrality (BC) is a popular centrality measure, based on shortest paths, used to quantify the importance of vertices in networks. It is used in a wide array of applications including social network analysis, community detection, clustering, biological network analysis, and several others. The state-of-the-art Brandes’ algorithm for computing BC has time complexity O(nm) for unweighted graphs and O(nm + n^2 log n) for weighted graphs. Brandes’ algorithm has been successfully parallelized on multicore and manycore platforms. However, the computation of vertex BC continues to be time-consuming for large real-world graphs. Often, in practical applications, it suffices to identify the most important vertices in a network, that is, those having the highest BC values. Such applications demand only the top vertices in the network as per their BC values but not their actual BC values. In such scenarios, not only is computing the BC of all the vertices unnecessary, but the exact BC values need not be computed either. In this work, we attempt to marry controlled approximations with parallelization to estimate the k-highest BC vertices faster, without having to compute the exact BC scores of the vertices. We present a host of techniques to determine the top-k vertices faster, with a small inaccuracy, by computing approximate BC scores of the vertices. Aiding our techniques is a novel vertex-renumbering scheme that makes the graph layout more structured, which results in faster execution of the parallel Brandes’ algorithm on GPU. Our experimental results, on a suite of real-world and synthetic graphs, show that our best performing technique computes the top-k vertices with an average speedup of 2.5× compared to the exact parallel Brandes’ algorithm on GPU, with an error of less than 6%. Our techniques also exhibit high precision and recall, both in excess of 94%.
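For reference, the exact baseline being parallelized and approximated is Brandes' algorithm, sketched below for unweighted graphs. One BFS plus one reverse dependency-accumulation pass per source vertex makes the O(nm) cost visible:

```python
# Brandes' algorithm for unweighted graphs (standard formulation).
from collections import deque

def brandes_bc(adj):
    """adj: dict vertex -> list of neighbours (undirected graph)."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        sigma = {v: 0 for v in adj}; sigma[s] = 1      # shortest-path counts
        dist = {v: -1 for v in adj}; dist[s] = 0
        preds = {v: [] for v in adj}
        order = []
        q = deque([s])
        while q:                                        # BFS from source s
            v = q.popleft(); order.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1; q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]; preds[w].append(v)
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):                       # accumulate dependencies
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc  # halve the values for undirected graphs if desired

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}            # a path graph
print(sorted(brandes_bc(adj).items(), key=lambda kv: -kv[1])[:2])  # top-2
```

The top-k variant the paper targets can stop short of exact scores: approximate per-source contributions are enough to rank vertices, which is where the controlled approximation buys its speedup.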


2022 · Vol 16 (2) · pp. 1-28
Author(s): Liang Zhao, Yuyang Gao, Jieping Ye, Feng Chen, Yanfang Ye, et al.

Forecasting significant societal events such as civil unrest and economic crises is an interesting and challenging problem that requires timeliness, precision, and comprehensiveness. Significant societal events are influenced and indicated jointly by multiple aspects of a society, including its economics, politics, and culture. Traditional forecasting methods based on a single data source find it hard to cover all these aspects comprehensively, thus limiting model performance. Multi-source event forecasting has proven promising but still suffers from several challenges, including (1) geographical hierarchies in multi-source data features, (2) hierarchical missing values, (3) characterization of structured feature sparsity, and (4) difficulty in updating the model online with incomplete multiple sources. This article proposes a novel feature learning model that concurrently addresses all the above challenges. Specifically, given multi-source data from different geographical levels, we design a new forecasting model by characterizing the lower-level features’ dependence on higher-level features. To handle the correlations amidst structured feature sets and deal with missing values among the coupled features, we propose a novel feature learning model based on an Nth-order strong hierarchy and fused-overlapping group Lasso. An efficient algorithm is developed to optimize the model parameters and ensure a global optimum. More importantly, to enable model updates in real time, an online learning algorithm is formulated and active-set techniques are leveraged to resolve the crucial challenge of new patterns of missing features appearing in real time. Extensive experiments on 10 datasets in different domains demonstrate the effectiveness and efficiency of the proposed models.
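To fix intuition, a generic objective of this family can be written as below. This is our schematic reading (the notation and the exact penalty composition are assumptions, not the paper's formulation), where the groups in G may overlap and the strong-hierarchy constraint ties each child geographical level to its parent:

```latex
% Schematic group-Lasso objective with a strong-hierarchy constraint
% (illustrative notation; not the paper's exact model).
\begin{aligned}
\min_{w}\;& \tfrac{1}{2}\,\lVert y - Xw \rVert_2^2
  \;+\; \lambda \sum_{g \in \mathcal{G}} \alpha_g \lVert w_g \rVert_2 \\
\text{s.t.}\;& w_g = 0 \;\Rightarrow\; w_{g'} = 0
  \quad \text{for every child group } g' \text{ of } g,
\end{aligned}
```

Under such a constraint, a lower-level (e.g., city) feature block can enter the model only when its higher-level (e.g., country) block is active, which is one way to read the "lower-level features' dependence on higher-level features" above.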


2022 · Vol 3 (1) · pp. 1-19
Author(s): Feng Lu, Wei Li, Song Lin, Chengwangli Peng, Zhiyong Wang, et al.

Wireless capsule endoscopy is a modern non-invasive Internet of Medical Imaging Things technology that has been increasingly used in gastrointestinal tract examination. With about one gigabyte of image data generated for a patient in each examination, automatic lesion detection is highly desirable to improve the efficiency of the diagnosis process and mitigate human errors. Although many approaches for lesion detection have been proposed, they mainly focus on large lesions and are not directly applicable to tiny lesions due to the limitations of feature representation. As bleeding lesions are a common symptom of most serious gastrointestinal diseases, detecting tiny bleeding lesions is extremely important for the early diagnosis of those diseases, which is highly relevant to the survival, treatment, and expenses of patients. In this article, a method is proposed to extract and fuse multi-scale deep features for detecting and locating both large and tiny lesions. A feature extracting network is first used as our backbone network to extract the basic features from wireless capsule endoscopy images, and then at each layer multiple regions can be identified as potential lesions. As a result, the feature maps of those potential lesions are obtained at each level and fused in a top-down manner to the fully connected layer for producing the final detection results. Our proposed method has been evaluated on a clinical dataset that contains 20,000 wireless capsule endoscopy images with clinical annotation. Experimental results demonstrate that our method can achieve 98.9% prediction accuracy and a 93.5% F1 score, a significant performance improvement of up to 31.69% and 22.12% in terms of recall rate and F1 score, respectively, when compared to the state-of-the-art approaches for both large and tiny bleeding lesions. Moreover, our model also achieves the highest AP and the best medical diagnosis performance compared to state-of-the-art multi-scale models.
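The top-down fusion step can be sketched as follows in a feature-pyramid style (the shapes, channel counts, and nearest-neighbour upsampling are our illustrative assumptions, not the paper's exact network):

```python
# Top-down multi-scale fusion: coarse, semantically strong maps are
# upsampled and added to finer maps, so tiny-lesion features keep context.
import numpy as np

def upsample2x(f):
    """Nearest-neighbour 2x upsampling of a (C, H, W) feature map."""
    return f.repeat(2, axis=1).repeat(2, axis=2)

def top_down_fuse(pyramid):
    """pyramid: feature maps ordered fine -> coarse, each (C, H, W) with
    H and W halving at every level. Returns fused maps at every scale."""
    fused = [pyramid[-1]]                 # start from the coarsest level
    for f in reversed(pyramid[:-1]):
        fused.append(f + upsample2x(fused[-1]))
    return fused[::-1]                    # back to fine -> coarse order

rng = np.random.default_rng(0)
pyramid = [rng.standard_normal((8, 32, 32)),
           rng.standard_normal((8, 16, 16)),
           rng.standard_normal((8, 8, 8))]
print([f.shape for f in top_down_fuse(pyramid)])
# [(8, 32, 32), (8, 16, 16), (8, 8, 8)]
```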


2022 · Vol 3 (1) · pp. 1-18
Author(s): Anna Lito Michala, Ioannis Vourganas, Andrea Coraddu

IoT and the Cloud are among the most disruptive changes in the way we use data today. Yet these changes have not significantly influenced condition-monitoring practice in shipping, partly because of the cost of continuous data transmission. Several vessels are already equipped with a network of sensors; however, continuous monitoring is often not utilised and onshore visibility is obscured. Edge computing is a promising solution, but sustaining the accuracy required for predictive maintenance is a challenge. We investigate the use of IoT systems and Edge computing, evaluating the impact of the proposed solution on the decision-making process. Data from a sensor and the NASA-IMS open repository were used to show the effectiveness of the proposed system and to evaluate it in a realistic maritime application. The results demonstrate a real-time, dynamic, intelligent reduction of the transmitted data volume without sacrificing specificity or sensitivity in decision making. The output of the Decision Support System fully corresponds both to the monitored system's actual operating condition and to the output obtained when the raw data are used instead. The results demonstrate that the proposed, more efficient approach is just as effective for the decision-making process.
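A hedged sketch of the edge-side idea: transmit a reading only when it departs from the recent baseline, so routine data stays on the vessel. The moving-window statistics and threshold below are illustrative placeholders, not the paper's Decision Support System:

```python
# Edge-side filter: forward only readings that deviate from the baseline.
from collections import deque

class EdgeFilter:
    def __init__(self, window=50, threshold=3.0):
        self.history = deque(maxlen=window)
        self.threshold = threshold

    def should_transmit(self, reading):
        if len(self.history) < self.history.maxlen:
            self.history.append(reading)
            return True                   # still learning the baseline
        mean = sum(self.history) / len(self.history)
        var = sum((x - mean) ** 2 for x in self.history) / len(self.history)
        anomalous = abs(reading - mean) > self.threshold * max(var ** 0.5, 1e-9)
        self.history.append(reading)
        return anomalous                  # send only what decisions need

f = EdgeFilter(window=5)
readings = [1.0, 1.1, 0.9, 1.0, 1.05, 1.02, 5.0]
print([f.should_transmit(r) for r in readings])
# first 5 True (building the baseline), then False, then True for the spike
```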

