Qualitative Analysis for Visual Attention of Students by Using the Technology of ICT

Author(s):  
Muhammad Farhan ◽  
Muhammad Salar Haider ◽  
N. Z. Jhanjhi ◽  
Rana Muhammad Amir Latif ◽  
Muhammad Yasir Bilal

ICT and machine learning tools are used to analyze students' visual attention by computing an attention score for each student. For this purpose, the authors developed a software package, the Visual Attention Tool (VAT), which extracts frames from a video stream captured by the webcam attached to the student's laptop. Each frame is converted to grayscale, enhanced by image processing, and then passed through face detection followed by eye detection. This real-time processing of the video produces a dataset by tracking faces and eyes, recording the student's attention level together with a timestamp. A human observer also scores the student's attention based on facial orientation and eye contact, producing a second, manually collected dataset. A comparative analysis of the two datasets is then performed using statistical and machine learning tools.
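The abstract does not include code; as a rough illustration of the pipeline it describes (frame capture, grayscale conversion, enhancement, face detection followed by eye detection, and timestamped logging), the Python sketch below uses OpenCV Haar cascades. The cascade choice, the histogram-equalization enhancement step, and the attention rule (face present and at least two eyes detected) are assumptions for illustration, not the authors' VAT implementation.

```python
# Minimal sketch (not the authors' VAT package): capture webcam frames,
# convert to grayscale, detect face and eyes, log a timestamped attention flag.
import csv
import time

import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture(0)  # webcam attached to the student's laptop
with open("attention_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "faces", "eyes", "attentive"])
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.equalizeHist(gray)            # simple enhancement step
        faces = face_cascade.detectMultiScale(gray, 1.3, 5)
        eyes = 0
        for (x, y, w, h) in faces:
            roi = gray[y:y + h, x:x + w]         # search for eyes inside the face
            eyes += len(eye_cascade.detectMultiScale(roi))
        attentive = len(faces) > 0 and eyes >= 2
        writer.writerow([time.time(), len(faces), eyes, int(attentive)])
        cv2.imshow("frames", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):    # press q to stop logging
            break
cap.release()
cv2.destroyAllWindows()
```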

2021 ◽  
Author(s):  
Marcin Kowalczyk ◽  
Tomasz Kryjak

This work describes the hardware implementation of a connected component labelling (CCL) module in reprogrammable logic. The main novelty of the design is the "full" support, i.e. without any simplifications, of a 4 pixels per clock (4 ppc) format and real-time processing of a 4K/UltraHD video stream (3840 x 2160 pixels) at 60 frames per second. To achieve this, a special labelling method was designed, together with a mechanism that stops the input data stream in order to process pixel groups that require writing more than one merger into the equivalence table. The proposed module was verified in simulation and in hardware on a Xilinx Zynq UltraScale+ MPSoC chip on the ZCU104 evaluation board.
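The abstract describes a hardware architecture rather than an algorithm listing. As a minimal software illustration of the underlying idea, provisional labels assigned during a raster scan with label mergers recorded in an equivalence table, the Python sketch below implements a classic two-pass CCL with union-find. It is not the authors' 4 ppc design and does not model the stream-stalling behaviour.

```python
# Software illustration (not the FPGA design) of labelling with an
# equivalence table: provisional labels in a raster scan, mergers via union-find.
import numpy as np


def ccl_two_pass(binary):
    """8-connected labelling of a 0/1 image using an equivalence table."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=np.int32)
    parent = [0]  # equivalence table: parent[label] -> representative

    def find(a):
        while parent[a] != a:
            a = parent[a]
        return a

    def merge(a, b):  # record a "merger": labels a and b belong to one object
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[max(ra, rb)] = min(ra, rb)

    next_label = 1
    # first pass: assign provisional labels and record mergers
    for y in range(h):
        for x in range(w):
            if not binary[y, x]:
                continue
            neigh = [labels[y, x - 1] if x > 0 else 0,
                     labels[y - 1, x - 1] if y > 0 and x > 0 else 0,
                     labels[y - 1, x] if y > 0 else 0,
                     labels[y - 1, x + 1] if y > 0 and x + 1 < w else 0]
            neigh = [n for n in neigh if n]
            if not neigh:
                labels[y, x] = next_label
                parent.append(next_label)
                next_label += 1
            else:
                m = min(neigh)
                labels[y, x] = m
                for n in neigh:
                    merge(m, n)
    # second pass: resolve provisional labels through the equivalence table
    for y in range(h):
        for x in range(w):
            if labels[y, x]:
                labels[y, x] = find(labels[y, x])
    return labels


if __name__ == "__main__":
    img = np.array([[1, 1, 0, 1],
                    [0, 0, 0, 1],
                    [1, 0, 1, 1]], dtype=np.uint8)
    print(ccl_two_pass(img))
```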


Author(s):  
Nikolaus Bee ◽  
Helmut Prendinger ◽  
Arturo Nakasone ◽  
Elisabeth André ◽  
Mitsuru Ishizuka


Author(s):  
Marcin Kowalczyk ◽  
Piotr Ciarach ◽  
Dominika Przewlocka-Rus ◽  
Hubert Szolc ◽  
Tomasz Kryjak

In this paper, a hardware implementation in reconfigurable logic of a single-pass connected component labelling (CCL) and connected component analysis (CCA) module is presented. The main novelty of the design is the support of a video stream in 2 and 4 pixels per clock formats (2 and 4 ppc) and real-time processing of a 4K/UHD video stream (3840 x 2160 pixels) at 60 frames per second. We discuss several approaches to the issue and present the selected ones in detail. The proposed module was verified in an exemplary application, skin colour area segmentation, on the ZCU 102 and ZCU 104 evaluation boards equipped with Xilinx Zynq UltraScale+ MPSoC devices.
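For the exemplary application mentioned above, a rough software analogue (not the FPGA pipeline) could combine a skin colour threshold with OpenCV's connected component analysis. The input file name and the YCrCb thresholds below are assumptions for illustration, not values from the paper.

```python
# Sketch: skin colour segmentation followed by connected component analysis.
import cv2

frame = cv2.imread("input.png")                       # hypothetical input image
ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)
# Widely used approximate skin thresholds in the Cr/Cb planes (an assumption).
mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

# Connected component analysis: per-blob bounding box, area and centroid.
num, labels, stats, centroids = cv2.connectedComponentsWithStats(mask, connectivity=8)
for i in range(1, num):                               # label 0 is the background
    x, y, w, h, area = stats[i]
    print(f"component {i}: bbox=({x}, {y}, {w}, {h}), area={area}, "
          f"centroid={tuple(centroids[i])}")
```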


2019 ◽  
Vol 7 (4) ◽  
pp. 184-190
Author(s):  
Himani Maheshwari ◽  
Pooja Goswami ◽  
Isha Rana

Author(s):  
Daiki Matsumoto ◽  
Ryuji Hirayama ◽  
Naoto Hoshikawa ◽  
Hirotaka Nakayama ◽  
Tomoyoshi Shimobaba ◽  
...  
