Tertiary wavelet model based automatic epilepsy classification system

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Satyender Jaglan ◽  
Sanjeev Kumar Dhull ◽  
Krishna Kant Singh

Purpose
This work proposes a tertiary wavelet model based automatic epilepsy classification system using electroencephalogram (EEG) signals.

Design/methodology/approach
A three-stage system is proposed for automated classification of epilepsy signals. In the first stage, a tertiary wavelet model uses the orthonormal M-band wavelet transform to decompose EEG signals into three frequency bands. In the second stage, the decomposed EEG signals are analysed to extract novel statistical features, whose values are illustrated in a multi-parameter graph comparing normal and epileptic signals. In the last stage, the features are fed to different conventional classifiers that classify pre-ictal, inter-ictal (epileptic with seizure-free intervals) and ictal (seizure) EEG segments.

Findings
The performance of five classifiers, namely KNN, DT, XGBoost, SVM and RF, is evaluated on the University of Bonn data set using different performance parameters. The RF classifier gives the best performance among these classifiers, with an average accuracy of 99.47%.

Originality/value
Epilepsy is a neurological condition in which two or more spontaneous seizures occur repeatedly. EEG is a widely used and important method for detecting epilepsy, as EEG signals contain information about the brain's electrical activity. Clinicians manually examine EEG waveforms to detect epileptic anomalies, which is a time-consuming and error-prone process. This paper proposes an automated epilepsy classification system based on a combination of signal processing (the tertiary wavelet model) and classification using novel features extracted from the EEG signals.
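The three-stage pipeline can be approximated in a few lines. The sketch below is a minimal illustration, not the authors' implementation: it substitutes a standard dyadic wavelet decomposition (pywt.wavedec) for the orthonormal M-band transform, computes a handful of statistical features per sub-band and feeds them to a random forest classifier. The function names, mother wavelet and feature list are assumptions.

```python
import numpy as np
import pywt
from scipy.stats import skew, kurtosis
from sklearn.ensemble import RandomForestClassifier

def band_features(segment, wavelet="db4", level=2):
    """Decompose one EEG segment into sub-bands and compute statistical
    features per band. A dyadic DWT stands in for the M-band transform."""
    coeffs = pywt.wavedec(segment, wavelet, level=level)  # [cA2, cD2, cD1]
    feats = []
    for band in coeffs:
        feats += [band.mean(), band.std(), skew(band), kurtosis(band),
                  np.sum(band ** 2)]                       # band energy
    return np.array(feats)

def train_classifier(X_raw, y):
    """X_raw: (n_segments, n_samples) EEG segments; y: pre-ictal /
    inter-ictal / ictal labels."""
    X = np.vstack([band_features(seg) for seg in X_raw])
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X, y)
    return clf
```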

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Rajit Nair ◽  
Santosh Vishwakarma ◽  
Mukesh Soni ◽  
Tejas Patel ◽  
Shubham Joshi

Purpose
The coronavirus disease 2019 (COVID-19), which first appeared in December 2019 in the city of Wuhan, China, rapidly spread around the world and became a pandemic. It has had a devastating impact on daily lives, public health and the global economy. Positive cases must be identified as soon as possible to avoid further dissemination of this disease and to provide swift care to affected patients. The need for supportive diagnostic instruments has increased, as no specific automated toolkits are available. Recent results from radiology imaging techniques indicate that these images provide valuable details on the COVID-19 virus. Advanced artificial intelligence (AI) technologies combined with radiological imagery can help diagnose this condition accurately and help compensate for the lack of specialist doctors in isolated areas. In this research, a new paradigm for automatic detection of COVID-19 from raw chest X-ray images is presented. The proposed model, DarkCovidNet, is designed to provide accurate diagnostics for binary classification (COVID vs no findings) and multi-class classification (COVID vs no findings vs pneumonia). The implemented model achieved an average precision of 98.46% and 91.352% for binary and multi-class classification, respectively, and an average accuracy of 98.97% and 87.868%. The DarkNet model, used as the classifier in the "you only look once" (YOLO) real-time object detection method, was adopted as the starting point in this research. A total of 17 convolutional layers with different filters in each layer have been implemented. This platform can be used by radiologists to verify their initial screening and can also be used to screen patients through the cloud.

Design/methodology/approach
This study builds on the CNN-based Darknet-19 model, which acts as the platform for a real-time object detection system; its architecture is designed so that it can detect objects in real time. The DarkCovidNet model developed here is based on the Darknet architecture but with fewer layers and filters. Typically, the DarkNet architecture consists of 19 convolution layers and 5 max-pooling layers.

Findings
The work discussed in this paper is used to diagnose various radiology images and to develop a model that can accurately predict or classify the disease. The data set used in this work comprises COVID-19 and non-COVID-19 images taken from various sources. The deep learning model DarkCovidNet applied to this data set has shown significant performance for both binary and multi-class classification. In binary classification, the model achieved an average accuracy of 98.97% for the detection of COVID-19, whereas the multi-class model achieved an average accuracy of 87.868% when classifying COVID-19, no findings and pneumonia.

Research limitations/implications
One of the significant limitations of this work is that a limited number of chest X-ray images were used. It is observed that the number of patients with COVID-19 is increasing rapidly. In the future, the model will be implemented on a larger data set generated from local hospitals, and its performance on that data will be checked.

Originality/value
Deep learning technology has made significant changes in the field of AI by generating good results, especially in pattern recognition. A conventional CNN structure includes a convolution layer that extracts features from the input using the filters it applies, a pooling layer that reduces the size for computational efficiency and a fully connected layer, which is a neural network. A CNN model is created by combining one or more such layers, and its internal parameters are adjusted to accomplish a particular task, such as classification or object recognition.
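The Darknet-19 style layers described above translate directly into a repeated convolution block. The PyTorch sketch below is a minimal illustration of such a block (convolution, batch normalisation, leaky ReLU, optional max-pooling) stacked into a small classifier; it is not the published DarkCovidNet architecture, and the layer widths and three-class head are assumptions.

```python
import torch
import torch.nn as nn

def dark_block(in_ch, out_ch, pool=True):
    """One DarkNet-style unit: 3x3 convolution + batch norm + LeakyReLU,
    optionally followed by 2x2 max-pooling."""
    layers = [nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False),
              nn.BatchNorm2d(out_ch),
              nn.LeakyReLU(0.1)]
    if pool:
        layers.append(nn.MaxPool2d(2))
    return nn.Sequential(*layers)

class TinyDarkNet(nn.Module):
    """Illustrative small DarkNet-like classifier for chest X-rays
    (hypothetical layer widths, 3 output classes)."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            dark_block(1, 8), dark_block(8, 16),
            dark_block(16, 32), dark_block(32, 64),
            dark_block(64, 128, pool=False))
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                  nn.Flatten(),
                                  nn.Linear(128, n_classes))

    def forward(self, x):            # x: (batch, 1, H, W) grayscale X-ray
        return self.head(self.features(x))
```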


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Yaw Owusu-Agyeman ◽  
Enna Moroeroe

Purpose
Scholarly studies on student engagement are mostly focused on the perceptions of students and academic staff of higher education institutions (HEIs), with a few studies concentrating on the perspectives of professional staff. To address this knowledge gap, this paper aims to examine how professional staff who are members of a professional community perceive their contributions to enhancing student engagement in a university.

Design/methodology/approach
Data for the current study were gathered using semi-structured face-to-face interviews among 41 professional staff who were purposively sampled from a public university in South Africa. The data gathered were analysed using thematic analysis that involved a process of identifying, analysing, organising, describing and reporting the themes that emerged from the data set.

Findings
An analysis of the narrative data revealed that when professional staff provide students with prompt feedback, support the development of their social and cultural capital and provide professional services in the area of teaching and learning, they foster student engagement in the university. However, the results showed that poor communication flow and delays in addressing students' concerns could lead to student disengagement. The study further argues that through continuous interaction and shared norms and values among members of a professional community, a service culture can be developed to address possible professional knowledge and skills gaps that constrain quality service delivery.

Originality/value
The current paper contributes to the scholarly discourse on student engagement and professional community by showing that a service culture of engagement is developed among professional staff when they share ideas, collaborate and build competencies to enhance student engagement. Furthermore, the collaboration between professional staff and academics is important to addressing the academic issues that confront students in the university.


2019 ◽  
Vol 12 (1) ◽  
pp. 73-94
Author(s):  
Pragya Arya ◽  
Manoj Kumar Srivastava ◽  
Mahadeo P. Jaiswal

Purpose
Research on sustainability has progressed from a singular focus on one aspect to a simultaneous focus on more than one aspect of the triple bottom line. However, there is a dearth of research that explains why sustainability-related decisions in business often do not bear the expected results, and research that provides managers with a tool to achieve environmental sustainability of logistics without compromising economic sustainability is scarce. Hence, the purpose of this paper is to bridge these gaps and to explore the factors that affect investment in technology to balance the environmental and economic sustainability of logistics.

Design/methodology/approach
A model based on the system dynamics approach explains the simultaneous interplay of these factors. Simulating the model helps managers of the logistics function decide the size of investment in technology needed to achieve ecological efficiency without compromising economic performance.

Findings
Collaboration with regulatory authorities and with players within the same industry and across industries is a must so that eco-logistics does not become an economic burden for businesses. The decision to invest in technology for eco-logistics is further accentuated if the technology promises some added economic benefits.

Research limitations/implications
From a theoretical perspective, the research adds to the less extensive literature on system dynamics modelling, which is a mixed methodology combining both qualitative and quantitative techniques. The research is also one of the few attempts to study more than one aspect of sustainability in business simultaneously and quantitatively through simulation. Simulation was demonstrated through a single case study; future work can aim to apply the causal loop diagram to firms in varied sectors.

Practical implications
Managers can use the causal loop diagram to assess the environmental performance of logistics and decide on the appropriate level of investment to balance the ecological and economic performance of logistics.

Originality/value
The causal loop diagram has been developed through primary data collection via semi-structured interviews, and the results were validated by presenting them to the respondents to ensure they represent their viewpoints. The results are, therefore, practical and original. This research does not build upon an existing data set, nor does it aim to test the applicability of any existing model; the model has been developed from the grass-roots level.
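As a rough illustration of how a system dynamics model of this kind is simulated, the sketch below numerically integrates a toy stock-and-flow loop in which investment in green technology raises logistics cost in the short run but lowers emissions and emission-related penalties over time. All variable names, equations and parameter values are hypothetical, chosen only to show the simulation mechanics; they are not the authors' causal loop diagram.

```python
import numpy as np

def simulate(years=10, dt=0.25, invest_rate=0.2):
    """Toy system dynamics simulation (Euler integration).
    Stocks: green technology capability, emissions level, cumulative cost."""
    steps = int(years / dt)
    tech, emissions, cost = 0.0, 100.0, 0.0         # hypothetical initial stocks
    history = []
    for _ in range(steps):
        investment = invest_rate * emissions        # invest more when emissions are high
        tech += dt * investment                     # investment builds capability
        emissions = max(emissions - dt * 0.05 * tech, 0.0)   # capability cuts emissions
        penalty = 0.3 * emissions                   # regulatory penalty on emissions
        cost += dt * (investment + penalty)         # economic burden accumulates
        history.append((tech, emissions, cost))
    return np.array(history)

# Compare two investment policies on final emissions and total cost.
low, high = simulate(invest_rate=0.05), simulate(invest_rate=0.3)
print("low-investment  final emissions / cost:", low[-1, 1], low[-1, 2])
print("high-investment final emissions / cost:", high[-1, 1], high[-1, 2])
```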


Entropy ◽  
2020 ◽  
Vol 22 (11) ◽  
pp. 1234
Author(s):  
Lingyun Zhang ◽  
Taorong Qiu ◽  
Zhiqiang Lin ◽  
Shuli Zou ◽  
Xiaoming Bai

A functional brain network (FBN) is an intuitive expression of the dynamic interaction of neural activity between different neurons, neuron clusters or cerebral cortex regions, and it can characterize the topology and dynamic properties of the brain network. How to build an FBN that characterizes the features of the brain network accurately and effectively is a challenging subject. Entropy can effectively describe the complexity, non-linearity and uncertainty of electroencephalogram (EEG) signals. As a relatively new research direction, constructing FBNs from EEG data recorded during fatigued driving has broad prospects, so studying entropy-based FBN construction is of great significance. We focus on selecting appropriate entropy features to characterize EEG signals and construct an FBN. On a real fatigue-driving data set, FBN models based on different entropies are constructed to identify the state of driver fatigue. Analysis of the network measurement indicators shows that the FBN model based on fuzzy entropy achieves an excellent classification recognition rate and good classification stability. In addition, compared with the other model based on the same data set, our model obtains higher accuracy and more stable classification results even when the length of the intercepted EEG signal differs.
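As an illustration of the kind of entropy feature the study relies on, the sketch below implements a standard fuzzy entropy (FuzzyEn) estimator for a single EEG channel and evaluates it per channel. The paper's own FBN construction (how channel pairs are connected) is not reproduced here; the adjacency helper shown, based on absolute Pearson correlation, is an assumption for demonstration only.

```python
import numpy as np

def fuzzy_entropy(x, m=2, r=None, n=2):
    """Fuzzy entropy of a 1-D signal: compare the fuzzy similarity of
    embedded vectors of length m and m + 1 (standard FuzzyEn definition)."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()                          # common tolerance choice

    def phi(dim):
        N = len(x) - dim
        # baseline-removed embedding vectors
        vecs = np.array([x[i:i + dim] - x[i:i + dim].mean() for i in range(N)])
        # Chebyshev distance between all vector pairs
        d = np.max(np.abs(vecs[:, None, :] - vecs[None, :, :]), axis=2)
        sim = np.exp(-(d ** n) / r)                # fuzzy membership degree
        np.fill_diagonal(sim, 0.0)                 # exclude self-matches
        return sim.sum() / (N * (N - 1))

    return np.log(phi(m)) - np.log(phi(m + 1))

def channel_entropies(eeg):
    """eeg: (n_channels, n_samples) array -> FuzzyEn value per channel."""
    return np.array([fuzzy_entropy(ch) for ch in eeg])

def naive_adjacency(eeg):
    """Placeholder FBN adjacency: absolute Pearson correlation between
    channels (the paper's own construction method may differ)."""
    return np.abs(np.corrcoef(eeg))
```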


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Yuan George Shan ◽  
Junru Zhang ◽  
Manzurul Alam ◽  
Phil Hancock

Purpose
This study aims to investigate the relationship between university rankings and sustainability reporting among Australian and New Zealand universities. Even though sustainability reporting is an established area of investigation, prior research has paid inadequate attention to the nexus of university rankings and sustainability reporting.

Design/methodology/approach
This study covers 46 Australian and New Zealand universities and uses a data set of sustainability reports and disclosures collected between 2005 and 2018 from four reporting channels, including university websites and university archives. Ordinary least squares regression was used, with Pearson and Spearman's rank correlations examined to investigate the likelihood of multi-collinearity; variance inflation factor values were also calculated. Finally, the generalized method of moments approach was used to test for endogeneity.

Findings
The findings suggest that sustainability reporting is significantly and positively associated with university ranking and confirm that the four reporting channels play a vital role when communicating with university stakeholders. Further, this paper documents that sustainability reporting through websites, in addition to the annual report and a separate environment report, has a positive impact on the university ranking systems.

Originality/value
This paper contributes to extant knowledge on the link between university rankings and university sustainability reporting, which is considered a vital communication vehicle for meeting stakeholder expectations in relation to university rankings.
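The multi-collinearity check described above can be reproduced with standard tooling. The sketch below, using statsmodels, is a generic illustration of fitting an OLS model and computing variance inflation factors; the column names are invented for illustration, and the actual variables and model specification are those reported in the paper.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def ols_with_vif(df, y_col, x_cols):
    """Fit OLS of y on X and report a VIF per regressor
    (VIF > 10 is a common rule-of-thumb warning for multi-collinearity)."""
    X = sm.add_constant(df[x_cols])
    model = sm.OLS(df[y_col], X).fit()
    vifs = pd.Series(
        [variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
        index=X.columns, name="VIF")
    return model, vifs

# Hypothetical usage with invented column names:
# model, vifs = ols_with_vif(data, "ranking_score",
#                            ["disclosure_index", "university_size", "research_income"])
# print(model.summary()); print(vifs)
```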


2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Mohanad Halaweh

Purpose
This paper aims to propose a new metric, called the Research Productivity Index (RPI), which can be used to measure universities' research productivity and benchmark them accordingly at both national and global levels.

Design/methodology/approach
The paper used a partial-factor productivity measure as the basis for developing the RPI, which considers the ratio of total weighted publications (outputs) to the input used (affiliated researchers). To demonstrate the applicability of the RPI, data were collected from Scopus to assess the research productivity of a university in the UAE as an example. The methodological steps (algorithm) were demonstrated using mathematical and query functions to extract the required data from the Scopus data set and then compute the RPI value.

Findings
A new, effective and objective metric was developed for measuring universities' research productivity.

Practical implications
This paper suggests that Scopus could use the RPI as a metric for measuring the research productivity of each university. The RPI can be used by university administrators and government decision-makers to evaluate and rank/benchmark institutions' research productivity. They can consequently make more effective decisions with regard to the efficient allocation of research budgets and funding.

Originality/value
This paper distinguishes between measuring research impact and research productivity. It proposes the RPI for measuring the latter, whereas most existing metrics measure the former. The RPI is an objective measurement, as it is calculated over a constant period of time (three years) and takes into consideration the university size (i.e. affiliated researchers) in addition to the quality and quantity (total) of research outcomes.
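Since the abstract only states that the RPI is a ratio of weighted publication output over a fixed three-year window to the number of affiliated researchers, the helper below merely encodes that ratio. The weighting scheme (here, by journal quartile) and the field names are assumptions, not the paper's published formula.

```python
from typing import Dict

# Hypothetical weights per journal quartile; the paper's own
# weighting of publication "quality" may differ.
QUARTILE_WEIGHTS: Dict[str, float] = {"Q1": 4.0, "Q2": 3.0, "Q3": 2.0, "Q4": 1.0}

def research_productivity_index(pub_counts: Dict[str, int],
                                affiliated_researchers: int) -> float:
    """RPI sketch: weighted publications over a 3-year window divided by
    the number of affiliated researchers (the input side of the ratio)."""
    weighted_output = sum(QUARTILE_WEIGHTS[q] * n for q, n in pub_counts.items())
    return weighted_output / affiliated_researchers

# Example: 120 Q1, 80 Q2, 40 Q3 and 10 Q4 papers over three years, 500 researchers.
print(research_productivity_index({"Q1": 120, "Q2": 80, "Q3": 40, "Q4": 10}, 500))
```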


2014 ◽  
Vol 3 (1) ◽  
pp. 160-176 ◽  
Author(s):  
Arndt Lautenschläger ◽  
Heiko Haase ◽  
Jan Kratzer

Purpose – The purpose of this paper is to investigate contingency factors on the emergence of university spin-off firms. The institutional and organisational factors the paper explores comprise the transfer potential of the university, the strategy and characteristics of the University Technology Transfer Organisations and specific support for spin-off formation. Design/methodology/approach – Based on a unique data set, this cross-sectional study analyses the population of 54 higher education institutions in Germany. In total, 31.4 per cent of the German universities with technology transfer activities participated in this study. Findings – The research identifies a high degree of heterogeneity in the qualification of University Technology Transfer Office (UTTO) staff and the existence of an entrepreneurship support programme as important antecedents of spin-off formation. In addition, the results reveal that pursuing different or multiple transfer strategies will not be detrimental to the establishment of spin-offs. Practical implications – It seems that there is still a lack of consensus with respect to the importance of spin-offs as an effective channel to transform research results into economic value. Furthermore, universities aiming at the promotion of spin-offs need appropriate regulations which do not jeopardise the usage of research outcomes for entrepreneurial purposes. Originality/value – This study contributes to enhancing the knowledge of what promotes and inhibits the formation of university spin-off firms, as it is the first to analyse a considerable population of UTTOs in Germany and explicitly considers underexplored and new contingency factors.


2016 ◽  
Vol 17 (2) ◽  
pp. 203-210 ◽  
Author(s):  
Margie Jantti ◽  
Jennifer Heath

Purpose – The purpose of this paper is to provide an overview of the development of an institution-wide approach to learning analytics at the University of Wollongong (UOW) and the inclusion of library data drawn from the Library Cube. Design/methodology/approach – The Student Support and Education Analytics team at UOW is tasked with creating policy, frameworks and infrastructure for the systematic capture, mapping and analysis of data from across the university. The initial data set includes: log file data from Moodle sites, Library Cube, student administration data, tutorials and student support service usage data. Using the learning analytics data warehouse, UOW is developing new models for analysis and visualisation with a focus on the provision of near real-time data to academic staff and students to optimise learning opportunities. Findings – The distinct advantage of the learning analytics model is that the selected data sets are updated weekly, enabling near real-time monitoring and intervention where required. Inclusion of library data with the other, often disparate, data sets from across the university has enabled development of a comprehensive platform for learning analytics. Future work will include the development of predictive models using the rapidly growing learning analytics data warehouse. Practical implications – Data warehousing infrastructure and the systematic capture and export of relevant library data sets are prerequisites for the consideration of library data in learning analytics. Originality/value – What was not anticipated five years ago, when the Value Cube was first realised, was the development of learning analytics services at UOW. The Cube afforded University of Wollongong Library considerable advantage: the framework for data harvesting and analysis was established, ready for inclusion within learning analytics data sets and subsequent reporting to faculty.


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Shruti Garg ◽  
Rahul Kumar Patro ◽  
Soumyajit Behera ◽  
Neha Prerna Tigga ◽  
Ranjita Pandey

Purpose
The purpose of this study is to propose an alternative, efficient 3D emotion recognition model for variable-length electroencephalogram (EEG) data.

Design/methodology/approach
The classical AMIGOS data set, which comprises multimodal records of varying lengths on mood, personality and other physiological aspects of emotional response, is used for empirical assessment of the proposed overlapping sliding window (OSW) modelling framework. Two features are extracted using the Fourier and wavelet transforms: normalised band power (NBP) and normalised wavelet energy (NWE), respectively. The arousal, valence and dominance (AVD) emotions are predicted using one-dimensional (1D) and two-dimensional (2D) convolutional neural networks (CNNs) for both single and combined features.

Findings
The two-dimensional convolutional neural network (2D CNN) outcomes on the EEG signals of the AMIGOS data set yield the highest accuracy, namely 96.63%, 95.87% and 96.30% for arousal, valence and dominance, respectively, which is at least 6% higher than the other available competitive approaches.

Originality/value
The present work focuses on the less explored, complex AMIGOS (2018) data set, which is imbalanced and of variable length, whereas EEG emotion recognition work is widely available on simpler data sets. The following challenges of the AMIGOS data set are addressed in the present work: handling of tensor-form data; proposing an efficient method for generating sufficient equal-length samples from imbalanced and variable-length data; selecting a suitable machine learning/deep learning model; and improving the accuracy of the applied model.
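Below is a minimal sketch of the two preprocessing ideas named above: overlapping sliding windows to turn variable-length recordings into equal-length samples, and normalised band power computed from a Welch power spectrum. The window length, overlap and band definitions are assumptions for illustration; the paper's own parameter choices and CNN architecture are not reproduced.

```python
import numpy as np
from scipy.signal import welch

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def sliding_windows(signal, win_len, step):
    """Overlapping sliding windows: turns a variable-length 1-D recording
    into equal-length segments (step < win_len gives the overlap)."""
    return np.array([signal[s:s + win_len]
                     for s in range(0, len(signal) - win_len + 1, step)])

def normalised_band_power(segment, fs=128):
    """Normalised band power (NBP): band power from the Welch PSD,
    divided by the total power so the features sum to 1."""
    freqs, psd = welch(segment, fs=fs, nperseg=min(256, len(segment)))
    total = np.trapz(psd, freqs)
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(np.trapz(psd[mask], freqs[mask]) / total)
    return np.array(feats)

# Example: a 20-second mock EEG channel at 128 Hz, 4-second windows, 50% overlap.
eeg = np.random.randn(20 * 128)
windows = sliding_windows(eeg, win_len=4 * 128, step=2 * 128)
features = np.vstack([normalised_band_power(w) for w in windows])
print(windows.shape, features.shape)   # (9, 512) (9, 4)
```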


2019 ◽  
Vol 39 (4) ◽  
pp. 685-695
Author(s):  
Guijiang Duan ◽  
Zhibang Shen ◽  
Rui Liu

Purpose
This paper aims to promote the integration of relative position accuracy (RPA) measurement and evaluation in the digital assembly process by adopting a model-based method. An integrated framework for RPA measurement is proposed based on a model-based definition (MBD) data set. The study also aims to improve the efficiency of inspection planning for RPA measurement by improving the reusability and configurability of the inspection planning.

Design/methodology/approach
The work is carried out on three layers. In the data layer, an extended MBD data set is constructed to describe the objects and data for defining RPA measurement items. In the definition layer, a model-based, hierarchical structure for RPA item definition is constructed to support quick definition of RPA measurement items. In the function layer, a toolset consisting of three modules is constructed in a sequence from measurement planning, to RPA value solving, to visualised display. Based on this framework, a prototype system is developed.

Findings
The paper provides an identified practice of model-based inspection and suggests that MBD is valuable in promoting both the integration and the efficiency of digital inspection.

Research limitations/implications
The templates and constructed geometry objects given in this paper are still limited to an aircraft assembly scenario; their completeness and universality need follow-up work.

Practical implications
The paper has implications for model-based digital inspection, digital assembly and the extended application of MBD.

Originality/value
This paper expands the application of MBD in inspection and fulfils the need to promote the integration and efficiency of digital inspection in large-scale component assembly.
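To make the three-layer description above concrete, the sketch below models the definition layer as a small hierarchy of Python dataclasses: an RPA measurement item that references measured features and a datum (reference) feature drawn from an MBD data set. All class and field names are hypothetical; the paper's extended MBD schema is not reproduced.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class GeometryFeature:
    """A measurable feature referenced from the MBD data set (hypothetical)."""
    feature_id: str
    feature_type: str                      # e.g. "hole", "plane", "edge"
    nominal_position: Tuple[float, float, float]

@dataclass
class RPAMeasurementItem:
    """Definition-layer object: relative position of measured features
    with respect to a datum feature, plus the allowed tolerance."""
    item_id: str
    datum: GeometryFeature
    measured: List[GeometryFeature] = field(default_factory=list)
    tolerance_mm: float = 0.2

    def plan(self) -> List[str]:
        """Function-layer stub: emit an ordered list of measurement steps."""
        steps = [f"measure datum {self.datum.feature_id}"]
        steps += [f"measure feature {f.feature_id}" for f in self.measured]
        steps.append(f"solve RPA against tolerance {self.tolerance_mm} mm")
        return steps
```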

