Camera traps deployed in remote locations provide an effective method for ecologists to monitor and study wildlife in a non-invasive way. However, current camera traps suffer from two problems. First, the images are manually classified and counted, which is expensive. Second, due to manual coding, the results are often stale by the time they reach the ecologists. Combining the Internet of Things (IoT) with deep learning addresses both problems, as the images can be classified automatically and the results made immediately available to ecologists. This paper proposes an IoT architecture that uses deep learning on edge devices to convey animal classification results to a mobile app over the LoRaWAN low-power, wide-area network. The primary goal of the proposed approach is to reduce the cost of the wildlife monitoring process for ecologists and to provide real-time animal sightings data from the camera traps in the field. Camera trap image data consisting of 66,400 images were used to train the InceptionV3, MobileNetV2, ResNet18, EfficientNetB1, DenseNet121, and Xception neural network models. While the performance of the trained models was statistically different (Kruskal–Wallis: Accuracy H(5) = 22.34, p < 0.05; F1-score H(5) = 13.82, p = 0.0168), there was only a 3% difference in the F1-score between the worst model (MobileNetV2) and the best model (Xception). Moreover, the models made similar errors (Adjusted Rand Index (ARI) > 0.88 and Adjusted Mutual Information (AMI) > 0.82). Subsequently, the best model, Xception (Accuracy = 96.1%; F1-score = 0.87; F1-score = 0.97 with oversampling), was optimized and deployed on the Raspberry Pi, Google Coral, and Nvidia Jetson edge devices using both the TensorFlow Lite and TensorRT frameworks. Optimizing the models to run on edge devices reduced the average macro F1-score to 0.7 and adversely affected the minority classes, reducing their F1-score to as low as 0.18.
Upon stress testing, by processing 1000 images consecutively, the Jetson Nano, running a TensorRT model, outperformed the others with a latency of 0.276 s/image (s.d. = 0.002) while consuming an average current of 1665.21 mA. The Raspberry Pi consumed the least average current (838.99 mA) but with roughly ten times the latency (2.83 s/image, s.d. = 0.036). The Jetson Nano was the only reasonable option as an edge device because it could capture most animals whose maximum speeds are below 80 km/h, including goats, lions, and ostriches. While the proposed architecture is viable, unbalanced data remain a challenge, and the results could potentially be improved by using object detection to reduce class imbalance and by exploring semi-supervised learning.
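The stress-test protocol above (process images consecutively, report mean latency and standard deviation) can be sketched as follows. This is a minimal harness, not the authors' benchmark code: the `run_inference` function is a stub standing in for a TensorFlow Lite or TensorRT interpreter call, and its simulated delay is an arbitrary assumption.

```python
import statistics
import time

def run_inference(image):
    """Placeholder for the optimized model's inference call
    (e.g., a TensorFlow Lite or TensorRT interpreter invocation)."""
    time.sleep(0.001)  # stand-in for real on-device compute
    return "zebra"

def stress_test(images, runner=run_inference):
    """Process images consecutively and report per-image latency stats."""
    latencies = []
    for image in images:
        start = time.perf_counter()
        runner(image)
        latencies.append(time.perf_counter() - start)
    return {
        "mean_s": statistics.mean(latencies),
        "sd_s": statistics.stdev(latencies),
    }

stats = stress_test([None] * 100)
print(f"latency: {stats['mean_s']:.3f} s/image (s.d. = {stats['sd_s']:.3f})")
```

Current draw would be measured externally (e.g., with an inline power meter) while this loop runs, which is why the harness only times the inference path.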
Approximate arithmetic circuits are an attractive alternative to accurate arithmetic circuits because they have significantly reduced delay, area, and power, albeit at the cost of some loss in accuracy. By keeping the errors due to approximate computation within acceptable limits, approximate arithmetic circuits can be used for various practical applications such as digital signal processing, digital filtering, low-power graphics processing, neuromorphic computing, and the hardware realization of neural networks for artificial intelligence and machine learning. The degree of approximation that can be incorporated into an approximate arithmetic circuit tends to vary depending on the error resiliency of the target application. Given this, manually coding approximate arithmetic circuits corresponding to different degrees of approximation in a hardware description language (HDL) can be a cumbersome and time-consuming process, more so when the circuit is large. Therefore, a software tool that can automatically generate approximate arithmetic circuits of any size corresponding to a desired accuracy would not only aid the design flow but also improve a designer's productivity by speeding up circuit/system development. In this context, this paper presents 'Approximator', a software tool developed to automatically generate approximate arithmetic circuits based on a user's specification. Approximator can automatically generate Verilog HDL code for approximate adders and multipliers of any size based on the novel approximate arithmetic circuit architectures we have proposed. The Verilog HDL code output by Approximator can be used for synthesis in an FPGA or ASIC (standard-cell-based) design environment. Additionally, the tool can perform error and accuracy analyses of approximate arithmetic circuits. The salient features of the tool are illustrated through example screenshots captured during different stages of tool use.
Approximator has been made open-access on GitHub for the benefit of the research community, and the tool documentation is provided for the user’s reference.
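To make the idea of a configurable degree of approximation concrete, the sketch below models a classic lower-part OR adder (LOA), in which the k least-significant bits are approximated with a bitwise OR, along with an exhaustive mean-error-distance analysis of the kind such a tool performs. This is an illustration of the general technique, not the novel architectures proposed in the paper.

```python
def loa_add(a, b, k, width=8):
    """Lower-part OR Adder (LOA): the k least-significant bits are
    approximated with a bitwise OR; only the upper bits use an exact
    adder, shrinking the carry chain by k positions."""
    mask = (1 << k) - 1
    low = (a & mask) | (b & mask)          # approximate lower part
    high = ((a >> k) + (b >> k)) << k      # exact upper part
    return (high | low) & ((1 << (width + 1)) - 1)

def mean_error_distance(k, width=8):
    """Average |approximate - exact| over all input pairs; a standard
    accuracy metric for approximate arithmetic circuits."""
    n = 1 << width
    total = sum(abs(loa_add(a, b, k, width) - (a + b))
                for a in range(n) for b in range(n))
    return total / (n * n)

print(mean_error_distance(2))  # error grows as more low bits are approximated
```

A generator tool would emit the equivalent Verilog for a chosen k; the point here is that the degree of approximation is a single parameter, which is what makes automatic generation across accuracy targets tractable.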
The application of mass collaboration in different areas of study and work has been increasing over the last few decades. In the education context, for example, this emerging paradigm has opened new opportunities for participatory learning, namely "mass collaborative learning (MCL)". The development of such an innovative and complementary method of learning, which can lead to the creation of knowledge-based communities, has helped to reap the benefits of diversity and inclusion in the creation and development of knowledge. In other words, MCL enhances connectivity among the people involved, providing them with the opportunity to practice learning collectively. Despite recent advances, this area still faces many challenges, such as a lack of common agreement about the main concepts, components, applicable structures, and relationships among the participants, as well as applicable assessment systems. From this perspective, this study proposes a meta-governance framework that draws on various related ideas, models, and methods that together can better support the implementation, execution, and development of mass collaborative learning communities. The proposed framework was applied to two case-study projects in which vocational education and training respond to the needs of collaborative education–enterprise approaches. It was further used in an illustration of an MCL community called the "community of cooks". Results from these application cases are discussed.
Deep learning has surged in popularity in recent years, notably in the domains of medical image processing, medical image analysis, and bioinformatics. In this study, we offer a fully automatic brain tumour segmentation approach based on deep neural networks (DNNs). We describe a unique CNN architecture that differs from those typically used in computer vision. The classification of tumour cells is very difficult due to their heterogeneous nature. From a visual learning and brain tumour recognition point of view, a convolutional neural network (CNN) is the most extensively used machine learning algorithm. This paper presents a CNN model along with parametric optimization approaches for analysing brain tumour magnetic resonance images. The simulated model achieved 100% accuracy in all nine runs of Taguchi's L9 design of experiments. This comparative analysis of all three algorithms will interest readers who wish to apply these techniques to a variety of technical and medical challenges. In this work, the authors have tuned the parameters of the convolutional neural network, applied to a dataset of brain MRIs to detect tumour regions, using new advanced optimization techniques, i.e., SFOA, FBIA, and MGA.
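The Taguchi L9 design mentioned above can be sketched as follows: nine runs drawn from a standard L9 orthogonal array, each selecting one level of each hyperparameter. The factor names and level values are illustrative assumptions (the paper does not list them here), and the training call is stubbed out.

```python
# Standard L9 orthogonal array (9 runs, up to four 3-level factors);
# three columns are used here, one per hyperparameter.
L9 = [
    (0, 0, 0), (0, 1, 1), (0, 2, 2),
    (1, 0, 1), (1, 1, 2), (1, 2, 0),
    (2, 0, 2), (2, 1, 0), (2, 2, 1),
]

# Hypothetical CNN hyperparameters and levels for illustration.
levels = {
    "learning_rate": [1e-2, 1e-3, 1e-4],
    "batch_size": [16, 32, 64],
    "kernel_size": [3, 5, 7],
}

def evaluate(cfg):
    """Placeholder: a real run would train the CNN on the brain-MRI
    dataset with this configuration and return validation accuracy."""
    return 1.0  # stand-in value

results = []
for run, (i, j, k) in enumerate(L9, start=1):
    cfg = {
        "learning_rate": levels["learning_rate"][i],
        "batch_size": levels["batch_size"][j],
        "kernel_size": levels["kernel_size"][k],
    }
    results.append((run, cfg, evaluate(cfg)))

best = max(results, key=lambda r: r[2])
print(f"best run {best[0]}: {best[1]}")
```

The appeal of the orthogonal array is that each factor level appears equally often and level pairs are balanced, so nine runs suffice to rank three 3-level factors instead of the 27 runs a full factorial would need.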
The problem of the electrical characterization of single-phase transformers is addressed in this research through the application of the crow search algorithm (CSA). A nonlinear programming model to determine the series and parallel impedances of the transformer is formulated using the mean square error (MSE) between the measured and calculated voltages and currents as the objective function. The CSA is selected as the solution technique since it efficiently handles complex nonlinear programming models, using penalty factors to explore and exploit the solution space with minimum computational effort. Numerical results for four single-phase transformers with nominal sizes of 20 kVA, 45 kVA, 112.5 kVA, and 167 kVA demonstrate the efficiency of the proposed approach in determining the transformer parameters when compared with the large-scale nonlinear solver fmincon in the MATLAB programming environment. Regarding the final objective function value, the CSA reaches objective function values lower than 2.75 × 10⁻¹¹ in all simulation cases, which confirms its effectiveness in minimizing the MSE between the real (measured) and expected (calculated) voltage and current variables of the transformer.
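A minimal CSA sketch is shown below on a toy quadratic standing in for the MSE objective. The transformer measurements and impedance bounds are not reproduced here, so the box bounds, population size, awareness probability, and flight length are all illustrative assumptions rather than the paper's settings.

```python
import random

def csa_minimize(objective, bounds, n_crows=20, iters=200,
                 awareness=0.1, flight_length=2.0, seed=1):
    """Minimal crow search algorithm for box-constrained minimization:
    each crow follows another crow's memorized best position unless that
    crow is 'aware', in which case the follower moves randomly."""
    random.seed(seed)
    dim = len(bounds)
    rand_pos = lambda: [random.uniform(lo, hi) for lo, hi in bounds]
    crows = [rand_pos() for _ in range(n_crows)]
    memory = [c[:] for c in crows]           # best position each crow recalls
    mem_fit = [objective(c) for c in crows]
    for _ in range(iters):
        for i in range(n_crows):
            j = random.randrange(n_crows)
            if random.random() >= awareness:
                # follow crow j's memorized position
                new = [crows[i][d] + random.random() * flight_length
                       * (memory[j][d] - crows[i][d]) for d in range(dim)]
                # clamp to bounds (simple feasibility handling)
                new = [min(max(v, lo), hi)
                       for v, (lo, hi) in zip(new, bounds)]
            else:
                new = rand_pos()  # crow j noticed the follower: decoy move
            crows[i] = new
            fit = objective(new)
            if fit < mem_fit[i]:
                memory[i], mem_fit[i] = new[:], fit
    best = min(range(n_crows), key=lambda i: mem_fit[i])
    return memory[best], mem_fit[best]

# toy stand-in for the MSE: minimum at (3, -1)
obj = lambda x: (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2
x_best, f_best = csa_minimize(obj, [(-10, 10), (-10, 10)])
print(x_best, f_best)
```

In the transformer application, `objective` would evaluate the equivalent-circuit equations for a candidate set of series and parallel impedances and return the MSE against the measured voltages and currents, with penalty terms for infeasible candidates.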
Cancer is one of the leading causes of death around the world. Skin cancer is a condition in which malignant cells form in the tissues of the skin; melanoma is known as the most aggressive and deadly skin cancer type. The mortality rates of melanoma are associated with its high potential for metastasis in later stages, spreading to other body sites such as the lungs, bones, or brain. Thus, early detection and diagnosis are closely related to survival rates. Computer-aided diagnosis (CAD) systems carry out a pre-diagnosis of a skin lesion based on clinical criteria or on global patterns associated with its structure. A CAD system is essentially composed of three modules: (i) lesion segmentation, (ii) feature extraction, and (iii) classification. In this work, a methodology is proposed for developing a CAD system that detects global patterns using texture descriptors based on statistical measurements, allowing melanoma detection from dermoscopic images. Image analysis was carried out using spatial-domain methods, statistical measurements were used for feature extraction, and a classifier based on cellular automata (ACA) was used for classification. The proposed model was applied to dermoscopic images obtained from the PH2 database and compared with other models using accuracy, sensitivity, and specificity as metrics. With the proposed model, accuracy, sensitivity, and specificity values of 0.978, 0.944, and 0.987, respectively, were obtained. The results of the evaluated metrics show that the proposed method is more effective than other state-of-the-art methods for melanoma detection in dermoscopic images.
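The feature-extraction stage described above can be illustrated with first-order statistical texture measures computed from a grayscale lesion region. This is a generic sketch of such descriptors, not the paper's exact feature set.

```python
import numpy as np

def texture_descriptors(gray):
    """First-order statistical texture measures from a grayscale region:
    mean, standard deviation, skewness, and histogram entropy."""
    g = gray.astype(float).ravel()
    mean = g.mean()
    std = g.std()
    skew = ((g - mean) ** 3).mean() / (std ** 3) if std > 0 else 0.0
    hist, _ = np.histogram(g, bins=256, range=(0, 256), density=True)
    p = hist[hist > 0]
    entropy = -np.sum(p * np.log2(p))
    return {"mean": mean, "std": std, "skewness": skew, "entropy": entropy}

# toy 8x8 patch: half dark, half bright, mimicking a two-tone lesion
patch = np.block([[np.full((8, 4), 40), np.full((8, 4), 200)]])
print(texture_descriptors(patch))
```

Descriptors like these form the feature vector that the classification module (here, the cellular-automata-based classifier) consumes.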
Manufacturing industries based on Internet of Things (IoT) technologies play an important role in the economic development of intelligent agriculture and watering. Water availability has become a global problem that afflicts many countries, especially in remote and desert areas. An efficient irrigation system is needed to optimize the amount of water consumed, monitor agriculture, and reduce energy costs. This paper proposes a real-time monitoring and auto-watering system based on predictive mathematical models that efficiently control the required water rate. It supplies the plant with the optimal amount of water, which helps to save water. It also ensures interoperability among heterogeneous sensing data streams to support large-scale agricultural analytics. The mathematical model is embedded via the Arduino Integrated Development Environment (IDE); the system senses the soil moisture level and, if it is less than the pre-defined threshold value, performs plant watering automatically. The proposed system enhances the watering system's efficiency by reducing water consumption by more than 70% and increasing production due to irrigation optimization. It also reduces water and energy consumption and decreases maintenance costs.
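The core sense-and-compare rule described above (water only when soil moisture drops below a pre-defined threshold) can be sketched in a few lines. The threshold value and readings are hypothetical; in the paper this logic lives in Arduino firmware rather than Python.

```python
THRESHOLD = 30.0  # hypothetical soil-moisture percentage below which
                  # the pump is switched on

def watering_decision(moisture, threshold=THRESHOLD):
    """Mirror of the firmware's core rule: water only when the sensed
    moisture level falls below the pre-defined threshold."""
    return moisture < threshold

# simulated sensor readings over successive sampling intervals
readings = [55.0, 41.2, 28.9, 33.5, 25.1]
actions = [watering_decision(m) for m in readings]
print(actions)  # pump activates only for the readings below the threshold
```

The predictive model in the paper refines this rule by estimating the water rate actually needed, rather than applying a fixed dose whenever the threshold is crossed.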
Pressure ulcers are a critical issue not only for patients, decreasing their quality of life, but also for healthcare professionals, contributing to burnout from continuous monitoring, with a consequent increase in healthcare costs. Due to the relevance of this problem, many hardware and software approaches have been proposed to ameliorate some aspects of pressure ulcer prevention and monitoring. In this article, we focus on reviewing solutions that use sensor-based data, possibly in combination with other intrinsic or extrinsic information, processed by some form of intelligent algorithm, to provide healthcare professionals with knowledge that improves the decision-making process when dealing with a patient at risk of developing pressure ulcers. We used a systematic approach to select 21 studies that were thoroughly reviewed and summarized, considering which sensors and algorithms were used, the most relevant data features, the recommendations provided, and the results obtained after deployment. This review allowed us not only to describe the state of the art regarding the previous items, but also to identify the three main stages where intelligent algorithms can bring meaningful improvement to pressure ulcer prevention and mitigation. Finally, as a result of this review and following discussion, we drew guidelines for a general architecture of an intelligent pressure ulcer prevention system.
This work explores the suitability of data treatment methodologies for Raman spectra of teeth using multivariate analysis methods. Raman spectra were measured in our laboratory and obtained from control enamel samples and from samples given a protective treatment, before and after an erosive attack. Three different data treatment approaches were undertaken to evaluate their aptitude for distinguishing between groups: A—principal component analysis (PCA) of the numerical parameters derived from deconvoluted spectra; B—PCA of average Raman spectra after baseline correction; and C—PCA of average raw Raman spectra. Additionally, hierarchical cluster analysis was applied to Raman spectra of enamel measured with different laser wavelengths (638 nm or 785 nm) to evaluate the most suitable choice of illumination. According to the different approaches, the PC1 scores obtained between the control and treatment groups were A—50.5%, B—97.1%, and C—83.0% before the erosive attack and A—55.2%, B—93.2%, and C—87.8% after the erosive attack. The results showed that performing PCA on raw or baseline-corrected Raman spectra of enamel was not as efficient for evaluating samples with different treatments. Moreover, acquiring Raman spectra with a 785 nm laser increases the precision of the data treatment methodologies.
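The PCA step common to approaches A–C can be sketched as SVD of the mean-centered spectral matrix. The synthetic "spectra" below are a toy stand-in (two groups differing in the intensity of one broad band), not the measured enamel data.

```python
import numpy as np

def pca_scores(spectra, n_components=2):
    """PCA of a (samples x wavenumber) matrix via SVD of the
    mean-centered data; returns scores and explained-variance ratios."""
    X = spectra - spectra.mean(axis=0)
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    scores = U[:, :n_components] * S[:n_components]
    explained = (S ** 2) / np.sum(S ** 2)
    return scores, explained[:n_components]

# toy spectra: two groups offset along one Gaussian-shaped band
rng = np.random.default_rng(0)
base = np.exp(-((np.arange(100) - 50) ** 2) / 200.0)
group_a = base + 0.01 * rng.standard_normal((5, 100))
group_b = 1.5 * base + 0.01 * rng.standard_normal((5, 100))
scores, explained = pca_scores(np.vstack([group_a, group_b]))
print(f"PC1 explains {explained[0]:.1%} of the variance")
```

When the group difference dominates the noise, as here, PC1 captures nearly all of the variance and its scores separate the two groups, which is the basis for comparing the percentages reported for approaches A, B, and C.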
Augmented reality (AR) has been widely used in education, particularly for child education. This paper presents the design and implementation of a novel mobile app, Learn2Write, that uses machine learning techniques and augmented reality to teach alphabet writing. The app has two main features: (i) guided learning to teach users how to write the alphabet and (ii) on-screen and AR-based handwriting testing using machine learning. In on-screen testing, a learner writes on the mobile screen, whereas AR-based testing allows writing on paper or a board to be evaluated in a real-world environment. We implement a novel approach that uses machine learning in AR-based testing to detect a letter of the alphabet written on a board or paper. The handwritten letter is detected using our machine learning model; a 3D model of that letter then appears on the screen together with its pronunciation. The key benefit of our approach is that it allows the learner to use handwritten letters. As we use marker-less augmented reality, the app does not require a static image as a marker. The app was built with the ARCore SDK for Unity. We further evaluated and quantified the performance of our app on multiple devices.
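The detection-to-feedback flow described above (classify the handwritten letter, then surface the matching 3D model and pronunciation) can be sketched as glue code. Everything here is hypothetical: the asset paths, the asset table, and the stubbed classifier stand in for the app's actual Unity/ARCore assets and on-device model.

```python
# Hypothetical asset table mapping a recognized letter to the 3D model
# and pronunciation clip shown in the AR scene.
ASSETS = {
    "A": {"model": "models/A.glb", "sound": "audio/a.wav"},
    "B": {"model": "models/B.glb", "sound": "audio/b.wav"},
}

def classify_letter(image):
    """Stand-in for the on-device ML model's prediction."""
    return "A"

def ar_response(image):
    """Map a captured image of handwriting to the AR feedback payload."""
    label = classify_letter(image)
    asset = ASSETS.get(label)
    if asset is None:
        return None  # letter not recognized; prompt the learner to retry
    return {"letter": label, **asset}

print(ar_response(None))
```

Because the approach is marker-less, the capture step hands the classifier a crop of whatever surface the learner wrote on, rather than matching against a pre-registered marker image.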