A Practical Solution for Non-Intrusive Type II Load Monitoring Based on Deep Learning and Post-Processing

2020 ◽  
Vol 11 (1) ◽  
pp. 148-160 ◽  
Author(s):  
Weicong Kong ◽  
Zhao Yang Dong ◽  
Bo Wang ◽  
Junhua Zhao ◽  
Jie Huang
2021 ◽  
pp. 1-1
Author(s):  
Minh H. Phan ◽  
Queen Nguyen ◽  
Son L. Phung ◽  
Wei Emma Zhang ◽  
Trung D. Vo ◽  
...  

Author(s):  
Halil Cimen ◽  
Emilio Jose Palacios-Garcia ◽  
Morten Kolbaek ◽  
Nurettin Cetinkaya ◽  
Juan C. Vasquez ◽  
...  

2021 ◽  
Author(s):  
Zubair Azim Miazi ◽  
Shahriar Jahan ◽  
Md. A. K. Niloy ◽  
Roknuzzaman ◽  
Anika Shama ◽  
...  

Author(s):  
Tobias M. Rasse ◽  
Réka Hollandi ◽  
Péter Horváth

Abstract Various pre-trained deep learning models for the segmentation of bioimages have been made available as ‘developer-to-end-user’ solutions. They usually require neither knowledge of machine learning nor coding skills, are optimized for ease of use, and are deployable on laptops. However, testing these tools individually is tedious and success is uncertain. Here, we present the ‘Op’en ‘Se’gmentation ‘F’ramework (OpSeF), a Python framework for deep learning-based instance segmentation. OpSeF aims at facilitating the collaboration of biomedical users with experienced image analysts. It builds on the analysts’ knowledge of Python, machine learning, and workflow design to solve complex analysis tasks at any scale in a reproducible, well-documented way. OpSeF defines standard inputs and outputs, thereby facilitating modular workflow design and interoperability with other software. Users play an important role in problem definition, quality control, and manual refinement of results. All analyst tasks are optimized for deployment on Linux workstations or GPU clusters; all user tasks may be performed on any laptop in ImageJ. OpSeF semi-automates preprocessing, convolutional neural network (CNN)-based segmentation in 2D or 3D, and post-processing. It facilitates benchmarking of multiple models in parallel. OpSeF streamlines the optimization of pre- and post-processing parameters such that an available model may frequently be used without retraining. Even if sufficiently good results are not achievable with this approach, intermediate results can inform the analysts in selecting the most promising CNN architecture in which the biomedical user might invest the effort of manually labeling training data. We provide Jupyter notebooks that document sample workflows based on various image collections. Analysts may find these notebooks useful to illustrate common segmentation challenges, as they prepare the advanced user for gradually taking over some of their tasks and completing their projects independently. The notebooks may also be used to explore the analysis options available within OpSeF in an interactive way and to document and share final workflows. Currently, three mechanistically distinct CNN-based segmentation methods, the U-Net implementation used in CellProfiler 3.0, StarDist, and Cellpose, have been integrated within OpSeF. The addition of new networks requires little coding, and the addition of new models requires none. Thus, OpSeF might soon become an interactive model repository in which pre-trained models can be shared, evaluated, and reused with ease.
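
The abstract above describes a modular pipeline: preprocessing, CNN-based instance segmentation, post-processing, and benchmarking of several models in parallel. The following Python sketch illustrates that general structure only; the names preprocess, postprocess, MODEL_ZOO, and benchmark are hypothetical stand-ins and do not reflect OpSeF's actual API, which is defined in its Jupyter notebooks.

# Illustrative sketch of a modular "preprocess -> segment -> post-process -> compare"
# workflow; model entries are placeholders, not real OpSeF calls.
from pathlib import Path
import numpy as np
from skimage import io, filters, morphology

def preprocess(img, sigma=1.0):
    """Standardize the input: smooth and rescale intensities to [0, 1]."""
    img = filters.gaussian(img, sigma=sigma)
    return (img - img.min()) / (img.max() - img.min() + 1e-8)

def postprocess(labels, min_size=20):
    """Remove spurious small objects from an instance label map."""
    return morphology.remove_small_objects(labels, min_size=min_size)

# Hypothetical registry of pre-trained models (e.g. StarDist, Cellpose, U-Net);
# each entry maps a name to a callable returning an instance label map.
MODEL_ZOO = {
    "stardist_2d": lambda img: np.zeros(img.shape[:2], dtype=np.int32),   # placeholder
    "cellpose_cyto": lambda img: np.zeros(img.shape[:2], dtype=np.int32), # placeholder
}

def benchmark(image_path, out_dir="results"):
    """Run every registered model on one image and save label maps side by side."""
    img = preprocess(io.imread(image_path))
    Path(out_dir).mkdir(exist_ok=True)
    for name, model in MODEL_ZOO.items():
        labels = postprocess(model(img))
        io.imsave(Path(out_dir) / f"{Path(image_path).stem}_{name}.tif", labels)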


Author(s):  
Elena Morotti ◽  
Davide Evangelista ◽  
Elena Loli Piccolomini

Deep learning is providing powerful tools for inverse imaging applications. In this work, we consider a medical image reconstruction task from subsampled measurements, an active research field in which convolutional neural networks have already revealed their great potential. However, the commonly used architectures are very deep and hence prone to overfitting and infeasible for clinical use. Inspired by ideas from the green-AI literature, we here propose a shallow neural network that performs efficient learned post-processing of images roughly reconstructed by the filtered backprojection algorithm. Results obtained on images from the training set and on unseen images, using both the inexpensive proposed network and the widely used, very deep ResUNet, show that the proposed network computes images of comparable or higher quality in about one quarter of the time.
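
As an illustration of the learned post-processing idea described above, the following PyTorch sketch shows a shallow residual CNN that refines a rough filtered-backprojection (FBP) reconstruction. It is not the authors' architecture; the layer widths and depth are arbitrary assumptions made for the example.

# A minimal sketch, assuming PyTorch: a shallow CNN that predicts a correction
# to an FBP reconstruction (residual learning), rather than the image itself.
import torch
import torch.nn as nn

class ShallowPostProcessor(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, kernel_size=3, padding=1),
        )

    def forward(self, fbp_image):
        # The network only estimates the artifact/noise component,
        # which is subtracted from the rough FBP reconstruction.
        return fbp_image - self.body(fbp_image)

# Usage: refine a batch of 256x256 FBP reconstructions (random stand-in data).
model = ShallowPostProcessor()
fbp_batch = torch.randn(4, 1, 256, 256)
refined = model(fbp_batch)  # same shape, post-processed images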


2017 ◽  
Vol 2017 ◽  
pp. 1-22 ◽  
Author(s):  
Jihyun Kim ◽  
Thi-Thu-Huong Le ◽  
Howon Kim

Monitoring electricity consumption in the home is an important way to help reduce energy usage. Non-intrusive Load Monitoring (NILM) is an existing technique that helps monitor electricity consumption effectively and at low cost. NILM is a promising approach to obtaining estimates of the electrical power consumption of individual appliances from aggregate measurements of voltage and/or current in the distribution system. Among previous studies, Hidden Markov Model (HMM) based approaches have been studied extensively. However, the growing number of appliances, multistate appliances, and appliances with similar power consumption remain three major issues in NILM. In this paper, we address these problems with the following contributions. First, we propose a state-of-the-art energy disaggregation method based on a Long Short-Term Memory Recurrent Neural Network (LSTM-RNN) model and additional advanced deep learning. Second, we propose a novel signature to improve the classification performance of the proposed model in the multistate-appliance case. We apply the proposed model to two datasets, UK-DALE and REDD. Our experimental results confirm that our model outperforms the advanced model. Thus, we show that the combination of advanced deep learning and the novel signature can be a robust solution to overcome NILM's issues and improve the performance of load identification.
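
To make the LSTM-RNN disaggregation idea concrete, here is a minimal PyTorch sketch of a sequence-to-sequence disaggregator that maps a window of aggregate mains readings to per-step power estimates for one target appliance. It is only an illustration, not the authors' model; the window length, hidden size, and bidirectional choice are assumptions.

# Illustrative sketch only (PyTorch), not the paper's exact architecture.
import torch
import torch.nn as nn

class LSTMDisaggregator(nn.Module):
    def __init__(self, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size,
                            num_layers=2, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden_size, 1)

    def forward(self, mains):                 # mains: (batch, window, 1)
        features, _ = self.lstm(mains)        # (batch, window, 2*hidden)
        return self.head(features)            # per-step appliance power estimate

# Usage: normalized windows of aggregate power (window length chosen arbitrarily).
model = LSTMDisaggregator()
aggregate = torch.randn(8, 599, 1)
appliance_power = model(aggregate)            # (8, 599, 1)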


2019 ◽  
Vol 18 (1) ◽  
Author(s):  
Meng Dai ◽  
Shuying Li ◽  
Yuanyuan Wang ◽  
Qi Zhang ◽  
Jinhua Yu

Abstract Background Improving imaging quality is a fundamental problem in ultrasound contrast agent imaging (UCAI) research. Plane wave imaging (PWI) has been regarded as a potential method for UCAI due to its high frame rate and low mechanical index. A high frame rate can improve the temporal resolution of UCAI. Meanwhile, a low mechanical index is essential to UCAI, since microbubbles are easily destroyed under high-mechanical-index conditions. However, the clinical practice of ultrasound contrast agent plane wave imaging (UCPWI) is still limited by poor imaging quality due to the lack of transmit focus. The purpose of this study was to propose and validate a new post-processing method, combined with deep learning, to improve the imaging quality of UCPWI. The proposed method consists of three stages: (1) first, a deep learning approach based on U-net was trained to differentiate the microbubble and tissue radio frequency (RF) signals; (2) then, to eliminate the remaining tissue RF signals, the bubble approximated wavelet transform (BAWT) combined with a maximum-eigenvalue threshold was employed; BAWT enhances the brightness of the UCA area, and the eigenvalue threshold can be set to eliminate interference areas, owing to the large difference in maximum eigenvalue between UCA and tissue areas; (3) finally, accurate microbubble images were obtained through eigenspace-based minimum variance (ESBMV) beamforming. Results The proposed method was validated by both phantom and in vivo rabbit experiments. Compared with UCPWI based on delay and sum (DAS), the imaging contrast-to-tissue ratio (CTR) and contrast-to-noise ratio (CNR) were improved by 21.3 dB and 10.4 dB in the phantom experiment, and the corresponding improvements were 22.3 dB and 42.8 dB in the rabbit experiment. Conclusions Our method demonstrates superior imaging performance and high reproducibility, and is thus promising for improving the contrast image quality and the clinical value of UCPWI.
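
The eigenvalue-threshold idea in stage (2) can be pictured with the following NumPy sketch, in which regions whose channel-data covariance has a small maximum eigenvalue are treated as residual tissue and suppressed. The array shapes, axial window, and relative threshold below are illustrative assumptions, not the paper's parameters.

# A minimal sketch, assuming beamformed channel RF data of shape
# (n_channels, n_depth, n_lateral); not the authors' implementation.
import numpy as np

def max_eigenvalue_map(channel_rf, window=8):
    """Per-region maximum eigenvalue of the channel sample covariance."""
    n_ch, n_depth, n_lat = channel_rf.shape
    eig_map = np.zeros((n_depth, n_lat))
    for z in range(0, n_depth - window, window):
        for x in range(n_lat):
            snap = channel_rf[:, z:z + window, x]        # (n_ch, window)
            cov = snap @ snap.conj().T / window          # sample covariance
            eig_map[z:z + window, x] = np.linalg.eigvalsh(cov).max()
    return eig_map

def suppress_tissue(image, channel_rf, rel_threshold=0.1):
    """Zero out pixels whose maximum eigenvalue falls below a relative threshold."""
    eig_map = max_eigenvalue_map(channel_rf)
    mask = eig_map >= rel_threshold * eig_map.max()
    return image * mask   # image: (n_depth, n_lateral) envelope/B-mode data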

