Greedy sensor selection based on QR factorization

Author(s):  
Yoon Hak Kim

Abstract: We address the problem of selecting a given number of sensor nodes in wireless sensor networks, where noise-corrupted linear measurements are collected at the selected nodes to estimate an unknown parameter. Noting that this problem is combinatorial in nature and that selecting sensor nodes from a large number of candidates would incur an infeasible computational cost, we propose a greedy sensor selection method that chooses one node at each iteration until the desired number of sensor nodes is selected. We first apply QR factorization to reduce the mean squared error (MSE) of estimation to a simplified metric that is iteratively minimized. We present a simple criterion that enables selection, at each iteration, of the next sensor node minimizing the MSE. We show that near-optimality of the proposed method is guaranteed via approximate supermodularity, and we analyze the complexity of the proposed algorithm in comparison with other greedy selection methods, showing that its complexity is reasonable. Finally, we run extensive experiments to investigate the estimation performance of the different selection methods in various situations and demonstrate that the proposed algorithm provides good estimation accuracy at competitive complexity compared with other recent greedy methods.
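A minimal sketch of the greedy idea, assuming a standard linear measurement model with i.i.d. noise: at each iteration the node whose measurement row most reduces trace((H_S^T H_S)^{-1}), i.e. the MSE of the least-squares estimate, is added to the selected set. This direct implementation only illustrates the objective being minimized; it does not reproduce the paper's QR-based simplified criterion.

```python
# Greedy MSE-based sensor selection sketch (generic, not the paper's QR-update
# criterion). A small ridge term keeps the Gram matrix invertible before |S| >= p.
import numpy as np

def greedy_sensor_selection(H, k, ridge=1e-6):
    """H: (m, p) rows = candidate sensor measurement vectors; k: nodes to pick."""
    m, p = H.shape
    selected, remaining = [], list(range(m))
    for _ in range(k):
        best_node, best_mse = None, np.inf
        for i in remaining:
            rows = H[selected + [i]]
            gram = rows.T @ rows + ridge * np.eye(p)
            mse = np.trace(np.linalg.inv(gram))   # MSE surrogate for this candidate set
            if mse < best_mse:
                best_node, best_mse = i, mse
        selected.append(best_node)
        remaining.remove(best_node)
    return selected

rng = np.random.default_rng(0)
H = rng.standard_normal((50, 5))      # 50 candidate nodes, 5 unknown parameters
print(greedy_sensor_selection(H, 8))  # indices of the 8 chosen nodes
```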

Energies ◽  
2021 ◽  
Vol 14 (3) ◽  
pp. 696
Author(s):  
Eun Ji Choi ◽  
Jin Woo Moon ◽  
Ji-hoon Han ◽  
Yongseok Yoo

The type of occupant activity is an important factor in determining indoor thermal comfort; thus, an accurate method to estimate occupant activity needs to be developed. The purpose of this study was to develop a deep neural network (DNN) model for estimating the joint locations associated with diverse human activities, which will be used to provide a comfortable thermal environment. The DNN model was trained with images to estimate 14 joints of a person performing 10 common indoor activities. The DNN contained numerous shortcut connections for efficient training and had two stages of sequential and parallel layers for accurate joint localization. Estimation accuracy was quantified using the mean squared error (MSE) for the estimated joints and the percentage of correct parts (PCP) for the body parts. The results show that the joint MSEs for the head and neck were lowest, and the PCP was highest for the torso. The PCP for individual activities ranged from 0.71 to 0.92, with typing and standing in a relaxed manner being the activities with the highest PCP. Estimation accuracy was higher for relatively still activities and lower for activities involving wide-ranging arm or leg motion. This study thus highlights the potential for accurate estimation of occupant indoor activities through the proposed DNN model. The approach holds significant promise for identifying the actual type of occupant activity and for use in indoor applications related to thermal comfort in buildings.
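For reference, a minimal sketch of the two accuracy metrics named above, under commonly used definitions (assumed, not taken from the paper): per-joint MSE as the mean squared pixel error, and PCP where a body part counts as correct if both of its endpoint joints lie within half the ground-truth part length of the true positions.

```python
import numpy as np

def joint_mse(pred, gt):
    """pred, gt: (N, J, 2) arrays of predicted / ground-truth joint coordinates."""
    return np.mean(np.sum((pred - gt) ** 2, axis=-1), axis=0)   # (J,) MSE per joint

def pcp(pred, gt, parts, alpha=0.5):
    """parts: list of (joint_a, joint_b) index pairs defining body parts."""
    scores = []
    for a, b in parts:
        limb_len = np.linalg.norm(gt[:, a] - gt[:, b], axis=-1)      # (N,) part lengths
        err_a = np.linalg.norm(pred[:, a] - gt[:, a], axis=-1)
        err_b = np.linalg.norm(pred[:, b] - gt[:, b], axis=-1)
        correct = (err_a <= alpha * limb_len) & (err_b <= alpha * limb_len)
        scores.append(correct.mean())
    return np.array(scores)    # PCP per body part
```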


2018 ◽  
Vol 10 (8) ◽  
pp. 1285 ◽  
Author(s):  
Reza Attarzadeh ◽  
Jalal Amini ◽  
Claudia Notarnicola ◽  
Felix Greifeneder

This paper presents an approach for retrieval of soil moisture content (SMC) by coupling single-polarization C-band synthetic aperture radar (SAR) and optical data at the plot scale in vegetated areas. The study was carried out at five different sites with dominant vegetation cover located in Kenya. In the initial stage of the process, different features are extracted from single-polarization (VV) SAR and optical data. Subsequently, a selection of the relevant features is conducted on the extracted features. An advanced state-of-the-art machine learning regression approach, the support vector regression (SVR) technique, is used to retrieve soil moisture. This paper takes a new look at soil moisture retrieval in vegetated areas considering the needs of practical applications. In this context, we work at the object level instead of the pixel level, so that a group of pixels (an image object) represents the reality of the land cover at the plot scale. Three approaches, a pixel-based approach, an object-based approach, and a combination of pixel- and object-based approaches, were used to estimate soil moisture. The results show that the combined approach outperforms the other approaches in terms of estimation accuracy (root mean square error (RMSE) of 4.94% and R2 of 0.89, compared with 6.41% and 0.62), flexibility in retrieving the level of soil moisture, and quality of the visual representation of the SMC map.
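A minimal sketch of the combined pixel- and object-level retrieval idea, with synthetic features standing in for the SAR and optical variables (all feature names here are hypothetical): pixel-level and plot-level features are concatenated and fed to support vector regression, and RMSE and R2 are reported as in the abstract.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(1)
n = 400
pixel_feats = rng.standard_normal((n, 4))    # e.g. VV backscatter, texture measures (synthetic)
object_feats = rng.standard_normal((n, 3))   # e.g. plot-level NDVI statistics (synthetic)
X = np.hstack([pixel_feats, object_feats])   # combined pixel + object approach
y = 10 + 5 * X[:, 0] - 3 * X[:, 4] + rng.normal(0, 1, n)   # synthetic SMC (%)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
print("RMSE:", mean_squared_error(y_te, pred) ** 0.5, "R2:", r2_score(y_te, pred))
```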


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Ali Jaber Naeemah ◽  
Kuan Yew Wong

Purpose: The purpose of this paper is (1) to review, analyze and assess the existing literature on lean tool selection studies published from 2005 to 2021; (2) to identify the limitations faced by previous studies; and (3) to suggest future work needed to facilitate the selection of lean tools.

Design/methodology/approach: A systematic approach was used to identify, collect and select the articles. Several keywords related to the selection of lean tools were used to collect articles from different Scopus-indexed journals. The study then systematically reviewed and analyzed the selected papers to identify the lean tool selection methods used and to discuss their features and limitations.

Findings: An analysis of the results showed that previous studies have adopted two types of methods for selecting lean tools. First, various traditional methods have been used. Second, multi-criteria decision-making (MCDM) methods were commonly applied, including multi-objective decision-making (MODM) methods, single multi-attribute decision-making (MADM) methods and hybrid MCDM methods. Moreover, the study revealed that the lean tool selection methods in previous studies were based on evaluating the relationship between lean tools and performance metrics, between lean tools and waste, or both.

Research limitations/implications: In terms of its theoretical value, the study extends previous research on this topic by determining and analyzing the features of most lean tool selection methods. Unlike previous review papers, this review discusses and analyzes the characteristics and limitations of these methods. Section 2.2 of this paper reviews some categories of MCDM methods as well as some traditional methods used in the selected previous studies, and Section 2.1 explains the concept of lean management and the benefits of its application. Further, only three sectors were covered by the previous studies in this review. This study also provides recommendations for future research, giving researchers a clear conception of how to conduct studies on lean tool selection. Knowing the methods used in previous studies can also help researchers develop new methods to select the best set of lean tools. That is, this study provides and advances the existing knowledge base for researchers concerning lean tool selection, especially given the limited availability of review papers on this topic. Moreover, the study shows researchers the importance of the relationship between lean tools and waste and/or performance indicators in determining the appropriate set of lean tools, so that the results of future studies will be more realistic and acceptable.

Practical implications: Practically, manufacturers face a significant challenge when selecting proper lean tools. This study may enhance the knowledge of managers, manufacturers and companies by identifying most of the methods used to choose the best set of lean tools, the advantages, disadvantages and limitations of these methods, and the latest studies on this topic. It can direct companies to prioritize the application of lean tools depending on manufacturing performance metrics and/or manufacturing wastes, so that they avoid incorrect application of lean tools, which would add more non-value-added activities to operations. Companies can therefore decrease time and cost losses and enhance the quality and efficiency of their performance. Correctly implementing the best set of lean tools will generally lead to correctly applying lean management in corporations. These lean tools can thus boost the economic aspect of companies and society by reducing waste, improving performance indicators, saving time and cost, achieving quality, efficiency and competitiveness, boosting employee income and improving the gross domestic product. Correct lean tool selection reduces customer complaints and employee stress and improves work conditions, health, safety and labor wellbeing. It also improves the use of materials, energy and water and decreases liquid waste, solid waste and air emissions. As a result, the right selection of lean tools will have positive effects on both the environment and society. The study may also encourage manufacturers and researchers to conduct lean tool selection studies in small- and medium-sized companies, given the importance of these companies and their large share of the economy in developing countries. Further, the study may encourage countries that have not previously adopted this type of study, academically and industrially, to conduct lean tool selection studies.

Social implications: As mentioned above, correct lean tool selection reduces customer complaints and employee stress and improves work conditions, health, safety and labor wellbeing. Proper lean tool selection improves the use of materials, energy and water and decreases liquid waste, solid waste and air emissions. As a result, the right choice of lean tools will positively affect both the environment and society.

Originality/value: The study expands the efforts of previous studies concerning the features of lean management. It provides an accurate review of most lean tool selection studies published from 2005 to 2021 and is not limited to the manufacturing sector. It further identifies and briefly describes the lean tool selection methods adopted in each paper.


Author(s):  
Zhihui Yang ◽  
Xiangyu Tang ◽  
Lijuan Zhang ◽  
Zhiling Yang

Human pose estimation can be used in action recognition, video surveillance and other fields, and has therefore received much attention. Since the flexibility of human joints and environmental factors greatly influence pose estimation accuracy, related research faces many challenges. In this paper, we incorporate pyramid convolution and an attention mechanism into the residual block, and introduce a hybrid structure model that jointly exploits the local and global information of the image for keypoint detection. In addition, our improved model adopts grouped convolution and a lightweight attention module, which reduces the computational cost of the network. Experiments on the MS COCO human body keypoint detection dataset show that, compared with the Simple Baseline model, our model is similar in parameters and GFLOPs (giga floating-point operations) but achieves better detection accuracy in multi-person scenes.
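A minimal PyTorch sketch of the kind of building block described above (an assumed illustration, not the authors' exact architecture): a residual block using grouped convolutions together with a lightweight squeeze-and-excitation-style channel attention module.

```python
import torch
import torch.nn as nn

class AttnResBlock(nn.Module):
    def __init__(self, channels, groups=4, reduction=8):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, groups=groups, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, groups=groups, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        # lightweight channel attention: global pooling + two 1x1 convolutions
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        out = out * self.attn(out)        # re-weight channels
        return self.relu(out + x)         # residual connection

x = torch.randn(1, 64, 64, 48)            # (batch, channels, H, W)
print(AttnResBlock(64)(x).shape)          # torch.Size([1, 64, 64, 48])
```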


2018 ◽  
Vol 02 (03) ◽  
pp. 169-183
Author(s):  
Sharath Kumar G G ◽  
Chinmay Nagesh

Abstract: Appropriate patient selection and expedient recanalization are the mainstay of modern management of acute ischemic stroke (AIS). Only a minority of patients (7–15%) are eligible for endovascular therapy. Patient selection may be time based or perfusion based. Central to both paradigms is the selection of a patient with a small core and a significant penumbra that can be differentiated from areas of oligemia. A brief review of patient selection methods is presented. Endovascular thrombectomy techniques using stentrievers or aspiration catheters have now become the treatment of choice for AIS with large vessel occlusion. A range of devices, each with its own advantages and disadvantages, is available on the market for the neurointerventionist to choose from. Techniques vary between devices and between operators, but standardization and protocolization are important within each center. Complications must be anticipated in order to be avoided. Once reperfusion is achieved, outcomes must be safeguarded with competent postprocedure management to prevent secondary brain injury. These aspects are reviewed in this article.


Sensors ◽  
2018 ◽  
Vol 18 (7) ◽  
pp. 2328 ◽  
Author(s):  
Juan Feng ◽  
Xiaozhu Shi

In target-tracking wireless sensor networks, choosing a subset of sensor nodes to execute tracking tasks while letting the other nodes sleep to save energy is an efficient node management strategy. However, more and more sensor nodes now carry several different types of sensed modules, whereas existing research on node selection is mainly focused on sensor nodes with a single sensed module. Few works have addressed the management and selection of sensed modules for sensor nodes that carry several multi-mode sensed modules. This work proposes an efficient node and sensed module management strategy, called ENSMM, for multisensory WSNs (wireless sensor networks). ENSMM considers not only node selection but also the selection of the sensed modules for each node, and the power management of sensor nodes is then performed according to the selection results. Moreover, a joint weighted information utility measurement is proposed to estimate the information utility of the multiple sensed modules in the different nodes. Extensive and realistic experiments show that ENSMM outperforms state-of-the-art approaches by decreasing energy consumption and prolonging network lifetime. Meanwhile, it reduces computational complexity while guaranteeing tracking accuracy.
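A minimal, generic sketch of joint node and sensed-module selection (the utility scores, weights and energy costs here are illustrative placeholders, not the ENSMM measurement defined in the paper): each (node, module) pair receives a weighted information-utility score penalized by its energy cost, and the highest-scoring pairs are activated while the remaining modules sleep.

```python
import numpy as np

def select_nodes_and_modules(utility, energy, weights, budget):
    """utility, energy: (n_nodes, n_modules); weights: (n_modules,); budget: pairs to activate."""
    score = utility * weights - energy            # weighted utility net of energy cost
    order = np.argsort(score, axis=None)[::-1]    # best (node, module) pairs first
    chosen = np.unravel_index(order[:budget], score.shape)
    return list(zip(chosen[0].tolist(), chosen[1].tolist()))

rng = np.random.default_rng(2)
utility = rng.random((20, 3))        # e.g. 20 nodes, 3 sensed modules each (synthetic)
energy = 0.2 * rng.random((20, 3))   # per-module activation cost (synthetic)
weights = np.array([1.0, 0.8, 0.5])  # relative importance of each module type
print(select_nodes_and_modules(utility, energy, weights, budget=5))
```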


2022 ◽  
Author(s):  
Chen Wei ◽  
Kui Xu ◽  
Zhexian Shen ◽  
Xiaochen Xia ◽  
Wei Xie ◽  
...  

Abstract: In this paper, we investigate uplink transmission for user-centric cell-free massive multiple-input multiple-output (MIMO) systems. The largest-large-scale-fading-based access point (AP) selection method is adopted to achieve user-centric operation. Under this user-centric framework, we propose a novel inter-cluster interference-based (IC-IB) pilot assignment scheme to alleviate pilot contamination. Considering the local characteristics of channel estimates and statistics, we propose a location-aided distributed uplink combining scheme based on a newly proposed metric representing inter-user interference, in order to balance spectral efficiency (SE), user equipment (UE) fairness and complexity: normalized local partial minimum mean-squared error (LP-MMSE) combining is adopted for some APs, while normalized maximum ratio (MR) combining is adopted for the remaining APs. A new closed-form SE expression using normalized MR combining is derived, and a novel metric to indicate UE fairness is also proposed. Moreover, the max-min fairness (MMF) power control algorithm is utilized to further ensure uniformly good service to the UEs. Simulation results demonstrate that the channel estimation accuracy of our proposed IC-IB pilot assignment scheme outperforms that of conventional pilot assignment schemes. Furthermore, although the proposed location-aided uplink combining scheme is not always the best in terms of per-UE SE, it provides better fairness among UEs and achieves a good trade-off between average SE and computational complexity.
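A minimal sketch of the largest-large-scale-fading-based AP selection used to form the user-centric clusters, with synthetic coefficients: each UE is served by the N APs with the largest large-scale fading coefficients toward it.

```python
import numpy as np

def select_aps(beta, n_serving):
    """beta: (n_aps, n_ues) large-scale fading coefficients; returns per-UE AP clusters."""
    clusters = {}
    for ue in range(beta.shape[1]):
        order = np.argsort(beta[:, ue])[::-1]      # strongest APs first
        clusters[ue] = order[:n_serving].tolist()
    return clusters

rng = np.random.default_rng(3)
beta = rng.random((40, 8))         # 40 APs, 8 UEs (synthetic coefficients)
print(select_aps(beta, n_serving=5))
```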


2020 ◽  
Vol 11 ◽  
Author(s):  
Shuhei Kimura ◽  
Ryo Fukutomi ◽  
Masato Tokuhisa ◽  
Mariko Okada

Several researchers have focused on random-forest-based inference methods because of their excellent performance. Some of these inference methods also have the useful ability to analyze both time-series and static gene expression data. However, they can only rank all of the candidate regulations by assigning them confidence values; none has been capable of detecting the regulations that actually affect a gene of interest. In this study, we propose a method to remove unpromising candidate regulations by combining the random-forest-based inference method with a series of feature selection methods. In addition to detecting unpromising regulations, our proposed method uses the outputs of the feature selection methods to adjust the confidence values of all of the candidate regulations computed by the random-forest-based inference method. Numerical experiments showed that the combined application of the feature selection methods improved the performance of the random-forest-based inference method on 99 of the 100 trials performed on the artificial problems. However, the improvement tends to be small, since our combined method succeeded in removing at most only 19% of the candidate regulations. Moreover, the combined application of the feature selection methods increases the computational cost. While a larger improvement at a lower computational cost would be ideal, we see no impediments to our investigation, given that our aim is to extract as much useful information as possible from a limited amount of gene expression data.
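A minimal, generic sketch of the kind of combination described above (recursive feature elimination with a linear model stands in for the feature selection step, as an assumption): a random forest ranks candidate regulators of a target gene by feature importance, and candidates rejected by the feature selection step have their confidence values damped.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFE
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
expr = rng.standard_normal((100, 10))      # 100 samples x 10 genes (synthetic expression)
target = 3                                 # infer regulators of gene 3
X = np.delete(expr, target, axis=1)        # candidate regulators
y = expr[:, target]

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
confidence = rf.feature_importances_       # random-forest-based confidence values

selector = RFE(LinearRegression(), n_features_to_select=3).fit(X, y)
confidence = np.where(selector.support_, confidence, 0.1 * confidence)  # damp rejected candidates

print(np.argsort(confidence)[::-1])        # candidate regulators, most confident first
```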


2019 ◽  
Vol 21 (3) ◽  
pp. 332-339 ◽  
Author(s):  
Jong-Won Chung ◽  
Beom Joon Kim ◽  
Han-Gil Jeong ◽  
Woo-Keun Seo ◽  
Gyeong-Moon Kim ◽  
...  

Sensors ◽  
2021 ◽  
Vol 21 (22) ◽  
pp. 7716
Author(s):  
Krzysztof K. Cwalina ◽  
Piotr Rajchowski ◽  
Alicja Olejniczak ◽  
Olga Błaszkiewicz ◽  
Robert Burczyk

With the continuous development of information technology, the concept of dense urban networks has evolved as well. Powerful tools such as machine learning break new ground in smart network and interface design. In this paper, the concept of using deep learning for estimating the radio channel parameters of the LTE (Long Term Evolution) radio interface is presented. It was shown that the deep learning approach provides a significant gain (almost 40%), achieving an RMSE (root mean squared error) of 10.7% compared with 17.01% for the best linear model. The solution can be adopted as part of the data allocation algorithm implemented in telemetry devices equipped with a 4G radio interface or, after adjustment, with NB-IoT (Narrowband Internet of Things), to maximize the reliability of services in harsh indoor or urban environments. The presented results also show an inversely proportional dependence between the number of hidden layers and the number of historical samples with respect to the obtained RMSE. Increasing the historical data memory allows models with fewer hidden layers to be used while maintaining a comparable RMSE value in each scenario, which reduces the total computational cost.
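A minimal sketch of the kind of comparison reported above, using synthetic data and assumed layer sizes: a small fully connected network and a linear model both regress a radio channel parameter from a window of historical samples, and their RMSE values are compared.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(5)
n, history = 2000, 8                            # samples, history window length (assumed)
X = rng.standard_normal((n, history))           # past channel measurements (synthetic)
y = np.tanh(X @ rng.standard_normal(history)) + 0.1 * rng.standard_normal(n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
dnn = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0).fit(X_tr, y_tr)
lin = LinearRegression().fit(X_tr, y_tr)

for name, model in [("DNN", dnn), ("linear", lin)]:
    rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
    print(name, "RMSE:", round(rmse, 4))
```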

