FACE

2021 ◽  
Vol 15 (1) ◽  
pp. 72-84
Author(s):  
Jiayi Wang ◽  
Chengliang Chai ◽  
Jiabin Liu ◽  
Guoliang Li

Cardinality estimation is one of the most important problems in query optimization. Recently, machine learning based techniques have been proposed to effectively estimate cardinality, and they can be broadly classified into query-driven and data-driven approaches. Query-driven approaches learn a regression model from a query to its cardinality. Data-driven approaches learn a distribution of tuples, select samples that satisfy a SQL query, and use the data distributions of these selected tuples to estimate the cardinality of the SQL query. Because query-driven methods rely on training queries, the estimation quality is not reliable when no high-quality training queries are available, whereas data-driven methods have no such limitation and offer high adaptivity. In this work, we focus on data-driven methods. A good data-driven model should achieve three optimization goals. First, the model needs to capture data dependencies between columns and support large domain sizes (achieving high accuracy). Second, the model should achieve high inference efficiency, because many data samples are needed to estimate the cardinality (achieving low inference latency). Third, the model should not be too large (achieving a small model size). However, existing data-driven methods cannot optimize all three goals simultaneously. To address these limitations, we propose a novel cardinality estimator, FACE, which leverages a Normalizing Flow based model to learn a continuous joint distribution for relational data. FACE transforms a complex distribution over continuous random variables into a simple distribution (e.g., a multivariate normal distribution) and uses the probability density to estimate the cardinality. First, we design a dequantization method to make data more "continuous". Second, we propose encoding and indexing techniques to handle LIKE predicates for string data. Third, we propose a Monte Carlo method to efficiently estimate the cardinality. Experimental results show that our method significantly outperforms existing approaches in terms of estimation accuracy while keeping similar latency and model size.
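The density-to-cardinality step in data-driven estimators of this kind is compact: given a learned joint density p(x) and a range predicate, the cardinality is the table size times the integral of p over the query box, which plain Monte Carlo integration approximates. A minimal sketch, in which a fixed bivariate normal stands in for the learned normalizing flow; all names are illustrative, not FACE's API:

```python
import numpy as np

rng = np.random.default_rng(0)

def gauss2_pdf(x, rho=0.3):
    # Zero-mean bivariate normal with unit variances and correlation rho,
    # standing in for the density a normalizing flow would learn.
    z = (x[:, 0] ** 2 - 2 * rho * x[:, 0] * x[:, 1] + x[:, 1] ** 2) / (1 - rho ** 2)
    return np.exp(-z / 2) / (2 * np.pi * np.sqrt(1 - rho ** 2))

def estimate_cardinality(n_rows, lo, hi, n_samples=200_000):
    """card = n_rows * integral of p(x) over the query box [lo, hi].

    The integral is approximated as box_volume * mean density at
    uniformly sampled points (plain Monte Carlo integration)."""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    volume = float(np.prod(hi - lo))
    points = rng.uniform(lo, hi, size=(n_samples, len(lo)))
    return n_rows * volume * gauss2_pdf(points).mean()
```

For a 1M-row table and the predicate `-1 <= a <= 1 AND -1 <= b <= 1`, the estimate is the table size scaled by the probability mass of the box under the model.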

2019 ◽  
Vol 61 (2) ◽  
pp. 253-259
Author(s):  
Iroshani Kodikara ◽  
Iroshini Abeysekara ◽  
Dhanusha Gamage ◽  
Isurani Ilayperuma

Background Volume estimation of organs using two-dimensional (2D) ultrasonography is frequently warranted. Considering the influence of the estimated volume on patient management, maintaining its high accuracy is imperative. However, data are scarce regarding the accuracy of volume estimation for non-globular objects of different volumes. Purpose To evaluate the volume estimation accuracy for objects of different shapes and sizes using high-end 2D ultrasound scanners. Material and Methods Globular (n=5), non-globular elongated (n=5), and non-globular near-spherical (n=4) hollow plastic objects were scanned to estimate their volumes; actual volumes were compared with estimated volumes. The t-test and one-way ANOVA were used to compare means; P<0.05 was considered significant. Results The actual volumes of the objects were in the range of 10–445 mL; estimated volumes ranged from 6.4 to 425 mL (P=0.067). The estimated volume was lower than the actual volume; this underestimation was marked for non-globular elongated objects. Regardless of the scanner, the highest volume estimation error was for non-globular elongated objects (<40%), followed by non-globular near-spherical objects (<23.88%); the lowest was for globular objects (<3.6%). Irrespective of the shape or volume of the object, the volume estimation difference among the scanners was not significant: globular (F=0.430, P=0.66); non-globular elongated (F=3.69, P=0.064); non-globular near-spherical (F=4.00, P=0.06). Good inter-rater agreement (R=0.99, P<0.001) and a good correlation between actual and estimated volumes (R=0.98, P<0.001) were noted. Conclusion 2D ultrasonography can be recommended for volume estimation of objects of different shapes and sizes, regardless of the type of high-end scanner used.
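The volume a scanner derives from three orthogonal 2D diameters is commonly the prolate-ellipsoid approximation, which is exact for globular objects and degrades for elongated ones, consistent with the error pattern reported above. A minimal sketch of the formula and the error metric (not tied to any particular scanner):

```python
import math

def ellipsoid_volume_ml(length_cm, width_cm, height_cm):
    """Prolate-ellipsoid approximation V = (pi/6) * L * W * H.

    With the three diameters in cm, the result is in cm^3 == mL."""
    return math.pi / 6 * length_cm * width_cm * height_cm

def percent_error(estimated_ml, actual_ml):
    """Signed volume estimation error in percent of the actual volume."""
    return 100 * (estimated_ml - actual_ml) / actual_ml
```

For a sphere of diameter 6 cm the formula recovers the true volume, about 113.1 mL; for an elongated object the three measured diameters underrepresent the shape, producing the underestimation seen in the study.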


Solar Energy ◽  
2021 ◽  
Vol 218 ◽  
pp. 48-56
Author(s):  
Max Pargmann ◽  
Daniel Maldonado Quinto ◽  
Peter Schwarzbözl ◽  
Robert Pitz-Paal

Author(s):  
Zezhou Zhang ◽  
Qingze Zou

Abstract In this paper, an optimal data-driven modeling-free differential-inversion-based iterative control (OMFDIIC) method is proposed to achieve both high performance and robustness in the presence of random disturbances. Achieving high accuracy and fast convergence is challenging, as the system's dynamic behavior varies under external uncertainties and the system bandwidth is limited. The proposed method aims to compensate for the dynamics effect without a modeling process and to achieve both high accuracy and robust convergence, by extending the existing modeling-free differential-inversion-based iterative control (MFDIIC) method with a frequency- and iteration-dependent gain. The convergence of the OMFDIIC method is analyzed with random noise/disturbances considered. The developed method is applied to a wafer stage and shows a significant improvement in performance.
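The core of such modeling-free iterative schemes can be illustrated with a generic frequency-domain iterative learning law, u_{k+1}(ω) = u_k(ω) + α_k(ω)(y_d(ω) − y_k(ω)). The gain schedule below is hypothetical and serves only to show the frequency- and iteration-dependent gain idea, not the OMFDIIC law itself:

```python
import numpy as np

n = 256
freqs = np.fft.rfftfreq(n)

# Unknown first-order plant frequency response (stand-in for real dynamics
# that the method never models explicitly).
plant = 1.0 / (1.0 + 1j * 8 * freqs)

# Desired output trajectory and its spectrum.
t = np.arange(n)
yd = np.sin(2 * np.pi * 3 * t / n)
Yd = np.fft.rfft(yd)

U = np.zeros_like(Yd)
for k in range(50):
    Y = plant * U                    # "measured" output spectrum
    # Frequency- and iteration-dependent gain (illustrative choice):
    # trust low frequencies more, shrink the gain as iterations proceed.
    alpha = 0.8 / (1 + 4 * freqs) / (1 + 0.02 * k)
    U = U + alpha * (Yd - Y)         # modeling-free iterative update

# Relative tracking error after iterating.
err = np.linalg.norm(np.fft.irfft(plant * U) - yd) / np.linalg.norm(yd)
```

Because the per-frequency error contracts by |1 − α_k(ω)·G(ω)| each iteration, the tracking error shrinks without the plant G ever being identified.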


2012 ◽  
Vol 466-467 ◽  
pp. 1329-1333
Author(s):  
Jing Mu ◽  
Chang Yuan Wang

We present a new filter, the iterated cubature Kalman filter (ICKF). The ICKF is easily implemented and iterates the measurement update to fully exploit the latest measurement, achieving high state estimation accuracy. We apply the ICKF to state estimation for a maneuvering reentry vehicle. Simulation results indicate that the ICKF outperforms the unscented Kalman filter and the square-root cubature Kalman filter in state estimation accuracy.
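The iterate step re-linearizes the measurement update around each successive estimate, which matters when the measurement function is strongly nonlinear. The scalar EKF-style sketch below shows the idea with an explicit Jacobian where the ICKF would instead use cubature points; all names are illustrative:

```python
def iterated_update(x_prior, P, z, R, h, h_jac, n_iter=5):
    """Iterated measurement update (Gauss-Newton flavor, scalar case).

    Each pass re-linearizes h around the latest iterate x, so the final
    posterior mean reflects the measurement far better than a single
    linearization at the prior would."""
    x = x_prior
    for _ in range(n_iter):
        H = h_jac(x)                         # re-linearize at current iterate
        S = H * P * H + R                    # innovation variance
        K = P * H / S                        # gain
        x = x_prior + K * (z - h(x) - H * (x_prior - x))
    P_post = (1 - K * H) * P
    return x, P_post

# Example: quadratic measurement h(x) = x^2, observed z = 4.1,
# prior mean 1.5 with variance 1.0, measurement variance 0.01.
x_post, P_post = iterated_update(1.5, 1.0, 4.1, 0.01,
                                 h=lambda x: x ** 2,
                                 h_jac=lambda x: 2 * x)
```

A single (non-iterated) update linearized at 1.5 lands visibly short of the value consistent with z; the iterated update converges close to sqrt(4.1) ≈ 2.025, slightly pulled toward the prior.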


Sensors ◽  
2020 ◽  
Vol 20 (23) ◽  
pp. 6961
Author(s):  
Xuan Liu ◽  
Yong Li ◽  
Feng Shuang ◽  
Fang Gao ◽  
Xiang Zhou ◽  
...  

In power inspection tasks, insulators and spacers are important inspection objects. UAV (unmanned aerial vehicle) power inspection is becoming increasingly popular. However, because of the limited computing resources carried by a UAV, a lighter model with a small model size, high detection accuracy, and fast detection speed is needed to achieve online detection. To realize online detection of power inspection objects, we propose an improved SSD (single shot multibox detector) insulator and spacer detection algorithm using power inspection images collected by a UAV. In the proposed algorithm, the lightweight network MnasNet is used as the feature extraction network to generate feature maps. Then, two multiscale feature fusion methods are used to fuse multiple feature maps. Lastly, a power inspection object dataset containing insulators and spacers based on aerial images is built, and the performance of the proposed algorithm is tested on real aerial images and videos. Experimental results show that the proposed algorithm can efficiently detect insulators and spacers. Compared with existing algorithms, the proposed algorithm has the advantages of small model size and fast detection speed, and its detection accuracy reaches 93.8%. The detection time of a single image on TX2 (NVIDIA Jetson TX2) is 154 ms and the capture rate on TX2 is 8.27 fps, which enables online detection.


2021 ◽  
Vol 9 ◽  
Author(s):  
Sota Murakami ◽  
Tsuyoshi Ichimura ◽  
Kohei Fujita ◽  
Takane Hori ◽  
Yusaku Ohta

Estimating the coseismic slip distribution and the interseismic slip-deficit distribution plays an important role in understanding the mechanism of massive earthquakes and predicting the resulting damage. It is useful to observe crustal deformation not only on land, but also directly above the seismogenic zone. Therefore, improvements in measurement precision and increases in the number of observation points have been proposed in various forms of seafloor observation. However, there is a lack of research on the quantitative evaluation of the estimation accuracy in cases where new crustal deformation observation points become available or where the precision of the observation methods is improved. On the other hand, crustal structure models are improving, and finite element analysis using these highly detailed models is becoming possible. As such, there is a real possibility of performing inverted slip estimation with high accuracy via numerical experiments. In view of this, we propose a method for quantitatively evaluating the improvement in the estimation accuracy of the coseismic slip distribution and the interseismic slip-deficit distribution in such cases. As a demonstration, a quantitative evaluation was performed using an actual crustal structure model and observation point arrangement. For the target area, we selected the Kuril Trench off Tokachi and Nemuro, where M9-class earthquakes are known to have occurred in the past and where the next imminent earthquake is anticipated. To appropriately handle the effects of the topography and plate boundary geometry, a highly detailed three-dimensional finite element model was constructed and Green's functions of crustal deformation were calculated with high accuracy. By performing many inversions via optimization using these Green's functions, we statistically evaluated the effect of increasing the number of seafloor crustal deformation observation points and the influence of measurement error, taking into consideration the diversity of measurement errors. As a result, it was demonstrated that observation of seafloor crustal deformation near the trench axis plays an extremely important role in estimation performance.
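The inversion step pairs precomputed Green's functions with regularized least squares, d = G s + noise, and the value of extra observation points can be scored statistically over noise realizations. A synthetic sketch in that spirit, where a random matrix stands in for the finite-element Green's functions and all sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n_slip, n_obs = 20, 30

G = rng.normal(size=(n_obs, n_slip))   # Green's functions (synthetic stand-in)
s_true = rng.normal(size=n_slip)       # "true" slip distribution
sigma = 0.1                            # measurement error std

def inversion_rmse(n_use, lam=0.05, trials=200):
    """Mean RMSE of Tikhonov-regularized slip estimates that use only the
    first n_use observation points, averaged over noise realizations."""
    Gk = G[:n_use]
    A = Gk.T @ Gk + lam * np.eye(n_slip)   # normal equations + regularization
    errs = []
    for _ in range(trials):
        d = Gk @ s_true + sigma * rng.normal(size=n_use)   # noisy observations
        s_hat = np.linalg.solve(A, Gk.T @ d)
        errs.append(np.sqrt(np.mean((s_hat - s_true) ** 2)))
    return float(np.mean(errs))

rmse_few, rmse_many = inversion_rmse(20), inversion_rmse(30)
```

Comparing the averaged RMSE for different observation subsets quantifies how much each added point constrains the slip estimate, which is the kind of statistical evaluation the study performs with realistic geometry.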


Author(s):  
Xuejing Lei ◽  
Ganning Zhao ◽  
Kaitai Zhang ◽  
C.-C. Jay Kuo

An explainable, efficient, and lightweight method for texture generation, called TGHop (an acronym of Texture Generation PixelHop), is proposed in this work. Although synthesis of visually pleasant texture can be achieved by deep neural networks, the associated models are large in size, difficult to explain in theory, and computationally expensive in training. In contrast, TGHop is small in its model size, mathematically transparent, efficient in training and inference, and able to generate high-quality texture. Given an exemplary texture, TGHop first crops many sample patches out of it to form a collection of sample patches called the source. Then, it analyzes pixel statistics of samples from the source and obtains a sequence of fine-to-coarse subspaces for these patches by using the PixelHop++ framework. To generate texture patches with TGHop, we begin with the coarsest subspace, which is called the core, and attempt to generate samples in each subspace by following the distribution of real samples. Finally, texture patches are stitched to form texture images of a large size. It is demonstrated by experimental results that TGHop can generate texture images of superior quality with a small model size and at a fast speed.


Author(s):  
Frederik Boe Hüttel ◽  
Line Katrine Harder Clemmensen

Consistent and accurate estimation of stellar parameters is of great importance for information retrieval in astrophysical research. The parameters span a wide range, from effective temperature to rotational velocity. We propose to estimate the stellar parameters directly from spectral signals coming from the HARPS-N spectrograph pipeline, before any spectrum-processing steps are applied to extract the 1D spectrum. We propose an attention-based model that estimates both the mean and the uncertainty of the stellar parameters by estimating the parameters of a Gaussian distribution. The estimated distributions form a basis for generating data-driven Gaussian confidence intervals for the estimated stellar parameters. We show that residual networks and attention-based models can estimate the stellar parameters with high accuracy at low signal-to-noise ratios (SNR) compared to previous methods. Using an observation of the Sun from the HARPS-N spectrograph, we show that the models can estimate stellar parameters from real observational data.
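Having a network output a Gaussian's mean and log-variance, trained under the Gaussian negative log-likelihood, directly yields confidence intervals from the predicted distribution. A minimal sketch of the loss and the interval (plain NumPy; this shows the standard construction, not the authors' attention architecture):

```python
import numpy as np

def gaussian_nll(y, mu, log_var):
    """Negative log-likelihood of y under N(mu, exp(log_var)).

    Predicting log-variance keeps the variance strictly positive and the
    loss numerically well behaved; the (y - mu)^2 / var term forces the
    model to widen its uncertainty where its mean is unreliable."""
    var = np.exp(log_var)
    return 0.5 * (np.log(2 * np.pi) + log_var + (y - mu) ** 2 / var)

def confidence_interval(mu, log_var, z=1.96):
    """Central ~95% interval mu +/- z * sigma of the predicted Gaussian."""
    sigma = np.exp(0.5 * log_var)
    return mu - z * sigma, mu + z * sigma

# Example: a predicted effective temperature of 5000 with sigma = 100
# (illustrative values) gives the interval below.
lo, hi = confidence_interval(5000.0, np.log(100.0 ** 2))
```

Averaging `gaussian_nll` over a batch is the training loss; at inference, each prediction carries its own data-driven interval.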


2019 ◽  
Vol 888 ◽  
pp. 66-71
Author(s):  
Y. Jiang ◽  
S. Hashimoto ◽  
Y. Yamakoshi ◽  
Takashi Otomo

Our study aims to develop a small, handy, low-cost viscometer for nursing care food management. There are many methods to measure the viscosity of a fluid. In this research, a rotational viscometer employing an observer-based method instead of an actual torque transducer has been developed. A digitally controlled motor is used to estimate the viscosity in real time with high accuracy. To verify the effectiveness of the developed viscosity estimation method, a prototype viscometer was constructed and its viscosity estimation accuracy was tested with standard liquids. As a result, the viscosity accuracy is equivalent to that of a conventional torque-transducer-equipped viscometer.
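The observer-based idea replaces the torque transducer with an estimate of the load torque reconstructed from the motor command and the measured speed, after which viscosity follows from the steady-state torque balance. A toy simulation of that principle; all parameter values and the first-order observer are hypothetical, chosen only to make the mechanism concrete:

```python
# Hypothetical rotational-viscometer parameters (illustrative only).
J, c, kt = 1e-4, 2e-3, 0.05   # rotor inertia, geometry constant, torque constant
eta_true = 0.8                # Pa*s, viscosity to be recovered
dt, L = 1e-3, 200.0           # time step, observer gain
omega, tau_hat = 0.0, 0.0     # spindle speed, estimated load torque

for step in range(5000):
    i_cmd = 0.2                                        # constant current command
    domega = (kt * i_cmd - c * eta_true * omega) / J   # plant: J*dw = tau_m - tau_load
    omega += domega * dt                               # "measured" speed
    # First-order disturbance observer: drive tau_hat toward the load
    # torque that explains the measured acceleration.
    tau_hat += L * dt * (kt * i_cmd - J * domega - tau_hat)

# Steady-state torque balance: tau_load = c * eta * omega  =>  solve for eta.
eta_hat = tau_hat / (c * omega)
```

Once the speed settles, the estimated load torque equals the viscous drag, so the viscosity estimate converges to the true value without any torque sensor.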


Author(s):  
Nima Kargah-Ostadi ◽  
Ammar Waqar ◽  
Adil Hanif

Roadway asset inventory data are essential in making data-driven asset management decisions. Despite significant advances in automated data processing, the current state of the practice is semi-automated. This paper demonstrates integration of state-of-the-art artificial intelligence technologies within a practical framework for automated real-time identification of traffic signs from roadway images. The framework deploys one of the latest machine learning algorithms on a cutting-edge plug-and-play device for superior effectiveness, efficiency, and reliability. The proposed platform provides an offline system onboard the survey vehicle that runs a lightweight and speedy deep neural network on each collected roadway image and identifies traffic signs in real time. Integration of these advanced technologies minimizes the need for subjective and time-consuming human interventions, thereby enhancing the repeatability and cost-effectiveness of the asset inventory process. The proposed framework is demonstrated using a real-world image dataset. Appropriate pre-processing techniques were employed to alleviate limitations in the training dataset. A deep learning algorithm was trained for detection, classification, and localization of traffic signs from roadway imagery. The success metrics based on this demonstration indicate that the algorithm was effective in identifying traffic signs with high accuracy on a test dataset that was not used for model development. Additionally, the algorithm exhibited this high accuracy consistently among the different considered sign categories. Moreover, the algorithm was repeatable among multiple runs and reproducible across different locations. Above all, the real-time processing capability of the proposed solution reduces the time between data collection and delivery, which enhances the data-driven decision-making process.

