range check


2015 ◽  
Vol 16 (6) ◽  
pp. 472-483 ◽  
Author(s):  
El Hassane Bentefour ◽  
Stefan Both ◽  
Shikui Tang ◽  
Hsiao-Ming Lu


2015 ◽  
Vol 42 (4) ◽  
pp. 1936-1947 ◽  
Author(s):  
El H. Bentefour ◽  
Shikui Tang ◽  
Ethan W. Cascio ◽  
Mauro Testa ◽  
Deepak Samuel ◽  
...  


2019 ◽  
Author(s):  
Huiqun Wang ◽  
Amir Hossein Souri ◽  
Gonzalo Gonzalez Abad ◽  
Xiong Liu ◽  
Kelly Chance

Abstract. Total Column Water Vapor (TCWV) is important for weather and climate. TCWV is derived from the OMI visible spectra using the Version 4 retrieval algorithm developed at the Smithsonian Astrophysical Observatory. The algorithm uses a retrieval window between 432.0 and 466.5 nm and includes various updates. The retrieval window optimization results from trade-offs among competing factors. The OMI product is characterized by comparison against commonly used reference datasets – GPS network data over land and SSMIS data over the oceans. We examine how cloud fraction and cloud top pressure affect the comparisons. The results lead us to recommend filtering OMI data with cloud fraction < 5–15 % and cloud top pressure > 750 mb, or stricter criteria, in addition to the main data quality, fitting RMS, and TCWV range checks. The mean of OMI-GPS is 0.85 mm with a standard deviation (σ) of 5.2 mm. Smaller differences between OMI and GPS (0.2 mm) occur when TCWV is within 10–20 mm. The bias is much smaller than in the previous version. The mean of OMI-SSMIS is 1.2–1.9 mm (σ = 6.5–6.8 mm), with better agreement for January than for July. Smaller differences between OMI and SSMIS (0.3–1.6 mm) occur when TCWV is within 10–30 mm. However, the relative difference between OMI and the reference datasets is large when TCWV is less than 10 mm. As test applications of the Version 4 OMI TCWV over a range of spatial and temporal scales, we find prominent signals of the patterns associated with El Niño and La Niña, the high humidity associated with a corn sweat event, and the strong moisture band of an Atmospheric River (AR). A data assimilation experiment demonstrates that the OMI data can help improve WRF's skill at simulating the structure and intensity of the AR and the precipitation at the AR landfall.
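The screening criteria recommended in this abstract can be sketched as a simple per-pixel filter. This is an illustrative sketch only: the field names, the RMS threshold, and the TCWV bounds are assumptions, not the product's actual variable names or limits.

```python
# Hypothetical sketch of the recommended OMI pixel screening.
# Field names and the rms_max / tcwv_range defaults are illustrative,
# not the actual Version 4 product variables or thresholds.
def keep_pixel(cloud_fraction, cloud_top_pressure_mb, fit_rms,
               tcwv_mm, quality_flag,
               cf_max=0.15, ctp_min_mb=750.0,
               rms_max=0.002, tcwv_range=(0.0, 75.0)):
    """Return True if a retrieval passes the suggested filters:
    cloud fraction < 5-15 %, cloud top pressure > 750 mb, plus the
    main data quality, fitting RMS, and TCWV range checks."""
    return (quality_flag == 0
            and cloud_fraction < cf_max
            and cloud_top_pressure_mb > ctp_min_mb
            and fit_rms < rms_max
            and tcwv_range[0] <= tcwv_mm <= tcwv_range[1])
```

A stricter filter (e.g. cf_max=0.05) follows the abstract's "5–15 % or stricter" recommendation by simply tightening the defaults.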



2015 ◽  
Vol 12 (4) ◽  
pp. 4157-4190 ◽  
Author(s):  
J.-E. Lee ◽  
G. W. Lee ◽  
M. Earle ◽  
R. Nitu

Abstract. A methodology for quantifying the accuracy of snow depth measurement is demonstrated in this study by using the equation of error propagation for sensors of the same type and by comparing automatic measurements with manual observations. Snow depth was measured at the Centre for Atmospheric Research Experiments (CARE) site of Environment Canada (EC) during the 2013–2014 winter experiment. The snow depth measurement system at the CARE site comprised three bases. Three ultrasonic and one laser snow depth sensors and twelve snow stakes were placed on each base. Data from the snow depth sensors are quality-controlled by a range check and a step test to eliminate erroneous data such as outliers and discontinuities. In comparison with manual observations, bias errors were calculated to show the spatial distribution of snow depth, taking the snow depth measured from four snow stakes located on the easternmost side of the site as the reference. The bias error of snow stakes on the west side of the site was largest. The uncertainty of all pairs of stakes and the average uncertainty for each base were 1.81 and 1.52 cm, respectively. The bias error and normalized bias-removed root mean square error (NBRRMSE) were calculated for each snow depth sensor to quantify the systematic and random errors in comparison with manual observations sharing the same snow depth target. The snow depth sensors on base 12A (11A) measured snow depths up to 10.8 cm (5.21 cm) larger (smaller) than the manual observations, and the NBRRMSEs ranged from 5.10 to 16.5%. Finally, the instrumental uncertainties of each snow depth sensor were calculated by comparing three sensors of the same type installed at different bases. The instrumental uncertainties ranged from 0.62 to 3.08 cm.
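The range check and step test mentioned in this abstract are standard QC filters; a minimal sketch is shown below. The valid range and maximum step thresholds are placeholders, not the values used in the study.

```python
def quality_control(depths_cm, valid_range=(0.0, 200.0), max_step_cm=10.0):
    """Flag snow-depth samples that fail a range check (outliers outside
    physically plausible limits) or a step test (discontinuities between
    consecutive accepted samples). Thresholds are illustrative only."""
    flags = []
    prev = None
    for d in depths_cm:
        ok = valid_range[0] <= d <= valid_range[1]      # range check
        if ok and prev is not None and abs(d - prev) > max_step_cm:
            ok = False                                   # step test
        flags.append(ok)
        if ok:
            prev = d  # only accepted samples anchor the next step test
    return flags
```

Comparing each sample against the last *accepted* value, rather than the raw predecessor, keeps a single spike from also invalidating the good sample that follows it.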



Author(s):  
Martin H. Weik


2022 ◽  
pp. 54-69
Author(s):  
Abhinav Chaturvedi ◽  
Mukesh Chaturvedi

The present times are disrupting times for every kind of business and every aspect of a business. It is not about contactlessness; it is about seamlessness. The auto manufacturers have already started “Amazoning” dealerships. Brands are developing customer-specific platforms like jaguar.rockar.com, where one can explore the range, check the price, select dealer, search inventory, and schedule test drives. The brand Cadillac creates virtual reality experiences in Google Search, wherein a car appears in a living room through a phone call. One can see how it looks, walk around it, open the doors, and get a sense of the interior. This chapter explores the transformation of CRM through artificial intelligence.



2020 ◽  
Vol 9 (2) ◽  
pp. 113 ◽  
Author(s):  
Chunyang Liu ◽  
Jiping Liu ◽  
Shenghua Xu ◽  
Jian Wang ◽  
Chao Liu ◽  
...  

With the growing popularity of location-based social media applications, point-of-interest (POI) recommendation has become important in recent years. Several techniques, especially collaborative filtering (CF), Markov chain (MC), and recurrent neural network (RNN) based methods, have recently been proposed for the POI recommendation service. However, CF-based and MC-based methods are ineffective at representing complicated interaction relations in historical check-in sequences. Although RNNs and their variants have been successfully employed in POI recommendation, they depend on a hidden state of the entire past that cannot fully utilize parallel computation within a check-in sequence. To address these limitations, we propose a spatiotemporal dilated convolutional generative network (ST-DCGN) for POI recommendation in this study. Firstly, inspired by Google DeepMind's WaveNet model, we introduce a simple but very effective dilated convolutional generative network as a solution to POI recommendation, which can efficiently model a user's complicated short- and long-range check-in sequence by using a stack of dilated causal convolution layers and a residual block structure. Then, we propose to acquire the user's spatial preference by modeling continuous geographical distances, and to capture the user's temporal preference by considering two types of time-periodic patterns (i.e., hours in a day and days in a week). Moreover, we conducted an extensive performance evaluation using two large-scale real-world datasets, namely Foursquare and Instagram. Experimental results show that the proposed ST-DCGN model is well suited for POI recommendation problems and can effectively learn dependencies in and between check-in sequences. The proposed model attains state-of-the-art accuracy with less training time in the POI recommendation task.
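The dilated causal convolution at the core of WaveNet-style models can be illustrated with a minimal NumPy sketch (this is a generic illustration of the operation, not the ST-DCGN implementation): the output at time t depends only on inputs at t, t−d, t−2d, …, and stacking layers with dilations 1, 2, 4, … grows the receptive field exponentially, which is how such models cover long-range check-in history with few layers.

```python
import numpy as np

def dilated_causal_conv(x, w, dilation):
    """1-D causal convolution with the given dilation: the output at t
    depends only on x[t], x[t-d], x[t-2d], ... (left-padded with zeros),
    so no future check-in leaks into the prediction."""
    k = len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), np.asarray(x, dtype=float)])
    return np.array([sum(w[j] * xp[t + pad - j * dilation]
                         for j in range(k))
                     for t in range(len(x))])
```

Unlike an RNN's sequential hidden state, every output position here is computed independently, so the whole sequence can be processed in parallel.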



2016 ◽  
Vol 33 (5) ◽  
pp. 953-976 ◽  
Author(s):  
Anne Ru Cheng ◽  
Tim Hau Lee ◽  
Hsin I. Ku ◽  
Yi Wen Chen

Abstract. This paper introduces a quality control (QC) program for real-time hourly land surface temperature observations developed by the Central Weather Bureau in Taiwan. Three strategies are involved. The first strategy is a range check scheme that inspects whether the observation falls inside the climatological limits of the station, screening out obvious outliers. Limits are adjusted according to the station's elevation. The second strategy is a spatial check scheme that scrutinizes whether the observation falls inside a derived confidence interval, according to the data from the reference stations and the correlations among the stations, to judge the reliability of the data. The scheme is specialized in that it employs the theorems of unbiased and minimum-error estimators to determine the weights. The performance evaluation results show that the new method is in theory superior to the spatial regression test (You et al.). The third strategy is a temporal check scheme that examines whether the temperature difference between two successive observations exceeds a temperature variation threshold, to judge the rationality of the data. Different thresholds are applied for data observed at different times under different rainfall conditions. Procedurally, an observation must pass the range check first and then go through the spatial or the temporal check; the temporal check is applied only when the spatial check is unavailable. Post-examination of the data from 2014 shows that the QC program is able to filter out most of the significant errors.
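The procedural flow described in the abstract (range check first, then the spatial check, with the temporal check as a fallback when the spatial check is unavailable) can be sketched as follows. Function and parameter names are hypothetical, and all thresholds are placeholders.

```python
def qc_temperature(obs, clim_limits, spatial_interval=None,
                   prev_obs=None, max_step=None):
    """Sketch of the three-strategy QC flow: the observation must pass
    the range check first, then the spatial check; the temporal check
    is applied only when the spatial check is unavailable."""
    lo, hi = clim_limits
    if not (lo <= obs <= hi):                  # 1. range check
        return False
    if spatial_interval is not None:           # 2. spatial check
        s_lo, s_hi = spatial_interval          #    (confidence interval
        return s_lo <= obs <= s_hi             #    from reference stations)
    if prev_obs is not None and max_step is not None:
        return abs(obs - prev_obs) <= max_step # 3. temporal check (fallback)
    return True
```

In the real program the climatological limits would vary with station elevation, and the temporal threshold with time of day and rainfall conditions, as the abstract describes.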



2013 ◽  
Vol 19 (1) ◽  
pp. 1-21 ◽  
Author(s):  
Yen-Jen Chang ◽  
Hsiang-Yu Lu


2004 ◽  
Vol 43 (11) ◽  
pp. 1722-1735 ◽  
Author(s):  
Daniel Y. Graybeal ◽  
Arthur T. DeGaetano ◽  
Keith L. Eggleston

Abstract. Historical hourly surface synoptic (airways) meteorological reports from around the United States have been digitized as part of the NOAA Climate Database Modernization Program. An important component is improvement of quality assurance procedures for hourly meteorological data. This paper presents the development and testing of two components of a new complex framework, as well as their application toward construction, for the first time, of a 75-yr time series of apparent temperature. A pilot study indicated that a majority of flags thrown from an existing algorithm represent single-hour blips, rather than steps, and that frontal passages were being flagged incorrectly. Therefore, a model focused on flagging blips is developed; two blip-magnitude measures are compared that define a blip as a departure from temporally neighboring observations. Switches of dewpoint with dewpoint depression have also been noted among observer/digitizer errors, and so an additional check was developed to screen for these cases. This check is based on a relationship between dewpoint depression and diurnal temperature range. Tests using artificial replication of common errors indicate that the new blip model outperforms traditional step models considerably, and the new model flags an order of magnitude fewer frontal passages. Operational use of this check suggests type-I and type-II error rates are similar in magnitude and are approximately 5%. More than two-fifths of known dewpoint depression switch errors are caught. However, poor performance with systematic errors suggests that using the depression-range check at a coarser temporal scale than hour to hour may be more fruitful.
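One way a blip-magnitude measure can distinguish a single-hour blip from a step or frontal passage is to take the smaller of the departures from the two temporal neighbors: a blip departs from both neighbors, while a step departs from only one. This sketch is one plausible formulation of such a measure, not the paper's exact definition.

```python
def blip_magnitude(prev, cur, nxt):
    """Illustrative blip measure: the smaller of the departures from the
    two temporal neighbors. Large for a single-hour blip (which departs
    from both neighbors); small for a step (which departs from one)."""
    return min(abs(cur - prev), abs(cur - nxt))

def flag_blips(series, threshold):
    """Flag interior points of an hourly series whose blip magnitude
    exceeds the threshold (threshold is a placeholder value)."""
    return [blip_magnitude(series[i - 1], series[i], series[i + 1]) > threshold
            for i in range(1, len(series) - 1)]
```

A monotone step like 10, 10, 25, 25, 25 produces no flag, which matches the abstract's point that a blip-focused model flags far fewer frontal passages than a step model.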


