vertical data
Recently Published Documents


TOTAL DOCUMENTS: 66 (last five years: 28)

H-INDEX: 6 (last five years: 2)

2021 ◽ Vol 9 (7) ◽ pp. 760
Author(s): Shengyi Jiao, Shengmao Huang, Jianfeng Wang, Xianqing Lv

The setting of initial values is one of the key problems in ocean numerical prediction, with the accuracy of sea water temperature (SWT) simulation and prediction greatly affected by the quality of the initial field. In this paper, we describe the development of an adjoint assimilation model of temperature transport used to invert the initial temperature field by assimilating observed sea surface temperature (SST) and vertical temperature data. Two idealized experiments were conducted to verify the feasibility and validity of this method. By assimilating the “observed data”, the mean absolute error (MAE) between the simulated temperature data and the “observed data” decreased from 1.74 °C and 1.87 °C to 0.13 °C and 0.14 °C, respectively. The spatial distribution of the SST difference and the comparison of vertical data also indicate that the regional error is smaller when vertical data are assimilated. In the practical experiment, the monthly average temperature field provided by World Ocean Atlas 2018 was selected as the background field and optimized by assimilating SST data and Argo vertical temperature observations, to invert the temperature field at 00:00 on 1 December 2014 in the South China Sea. Through data assimilation, the MAE was reduced from 1.29 °C to 0.65 °C. In terms of both the comparison with vertical observation data and the spatial distribution of SST, the temperature field obtained by inversion is in good agreement with the SST and Argo observations.
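
As a rough illustration of the evaluation metric used above, the following minimal sketch computes the MAE between a simulated temperature field and observations on the same grid. The variable names and synthetic fields are hypothetical, not taken from the paper:

```python
import numpy as np

def mean_absolute_error(simulated, observed):
    """MAE (°C) between a simulated temperature field and observations.

    Both arrays are assumed to share a grid; NaNs mark grid points
    with no observation and are excluded from the mean.
    """
    return np.nanmean(np.abs(simulated - observed))

# Hypothetical fields on a (depth, lat, lon) grid
rng = np.random.default_rng(0)
truth = 20.0 + rng.normal(0.0, 1.0, size=(10, 50, 50))
background = truth + rng.normal(0.0, 1.5, size=truth.shape)

print(f"MAE of the background field: {mean_absolute_error(background, truth):.2f} °C")
```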


Author(s): Thirumaran S. et al.

Big data is one of the most important areas of recent research focus, and mining frequent patterns from big data is a vertical that continues to evolve and attract a great deal of attention from the research community. Such data is generally mined with Apriori-based, tree-based, or hash-based algorithms, but most of these existing algorithms suffer from significant limitations. This paper proposes a new method that overcomes the most common problems related to speed, memory consumption, and search space. The algorithm, named Dual Mine, employs binary vector and vertical data representations in the MapReduce framework to discover frequent patterns from large datasets. Dual Mine is then compared with several existing algorithms to assess its efficiency, and the experimental results make it quite evident that the proposed algorithm outperformed the other algorithms by a large margin with respect to speed and memory.
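
The abstract gives no pseudocode, but the core idea of a vertical, binary-vector representation can be sketched as follows: each item is stored as a bit vector over transaction IDs, and the support of an itemset is obtained by ANDing those vectors instead of rescanning the data. The names below are illustrative, not Dual Mine's actual data structures:

```python
# Vertical binary-vector layout: one bit vector per item,
# with bit i set when the item occurs in transaction i.
transactions = [
    {"a", "b", "c"},
    {"a", "c"},
    {"b", "c"},
    {"a", "b", "c"},
]

bitvec = {
    item: sum(1 << i for i, t in enumerate(transactions) if item in t)
    for item in {x for t in transactions for x in t}
}

def support(itemset):
    """Support count of an itemset = popcount of the AND of its bit vectors."""
    v = ~0  # start with all bits set, then intersect
    for item in itemset:
        v &= bitvec[item]
    v &= (1 << len(transactions)) - 1  # mask to the transaction count
    return bin(v).count("1")

print(support({"a", "c"}))  # -> 3 (transactions 0, 1, and 3)
```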


2021 ◽ Vol 125 ◽ pp. 103389
Author(s): Thomas Küfner, Stefan Schönig, Richard Jasinski, Andreas Ermer

2021 ◽ Vol 56 (1) ◽ pp. 113-131
Author(s): Yanqiang Wu, Zaisen Jiang, Bofeng Guo, Guohua Yang, Wanju Bo, ...

2021 ◽ Vol 3 (1) ◽ pp. 63-72
Author(s): I. G. Tsmots, Yu. A. Lukashchuk, I. V. Ihnatyev, I. Ya. Kazymyra, ...

It is shown that real-time hardware neural networks with high equipment-use efficiency should be applied for processing intensive data flows in industry (management of technological processes and complex objects), energy (load optimization in power grids), military affairs (technical vision, mobile robot traffic control, cryptographic data protection), transport (traffic and engine management), medicine (disease diagnosis), and instrumentation (pattern recognition and control optimization). The operational basis of neural networks is formed, and the following operations are chosen for hardware implementation: the search for maximum and minimum values, the calculation of the sum of squared differences, and the scalar product. Requirements for hardware components of neural networks with coordinated vertical-parallel data processing are determined, the main ones being: high efficiency of equipment use, adaptation to the requirements of specific applications, coordination of the input data intensity with the computation intensity in the hardware component, real-time operation, a structural focus on VLSI implementation, short development time, and low cost. It is suggested that the developed hardware components of neural networks be evaluated according to the efficiency of equipment use, taking into account the complexity of the component's implementation algorithm, the number of external interface pins, the homogeneity of the component structure, and the relationship between the time of a basic neuro-operation and the equipment costs. The main ways to control the intensity of calculations in hardware components are the choice of the number and bit widths of the data processing paths, and changing the duration of the work cycle by selecting the speed of the element base and the complexity of the operations implemented by the pipeline.

Parallel vertical-group data processing methods are proposed for the implementation of hardware components of neural networks with coordinated parallel-vertical processing; they provide control of computational intensity, reduction of hardware costs, and suitability for VLSI implementation. A parallel vertical-group method and structure are developed for the component that finds the maximum and minimum numbers in arrays; by processing a slice of one group of digits of all numbers in parallel, it reduces the calculation time so that it depends mainly on the bit width of the numbers. A parallel vertical-group method and structure have been developed for the component that calculates the sum of squared differences; through parallelization and selection of the number of pipeline stages, it ensures coordination of the input data intensity with the calculation intensity, real-time operation, and high equipment efficiency. A parallel vertical-group method and structure have also been developed for the scalar product component; the choice of the bit widths of the processing paths and the number of pipeline stages enables coordination of the input data intensity with the calculation intensity, real-time operation, and high equipment efficiency.

It is shown that the use of the developed components for the synthesis of neural networks with coordinated vertical-parallel data processing in real time will reduce the time and cost of their implementation.
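
A software analogue of the vertical (bit-slice) maximum search described above can be sketched as follows: the numbers are examined one bit position at a time, from the most significant bit down, and at each step the candidate set is narrowed to the numbers that carry a 1 in the current slice, if any do. This is my reconstruction of the method from the abstract's description, not the paper's design; in hardware each slice would be processed across all numbers in parallel, so this Python sketch only models the logic:

```python
def vertical_max(numbers, bits=16):
    """Maximum of a non-empty list of non-negative integers via bit-slice search.

    One slice (bit position) of all candidates is inspected per step,
    MSB first. In a hardware realisation each slice is processed across
    all numbers in parallel, so the step count depends on the bit width
    of the numbers, not on how many numbers there are.
    """
    candidates = list(numbers)
    for bit in reversed(range(bits)):
        mask = 1 << bit
        with_one = [n for n in candidates if n & mask]
        if with_one:               # some candidate has a 1 in this slice,
            candidates = with_one  # so the maximum must be among those
    return candidates[0]

print(vertical_max([13, 42, 7, 42, 19]))  # -> 42
```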


2020 ◽ Vol 24 (11) ◽ pp. 5559-5577
Author(s): Harriet L. Wilson, Ana I. Ayala, Ian D. Jones, Alec Rolston, Don Pierson, ...

Abstract. The epilimnion is the surface layer of a lake, typically characterised as well mixed and decoupled from the metalimnion due to a steep change in density. The concept of the epilimnion (and, more widely, the three-layered structure of a stratified lake) is fundamental in limnology, and calculating the depth of the epilimnion is essential to understanding many physical and ecological lake processes. Despite the ubiquity of the term, however, there is no objective or generic approach for defining the epilimnion, and diverse approaches prevail in the literature. Given the increasing availability of water temperature and density profile data from lakes at high spatio-temporal resolution, automated calculations using such data are particularly common, and they have vast potential for use with evolving long-term globally measured and modelled datasets. However, multi-site and multi-year studies, including those related to future climate impacts, require robust and automated algorithms for epilimnion depth estimation. In this study, we undertook a comprehensive comparison of commonly used epilimnion depth estimation methods, using a combined 17-year dataset with over 4700 daily temperature profiles from two European lakes. Overall, we found a very large degree of variability in the estimated epilimnion depth across all methods and thresholds investigated and for both lakes. These differences, manifesting over high-frequency data, led to fundamentally different understandings of the epilimnion depth. In addition, estimates of the epilimnion depth were highly sensitive to small changes in the threshold value, to complex thermal water column structures, and to the vertical data resolution. These results call into question the custom of arbitrary method selection and the potential problems this may cause for studies interested in estimating the ecological processes occurring within the epilimnion, multi-lake comparisons, or long-term time series analysis. We also identified important systematic differences between methods, which demonstrated how and why the methods diverged. These results may provide a rationale for future studies to select an appropriate epilimnion definition in light of their particular purpose and with awareness of the limitations of individual methods. While there is no prescribed rationale for selecting a particular method, the method that defines the epilimnion depth as the shallowest depth where the density is 0.1 kg m−3 greater than the surface density may be particularly useful as a generic method.
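
As an illustration of the recommended threshold method, the following minimal sketch returns the shallowest depth at which the density exceeds the surface density by 0.1 kg m−3. The profile values are hypothetical, not from the study's dataset:

```python
def epilimnion_depth(depths, densities, threshold=0.1):
    """Shallowest depth where density exceeds surface density by `threshold`.

    depths    -- monotonically increasing depths (m), surface first
    densities -- water densities (kg m^-3) at those depths
    Returns None if the profile never crosses the threshold (unstratified).
    """
    surface = densities[0]
    for depth, rho in zip(depths, densities):
        if rho - surface > threshold:
            return depth
    return None

# Hypothetical summer profile: warm, light epilimnion over a denser hypolimnion
depths = [0, 1, 2, 3, 4, 5, 6, 8, 10]
densities = [998.20, 998.21, 998.23, 998.26, 998.45, 998.90, 999.20, 999.40, 999.50]
print(epilimnion_depth(depths, densities))  # -> 4
```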


Algorithms ◽ 2020 ◽ Vol 13 (11) ◽ pp. 299
Author(s): Chartwut Thanajiranthorn, Panida Songram

Associative classification (AC) is a mining technique that integrates classification and association rule mining to perform classification on unseen data instances. AC is one of the effective classification techniques that applies the generated rules to perform classification. In particular, the number of frequent ruleitems generated by AC is largely determined by the minimum support threshold: a low minimum support can generate a very large set of ruleitems. This is one of the major drawbacks of AC, because some of the ruleitems are never used in the classification stage and thus, to reduce the rule-mapping time, must be removed from the set. This pruning process can be a computational burden and consumes considerable memory. In this paper, a new AC algorithm is proposed to directly discover a compact set of efficient rules for classification without the pruning process. A vertical data representation technique is implemented to avoid redundant rule generation and to reduce the time used in the mining process. The experimental results show that the proposed algorithm achieves good performance in terms of accuracy, the number of generated ruleitems, classifier building time, and memory consumption, especially when compared to the well-known algorithms Classification-Based Association (CBA), Classification based on Multiple Association Rules (CMAR), and Fast Associative Classification Algorithm (FACA).
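
The vertical representation referred to above can be illustrated with tidsets (the set of transaction IDs per item): the coverage of a candidate ruleitem and the confidence of the corresponding class-association rule come from set intersections, with no extra pass over the data. The structures below are illustrative, not the paper's actual data layout:

```python
# Vertical layout: each item and each class label maps to the set of
# transaction IDs in which it occurs.
item_tids = {
    "a": {0, 1, 3, 4},
    "b": {0, 2, 3},
}
class_tids = {
    "yes": {0, 3, 4},
    "no":  {1, 2},
}
n_transactions = 5

def rule_stats(antecedent, label):
    """Support and confidence of (antecedent -> label) via tidset intersection.

    `antecedent` is assumed to be a non-empty set of item names.
    """
    tids = set.intersection(*(item_tids[i] for i in antecedent))
    hits = len(tids & class_tids[label])
    support = hits / n_transactions
    confidence = hits / len(tids) if tids else 0.0
    return support, confidence

print(rule_stats({"a", "b"}, "yes"))  # {a,b} covers tids {0, 3}, both "yes" -> (0.4, 1.0)
```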

