An Advanced Dynamic Visualization Method of Global Ocean Tide with Multi-core Processor Based on OpenGL

2021 ◽  
Vol 4 ◽  
pp. 1-4
Author(s):  
Hao Meng ◽  
Wei-Ming Xu ◽  
Tian-Yang Liu ◽  
Zhi-Yuan Shi ◽  
Zhou-Yang Dong

Abstract. To meet the requirements of both display range and computational efficiency in ocean tide visualization, an advanced method is proposed in which tide heights are rapidly computed with the global tide model EOT10a and dynamically displayed with OpenGL. To handle the large volume of global tide-height calculations, the method exploits the parallelism of multi-core processors. Experiments show that, compared with a single-core processor, a 6-core processor achieves a speedup of about 5.4, a parallel efficiency of 90%, and a throughput of 880,000 tide heights per second. The results are then rendered as a tide-height map by OpenGL. Owing to its large display range and high efficiency, the method can serve as a useful tool for marine cartography.
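For illustration, the parallel part of such a computation can be sketched in a few lines. The snippet below is a simplified sketch, not the authors' implementation: it distributes harmonic tide-height evaluations over worker processes, uses placeholder constituent constants in place of amplitudes and phases that would really be interpolated from EOT10a, and omits the OpenGL rendering step.

```python
# Minimal sketch of multi-core tide-height computation (not the authors' code).
# Real use would interpolate amplitude/phase grids from the EOT10a model; here
# placeholder harmonic constants stand in for a single grid cell's constituents.
import math
import multiprocessing as mp

# (amplitude [m], angular speed [rad/h], phase [rad]) -- illustrative values only.
CONSTITUENTS = [
    (0.50, 2 * math.pi / 12.42, 0.3),   # M2-like
    (0.20, 2 * math.pi / 12.00, 1.1),   # S2-like
    (0.10, 2 * math.pi / 23.93, 2.0),   # K1-like
]

def tide_height(t_hours: float) -> float:
    """Superpose harmonic constituents at time t (hours)."""
    return sum(a * math.cos(w * t_hours + p) for a, w, p in CONSTITUENTS)

def tide_chunk(times):
    """Worker: evaluate tide heights for a chunk of epochs."""
    return [tide_height(t) for t in times]

if __name__ == "__main__":
    times = [i * 0.01 for i in range(880_000)]          # epochs to evaluate
    n_cores = 6
    chunk = len(times) // n_cores
    chunks = [times[i:i + chunk] for i in range(0, len(times), chunk)]
    with mp.Pool(n_cores) as pool:                      # one worker per core
        heights = [h for part in pool.map(tide_chunk, chunks) for h in part]
    print(f"computed {len(heights)} tide heights")
```

In a full pipeline the resulting grid of heights would be uploaded to OpenGL (for example as vertex colors or a texture) for dynamic display.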

The most precise way of estimating the dissipation of tidal energy in the oceans is to evaluate the rate at which work is done by the tidal forces, and this quantity is completely described by the fundamental harmonic in the ocean-tide expansion that has the same degree and order as the forcing function. The contribution of all other harmonics to the work integral must vanish. These harmonics have been estimated for the principal M₂ tide using several available numerical models, and despite the often significant differences in the detail of the models, in the treatment of the boundary conditions and in the way dissipating forces are introduced, the results for the rate at which energy is dissipated are in good agreement. Equivalent phase lags, representing the global ocean-solid Earth response to the tidal forces, and the rates of energy dissipation have been computed for other tidal frequencies, including the atmospheric tide, by using available tide models, age-of-tide observations and equilibrium theory.

Orbits of close Earth satellites are periodically perturbed by the combined solid Earth and ocean tide, and the delay of these perturbations relative to the tide potential defines the same terms as enter the tidal dissipation problem. They therefore provide an independent estimate of dissipation. The results agree with the tide calculations and with the astronomical estimates. The satellite results are independent of dissipation in the Moon, and a comparison of astronomical, satellite and tidal estimates of dissipation permits a separation of the energy sinks in the solid Earth, the Moon and the oceans. A precise separation is not yet possible, since dissipation in the oceans dominates the other two sinks: dissipation occurs almost exclusively in the oceans, and neither the solid Earth nor the Moon is an important energy sink. Lower limits to the Q of the solid Earth can be estimated by comparing the satellite results with the ocean calculations and by comparing the astronomical results with the latter; they give Q > 120.

The lunar acceleration ṅ, the Earth's tidal acceleration Ω̇_T and the total rate of energy dissipation Ė estimated by the three methods are:

                               ṅ (10⁻²³ rad s⁻²)   ṅ (″ cy⁻²)   Ω̇_T (10⁻²² s⁻²)   Ė (10¹² W)
  astronomical-based estimate        −1.36           −28 ± 3       −7.2 ± 0.7      4.1 ± 0.4
  satellite-based estimate           −1.03           −24 ± 5       −6.4 ± 1.5      3.6 ± 0.8
  numerical tide model               −1.49           −30 ± 3       −7.5 ± 0.8      4.5 ± 0.5

The mean value for Ω̇_T corresponds to an increase in the length of day of 2.7 ms cy⁻¹. The non-tidal acceleration of the Earth is (1.8 ± 1.0) × 10⁻²² s⁻², resulting in a decrease in the length of day of 0.7 ± 0.4 ms cy⁻¹, and is barely significant; this quantity remains the most unsatisfactory of the accelerations.

The nature of the dissipating mechanism remains unclear, but whatever it is, it must also control the phase of the second-degree harmonic in the ocean expansion. It is this harmonic that permits the transfer of angular momentum from the Earth to the Moon, yet the energy dissipation occurs at frequencies at the other end of the tide's spatial spectrum. The efficacy of the break-up of the second-degree term into the higher modes governs the amount of energy that is eventually dissipated. It appears that this break-up is controlled by global ocean characteristics such as the ocean-continent geometry and the sea-floor topography. Friction in a few shallow seas does not appear to be as important as previously thought: new estimates for dissipation in the Bering Sea are almost an order of magnitude smaller than earlier estimates.
If bottom friction is important, then it must be more uniformly distributed over the world's continental shelves. Likewise, if turbulence provides an important dissipation mechanism, it must be fairly uniformly distributed along, for example, coastlines or continental margins. Such a global distribution of the dissipation makes it improbable that there has been a change in the rate of dissipation during the last few millennia, as there is no evidence of changes in ocean volume, ocean geometry or sea level beyond a few metres. It also suggests that the time-scale problem can be resolved if past ocean-continent geometries led to a less efficient breakdown of the second-degree harmonic into higher-degree harmonics.
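As a cross-check on the figures quoted above, the standard conversion from a rotational acceleration to a rate of change of the length of day (LOD) can be written out explicitly; the mean value Ω̇_T ≈ −7.0 × 10⁻²² s⁻² used below is simply the average of the three tabulated estimates, so this is an illustrative calculation rather than part of the original analysis.

$$
\mathrm{LOD}=\frac{2\pi}{\Omega}
\quad\Longrightarrow\quad
\frac{\mathrm{d}(\mathrm{LOD})}{\mathrm{d}t}
=-\frac{2\pi}{\Omega^{2}}\,\dot{\Omega}
=-\frac{\mathrm{LOD}^{2}}{2\pi}\,\dot{\Omega}.
$$

$$
\dot{\Omega}_T\approx-7.0\times10^{-22}\ \mathrm{s^{-2}}
\;\Longrightarrow\;
\frac{\mathrm{d}(\mathrm{LOD})}{\mathrm{d}t}
\approx\frac{(86\,400\ \mathrm{s})^{2}}{2\pi}\times7.0\times10^{-22}\ \mathrm{s^{-2}}
\approx8.3\times10^{-13}\ \mathrm{s\,s^{-1}}
\approx2.6\ \mathrm{ms\ cy^{-1}},
$$

using 1 cy ≈ 3.16 × 10⁹ s, which is consistent with the 2.7 ms cy⁻¹ increase quoted for the mean Ω̇_T.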


Author(s):  
Francisco Carlos Junior ◽  
Ivan Silva ◽  
Ricardo Jacobi

Reconfigurable architectures have been widely used as accelerators for single-core processors. In the multi-core era, however, it is necessary to review the way reconfigurable arrays are integrated into multi-core processors. Generally, a set of reconfigurable functional units is employed in much the same way as in single-core processors. Unfortunately, a considerable increase in area ensues from this practice. Moreover, in applications with unbalanced workloads across their threads, this approach can lead to inefficient use of the reconfigurable architecture in cores with low or even idle workloads. To cope with this issue, this work proposes and evaluates a partially shared thin reconfigurable array, which allows reconfigurable resources to be shared among the processor's cores. Sharing is performed dynamically by the configuration-scheduler hardware. The results show that the sharing mechanism provided 76% energy savings and improved performance by 41% on average compared with a version without the proposed reconfigurable array. A comparison with a version of the reconfigurable array without the sharing mechanism shows that sharing improved system performance by up to 11.16%.
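The sharing idea can be illustrated with a small software model. The sketch below is a conceptual approximation only: the paper's scheduler is hardware, and the proportional slice-allocation policy shown here is an assumption for the example, not the published design. It grants slices of a shared reconfigurable array to cores in proportion to their pending work, so lightly loaded or idle cores do not hold resources.

```python
# Conceptual software model of dynamically sharing a reconfigurable array
# among cores. This only illustrates the allocation idea; it is not the
# hardware configuration scheduler described in the paper.
from dataclasses import dataclass

@dataclass
class Core:
    core_id: int
    pending_configs: int   # configurations waiting to be mapped to the array

def schedule_array_slices(cores: list[Core], total_slices: int) -> dict[int, int]:
    """Assign array slices to cores in proportion to their pending work,
    so busy cores get more of the shared fabric than idle ones."""
    total_work = sum(c.pending_configs for c in cores)
    if total_work == 0:
        return {c.core_id: 0 for c in cores}
    allocation = {}
    remaining = total_slices
    for c in sorted(cores, key=lambda c: c.pending_configs, reverse=True):
        share = min(remaining, round(total_slices * c.pending_configs / total_work))
        allocation[c.core_id] = share
        remaining -= share
    return allocation

if __name__ == "__main__":
    cores = [Core(0, 8), Core(1, 1), Core(2, 0), Core(3, 3)]
    print(schedule_array_slices(cores, total_slices=16))
    # the busy core 0 receives most slices; the idle core 2 receives none
```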


2020 ◽  
Author(s):  
Hongbo Tan ◽  
Chongyong Shen ◽  
Guiju Wu

The solid Earth is affected by tidal cycles triggered by the gravitational attraction of celestial bodies. However, about 70% of the Earth is covered with seawater, which is also affected by the tidal forces. In coastal areas, the ocean tide loading (OTL) can reach up to 10% of the Earth tide in gravity, 90% in tilt, and 25% in strain (Farrell, 1972). Since 2007, a high-precision continuous gravity observation network of 78 stations has been established in China. The long-term, high-precision tidal data of the network can be used to validate, and even improve, ocean tide models (OTMs).

In this paper, the tidal parameters of each station were extracted by harmonic analysis after careful editing of the data, and eight OTMs were used to calculate the OTL. The results show that the root-mean-square of the tidal residuals (M₀) varies between 0.078 and 1.77 μGal, and that the average errors as a function of distance from the sea for near (0-60 km), middle (60-1000 km) and far (>1000 km) stations are 0.76, 0.30 and 0.21 μGal, respectively. The total final gravity residuals (Tx) of the eight major constituents (M₂, S₂, N₂, K₂, K₁, O₁, P₁, Q₁) for the best OTM have amplitudes ranging from 0.14 to 3.45 μGal. The average correction efficiency for O₁ is 77.0%, compared with 73.1%, 59.6% and 62.6% for K₁, M₂ and Tx. FES2014b provides the best corrections for O₁ at 12 stations, while SCHW provides the best corrections for K₁, M₂ and Tx at 12, 8 and 9 stations, respectively. For the 11 coastal stations there is no clearly best OTM; DTU10, EOT11a and TPXO8 perform slightly better than FES2014b, HAMTIDE and SCHW. For the 17 middle-distance stations, SCHW is clearly the best OTM. For the 7 far stations, FES2014b and SCHW are the best models, although their correction efficiency is lower than at the near and middle-distance stations.

The outcome is mixed: none of the recent OTMs performs best for all tidal waves at all stations. Surprisingly, Schwiderski's model, although 40 years old and with a coarse resolution of 1° × 1°, performs relatively well with respect to the more recent OTMs. Similar results were obtained in Southeast Asia (Francis and van Dam, 2014). This could be due to systematic errors in the surrounding seas affecting all the ocean tide models. Such errors are difficult to detect, but inverting the gravity attraction and loading effects to map the ocean tides in the vicinity of China would be one way to do so.
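As an illustration of how per-constituent correction efficiencies like those above can be evaluated, the sketch below uses a common phasor-based definition (the reduction of the tidal residual after subtracting the model-predicted loading effect). Both the definition and all numbers are assumptions for the example, not values taken from this study.

```python
# Sketch of evaluating how well an ocean tide model (OTM) corrects observed
# gravity tides, using phasor (amplitude, phase) arithmetic. The efficiency
# definition below -- residual reduction relative to the uncorrected residual --
# is a common convention assumed here; all numbers are placeholders.
import cmath
import math

def phasor(amplitude_ugal: float, phase_deg: float) -> complex:
    """Represent a tidal constituent as a complex phasor."""
    return cmath.rect(amplitude_ugal, math.radians(phase_deg))

def correction_efficiency(observed_residual: complex, otl_prediction: complex) -> float:
    """Percentage reduction of the tidal residual after subtracting the
    model-predicted ocean tide loading (OTL) effect."""
    before = abs(observed_residual)
    after = abs(observed_residual - otl_prediction)
    return 100.0 * (1.0 - after / before)

if __name__ == "__main__":
    # M2 example: observed residual (observation minus body tide) vs. OTL model.
    observed = phasor(2.8, 40.0)     # placeholder residual, in microGal
    predicted = phasor(2.5, 35.0)    # placeholder OTL prediction from one OTM
    print(f"M2 correction efficiency: {correction_efficiency(observed, predicted):.1f}%")
```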


2019 ◽  
Vol 8 (3) ◽  
pp. 104 ◽  
Author(s):  
Weilian Li ◽  
Jun Zhu ◽  
Yunhao Zhang ◽  
Yungang Cao ◽  
Ya Hu ◽  
...  

Scientific and appropriate visualizations increase the effectiveness and readability of disaster information. However, existing fusion visualization methods for disaster scenes have deficiencies, such as low scene-visualization efficiency and difficulties with recognizing and sharing disaster information. In this paper, a fusion visualization method for disaster information based on the cooperation of self-explanatory symbols and photorealistic scenes is proposed. The symbol-scene cooperation method, the construction of spatial semantic rules, and fusion visualization under spatial semantic constraints are discussed in detail. Finally, a debris-flow disaster was selected for experimental analysis. The results show that the proposed method can effectively realize the fusion visualization of disaster information, express that information clearly, maintain high visualization efficiency, and provide decision-making support to users involved in the disaster process.
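As an illustration of what a spatial semantic constraint can look like in practice, the sketch below checks two generic placement rules for a self-explanatory symbol (it must lie inside the hazard extent and keep a minimum separation from already placed symbols). These rules are assumptions made for the example, not the rules constructed in the paper.

```python
# Illustrative sketch of a spatial semantic constraint for symbol placement in a
# disaster scene. The rules shown here are generic assumptions for the example,
# not the spatial semantic rules defined in the paper.
from dataclasses import dataclass
import math

@dataclass
class Symbol:
    name: str
    x: float   # scene coordinates (metres)
    y: float

@dataclass
class HazardExtent:
    # axis-aligned bounding box of the debris-flow area
    xmin: float
    ymin: float
    xmax: float
    ymax: float

    def contains(self, x: float, y: float) -> bool:
        return self.xmin <= x <= self.xmax and self.ymin <= y <= self.ymax

def placement_allowed(candidate: Symbol, placed: list[Symbol],
                      extent: HazardExtent, min_separation: float = 50.0) -> bool:
    """Check the semantic constraints before adding a symbol to the scene."""
    if not extent.contains(candidate.x, candidate.y):
        return False                      # symbol must annotate the hazard area
    return all(math.hypot(candidate.x - s.x, candidate.y - s.y) >= min_separation
               for s in placed)           # avoid cluttered, overlapping symbols

if __name__ == "__main__":
    extent = HazardExtent(0, 0, 1000, 600)
    placed = [Symbol("flow-source", 120, 300)]
    print(placement_allowed(Symbol("barrier-dam", 150, 310), placed, extent))  # False: too close
    print(placement_allowed(Symbol("barrier-dam", 600, 200), placed, extent))  # True
```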


Author(s):  
Hiroto Namihira

This chapter proposes a new educational methodology for theoretical content, aiming to transmit the meaning of that content effectively. Visualization enhances this transmission of meaning: by processing visual information, the human brain can immediately grasp the mutual relationships between elements as well as the meaning of the whole, and comprehension becomes even more effective when movement is added to static information. The methodology proposed here, called "The Dynamic Visualization Method," is based on such visualization. It is designed so that students can visually set the allowable conditions before processing them; this freedom of choice enables students to uncover their hidden learning interests. Mathematical content was used to verify the effectiveness of the methodology, with a variety of items adopted ranging from the elementary-school to the university level. The contents of those items are visualized in this chapter, and the educational effects are then discussed.


2019 ◽  
Vol 577 ◽  
pp. 123988
Author(s):  
Guozheng Zhi ◽  
Zhenliang Liao ◽  
Wenchong Tian ◽  
Xin Wang ◽  
Juxiang Chen

2007 ◽  
Vol 28 (3) ◽  
pp. 235-255 ◽  
Author(s):  
Alireza Azmoudeh Ardalan ◽  
Hassan Hashemi-Farahani
