Increasing the Competitiveness of Tidal Systems by Means of the Improvement of Installation and Maintenance Maneuvers in First Generation Tidal Energy Converters—An Economic Argumentation

Energies ◽  
2019 ◽  
Vol 12 (13) ◽  
pp. 2464 ◽  
Author(s):  
Eva Segura ◽  
Rafael Morales ◽  
José A. Somolinos

The most important technological advances in tidal systems are currently taking place in first generation tidal energy converters (TECs), which are installed in areas in which the depth does not exceed 40 m. Some of these devices are fixed to the seabed, and it is therefore necessary to have special high-performance ships to transport them from the base port to the tidal farm and to subsequently recover the main units of these devices. These ships are very costly, making installation costs very high and, in some cases, probably unfeasible. Experience to date shows that the costs of the installation and maintenance procedures depend, to a great extent, on the reliability and accessibility of the devices. One possible means of increasing system performance and decreasing the costs of the installation and maintenance procedures is the definition of automated maneuvers, which would consequently lead to: (i) an increase in the competitiveness of these technologies; (ii) a reduction in the number and duration of installation and maintenance operations; (iii) less human intervention; and (iv) the possibility of using cheaper general-purpose ships rather than high-cost special vessels for maintenance purposes, among other benefits. In this research, we propose a definition of the procedures required for the manual and automated installation and maintenance maneuvers of gravity-based first generation TECs. This definition allows us to quantify the costs of both the manual and automated operations more accurately and to determine the reduction in the cost of the automated installation and maintenance procedures. It also enables us to demonstrate that the automation of these maneuvers may be an interesting means of improving the competitiveness of tidal systems in the near future.

Author(s):  
M. B. Giles ◽  
I. Reguly

High-performance computing has evolved remarkably over the past 20 years, and that progress is likely to continue. However, in recent years, this progress has been achieved through greatly increased hardware complexity with the rise of multicore and manycore processors, and this is affecting the ability of application developers to achieve the full potential of these systems. This article outlines the key developments on the hardware side, both in the recent past and in the near future, with a focus on two key issues: energy efficiency and the cost of moving data. It then discusses the much slower evolution of system software, and the implications of all of this for application developers.


2021 ◽  
Author(s):  
Francisco J. Castellanos ◽  
Jose J. Valero-Mas ◽  
Jorge Calvo-Zaragoza

Abstract The k-nearest neighbor (kNN) rule is one of the best-known distance-based classifiers, and is usually associated with high performance and versatility as it requires only the definition of a dissimilarity measure. Nevertheless, kNN also suffers from low efficiency since, for each new query, the algorithm must carry out an exhaustive search of the training data, and this drawback is much more relevant when considering complex structural representations, such as graphs, trees or strings, owing to the cost of the dissimilarity metrics. This issue has generally been tackled through the use of data reduction (DR) techniques, which reduce the size of the reference set, but the complexity of structural data has historically limited their application in the aforementioned scenarios. A DR algorithm known as reduction through homogeneous clusters (RHC) has recently been adapted to string representations, but as obtaining the exact median value of a set of string data is known to be computationally difficult, its authors resorted to computing the set-median value. Under the premise that a more exact median value may be beneficial in this context, we therefore present a new adaptation of the RHC algorithm for string data in which an approximate median computation is carried out. The results obtained show significant improvements over the set-median version of the algorithm, in terms of both classification performance and reduction rates.
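As a rough illustration of the concepts in this abstract, here is a minimal Python sketch of kNN classification over strings with edit distance, plus the set-median computation the authors mention as the baseline. Function names and data are illustrative; RHC itself and the approximate median are not implemented here.

```python
from collections import Counter

def edit_distance(a, b):
    # classic dynamic-programming Levenshtein distance between two strings
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def knn_classify(query, reference, k=3):
    # reference: list of (string, label); the exhaustive search over the
    # whole reference set is exactly the cost DR techniques try to reduce
    nearest = sorted(reference, key=lambda s: edit_distance(query, s[0]))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

def set_median(strings):
    # set-median: the member of the set minimising total distance to all others
    return min(strings, key=lambda s: sum(edit_distance(s, t) for t in strings))
```

Note how `set_median` is restricted to strings already in the set, which is why a (generalised) median computed outside the set can be more exact, at a higher computational cost.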


2017 ◽  
Vol 19 (74) ◽  
pp. 67-70
Author(s):  
M. Sharan ◽  
K. Grymak

An analysis of publications on the use of the embryo transplantation method in sheep breeding was conducted. The theoretical basis of the method is the large number of germ cells in the ovaries of females, most of which do not participate in reproduction under the usual method, together with the high probability that embryos inherit genetic characteristics after transplantation into a recipient. On this basis, several dozen descendants can be obtained from a female over her lifetime. The main aspects of embryo transplantation technology concern the development of new methods that ensure the greatest number of good-quality embryos with desirable genotypes, both by the induction of multiple ovulation and by the cultivation of follicular oocytes outside the body. Its use is limited by a number of factors that are not well studied, many of them contradictory, in particular the unpredictable results of multiple ovulation induction. The development of a method for predicting the results of polyovulation would significantly increase the efficiency of embryo transplantation and significantly reduce the cost of labor and financial resources. It has been shown that one of the main factors in making transplantation more practical is the development of methods for embryo cryopreservation. This would simplify the matching of recipients and donors by sexual cycle, enable long-distance transportation, and make transplantation possible in planned breeding herds. The further development of the method, together with modern advances in genetics, embryology and endocrinology, will in the near future lead to a comprehensive method for the accelerated creation of high-performance animals in sheep breeding flocks.


Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1735
Author(s):  
Alessandra Fais ◽  
Giuseppe Lettieri ◽  
Gregorio Procissi ◽  
Stefano Giordano ◽  
Francesco Oppedisano

One of the most challenging tasks for network operators is implementing accurate per-packet monitoring, looking for signs of performance degradation, security threats, and so on. Upon critical event detection, corrective actions must be taken to keep the network running smoothly. Implementing this mechanism requires the analysis of packet streams in a real-time (or near-real-time) fashion. In a softwarized network context, Stream Processing Systems (SPSs) can be adopted for this purpose. Recent solutions based on traditional SPSs, such as Storm and Flink, can support the definition of general complex queries, but they show poor performance at scale. To handle input data rates in the order of gigabits per second, programmable switch platforms are typically used, although they offer limited expressiveness. With the proposed approach, we intend to offer high performance and expressive power in a unified framework by relying solely on SPSs for multicores. Captured packets are translated into a proper tuple format, and network monitoring queries are applied to tuple streams. Packet analysis tasks are expressed as streaming pipelines running on general-purpose programmable network devices, and a second stage of elaboration can process aggregated statistics from different devices. Experiments carried out with an example monitoring application show that the system is able to handle realistic traffic at 10 Gb/s. The same application scales almost up to 20 Gb/s thanks to simple optimizations of the underlying framework. Hence, the approach proves to be viable and calls for the investigation of more extensive optimizations to support more complex elaborations and higher data rates.
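The two-step idea described here, translating captured packets into tuples and then applying monitoring queries to the tuple stream, can be sketched in a few lines of Python. The tuple schema and field names are illustrative assumptions; the actual system runs as streaming pipelines on multicore SPSs at line rate.

```python
from collections import Counter

def to_tuples(packets):
    # translate raw packet records into a fixed tuple schema:
    # (src, dst, proto, length) -- field names are illustrative
    for p in packets:
        yield (p["src"], p["dst"], p["proto"], p["len"])

def bytes_per_source(tuples):
    # a simple monitoring query: aggregate byte counts per source address;
    # in a real SPS this would run continuously over windows of the stream
    stats = Counter()
    for src, _dst, _proto, length in tuples:
        stats[src] += length
    return stats

# toy input standing in for captured traffic
packets = [
    {"src": "10.0.0.1", "dst": "10.0.0.2", "proto": "tcp", "len": 100},
    {"src": "10.0.0.1", "dst": "10.0.0.3", "proto": "udp", "len": 50},
    {"src": "10.0.0.9", "dst": "10.0.0.2", "proto": "tcp", "len": 70},
]
stats = bytes_per_source(to_tuples(packets))
```

A second elaboration stage, as in the abstract, would consume such per-device `stats` aggregates rather than raw packets.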


2020 ◽  
Vol 2 (2) ◽  
pp. 128-143
Author(s):  
Tedi Budiman

A financial information system is an information system that provides information about financial matters and the flow of money to individuals or groups of people, both inside and outside the company. Financial information systems are used to solve financial problems in a company by meeting three financial principles: fast, safe, and inexpensive. The fast principle means that the financial information system must be able to provide the required data on time and meet users' needs. The safe principle means that the financial information system must be designed with internal controls in mind so that company assets are protected. The inexpensive principle means that the cost of implementing a financial information system must be kept low. We therefore need technological media that can solve financial problems and deliver financial information to the relevant parties quickly, safely, and cheaply. Computer and internet technology are prominent examples of such developments. Starting from these financial problems and technological advances, the authors built a website-based financial management application to support the parties that perform financial management and supervision. The application program was developed with the Waterfall method, with the following stages: software requirements analysis, software design, program code writing, testing, support, and maintenance.


2000 ◽  
Vol 151 (1) ◽  
pp. 1-10 ◽  
Author(s):  
Stephan Wild-Eck ◽  
Willi Zimmermann

Two large-scale surveys of attitudes towards forests, forestry and forest policy in the second half of the nineties have been carried out. This work was done on behalf of the Swiss Confederation by the Chair of Forest Policy and Forest Economics of the Federal Institute of Technology (ETH) in Zurich. Not only did the two studies use very different methods, but the results also varied greatly as far as infrastructure and basic conditions were concerned. One of the main differences between the two studies was that the first dealt only with mountainous areas, whereas the second covered the whole Swiss population. The results of the studies reflect these differences: each produced its own specific findings. Where the same (or similar) questions were asked, the answers highlight not only how the attitudes of those questioned differ, but also views that they hold in common. Both surveys showed positive attitudes towards forests in general, a deep-seated appreciation of the forest as a recreational area, and a positive approach to tending. Detailed results of the two surveys will be available in the near future.


2014 ◽  
Vol 36 (4) ◽  
pp. 790-798
Author(s):  
Kai ZHANG ◽  
Shu-Ming CHEN ◽  
Yao-Hua WANG ◽  
Xi NING

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Adriana M. De Mendoza ◽  
Soňa Michlíková ◽  
Johann Berger ◽  
Jens Karschau ◽  
Leoni A. Kunz-Schughart ◽  
...  

Abstract Radiotherapy can effectively kill malignant cells, but the doses required to cure cancer patients may inflict severe collateral damage to adjacent healthy tissues. Recent technological advances in clinical application have revitalized hyperthermia treatment (HT) as an option to improve radiotherapy (RT) outcomes. Understanding the synergistic effect of simultaneous thermoradiotherapy via mathematical modelling is essential for treatment planning. We here propose a theoretical model in which the thermal enhancement ratio (TER) relates to the cell fraction being radiosensitised by the infliction of sublethal damage through HT. Further damage finally kills the cell or abrogates its proliferative capacity in a non-reversible process. We suggest the TER to be proportional to the energy invested in the sensitisation, which is modelled as a simple rate process. Assuming protein denaturation as the main driver of HT-induced sublethal damage and considering the temperature dependence of the heat capacity of cellular proteins, the sensitisation rates were found to depend exponentially on temperature, in agreement with previous empirical observations. Our findings point towards an improved definition of thermal dose in concordance with the thermodynamics of protein denaturation. Our predictions reproduce experimental in vitro and in vivo data well, explaining the thermal modulation of cellular radioresponse in simultaneous thermoradiotherapy.
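The abstract describes sensitisation as a simple rate process whose rate depends exponentially on temperature. A minimal Python sketch of an Arrhenius-type rate illustrates this qualitative behaviour; the functional form and all parameter values here are illustrative assumptions, not taken from the paper.

```python
import math

def sensitisation_rate(T, A=1.0, E_a=1.0e5, R=8.314):
    # Arrhenius-type rate law: k(T) = A * exp(-E_a / (R * T))
    # T   : absolute temperature in kelvin
    # A   : pre-exponential factor (hypothetical)
    # E_a : activation energy in J/mol (hypothetical)
    # R   : universal gas constant in J/(mol*K)
    return A * math.exp(-E_a / (R * T))
```

Small increases in temperature produce disproportionately large increases in the rate, which is the qualitative signature of the exponential temperature dependence reported by the authors.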


1983 ◽  
Vol 31 (1_suppl) ◽  
pp. 60-76
Author(s):  
Patricia A. Morgan

Patricia Morgan's paper describes what happens when the state intervenes in the social problem of wife-battering. Her analysis refers to the United States, but there are clear implications for other countries, including Britain. The author argues that the state, through its social problem apparatus, manages the image of the problem by a process of bureaucratization, professionalization and individualization. This serves to narrow the definition of the problem, and to depoliticize it by removing it from its class context and viewing it in terms of individual pathology rather than structure. Thus refuges were initially run by small feminist collectives which had a dual objective of providing a service and promoting among the women an understanding of their structural position in society. The need for funds forced the groups to turn to the state for financial aid. This was given, but at the cost to the refuges of losing their political aims. Many refuges became larger, much more service-orientated and more diversified in providing therapy for the batterers and dealing with other problems such as alcoholism and drug abuse. This transformed not only the refuges but also the image of the problem of wife-battering.


2021 ◽  
Vol 62 (5) ◽  
Author(s):  
Stefan Hoerner ◽  
Shokoofeh Abbaszadeh ◽  
Olivier Cleynen ◽  
Cyrille Bonamy ◽  
Thierry Maître ◽  
...  

Abstract State-of-the-art technologies for wind and tidal energy exploitation focus mostly on axial turbines. However, cross-flow hydrokinetic tidal turbines possess interesting features, such as higher area-based power density in array installations and shallow water, as well as a generally simpler design. Up to now, the highly unsteady flow conditions and cyclic blade stall have hindered deployment at large scales because of the resulting low single-turbine efficiency and fatigue failure challenges. Concepts exist which overcome these drawbacks by actively controlling the flow, at the cost of increased mechatronic complexity. Here, we propose a bioinspired approach with hyperflexible turbine blades. The rotor naturally adapts to the flow through deformation, reducing flow separation and stall in a passive manner. This results in higher efficiency and increased turbine lifetime through decreased structural loads, without compromising the simplicity of the design.

