technical limitation
Recently Published Documents


TOTAL DOCUMENTS: 63 (five years: 36)

H-INDEX: 7 (five years: 3)

Author(s):  
Kabiru Yusuf ◽  
Dahiru Sani Shuaibu ◽  
Suleiman Aliyu Babale

In this paper, we investigated the effect of different channel propagation characteristics on the performance of 4G systems operating from high-altitude platforms (HAPs). Past work on the use of high-altitude platforms for communication purposes mostly assumed that the platform is quasi-stationary. The technical limitation of this assumption is the difficulty of keeping the platform's position in space stable; antenna steering and other approaches have been proposed as solutions to this problem. In this paper, we propose a channel model that accounts for the motion of the platform. This was done by investigating the effect of the Doppler shift on the carrier frequency as signals propagate between the transmitter and receiver while the high-altitude platform is in motion. The basic free-space model was used and subjected to the frequency variation caused by the continuous random shift due to the motion of the HAPs. The trajectory path greatly affects system performance. Trajectories of 30 km, 100 km, and 500 km radii were simulated, using an acute elevation angle. The proposed model was also compared with two other channel models to illustrate its performance. The results show that the proposed model behaves similarly to the existing models except at base station IDs 35 and 45, where the largest deviation of 20 dBm was observed; deviations at the other stations were below 2 dBm.
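The core mechanism described above is a free-space path-loss computation evaluated at a Doppler-shifted carrier. Below is a minimal sketch of that idea, not the authors' code; the platform speed, altitude, distance, and carrier frequency are illustrative assumptions, since none are given in the abstract.

```python
# Sketch: free-space path loss at a carrier frequency perturbed by the
# Doppler shift induced by HAP motion. All numeric values are assumed.
import math

C = 3.0e8  # speed of light, m/s

def doppler_shifted_fspl(distance_m, f_carrier_hz, v_platform_ms, angle_rad):
    """Free-space path loss (dB) evaluated at a Doppler-shifted carrier.

    The radial velocity component v*cos(angle) shifts the carrier by
    f_c * v_radial / c; the path loss is then computed at the shifted
    frequency, mirroring the continuous-shift idea described above.
    """
    v_radial = v_platform_ms * math.cos(angle_rad)
    f_shifted = f_carrier_hz * (1.0 + v_radial / C)
    return 20 * math.log10(4 * math.pi * distance_m * f_shifted / C)

# Example (assumed numbers): 2 GHz carrier, 20 km slant distance,
# platform moving at 40 m/s, observed at a 30-degree (acute) angle.
loss_db = doppler_shifted_fspl(20e3, 2.0e9, 40.0, math.radians(30))
print(f"path loss: {loss_db:.2f} dB")
```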


Sensors ◽  
2021 ◽  
Vol 22 (1) ◽  
pp. 72
Author(s):  
Sanghun Jeon ◽  
Ahmed Elsharkawy ◽  
Mun Sang Kim

In visual speech recognition (VSR), speech is transcribed using only visual information to interpret tongue and teeth movements. Recently, deep learning has shown outstanding performance in VSR, with accuracy exceeding that of lipreaders on benchmark datasets. However, several problems still exist when using VSR systems. A major challenge is distinguishing words with similar pronunciations, called homophones, which lead to word ambiguity. Another technical limitation of traditional VSR systems is that visual information does not provide sufficient data for learning words such as "a", "an", "eight", and "bin", because they are shorter than 0.02 s. This report proposes a novel lipreading architecture that combines three different convolutional neural networks (CNNs): a 3D CNN, a densely connected 3D CNN, and a multi-layer feature-fusion 3D CNN, followed by a two-layer bidirectional gated recurrent unit. The entire network was trained using connectionist temporal classification. On the standard automatic speech recognition evaluation metrics, the proposed architecture reduced the character and word error rates of the baseline model by 5.681% and 11.282%, respectively, on the unseen-speaker dataset. The proposed architecture exhibits improved performance even when visual ambiguity arises, thereby increasing VSR reliability for practical applications.
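To make the pipeline concrete, here is a minimal PyTorch sketch of the architecture family named above: a 3D-CNN front-end over video frames, a two-layer bidirectional GRU, and CTC training. It collapses the paper's three CNN branches into a single branch, and all layer sizes are illustrative assumptions, not the authors' configuration.

```python
# Sketch of a 3D-CNN + BiGRU + CTC lipreading model (assumed sizes).
import torch
import torch.nn as nn

class LipreadingSketch(nn.Module):
    def __init__(self, vocab_size, hidden=256):
        super().__init__()
        # Input: (batch, channels, time, height, width) video clips.
        self.frontend = nn.Sequential(
            nn.Conv3d(3, 32, kernel_size=(3, 5, 5), padding=(1, 2, 2)),
            nn.ReLU(),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),  # pool space, keep time
            nn.Conv3d(32, 64, kernel_size=(3, 3, 3), padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d((None, 4, 4)),   # keep time dimension
        )
        self.gru = nn.GRU(64 * 4 * 4, hidden, num_layers=2,
                          bidirectional=True, batch_first=True)
        self.head = nn.Linear(2 * hidden, vocab_size + 1)  # +1 CTC blank

    def forward(self, video):                  # video: (B, 3, T, H, W)
        feats = self.frontend(video)           # (B, 64, T, 4, 4)
        b, c, t, h, w = feats.shape
        feats = feats.permute(0, 2, 1, 3, 4).reshape(b, t, c * h * w)
        out, _ = self.gru(feats)               # (B, T, 2*hidden)
        return self.head(out).log_softmax(-1)  # per-frame log-probs

# CTC loss expects (T, B, vocab+1) log-probabilities.
model = LipreadingSketch(vocab_size=28)
video = torch.randn(2, 3, 40, 64, 128)         # 2 clips, 40 frames each
log_probs = model(video).permute(1, 0, 2)
targets = torch.randint(1, 29, (2, 10))        # label 0 is the blank
loss = nn.CTCLoss(blank=0)(
    log_probs, targets,
    torch.full((2,), 40, dtype=torch.long),    # input lengths
    torch.tensor([10, 10]))                    # target lengths
```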


2021 ◽  
Author(s):  
Xiaoyang Yu

In order to describe my findings/conclusions systematically, a new semantic system (i.e., a new language) has to be intentionally defined by the present article. Humans are limited in what they know by the technical limitation of their cortical language network. The conventionally-called “physical/objective reality” around my conventionally-called “physical/objective body” is actually a geometric mathematical model (being generated/mathematically-modeled by my brain) – it's actually a subset/component/part/element of my brain’s mind/consciousness. A reality is a situation model (SM). Our universe is an autonomous objective parallel computing automaton (aka state machine) which evolves by itself automatically/unintentionally – wave-particle duality and Heisenberg’s uncertainty principle can be explained under this SM of my brain. Each elementary particle (as a building block of our universe) is an autonomous mathematical entity itself (i.e., a thing in itself). Our universe has the same nature as a Game of Life system – both are autonomous objective parallel-computing automata. If we are happy to accept randomness, then it is obviously possible that all other worlds in the many-worlds interpretation do not exist objectively. The conventionally-called “space” does not exist objectively. “Time” and “matter” are not physical. Consciousness is the subjective-form (aka quale) of the mathematical models (of the objective universe) which are intracorporeally/subjectively used by the control logic of a Turing machine’s program directly-fatedly. A Turing machine’s consciousness or deliberate decisions/choices should not be able to actually/objectively change/control/drive the (autonomous or directly-fated) worldline of any elementary particle within this world. Besides the Schrödinger equation (or another mathematical equation/function which is yet to be discovered) which is a valid/correct/factual causality of our universe, every other causality (of our universe) is either invalid/incorrect/counterfactual or can be proved by deductive inference based on the Schrödinger equation (or the aforementioned yet-to-be-discovered mathematical equation/function) only. Consciousness plays no causal role (“epiphenomenalism”), or in other words, any cognitive/behavioural activity can in principle be carried out without consciousness (“conscious inessentialism”). If the “loop quantum gravity” theory is correct, then time/space does not actually/objectively exist in the objective-evolution of the objective universe, or in other words, we should not use the subjective/mental concept of “time”, “state” or “space” to describe/imagine the objective-evolution of our universe.
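The Game of Life comparison can be made concrete. Below is a minimal NumPy sketch of such an autonomous parallel-computing automaton: every cell's next state follows mechanically from purely local rules, with no external intervention (this is a standard illustration, not code from the article).

```python
# Sketch: Conway's Game of Life as an autonomous parallel automaton.
import numpy as np

def life_step(grid):
    """One synchronous update on a toroidal (wrap-around) grid."""
    # Count each cell's eight neighbours at once via shifted copies.
    neighbours = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if (dy, dx) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

# A "glider" pattern evolves deterministically, state by state.
grid = np.zeros((8, 8), dtype=int)
grid[1, 2] = grid[2, 3] = grid[3, 1] = grid[3, 2] = grid[3, 3] = 1
for _ in range(4):
    grid = life_step(grid)
```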


2021 ◽  
Vol 13 (21) ◽  
pp. 12150
Author(s):  
Natalia Pires Martins ◽  
Sumit Srivastava ◽  
Francisco Veiga Simão ◽  
He Niu ◽  
Priyadharshini Perumal ◽  
...  

Medium and highly sulfidic tailings are high-volume wastes that can lead to severe environmental damage if not properly managed. Due to their high content of sulfide minerals, these tailings can undergo weathering when put in contact with oxygen and water, generating acid mine drainage (AMD). The moderate-to-high sulfide content is also an important technical limitation for their use in the production of construction materials. This paper reviews the use of sulfidic tailings as raw material in construction products, with a focus on cement, concrete, and ceramics. When used as aggregates in concrete, sulfidic tailings can cause degradation by internal sulfate attack. In building ceramics, their use without prior treatment is undesirable due to the formation of a black reduction core, efflorescence, SOx emissions, and the associated costs. Moreover, their intrinsically low reactivity is a barrier to their use as supplementary cementitious materials (SCMs) and as precursors for alkali-activated materials (AAMs). Nevertheless, the production of calcium sulfoaluminate (CSA) cement can be a suitable path for the valorization of medium and highly sulfidic tailings. Otherwise difficult to upcycle, sulfidic tailings could be used in the clinker raw meal as an alternative raw material. Not only is the SO3- and SiO2-rich bulk material incorporated into reactive clinker phases, but some minor constituents in the tailings may also contribute to the production of such low-CO2 cements at lower temperatures. However, this valorization route remains poorly explored and demands further research.


2021 ◽  
Author(s):  
Federica Liccardo ◽  
Matteo Lo Monte ◽  
Brunella Corrado ◽  
Martina Veneruso ◽  
Simona Celentano ◽  
...  

Currently, a major technical limitation of microscopy-based image analysis is the linkage error: the distance between, for example, the target epitope of a cellular protein and the fluorescence emitter whose position is ultimately detected in the microscope. With the continuously improving resolution of today's (super-resolution) microscopes, linkage errors can severely hamper the correct interpretation of images; they are usually introduced by standard intracellular staining reagents such as fluorescently labelled antibodies. The linkage error of standard labelled antibodies is caused by the size of the antibody and the random distribution of fluorescent emitters on the antibody surface. Together, these two factors account for a fluorescence displacement of ~40 nm when staining proteins by indirect immunofluorescence, and ~20 nm when staining with fluorescently coupled primary antibodies. In this study, we describe a class of staining reagents that reduces the linkage error more than five-fold compared to conventional staining techniques. These reagents, called Fluo-N-Fabs, consist of an antigen-binding fragment (Fab) of a full-length antibody that is selectively conjugated at the N-terminal amino group with fluorescent organic molecules, thereby reducing the distance between the fluorescent emitter and the protein target of the analysis. Fluo-N-Fabs also penetrate tissues and highly crowded cell compartments, thus allowing for the efficient detection of cellular epitopes of interest in a wide range of fixed samples. We believe this class of reagents addresses an unmet need in cell-biological super-resolution imaging studies, where the precise localization of the target of interest is crucial for the understanding of complex biological phenomena.
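A short numeric sketch of why this matters follows. The linkage figures (~40 nm, ~20 nm, and a more than five-fold reduction) come from the abstract; the quadrature error model and the 10 nm localization precision are my illustrative assumptions, not the authors' analysis.

```python
# Sketch: how linkage error contributes to the total error budget,
# combining independent error sources in quadrature (an assumption).
import math

def total_error_nm(localization_precision_nm, linkage_error_nm):
    """Total error as the quadrature sum of two independent terms."""
    return math.hypot(localization_precision_nm, linkage_error_nm)

for label, linkage_nm in [("indirect immunofluorescence", 40.0),
                          ("labelled primary antibody", 20.0),
                          ("Fluo-N-Fab (>5x smaller)", 40.0 / 5)]:
    # 10 nm localization precision is an assumed, typical figure.
    print(f"{label:30s} -> {total_error_nm(10.0, linkage_nm):5.1f} nm total")
```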


Author(s):  
Ikuo Kurisaki ◽  
Shigenori Tanaka

Multimeric protein complexes are molecular apparatuses that regulate biological systems and often determine their fate. Among proteins forming such molecular assemblies, amyloid proteins have drawn attention for over half a century, since amyloid fibril formation of these proteins is supposed to be a common pathogenic cause of neurodegenerative diseases. This process is triggered by the accumulation of fibril-like aggregates, yet the microscopic mechanisms remain mostly elusive due to the technical limitations of experimental methodologies, which cannot individually observe each of the diverse aggregate species in aqueous solution. We addressed this problem by employing atomistic molecular dynamics simulations of the paradigmatic amyloid protein, amyloid-β(1-42) (Aβ). Seven different dimeric forms of oligomeric Aβ fibril-like aggregates in aqueous solution, ranging from tetramer to decamer, were considered. We found additive effects of the size of these fibril-like aggregates on their thermodynamic stability, and we clarified the kinetic suppression of protomer-protomer dissociation reactions at and beyond the point of pentamer-dimer formation. This observation was obtained for the specific combination of Aβ protomer structure and physicochemical conditions examined here, while it is worthwhile to recall that several amyloid fibrils take dimeric forms of their protomers. We thus conclude that the stable formation of a fibril-like protomer dimer should be involved in a turning point at which the rapid growth of amyloid fibrils is triggered.
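The "kinetic suppression" language suggests a barrier picture that can be sketched quantitatively. The snippet below assumes a simple Arrhenius-type relation between a dissociation free-energy barrier and the dissociation rate; the barrier values are invented for illustration and are not from the simulations above.

```python
# Sketch: a higher dissociation barrier exponentially suppresses the
# protomer-protomer dissociation rate (Arrhenius-type assumption).
import math

KB_KCAL = 0.0019872   # Boltzmann constant, kcal/(mol*K)
T = 300.0             # temperature, K

def relative_dissociation_rate(barrier_kcal_per_mol):
    """Rate relative to a barrier-free reference: k/k0 = exp(-dG/kB*T)."""
    return math.exp(-barrier_kcal_per_mol / (KB_KCAL * T))

# Hypothetical barriers growing with aggregate size (tetramer -> decamer):
for n_mer, barrier in [(4, 5.0), (5, 8.0), (6, 11.0), (10, 15.0)]:
    print(f"{n_mer}-mer dimer: k/k0 = {relative_dissociation_rate(barrier):.2e}")
```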


2021 ◽  
Author(s):  
Shiori Tanaka ◽  
Shingo Kanemura ◽  
Masaki Okumura ◽  
Kazuyuki Iwaikawa ◽  
Kenichi Funamoto ◽  
...  

Surface functionalization is a key process in rendering various materials biocompatible. Although a number of techniques and technologies have been developed for biofunctionalization, plasma treatment enables highly efficient surface modification. Extending plasma treatment to biomolecules in the liquid phase would allow biofunctionalization to be controlled via a simple process. However, the interactions between plasma discharge and biomolecules or solvents are poorly understood, potentially limiting the utility of plasma treatment. In this study, we developed a technology for substrate biofunctionalization that does not require surface modification but instead involves direct treatment of collagen molecules with a liquid-phase plasma discharge. Biofunctionalization of collagen by plasma treatment comprises three processes that increase its reactivity with hydrophobic substrates: (1) charge-dependent changes in the surface and interfacial properties of the collagen solution; (2) local conformational changes of the collagen molecules without global structural alterations; and (3) induction of a micelle-like association formed by collagen molecules. We anticipate that such plasma-induced functionalization of protein molecules will provide a versatile technique for biomaterial applications, including those related to pharmaceuticals and cosmetics.


2021 ◽  
Vol 15 (37) ◽  
Author(s):  
Marcel Rolf Pfeifer

Purpose of the article: The paper focuses on the potentials and benefits that controlling provides for companies in the transition period towards Industry 4.0. Operative production controlling provides data that shall be used in the future to apply the concept of smart factories. This article proposes a controlling architecture based on computer-aided standardization (CAS). Methodology/methods: The paper develops an architecture for operational production controlling based on an international literature review. Literature on controlling 4.0 is found mostly in German-language publications. While that literature focuses on controlling as a whole or on strategic controlling, this paper looks at operational controlling and its further use and development towards smart factories. Scientific aim: The aim of this article is to develop a model of an operational production controlling architecture able to suit the requirements of smart factories by using computer-aided standardization. Findings: Research is working on concepts for Industry 4.0 and their path towards real implementation. Competitive advantage in Industry 4.0 is created through digitization and robotization. An architecture that fully complies with Industry 4.0 is still pending in real companies due to technical limitations in data storing, retrieval, and processing, as well as in storage capacity. Conclusions: The paper discussed the development of a controlling architecture suitable for Industry 4.0. Controlling already makes use of data today, and smart factories shall make use of production data. Production controlling combined with a CAS that provides standardized data on all manufacturing, maintenance, and auxiliary processes enables a step forward towards smart factories. The concept of production controlling combined with the strengths of a CAS may be seen as the basis from which to target smart factories and Industry 4.0.
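To illustrate the idea of CAS-standardized data feeding operational production controlling, here is a minimal sketch. The schema and the KPI are my illustration of the concept, not the paper's model; all field names are hypothetical.

```python
# Sketch: one standardized record schema for all shop-floor processes,
# from which operational-controlling KPIs can be computed uniformly.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ProcessRecord:
    """One standardized data point from any shop-floor process."""
    process_id: str        # standardized process identifier from the CAS
    kind: str              # "manufacturing" | "maintenance" | "auxiliary"
    started: datetime
    finished: datetime
    units_produced: int
    units_scrapped: int

def scrap_rate(records):
    """An example operational-controlling KPI over standardized records."""
    produced = sum(r.units_produced for r in records)
    scrapped = sum(r.units_scrapped for r in records)
    total = produced + scrapped
    return scrapped / total if total else 0.0
```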

