general solutions
Recently Published Documents


TOTAL DOCUMENTS

556
(FIVE YEARS 74)

H-INDEX

30
(FIVE YEARS 3)

2021 ◽  
Vol 15 (1) ◽  
pp. 441-462
Author(s):  
Mereen H.F. Rasheed ◽  
Ayad Z.S. Agha ◽  
Bahman O. Taha

Background: The tangent of the bond stress-slip (displacement) relationship is called the modulus of displacement and forms the basis of the theory used to determine the stress distribution along spliced reinforcement bars. Objective: This research presents a modification of the modulus-of-displacement theory to determine the stress distribution along spliced reinforcement bars in fibre-reinforced concrete. Methods: (1) General differential equations are derived for the concrete stress, the stress in the reinforcement bars, and the bond stress between the bars and the surrounding concrete. (2) The general solutions of these differential equations are determined, and Excel data sheets are prepared to apply the solutions and compute the concrete, steel and bond stresses. Results: The concrete, steel and bond stresses are evaluated along the bar splice length, accounting for the effect of steel-fibre content. Conclusion: The maximum concrete stress occurs at the centre (x = 0) and the minimum at the splice ends; the bond stress is maximum at the splice ends and minimum at the centre; the maximum and minimum steel stresses occur at opposite ends of the splice. The value of σc,max increases linearly with ρ. The concrete stress increases nonlinearly with ρ (%) and linearly with fy and fc′, while increases in k and in the bar diameter have only small effects. The bond stress decreases linearly with Qf and ρ (%).
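To make the structure of such a formulation concrete, the following is a minimal sketch of the kind of governing equation a modulus-of-displacement approach leads to, assuming a linear bond law τ = kΔ (with Δ the slip), a single bar of diameter d_b, and no fibre terms; the symbols E_s, E_c, A_s, A_c and ρ = A_s/A_c are introduced here only for illustration, and the paper's fibre-content corrections (via Qf) are not reproduced.

```latex
% Equilibrium of a bar element of diameter d_b plus slip compatibility,
% assuming a linear bond law \tau(x) = k\,\Delta(x):
\[
  \frac{d\sigma_s}{dx} = \frac{4\,\tau(x)}{d_b},
  \qquad
  \frac{d\Delta}{dx} = \varepsilon_s - \varepsilon_c
                     = \frac{\sigma_s}{E_s} - \frac{\sigma_c}{E_c}.
\]
% Differentiating the compatibility relation, using cross-section
% equilibrium (d\sigma_c/dx = -\rho\, d\sigma_s/dx with \rho = A_s/A_c)
% and substituting the bond law gives a linear second-order ODE:
\[
  \frac{d^2\Delta}{dx^2} - \lambda^2\,\Delta = 0,
  \qquad
  \lambda^2 = \frac{4k}{d_b}\left(\frac{1}{E_s} + \frac{\rho}{E_c}\right),
\]
% whose general solution is hyperbolic,
\[
  \Delta(x) = C_1\cosh(\lambda x) + C_2\sinh(\lambda x),
\]
% with C_1, C_2 fixed by the boundary conditions at the splice ends;
% the steel, concrete and bond stresses then follow by differentiation.
```

The Excel sheets described in the abstract presumably evaluate closed-form solutions of this general type along the splice length.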


InterConf ◽  
2021 ◽  
pp. 323-328
Author(s):  
Оleksandr Palii ◽  
Erik Lapkhanov

The use of a space inflatable platform to accommodate a payload is proposed in the paper. The platform includes a thin-film elastic envelope, a cable system for fixing payload elements to the shell, pressurization systems, an energy system, thermal control systems, attitude and stabilization control systems, and a propulsion system. General solutions for the development of these subsystems of a space inflatable platform are described.


2021 ◽  
Author(s):  
◽  
Isidro M. Alvarez

Learning is an important activity through which humanity has incrementally improved at accomplishing tasks by adapting knowledge and methods based on feedback. Although learning is natural to humans, it is much more difficult to achieve in the technological world, as tasks are often learned in isolation. Software is capable of learning novel techniques and algorithms to solve these basic, individual problems; however, transferring that knowledge to other problems in the same or related domains presents challenges. Solutions often cannot be enumerated to discover the best one, as many problems of interest are intractable in terms of the resources needed to complete them. However, many such problems contain key building blocks of knowledge that can be leveraged to reach a suitable solution. These building blocks encapsulate important structural regularities of the problem. A technique that can learn these regularities without enumeration may produce general solutions that apply to similar problems of any length. This implies reusing learned information.

In order to reuse learned blocks of knowledge, it is important that a program be scalable and flexible. This requires a program capable of taking knowledge from a previous task and applying it to a more complex problem, or to a problem with a similar pattern, so that the new task can be completed in a practical amount of time and with reasonable resources.

In machine learning, the degree of human intervention is important in many tasks; it is generally necessary for a human to provide input to direct and improve learning. In the field of Developmental Learning there is the idea of the Threshold Concept (TC). A TC is transformative information that advances learning: without TCs, the learner cannot progress. In addition, TCs need to be learned in a particular order, much like a curriculum, providing the student with viable progress towards learning more difficult ideas at a faster pace than otherwise. Human input to a learning algorithm can therefore take the form of partitioning a problem into constituent sub-problems. This is a principal concept of Layered Learning (LL), in which a sequence of sub-problems is learned. The sub-problems are self-contained stages that have been separated by a human. This technique is necessary for tasks in which learning a direct mapping from inputs to outputs is intractable with existing learning algorithms.

One of the first artificial learning systems developed was the Learning Classifier System (LCS). Past work has extended LCSs to provide more expressivity through richer representations. One such representation is tree-based and is common to the Genetic Programming (GP) technique. GP is part of the Evolutionary Computation (EC) paradigm and produces solutions represented by trees; tree nodes can contain functions and leaf nodes problem features, giving GP a rich representation. A more recent technique is Code Fragments (CFs). CFs are GP-like sub-trees with an initial maximum height of two. Initially, CFs contained hard-coded functions at the root nodes and problem features or previously learned CFs at the leaf nodes of the sub-trees. CFs provided improved expressivity and scalability over the original ternary alphabet used by LCSs. Additionally, CF-based systems have learned previously intractable problems, e.g. the 135-bit multiplexer.

Although CFs have provided increased scalability, they suffer from a structural weakness: as the problem scales, the chains of CFs grow to intractable lengths, which means that at some point the LCS stops learning. In addition, CFs were originally meant to scale to more complex problems in the same domain, yet it is advantageous to compile cross-domain solutions, as the regularities of a problem might come from domains different from the one expressed by the data.

The proposed thesis is that a CF-based LCS can scale to complex problems by reusing learned solutions of problems as functions at the inner nodes of CFs, together with compaction and Layered Learning. The overall goal is divided into three sub-goals: reuse learned functionality from smaller problems in the root nodes of CF sub-trees; identify a compaction technique that reduces solution size and improves the evaluation time of CFs; and develop a layered learning methodology for a CF system, demonstrated by learning a general solution to an intractable problem, i.e. the n-bit multiplexer.

In this work, Code Fragments are extended to include learned functionality at the root nodes of the sub-trees, in a technique known as XCSCF². A new compaction technique is designed that produces an equivalent set of ternary rules from CF rules; this technique is known as XCSCF3. The work culminates in a new technique, XCSCF*, which combines Layered Learning, Code Fragments and Transfer Learning (TL) of knowledge and functionality to produce scalable and general solutions, i.e. to the n-bit multiplexer problem.

The novel ideas are tested on the multiplexer and hidden multiplexer problems. These problems are chosen because they are difficult due to epistasis, sparsity and non-linearity, and therefore provide ample opportunity for testing the new contributions.

The thesis shows that CFs can be used in various ways to increase scalability and to discover solutions to complex problems. Specifically, three contributions were produced: learned functionality captured in LCS populations from smaller problems was reused in the root nodes of CF sub-trees; an online compaction technique that reduces the evaluation time of CFs was designed; and a layered learning method to train a CF system in a manner leading to a general solution was developed, demonstrated by learning a solution to a previously intractable problem, i.e. the n-bit multiplexer. The thesis concludes with suggestions for future work aimed at providing better scalability when using compaction techniques.
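For reference, the n-bit multiplexer benchmark named above is easy to state: the first k bits form an address that selects one of the remaining 2^k data bits, so n = k + 2^k (6, 11, 20, 37, 70, 135, ...). The sketch below is a minimal, self-contained definition (not code from the thesis) and hints at why enumeration stops being an option long before 135 bits.

```python
import itertools

def multiplexer(bits):
    """n-bit multiplexer: the first k bits form an address that selects
    one of the remaining 2**k data bits (n = k + 2**k, e.g. 6, 11, 20, 37, 70, 135)."""
    k = 0
    while k + 2 ** k < len(bits):
        k += 1
    if k + 2 ** k != len(bits):
        raise ValueError("length must be k + 2**k for some k")
    address = int("".join(map(str, bits[:k])), 2) if k else 0
    return bits[k + address]

# The 6-bit multiplexer (k = 2) is still fully enumerable ...
table = {b: multiplexer(b) for b in itertools.product((0, 1), repeat=6)}
print(len(table))   # 64 input strings

# ... but the 135-bit case mentioned above has 2**135 inputs, which is why
# general, learned rules rather than enumeration are required.
print(2 ** 135)
```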


2021 ◽  
Vol 81 (11) ◽  
Author(s):  
Vsevolod R. Ivanov ◽  
Sergey Yu. Vernov

We consider modified gravity cosmological models that can be transformed into two-field chiral cosmological models by a conformal metric transformation. For the $R^2$ gravity model with an additional scalar field, and for the corresponding two-field model with a cosmological constant and a nonstandard kinetic part of the action, the general solutions have been obtained in the spatially flat FLRW metric. We analyze the correspondence between the cosmic-time solutions obtained and the different possible evolutions of the Hubble parameters in the Einstein and Jordan frames.
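For orientation, the following is a brief sketch of the standard ingredients referred to in the abstract: the spatially flat FLRW metric in which the solutions are sought, and the conformal map that takes an $f(R)$ model to its Einstein-frame (scalar-tensor) form; the specific two-field chiral action of the paper is not reproduced here.

```latex
% Spatially flat FLRW metric in cosmic time t, with scale factor a(t)
% and Hubble parameter H = \dot a/a:
\[
  ds^2 = -dt^2 + a^2(t)\,\bigl(dx^2 + dy^2 + dz^2\bigr).
\]
% Conformal transformation relating the Jordan-frame metric g_{\mu\nu}
% of an f(R) model to the Einstein-frame metric \tilde g_{\mu\nu}:
\[
  \tilde g_{\mu\nu} = \Omega^2\, g_{\mu\nu}, \qquad \Omega^2 = f'(R),
\]
% which trades the higher-derivative R^2 sector for an extra scalar
% degree of freedom; combined with the model's original scalar field
% this yields a two-field system with a non-canonical (chiral) kinetic term.
```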


2021 ◽  
Vol 6 (11) ◽  
Author(s):  
A. Castillo-Castellanos ◽  
S. Le Dizès ◽  
E. Durán Venegas

2021 ◽  
Author(s):  
RuKai Huang ◽  
Sheng hu Ding ◽  
Xin Zhang ◽  
Xing Li

Based on the three-dimensional (3D) general solutions for one-dimensional (1D) hexagonal piezoelectric quasicrystals (PEQCs), this paper studies the frictional contact problem of a 1D hexagonal PEQC layer. The frequency response functions (FRFs) for the layer are derived analytically by applying double Fourier integral transforms to the general solutions and boundary conditions, and are then converted to the corresponding influence coefficients (ICs). The conjugate gradient method (CGM) is used to obtain the unknown pressure distribution, while the discrete convolution-fast Fourier transform (DC-FFT) technique is applied to calculate the phonon and phason displacements and stresses, the electric potentials, and the electric displacements. Numerical results reveal the influence of material parameters and loading conditions on the contact behavior. The obtained 3D contact solutions not only aid further analysis and understanding of the coupling between the phonon, phason and electric fields, but also provide a reference basis for experimental analysis and material development.
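The DC-FFT step named in the abstract can be illustrated with a minimal one-dimensional sketch (a toy illustration, not the authors' implementation): the surface response is the discrete convolution of the influence coefficients with the pressure distribution, evaluated through zero-padded FFTs so that the circular convolution inherent to the FFT does not wrap around.

```python
import numpy as np

def dc_fft_convolve(ic, p):
    """Linear (non-circular) convolution of influence coefficients `ic`
    with a pressure patch `p`, evaluated with zero-padded FFTs."""
    n = len(ic) + len(p) - 1           # length of the full linear convolution
    nfft = 1 << (n - 1).bit_length()   # next power of two >= n avoids wrap-around
    spectrum = np.fft.rfft(ic, nfft) * np.fft.rfft(p, nfft)
    return np.fft.irfft(spectrum, nfft)[:n]

# Sanity check against direct convolution on toy data.
ic = np.exp(-np.linspace(0.0, 3.0, 64))   # toy influence coefficients
p = np.ones(32)                            # toy uniform pressure patch
assert np.allclose(dc_fft_convolve(ic, p), np.convolve(ic, p))
```

In the layered 3D problem the same idea is applied on 2D grids, with one set of ICs per field quantity and the pressure distribution updated iteratively by the CGM.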


Author(s):  
M.H. Hamdan ◽  
S.M. Alzahrani ◽  
M.S. Abu Zaytoon ◽  
S. Jayyousi Dajani

Inhomogeneous Airy and generalized Airy equations with initial and boundary data are considered in this work. Solutions are obtained for constant and variable forcing functions, and the general solutions are expressed in terms of the standard and generalized Nield-Kuznetsov functions of the first and second kinds. Series representations of these functions and efficient methodologies for their computation are presented with examples.
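As an illustration of the kind of representation involved, the particular solution of the inhomogeneous Airy equation can be written by variation of parameters using the Airy functions Ai and Bi, whose Wronskian is $1/\pi$; the Nield-Kuznetsov functions referenced in the abstract package integrals of this type, though their exact definitions in the paper are not reproduced here.

```latex
% Inhomogeneous Airy equation with forcing f(x):
\[
  y''(x) - x\,y(x) = f(x).
\]
% Ai and Bi solve the homogeneous equation, with Wronskian
% W\{\mathrm{Ai},\mathrm{Bi}\} = \mathrm{Ai}\,\mathrm{Bi}' - \mathrm{Ai}'\,\mathrm{Bi} = 1/\pi,
% so variation of parameters gives the particular solution
\[
  y_p(x) = \pi\left[\mathrm{Bi}(x)\int_0^x \mathrm{Ai}(t)\,f(t)\,dt
                  - \mathrm{Ai}(x)\int_0^x \mathrm{Bi}(t)\,f(t)\,dt\right],
\]
% and the general solution y = c_1\,\mathrm{Ai} + c_2\,\mathrm{Bi} + y_p,
% with c_1, c_2 fixed by the initial or boundary data.  For constant
% forcing the bracketed combination reduces, up to sign and scaling,
% to a Nield-Kuznetsov-type integral function of Ai and Bi.
```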


2021 ◽  
Vol 5 (3) ◽  
pp. 117
Author(s):  
Briceyda B. Delgado ◽  
Jorge E. Macías-Díaz

In this work, we investigate analytically the solutions of a nonlinear div-curl system with fractional derivatives of the Riemann–Liouville or Caputo types. To this end, the fractional-order vector operators of divergence, curl and gradient are identified as components of the fractional Dirac operator in quaternionic form. As one of the most important results of this manuscript, we derive general solutions of some non-homogeneous div-curl systems involving fractional-order derivatives of the Riemann–Liouville or Caputo types. A fractional analogue of the Teodorescu transform is presented, and we employ some properties of its component operators, developed in this work, to establish a generalization of the Helmholtz decomposition theorem in fractional space. Additionally, right inverses of the fractional-order curl, divergence and gradient vector operators are obtained using Riemann–Liouville and Caputo fractional operators. Finally, some consequences of these results are provided as applications at the end of this work.
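For context, the classical (integer-order) statement that the paper generalizes is the Helmholtz decomposition of a sufficiently smooth, suitably decaying vector field into an irrotational and a solenoidal part; the fractional result replaces the divergence, curl and gradient below by their Riemann–Liouville or Caputo counterparts.

```latex
% Classical Helmholtz decomposition of a sufficiently smooth vector
% field F on R^3 that decays at infinity:
\[
  \mathbf{F} = -\nabla\varphi + \nabla\times\mathbf{A},
  \qquad
  \nabla\times(\nabla\varphi) = \mathbf{0},
  \qquad
  \nabla\cdot(\nabla\times\mathbf{A}) = 0,
\]
% so the gradient term carries the divergence of F and the curl term
% its rotation; a div-curl system prescribes \nabla\cdot\mathbf{F} and
% \nabla\times\mathbf{F} and asks for F itself.
```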

