Black Hole Algorithm for Sustainable Design of Counterfort Retaining Walls

2020 ◽  
Vol 12 (7) ◽  
pp. 2767 ◽  
Author(s):  
Víctor Yepes ◽  
José V. Martí ◽  
José García

The optimization of cost and CO2 emissions in earth-retaining walls is of practical relevance, since these structures are widely used in civil engineering. Cost optimization is essential for the competitiveness of a construction company, and emission optimization is relevant to the environmental impact of construction. To address the optimization, a black hole metaheuristic was used together with a discretization mechanism based on min–max normalization. The stability of the algorithm was evaluated with respect to the solutions obtained, and the steel and concrete quantities produced by both optimizations were analyzed. Additionally, the geometric variables of the structure were compared. Finally, the results were compared with those of another algorithm applied to the same problem. The results show a trade-off between the use of steel and concrete: solutions that minimize CO2 emissions rely more heavily on concrete than cost-optimized solutions do. When the geometric variables are compared, most remain similar in both optimizations except for the distance between buttresses. Compared with the other algorithm, the black hole algorithm shows good optimization performance.
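The core loop of a black hole metaheuristic with a min–max-normalization discretization step can be sketched as follows. This is a generic textbook-style sketch under assumed bounds and a toy fitness interface; the operators, bounds, and parameters are illustrative and not the paper's implementation.

```python
import random

def black_hole_discrete(fitness, n_vars, levels, stars=20, iters=200, seed=1):
    """Minimal sketch of the black hole metaheuristic with a min-max
    normalization discretization step (illustrative, not the paper's
    exact operators).  Candidate solutions ("stars") live in the
    continuous box [-10, 10]^n_vars; before evaluation each coordinate
    is min-max normalized to [0, 1] and snapped to one of `levels`
    discrete values."""
    rng = random.Random(seed)
    lo, hi = -10.0, 10.0

    def discretize(x):
        # min-max normalization of each coordinate, then snap to a level
        return [min(int((v - lo) / (hi - lo) * levels), levels - 1) for v in x]

    def random_star():
        return [rng.uniform(lo, hi) for _ in range(n_vars)]

    pop = [random_star() for _ in range(stars)]
    best, best_f = None, float("inf")
    for _ in range(iters):
        fits = [fitness(discretize(s)) for s in pop]
        bh = min(range(stars), key=lambda i: fits[i])   # the black hole
        if fits[bh] < best_f:
            best, best_f = discretize(pop[bh]), fits[bh]
        # event-horizon radius: black hole fitness over total fitness
        radius = fits[bh] / (sum(fits) or 1.0)
        for i in range(stars):
            if i == bh:
                continue
            # pull each star toward the black hole
            pop[i] = [v + rng.random() * (b - v)
                      for v, b in zip(pop[i], pop[bh])]
            dist = sum((a - b) ** 2 for a, b in zip(pop[i], pop[bh])) ** 0.5
            if dist < radius:
                pop[i] = random_star()   # swallowed: respawn randomly
    return best, best_f
```

A real wall-design fitness would map each discrete level to a geometric or reinforcement choice and add penalty terms for stability constraints; here `fitness` is any function of the discretized vector.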

Games ◽  
2021 ◽  
Vol 12 (3) ◽  
pp. 53
Author(s):  
Roberto Rozzi

We consider an evolutionary model of social coordination in a 2 × 2 game where two groups of players prefer to coordinate on different actions. Players can pay a cost to learn their opponent's group: if they pay it, they can condition their actions on that group. We assess the stability of outcomes in the long run using stochastic stability analysis. We find that three elements matter for equilibrium selection: group size, strength of preferences, and the cost of information. If the cost is too high, players never learn their opponents' group in the long run. If one group has stronger preferences for its favorite action than the other, or is sufficiently large compared to the other group, every player plays that group's favorite action. If both groups have strong enough preferences, or if neither group is large enough, players play their own favorite actions and miscoordinate in inter-group interactions. Lower costs favor coordination: when the cost is low, players always coordinate on their favorite action in within-group interactions, while in inter-group interactions they coordinate on the favorite action of the group with stronger preferences or larger size.
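The incentive to buy group information can be illustrated with a toy expected-payoff calculation. The parameterization below is ours, not the paper's formal model: opponents are assumed to play their own group's favorite action, coordinating on one's own favorite pays `pref` (> 1), coordinating on the other group's favorite pays 1, and miscoordination pays 0.

```python
def payoff(strategy, p_own, pref, cost):
    """Toy sketch of the value of learning the opponent's group
    (illustrative parameterization).  A player meets someone from their
    own group with probability p_own; everyone else plays their group's
    favorite action."""
    if strategy == "favorite":     # never learn the opponent's group
        return p_own * pref
    if strategy == "condition":    # pay `cost`, then match the opponent
        return p_own * pref + (1 - p_own) * 1 - cost
    raise ValueError(strategy)

# Buying information pays exactly when the miscoordination it avoids,
# worth (1 - p_own), outweighs its price:
print(payoff("condition", 0.6, 2.0, 0.1) > payoff("favorite", 0.6, 2.0, 0.1))
```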


Author(s):  
Tomoyuki Miyashita ◽  
Hiroshi Yamakawa

Abstract In recent years, financial pressures have led engineers to consider not only the functional efficiency of a product but also the cost of its development. To shorten development time, engineers in each discipline must develop and improve their objectives collaboratively, sometimes cooperating with people who have no knowledge of their own discipline. Collaborative design methods have been studied to solve these kinds of problems, but most require some form of negotiation among disciplines and assume that these negotiations succeed. In most real designs, however, the manager of a discipline is reluctant to give up his or her own objectives in favor of others'. To carry out such negotiations smoothly, evaluation criteria are needed that show the efficiency of the product across the designs of each division and, if possible, relative to the products of competing companies as well. In this study, we use Data Envelopment Analysis (DEA) to calculate the efficiency of a design and to show every decision maker the direction in which to develop it. We call such systems supervisor systems and implemented them on computer networks so that every decision maker can use them conveniently. Through simple numerical examples, we show the effectiveness of the proposed method.
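DEA efficiency scores of the kind used here can be computed with a small linear program. The following is the standard textbook CCR multiplier formulation, not the paper's supervisor-system implementation; matrix names and data are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_efficiency(X, Y, o):
    """CCR (constant returns to scale) DEA efficiency of decision-making
    unit o via the multiplier-form LP.  X is an (n, m) input matrix and
    Y an (n, s) output matrix for n units; a score of 1.0 means unit o
    lies on the efficient frontier."""
    n, m = X.shape
    s = Y.shape[1]
    # decision variables: m input weights v, then s output weights u
    c = np.concatenate([np.zeros(m), -Y[o]])        # maximize u . Y[o]
    A_eq = [np.concatenate([X[o], np.zeros(s)])]    # normalization v . X[o] = 1
    b_eq = [1.0]
    A_ub = np.hstack([-X, Y])                       # u . Y[j] <= v . X[j] for all j
    b_ub = np.zeros(n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (m + s))
    return -res.fun
```

For two designs with equal output but inputs 1 and 2, the first scores 1.0 (efficient) and the second 0.5, signaling how much its inputs must shrink to reach the frontier.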


Author(s):  
Alexandre Denis ◽  
Julien Jaeger ◽  
Emmanuel Jeannot ◽  
Marc Pérache ◽  
Hugo Taboada

To amortize the cost of MPI collective operations, nonblocking collectives have been proposed to allow communications to be overlapped with computation. Unfortunately, collective communications are more CPU-hungry than point-to-point communications: running them in a communication thread on a dedicated CPU core makes them slow, while running them on the application cores leads to no overlap. In this article, we propose placement algorithms for progress threads that achieve communication/computation overlap on cores dedicated to communication without degrading performance. We first show that even simple collective operations, such as those based on a chain topology, are not straightforward to progress in the background on a dedicated core. We then propose an algorithm for tree-based collective operations that splits the tree between communication cores and application cores. To get the best of both worlds, the algorithm runs the short but heavy part of the tree on application cores and the long but narrow part on one or several communication cores, trading off overlap against absolute performance. We provide a model to study and predict its behavior and to tune its parameters. We implemented both algorithms in the MPC (MultiProcessor Computing) framework, a thread-based MPI implementation, and ran benchmarks on manycore processors such as the KNL and Skylake, obtaining good results for both performance and overlap.
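The tree-splitting idea can be caricatured in a few lines. The toy model below is entirely ours (the binomial-tree assumption, the names, and the split rule are illustrative, not the paper's algorithm): level l of a binomial broadcast carries 2**l concurrent messages, so the first levels form the long-but-narrow chain and the last levels the short-but-heavy burst, and the model simply assigns each level to a core type.

```python
import math

def split_binomial_broadcast(n_ranks, comm_levels):
    """Hypothetical sketch: a binomial broadcast over n_ranks has
    ceil(log2(n_ranks)) levels; assign the first `comm_levels` narrow
    levels to dedicated communication cores and the remaining wide
    levels to application cores.  Returns (level, messages, core)."""
    depth = math.ceil(math.log2(n_ranks))
    plan = []
    for level in range(depth):
        core = "communication" if level < comm_levels else "application"
        plan.append((level, 2 ** level, core))
    return plan
```

Tuning `comm_levels` moves the boundary between overlap (more work off the application cores) and raw speed (more work on the fast application cores), which is the trade-off the paper's model is built to predict.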


2020 ◽  
Author(s):  
Sebastian Fehrler ◽  
Moritz Janas

We study the choice of a principal either to delegate a decision to a group of careerist experts or to consult them individually and keep the decision-making power. Our model predicts a trade-off between information acquisition and information aggregation. On the one hand, the expected benefit from being informed is larger when the experts are consulted individually; hence, depending on the cost of information, the experts acquire at least as much information as under delegation. On the other hand, any acquired information is better aggregated under delegation, where experts can deliberate secretly. To test the model's key predictions, we ran an experiment. The laboratory results confirm the predicted trade-off, despite some deviations from theory at the individual level. This paper was accepted by Yan Chen, decision analysis.


Electronics ◽  
2020 ◽  
Vol 9 (12) ◽  
pp. 2143
Author(s):  
Alex Ming Hui Wong ◽  
Masahiro Furukawa ◽  
Taro Maeda

Authentication rests on three basic factors: knowledge, ownership, and inherence. Biometrics is the inherence factor and is widely used for authentication because of its convenience. It comprises static biometrics (physical characteristics) and dynamic biometrics (behavioral characteristics), and there is a trade-off between robustness and security. Static biometrics, such as fingerprints and face recognition, are reliable because they are robust, but once stolen they are difficult to reset. Dynamic biometrics, by contrast, are usually considered more secure because behavior changes constantly, but at the cost of robustness. In this paper, we propose a multi-factor authentication method, a rhythm-based dynamic hand gesture, in which the rhythmic pattern is the knowledge factor and the gesture behavior is the inherence factor, and we evaluate its robustness. The proposal can easily be combined with other input methods, since a rhythmic pattern can also be observed, for example, during typing. The rhythmic pattern is also expected to improve the robustness of the gesture behavior by acting as a symbolic cue for the gesture. The results show that our method authenticates a genuine user with a highest accuracy of 0.9301 ± 0.0280 and, when mimicked by impostors, achieves a false acceptance rate (FAR) as low as 0.1038 ± 0.0179.
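The knowledge-factor half of such a scheme, matching an attempted rhythm against an enrolled one, can be sketched very simply. This is an illustrative comparison of inter-tap intervals under a relative tolerance, not the paper's classifier.

```python
def rhythm_match(template, attempt, tol=0.15):
    """Illustrative sketch: both arguments are lists of seconds between
    taps; the attempt passes only if it has the same number of taps and
    every interval is within a relative tolerance of the template's."""
    if len(template) != len(attempt):
        return False
    return all(abs(a - t) <= tol * t for t, a in zip(template, attempt))
```

A full system would combine this rhythm decision with a classifier over the hand-gesture trajectory, so that an impostor must reproduce both the timing and the motion.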


Author(s):  
Godfrey C. Hoskins ◽  
V. Williams ◽  
V. Allison

The method demonstrated is an adaptation of a proven procedure for accurately determining the magnification of light photomicrographs. Because of the stability of modern electrical lenses, the method is shown to be directly applicable for providing precise reproducibility of magnification in various models of electron microscopes. A readily recognizable area of a carbon replica of a crossed-line diffraction grating is used as a standard. The same area of the standard was photographed in Philips EM 200, Hitachi HU-11B2, and RCA EMU 3F electron microscopes at taps representative of the range of magnification of each. Negatives from one microscope were selected as guides and printed at convenient magnifications; negatives from each of the other microscopes were then projected to register with these prints. By deferring measurement to the print rather than comparing negatives, the correspondence of specimen magnification across the three microscopes could be brought to within 2%.
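The arithmetic behind a grating standard is direct; the grating pitch below is an illustrative value, not one quoted in the text. A replica with a known number of lines per millimetre has a true line spacing of 1/lines-per-mm, so total magnification is the spacing measured on the print divided by that true spacing.

```python
def magnification_from_grating(measured_spacing_mm, grating_lines_per_mm):
    """Magnification implied by a diffraction-grating standard: the line
    spacing measured on the print divided by the true line spacing of
    the replica grating."""
    true_spacing_mm = 1.0 / grating_lines_per_mm
    return measured_spacing_mm / true_spacing_mm

# e.g. a 2160 lines/mm replica whose lines appear 10 mm apart on the print
print(magnification_from_grating(10.0, 2160))
```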


2020 ◽  
Vol 4 (02) ◽  
pp. 34-45
Author(s):  
Naufal Dzikri Afifi ◽  
Ika Arum Puspita ◽  
Mohammad Deni Akbar

The Shift to The Front II Komplek Sukamukti Banjaran project is one of the projects implemented by a company engaged in telecommunications. Like every project, it has a time limit specified in the contract. Project scheduling plays an important role in predicting both the cost and the duration of a project, and every project should be completed on or before the contractual deadline. Delays can be anticipated by accelerating the completion time using the crashing method together with linear programming; linear programming streamlines the crashing calculation, which would otherwise require repeated manual iteration. The objective function in this scheduling is to minimize cost, and this study seeks a trade-off between cost and the minimum expected completion time. The acceleration was evaluated by adding 4, 3, 2, or 1 hour(s) of overtime work. The normal project duration is 35 days at a service cost of Rp52,335,690. From the crashing analysis, the chosen alternative is to add 1 hour of overtime, giving a duration of 34 days at a total service cost of Rp52,375,492. This acceleration affects the entire project: Shift to The Front II covers 33 different locations, and if all of them can be accelerated, the completion of the whole project will be shortened accordingly.
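A crashing problem of this kind reduces to a small linear program. The two-activity network, durations, crash limits, and costs below are invented for illustration (they are not the project's data); the structure, precedence constraints with crash variables and a deadline, is the standard formulation.

```python
from scipy.optimize import linprog

# Toy crashing LP: activity A (5 days, crashable by up to 2 days at
# cost 100/day) precedes activity B (4 days, crashable by 1 day at
# 150/day); the deadline is 7 days against a normal duration of 9.
# Variables: x = [sB, finish, yA, yB], with A starting at time 0.
c = [0, 0, 100, 150]        # minimize total crash cost
A_ub = [
    [-1, 0, -1, 0],         # sB >= 5 - yA        (A finishes before B starts)
    [1, -1, 0, -1],         # finish >= sB + 4 - yB
    [0, 1, 0, 0],           # finish <= 7         (deadline)
]
b_ub = [-5, -4, 7]
bounds = [(0, None), (0, None), (0, 2), (0, 1)]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.fun)              # minimum crash cost: crash A (the cheaper
                            # activity) by the full 2 days
```

The real study solves the same structure over the project's full activity network, with overtime hours playing the role of the crash variables.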


2020 ◽  
Vol 3 (1) ◽  
pp. 61
Author(s):  
Kazuhiro Aruga

In this study, two operational methodologies to extract thinned woods were investigated in the Nasunogahara area, Tochigi Prefecture, Japan. Methodology one included manual extraction and light truck transportation. Methodology two included mini-forwarder forwarding and four-ton truck transportation. Furthermore, a newly introduced chipper was investigated. As a result, costs of manual extractions within 10 m and 20 m were JPY942/m3 and JPY1040/m3, respectively. On the other hand, the forwarding cost of the mini-forwarder was JPY499/m3, which was significantly lower than the cost of manual extractions. Transportation costs with light trucks and four-ton trucks were JPY7224/m3 and JPY1298/m3, respectively, with 28 km transportation distances. Chipping operation costs were JPY1036/m3 and JPY1160/m3 with three and two persons, respectively. Finally, the total costs of methodologies one and two from extraction within 20 m to chipping were estimated as JPY9300/m3 and JPY2833/m3, respectively, with 28 km transportation distances and three-person chipping operations (EUR1 = JPY126, as of 12 August 2020).
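The quoted totals are the sums of the stage costs given in the text; a quick roll-up confirms them.

```python
# Per-cubic-metre costs from the text (JPY/m3, 28 km transportation
# distance, three-person chipping crew):
method1 = 1040 + 7224 + 1036   # manual extraction (20 m) + light truck + chipping
method2 = 499 + 1298 + 1036    # mini-forwarder + four-ton truck + chipping
print(method1, method2)        # 9300 2833
```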


Author(s):  
Vincent E. Castillo ◽  
John E. Bell ◽  
Diane A. Mollenkopf ◽  
Theodore P. Stank
