SDNOFS: Software Defined Networking with OpenFlow Switches & BCN-ECN with ALTQ for Congestion Avoidance

2020 ◽  
Vol 8 (5) ◽  
pp. 3710-3719

High-performance computing (HPC) clusters in a cloud environment help scientists and researchers solve complex problems that demand substantial computational capability. The main reason for using a message-passing model is to promote application development, porting, and execution on the variety of parallel computers that support the paradigm. Since congestion avoidance is critical for the efficient use of such applications, an efficient method for congestion management in software-defined networks based on the OpenFlow protocol is presented. This paper proposes two methods. First, congestion is avoided using Software-Defined Networking (SDN) with OpenFlow switches; OpenFlow was originally defined as a communication protocol for SDN environments that allows the SDN controller to interact directly with the forwarding plane of network devices such as switches and routers, both physical and virtual (hypervisor-based), so that the network can better adapt to changing business requirements. Second, quality of service is improved and congestion avoided using BCN-ECN with ALTQ. Compared with the existing method, SDN OpenFlow switches with BCN-ECN and ALTQ provide 98% accuracy. The proposed methods improve parameters such as delay time, congestion level, quality time, and execution time.
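The ECN side of the approach, marking packets once a queue crosses a threshold so that senders back off before losses occur, can be illustrated with a minimal simulation. This is a sketch of generic ECN-style marking with additive-increase/multiplicative-decrease feedback, not the paper's BCN-ECN/ALTQ implementation; the thresholds and class names are assumptions for illustration.

```python
# ECN-style congestion-avoidance sketch (illustrative only; thresholds
# and names are hypothetical, not the paper's algorithm).
from collections import deque

class EcnQueue:
    """A FIFO that marks packets instead of dropping them once the
    queue depth crosses a threshold, mimicking ECN behaviour."""
    def __init__(self, capacity=20, mark_threshold=10):
        self.capacity = capacity
        self.mark_threshold = mark_threshold
        self.q = deque()

    def enqueue(self, pkt):
        if len(self.q) >= self.capacity:
            return "dropped"                     # hard limit: tail drop
        pkt["ecn_marked"] = len(self.q) >= self.mark_threshold
        self.q.append(pkt)
        return "marked" if pkt["ecn_marked"] else "queued"

    def dequeue(self):
        return self.q.popleft() if self.q else None

class Sender:
    """Halves its sending rate whenever the receiver echoes an ECN mark."""
    def __init__(self, rate=16):
        self.rate = rate

    def on_feedback(self, ecn_marked):
        if ecn_marked:
            self.rate = max(1, self.rate // 2)   # multiplicative decrease
        else:
            self.rate += 1                       # additive increase

# Drive the queue past its marking threshold and let feedback throttle it.
queue, sender = EcnQueue(), Sender()
for step in range(5):
    for _ in range(sender.rate):
        queue.enqueue({"step": step})
    marked = any(p["ecn_marked"] for p in queue.q)
    sender.on_feedback(marked)
    while queue.dequeue():                       # drain between rounds
        pass
```

The key property the sketch shows is that marking, unlike dropping, signals congestion without losing data: the sender's rate oscillates just below the marking threshold instead of overflowing the queue.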

2021 ◽  
Author(s):  
Pedro Henrique Di Francia Rosso ◽  
Emilio Francesquini

The Message Passing Interface (MPI) standard is widely used in High-Performance Computing (HPC) systems. Such systems employ a large number of computing nodes, so Fault Tolerance (FT) is a concern: more nodes lead to more frequent failures. Two essential components of FT are Failure Detection (FD) and Failure Propagation (FP). This paper proposes improvements to existing FD and FP mechanisms to provide more portability, scalability, and lower overhead. Results show that the proposed methods match or improve on existing ones while remaining portable to any MPI standard-compliant distribution.
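Failure detection in such systems is commonly built on heartbeats: each node periodically reports liveness, and a peer whose heartbeats stop arriving within a timeout is declared failed, after which the failure is propagated to interested components. A minimal, MPI-agnostic sketch of that pattern follows; the timeout policy, class names, and simulated clock are illustrative assumptions, not the paper's mechanism.

```python
# Heartbeat-based failure detection and propagation sketch
# (illustrative; not the paper's FD/FP implementation).
import time

class FailureDetector:
    """Declares a node failed once no heartbeat has arrived within
    `timeout` seconds, then notifies subscribers (failure propagation)."""
    def __init__(self, nodes, timeout=1.0, now=time.monotonic):
        self.now = now
        self.timeout = timeout
        self.last_seen = {n: now() for n in nodes}
        self.failed = set()
        self.subscribers = []

    def heartbeat(self, node):
        if node not in self.failed:
            self.last_seen[node] = self.now()

    def check(self):
        t = self.now()
        for node, seen in self.last_seen.items():
            if node not in self.failed and t - seen > self.timeout:
                self.failed.add(node)
                for notify in self.subscribers:   # propagate the failure
                    notify(node)
        return set(self.failed)

# A simulated clock keeps the example deterministic.
clock = [0.0]
fd = FailureDetector(["rank0", "rank1"], timeout=1.0, now=lambda: clock[0])
fd.subscribers.append(lambda n: print(f"propagating failure of {n}"))

clock[0] = 0.5; fd.heartbeat("rank0")   # rank1 stays silent
clock[0] = 1.3
failed = fd.check()                      # rank1 has exceeded the timeout
```

Decoupling detection (`check`) from propagation (the subscriber callbacks) is what lets such a mechanism stay portable: the detector needs no MPI internals, only a notification hook the runtime can subscribe to.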


SIMULATION ◽  
2019 ◽  
Vol 96 (2) ◽  
pp. 221-232
Author(s):  
Mike Mikailov ◽  
Junshan Qiu ◽  
Fu-Jyh Luo ◽  
Stephen Whitney ◽  
Nicholas Petrick

Large-scale modeling and simulation (M&S) applications that do not require run-time inter-process communication can exhibit scaling problems when migrated to high-performance computing (HPC) clusters if traditional software parallelization techniques, such as POSIX multi-threading and the Message Passing Interface, are used. A comprehensive approach for scaling M&S applications on HPC clusters has been developed, called “computation segmentation,” which is based on the built-in array-job facility of job schedulers. Used correctly for appropriate applications, the array-job approach provides significant benefits that are not obtainable with other methods. The parallelization illustrated in this paper becomes quite complex in its own right when applied to extremely large M&S tasks, particularly due to the need for nested loops. At the United States Food and Drug Administration, the approach has provided unsurpassed efficiency, flexibility, and scalability for work that can be performed using embarrassingly parallel algorithms.
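In an array job, the scheduler launches many independent instances of the same script and hands each one a task index through an environment variable (e.g. SGE_TASK_ID under Grid Engine, SLURM_ARRAY_TASK_ID under Slurm). Segmenting a computation then reduces to mapping that index onto a slice of the workload, including flattening nested parameter loops into a single index space. A small sketch of that mapping, with illustrative grid sizes and a Grid-Engine-style variable name (not the paper's actual code):

```python
# Computation-segmentation sketch: map an array-job task index to its
# slice of a nested parameter sweep.  Grid sizes and the environment
# variable are illustrative; schedulers differ in naming and base index.
import os
from itertools import product

def task_slice(task_id, n_tasks, grid):
    """Flatten the nested loops over `grid` into one index space and
    return the contiguous chunk owned by `task_id` (0-based)."""
    points = list(product(*grid))          # the nested loops, flattened
    chunk = -(-len(points) // n_tasks)     # ceiling division
    return points[task_id * chunk:(task_id + 1) * chunk]

# Each array task reads its own index from the environment; the default
# of "1" lets the script also run outside the scheduler.
task_id = int(os.environ.get("SGE_TASK_ID", "1")) - 1   # SGE is 1-based
grid = [range(4), range(3), range(2)]      # 4*3*2 = 24 simulation points
mine = task_slice(task_id, n_tasks=6, grid=grid)
for params in mine:
    pass  # run one embarrassingly parallel simulation per point
```

Because every task derives its slice purely from its index, the tasks need no run-time communication at all, which is exactly why the array-job approach scales where multi-threading and MPI-based decompositions struggle for this class of workload.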


I3+ ◽  
2015 ◽  
Vol 2 (1) ◽  
pp. 96
Author(s):  
Mauricio Ochoa Echeverría ◽  
Daniel Alejandro Soto Beltrán

High-performance computing (HPC) refers to solving complex problems with a group of servers called a cluster. The cluster as a whole is used to solve a single problem or a group of related problems. Initially, HPC solutions were limited to scientific research, but thanks to falling costs and new business needs, HPC can now be applied to data centers, software simulation, transaction processing, and the resolution of any complex business problem. In this context, the Universidad de Boyacá carried out the research project entitled “Interacción de los componentes del clúster Microsoft HPC (High Performance Computing) Server 2008 con aplicaciones MPI”. The article describes how the components of the Microsoft HPC (High Performance Computing) Server 2008 information-processing cluster interact with one another to solve a highly complex problem using applications developed in MPI (Message Passing Interface). For the project, a high-performance cluster was built with Microsoft HPC Server 2008 on virtual machines in order to observe its operation and examine the performance reports these systems offer their users, testing it with applications developed in MPI. This article covers: the HPC Server cluster and its underlying concepts (clusters, high-performance computing, and MPI), the infrastructure requirements for the project, the process of building the cluster from node virtualization through domain creation to the deployment of the MPI programs, and the analysis of the results obtained.
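The point-to-point send/receive pattern that such MPI test applications exercise can be emulated, for illustration, with threads and queues. This is only a stand-in for the semantics of blocking MPI_Send/MPI_Recv; a real run on the cluster would use an actual MPI implementation (e.g. MS-MPI on HPC Server 2008), and the communicator class here is invented for the example.

```python
# MPI-style point-to-point messaging emulated with threads and queues
# (illustrative only; real cluster runs use an MPI library).
import threading, queue

class Comm:
    """A tiny stand-in for an MPI communicator with blocking send/recv."""
    def __init__(self, size):
        self.size = size
        self._mail = [queue.Queue() for _ in range(size)]

    def send(self, data, dest):
        self._mail[dest].put(data)

    def recv(self, rank):
        return self._mail[rank].get()    # blocks, like MPI_Recv

results = {}

def worker(comm, rank):
    # Rank 0 scatters one number to each worker; workers square it and
    # send the result back: a classic master/worker MPI test pattern.
    if rank == 0:
        for dest in range(1, comm.size):
            comm.send(dest * 10, dest)
        results["gathered"] = sorted(comm.recv(0)
                                     for _ in range(1, comm.size))
    else:
        x = comm.recv(rank)
        comm.send(x * x, 0)

comm = Comm(size=4)
threads = [threading.Thread(target=worker, args=(comm, r)) for r in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The gather at rank 0 is sorted because, as in real MPI, replies from different ranks can arrive in any order unless the receiver matches on source.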

