Cloud and Edge Computing for Developing Smart Factory Models Using an iFogSim Wrapper: Transportation Management System (TMS) Case Study

2020 ◽
Author(s):
Dhairya Patel ◽
Sabah Mohammed

This work develops a Smart Factory model based on cloud and edge computing and uses it to build a Transportation Management System (TMS) with an iFogSim wrapper. Cloud computing provides users with on-demand computer-system services from data centres, including data storage and processing power, without direct active management by the user. In a smart industry, many devices are connected across the internet, and vast volumes of data are collected throughout the production process; a smart factory based on cloud and edge computing is therefore used to handle these data. The intelligent cloud-based factory offers facilities such as large-scale data analysis. Fog and edge computing play a significant role in extending the data-storage and network capacities of the cloud, addressing challenges such as saturated bandwidth and latency. The work also covers the implementation of the TMS in the iFogSim simulator, which executes the TMS efficiently and reports the amount of resources used, giving an idea of how to use resources optimally. All TMS-related data, such as object location, time taken, and energy consumption, are collected in the cloud through the smart factory. To implement the TMS, we created a topology that displays the various devices connected to the cloud and provides the necessary information about the ongoing transportation simulation.
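iFogSim itself is a Java toolkit, and the paper's concrete topology is not reproduced here; the Python sketch below only illustrates the shape of such a setup, with a cloud node, an edge gateway, and a roadside sensor. All device names, latencies, and power figures are invented assumptions, not values from the study.

```python
# Illustrative sketch only: a toy fog topology for a TMS, NOT the iFogSim API.
# Device names, latencies, and power figures are invented for the example.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    uplink_latency_ms: float          # latency to the parent node
    busy_power_w: float               # power draw while processing
    children: list = field(default_factory=list)

# cloud <- edge gateway <- roadside sensor (the TMS topology shape)
cloud = Node("cloud", 0.0, 1648.0)
gateway = Node("edge-gateway", 100.0, 107.0)
sensor = Node("gps-sensor", 2.0, 10.0)
cloud.children.append(gateway)
gateway.children.append(sensor)

def path_latency(node: Node, target: Node, acc: float = 0.0):
    """Sum uplink latencies from `target` up to `node` (simple tree walk)."""
    if node is target:
        return acc
    for child in node.children:
        found = path_latency(child, target, acc + child.uplink_latency_ms)
        if found is not None:
            return found
    return None

print("sensor -> cloud latency:", path_latency(cloud, sensor), "ms")
# sensor -> cloud latency: 102.0 ms
```

Placing more of the processing at the gateway instead of the cloud shortens exactly this path, which is the latency argument the abstract makes for edge computing.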



Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Zhi-guang Jiang ◽  
Xiao-tian Shi

The intelligent transportation system in a big-data environment is the development direction of future transportation systems. It integrates advanced information technology, data-communication and transmission technology, electronic sensing technology, control technology, and computer technology, and applies them to the entire ground transportation management system to establish a real-time, accurate, and efficient comprehensive transportation management system that works on a large scale and in all directions. Intelligent video analysis is an important part of smart transportation. To improve the accuracy and time efficiency of video retrieval and recognition, this article first proposes a segmentation and key-frame extraction method for video behavior recognition that uses a multi-time-scale dual-stream network to extract video features, improving both the efficiency and the accuracy of video behavior detection. On this basis, an improved vehicle-detection algorithm based on Faster R-CNN is proposed: the feature-extraction layer of the Faster R-CNN network is improved using the residual-network principle, and a dilated (hole) convolution is added to the network to filter out redundant features of high-resolution video images, addressing the missed detections of the original algorithm. The experimental results show that key-frame extraction combined with the optimized Faster R-CNN model greatly improves detection accuracy and reduces the missed-detection rate to a satisfactory level.
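The paper's exact network is not given here; as a minimal sketch of the two modifications the abstract names, the PyTorch block below combines a residual shortcut with a dilated (hole) convolution. Channel sizes and the dilation rate are assumptions.

```python
# Minimal sketch (not the paper's exact architecture): a residual block with
# dilated ("hole") convolution, the two changes the abstract describes for
# the Faster R-CNN feature extractor. Channel sizes are assumptions.
import torch
import torch.nn as nn

class DilatedResidualBlock(nn.Module):
    def __init__(self, channels: int, dilation: int = 2):
        super().__init__()
        # dilation enlarges the receptive field without extra parameters,
        # helping suppress redundant high-resolution detail
        self.conv1 = nn.Conv2d(channels, channels, 3,
                               padding=dilation, dilation=dilation, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)   # residual (identity) shortcut

block = DilatedResidualBlock(64)
print(block(torch.randn(1, 64, 56, 56)).shape)  # torch.Size([1, 64, 56, 56])
```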


Author(s):  
Valentin Cristea ◽  
Ciprian Dobre ◽  
Corina Stratan ◽  
Florin Pop

The latest advances in network and distributed-system technologies now allow the integration of a vast variety of services with almost unlimited processing power, using large amounts of data. Sharing of resources is often viewed as the key goal of distributed systems, and in this context the sharing of stored data is the most important aspect of distributed resource sharing. Scientific applications are the first to take advantage of such environments, as the requirements of current and future high-performance computing experiments are pressing in terms of ever higher volumes of data to be stored and managed. While these new environments offer huge opportunities for large-scale distributed data storage and management, they also raise important technical challenges that need to be addressed: the ability to support persistent storage of data on behalf of users, the consistent distribution of up-to-date data, the reliable replication of fast-changing datasets, and the efficient management of large data transfers, among others. In this chapter we discuss how far the existing distributed computing infrastructure is adequate for supporting the required data storage and management functionalities. We highlight the issues raised by storing data in large distributed environments and discuss recent research efforts dealing with the challenges of data retrieval, replication, and fast data transfers. The interaction of data management with other data-sensitive emerging technologies, such as workflow management, is also addressed.
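The chapter surveys replication challenges rather than prescribing one algorithm; as one concrete illustration, here is a minimal consistent-hashing sketch for deciding which storage nodes hold the replicas of a dataset. The node names, virtual-node count, and replica count are all assumptions.

```python
# Illustrative only: consistent hashing for replica placement, one common
# answer to the replication challenges surveyed above. Names are made up.
import bisect
import hashlib

def h(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, nodes, vnodes=64):
        # virtual nodes smooth the load across physical nodes
        self._ring = sorted((h(f"{n}#{i}"), n) for n in nodes for i in range(vnodes))
        self._keys = [k for k, _ in self._ring]

    def replicas(self, item: str, count: int = 3):
        """Walk clockwise from the item's hash, collecting distinct nodes."""
        out, idx = [], bisect.bisect(self._keys, h(item))
        while len(out) < count:
            node = self._ring[idx % len(self._ring)][1]
            if node not in out:
                out.append(node)
            idx += 1
        return out

ring = HashRing(["storage-a", "storage-b", "storage-c", "storage-d"])
print(ring.replicas("dataset/run-42.h5"))   # three distinct storage nodes
```

Adding or removing a node moves only the keys adjacent to it on the ring, which is why this scheme suits the fast-changing environments the chapter describes.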


2011 ◽  
Vol 1346 ◽  
Author(s):  
Hayri E. Akin ◽  
Dundar Karabay ◽  
Allen P. Mills ◽  
Cengiz S. Ozkan ◽  
Mihrimah Ozkan

DNA computing is a rapidly developing interdisciplinary area that would benefit from more experimental results on solving problems with the current biological tools. In this study, we have integrated microelectronics and molecular-biology techniques to show the feasibility of a Hopfield neural network built from DNA molecules. Adleman's seminal paper in 1994 showed that DNA strands, through specific molecular reactions, can be used to solve the Hamiltonian path problem. This accomplishment opened the way to massively parallel processing power, remarkable energy efficiency, and compact data storage with DNA. However, various studies have found that small departures from the ideal selectivity of DNA hybridization lead to significant undesired pairings of strands, which makes it difficult to implement large Boolean functions with DNA. The error-prone reactions in the Boolean architecture of the first DNA computers would therefore benefit from fault-tolerance or error-correction methods, and such methods are essential for large-scale applications. In this study, we demonstrate the operation of a six-dimensional Hopfield associative memory storing various memories as an archetypal fault-tolerant neural network implemented with DNA molecular reactions. The response of the network suggests that the protocols could be scaled to networks of significantly larger dimensions. In addition, the results are read out on a silicon CMOS platform, exploiting semiconductor processing knowledge for fast and accurate measurement of hybridization rates.
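For readers unfamiliar with the underlying model, the NumPy sketch below shows a six-dimensional Hopfield associative memory in silico; the stored patterns are invented, and the paper realizes the update dynamics with DNA hybridization reactions rather than code.

```python
# Minimal sketch of a six-dimensional Hopfield associative memory in NumPy,
# the in-silico analogue of the DNA implementation described above.
import numpy as np

patterns = np.array([[ 1, -1,  1, -1,  1, -1],
                     [-1, -1,  1,  1, -1,  1]])   # stored memories (+/-1)

# Hebbian learning: W is the sum of outer products, with zero diagonal
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

def recall(state, sweeps=5):
    """Asynchronous updates (one neuron at a time), which guarantee
    convergence to a stored memory acting as an attractor."""
    s = state.copy()
    for _ in range(sweeps):
        for i in range(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

noisy = np.array([1, 1, 1, -1, 1, -1])   # first pattern with one flipped bit
print(recall(noisy))                      # -> [ 1 -1  1 -1  1 -1]
```

The corrupted input settles back onto the stored pattern, which is exactly the fault tolerance the DNA implementation is meant to exploit against imperfect hybridization.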


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Anita Hatamian ◽  
Mohammad Bagher Tavakoli ◽  
Masoud Moradkhani

Families, physicians, and hospital environments use remote patient monitoring (RPM) technologies to remotely monitor a patient's vital signs, reduce visit time, reduce hospital costs, and improve the quality of care. The Internet of Medical Things (IoMT) comprises applications that provide remote access to patients' physiological data; IoMT tools basically consist of a user interface, biosensors, and internet connectivity. Accordingly, medical data can be recorded, transferred, stored, and processed in a short time by integrating IoMT with the data-communication infrastructure of edge computing, the distributed computing paradigm that brings computation and data storage closer to the sources of data to improve response times and save bandwidth. But this approach faces problems with security and intrusion into users' confidential medical data. This study therefore presents a secure solution for use in IoT infrastructure at the edge. In the proposed method, the clustering process is first performed using information about the characteristics and interests of users. The people in each cluster are then evaluated using edge computing, and those with higher scores are considered the influential people of their cluster. Since users with high interaction levels can publish information on a large scale, increasing user interaction allows information to be disseminated more widely without intrusion, and thus safely, in the network. The proposed method uses the average of user interactions and user scores as the criterion for identifying influential people in each cluster; if a given number of people are needed to start disseminating information, the people in each cluster with the highest degree of influence can be selected. According to the results, accuracy increased by 0.2 and more information is published with the proposed method than with previous methods.
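The selection rule is stated only in prose; a minimal sketch of that rule might look like the following, where the user records, field names, and weighting are invented for illustration.

```python
# Minimal sketch of the selection rule described above: within each cluster,
# rank users by the average of their (normalised) interactions and score,
# and pick the top-k as influencers. All records here are invented.
from collections import defaultdict

users = [
    {"id": "u1", "cluster": 0, "interactions": 42, "score": 0.9},
    {"id": "u2", "cluster": 0, "interactions": 10, "score": 0.4},
    {"id": "u3", "cluster": 1, "interactions": 30, "score": 0.7},
    {"id": "u4", "cluster": 1, "interactions": 35, "score": 0.6},
]

def influencers_per_cluster(users, k=1):
    """Rank users in each cluster by the mean of interactions and score."""
    clusters = defaultdict(list)
    for u in users:
        clusters[u["cluster"]].append(u)
    result = {}
    for cid, members in clusters.items():
        # normalise interactions within the cluster, then average with score
        max_i = max(m["interactions"] for m in members) or 1
        ranked = sorted(members,
                        key=lambda m: (m["interactions"] / max_i + m["score"]) / 2,
                        reverse=True)
        result[cid] = [m["id"] for m in ranked[:k]]
    return result

print(influencers_per_cluster(users))   # {0: ['u1'], 1: ['u4']}
```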


2021 ◽  
Vol 13 (9) ◽  
pp. 1815
Author(s):  
Xiaohua Zhou ◽  
Xuezhi Wang ◽  
Yuanchun Zhou ◽  
Qinghui Lin ◽  
Jianghua Zhao ◽  
...  

With the remarkable development and progress of earth-observation techniques, remote sensing data keep growing rapidly, and their volume has reached the exabyte scale. However, it is still a big challenge to manage and process such huge amounts of remote sensing data with complex and diverse structures. This paper designs and realizes a distributed storage system for large-scale remote sensing data storage, access, and retrieval, called RSIMS (remote sensing images management system), composed of three sub-modules: RSIAPI, RSIMeta, and RSIData. Structured text metadata of different remote sensing images are stored in RSIMeta based on a set of uniform models and then indexed by distributed multi-level Hilbert grids for high spatiotemporal retrieval performance. Unstructured binary image files are stored in RSIData, which provides large scalable storage capacity and efficient GDAL (Geospatial Data Abstraction Library) compatible I/O interfaces; popular GIS software and tools (e.g., QGIS, ArcGIS, rasterio) can access data stored in RSIData directly. RSIAPI provides users with a set of uniform interfaces for data access and retrieval, hiding the complex inner structures of RSIMS. Test results show that RSIMS can store and manage large amounts of remote sensing images from various sources with high and stable performance, and is easy to deploy and use.
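The paper's multi-level grid construction is not spelled out in the abstract; the single-level building block it rests on is the classic Hilbert-curve mapping, sketched below, with the grid size chosen arbitrarily.

```python
# Illustrative: the classic Hilbert-curve mapping that underlies Hilbert-grid
# spatial indexes like the one described above. Grid size is an assumption.
def xy2d(n: int, x: int, y: int) -> int:
    """Map cell (x, y) of an n x n grid (n a power of two) to its position
    along the Hilbert curve. Nearby cells tend to get nearby indexes, which
    is what makes the curve useful for spatiotemporal range queries."""
    d, s = 0, n // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                     # rotate/flip the quadrant
            if rx == 1:
                x, y = n - 1 - x, n - 1 - y
            x, y = y, x
        s //= 2
    return d

# e.g. index the tiles of a 16 x 16 grid of longitude/latitude cells
print(xy2d(16, 3, 4))   # -> 53
```

Sorting image tiles by this index keeps spatially adjacent tiles close together in storage, which is the property a Hilbert-grid index exploits for fast retrieval.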


2020 ◽  
Author(s):  
Nadezhda Rodinova ◽  
Vladimir Ostrouhov ◽  
Vladimir Bereznyakovsky ◽  
Irina Petrova

The tutorial addresses the problems of using outsourcing as a factor in reorganizing an enterprise's business processes to achieve efficient use of resources and enterprise competitiveness. It reveals the organizational and economic mechanism for making management decisions on transferring individual business processes of an enterprise to outsourcing, which supports their operational management within the overall management system.


Impact ◽  
2019 ◽  
Vol 2019 (10) ◽  
pp. 44-46
Author(s):  
Masato Edahiro ◽  
Masaki Gondo

The pace of technological advancement is ever-increasing, and intelligent systems, such as those found in robots and vehicles, have become larger and more complex. These intelligent systems have a heterogeneous structure, comprising a mixture of modules, such as artificial intelligence (AI) and powertrain control modules, that provide large-scale numerical calculation and real-time periodic processing functions. Information technology expert Professor Masato Edahiro, from the Graduate School of Informatics at Nagoya University in Japan, explains that concurrent advances in semiconductor research have led to the miniaturisation of semiconductors, allowing a greater number of processors to be mounted on a single chip and increasing potential processing power. 'In addition to general-purpose processors such as CPUs, a mixture of multiple types of accelerators such as GPGPU and FPGA has evolved, producing a more complex and heterogeneous computer architecture,' he says. Edahiro and his partners have been working on eMBP, a model-based parallelizer (MBP) that offers a mapping system as an efficient way of automatically generating parallel code for multi- and many-core systems. Once the hardware description is written, eMBP bridges the gap between software and hardware, so that an efficient ecosystem is achieved for hardware vendors and software vendors no longer need to adapt their code to each platform.
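eMBP itself generates parallel C code from model descriptions, which is not reproduced here; purely as an illustration of the mapping idea, independent tasks assigned across the available cores, here is a small Python sketch with an invented task set.

```python
# Illustrative only: mapping independent numerical tasks across processor
# cores, the idea that tools like eMBP automate at the generated-C level.
# The task list and worker count are invented for the example.
from concurrent.futures import ProcessPoolExecutor
import os

def numeric_task(n: int) -> int:
    """Stand-in for a large numerical calculation (e.g. an AI module)."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    tasks = [10_000, 20_000, 30_000, 40_000]
    # the executor plays the role of the mapping system: it assigns each
    # task to one of the available cores
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        results = list(pool.map(numeric_task, tasks))
    print(results)
```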

