Survey on Cloud Based Robotics Architecture, Challenges and Applications

Author(s):  
Dr. Subarna Shakya

The emergence of cloud computing and other advanced technologies has made it possible to extend the computing and data-distribution capabilities of networked robots by developing a cloud-based robotic architecture that utilizes both a centralized and a decentralized cloud, managing machine-to-cloud and machine-to-machine communication respectively. Integrating robotic systems with the cloud makes it possible to design cost-effective robotic architectures with enhanced efficiency and heightened real-time performance. Cloud-based robotics, created by amalgamating robotics and cloud technologies, empowers web-enabled robots to access cloud services on the fly. This paper is a survey of cloud-based robotic architecture, explaining the forces that drive the merger of robotics with the cloud, its applications, and the major concerns and challenges faced by cloud-integrated robotics. The paper aims to provide a detailed study of the changes cloud computing has brought to industrial robots.

Author(s):  
Tetiana Zatonatska ◽  
Oleksandr Dluhopolskyi

The article describes the main characteristics, types and properties of cloud computing. The most widespread cloud technologies in Ukraine are analyzed. It is identified that the largest share of users of cloud technologies in Ukraine currently belongs to large holdings, IT companies, commercial enterprises and banks, but other sectors of business are also involved in the development of these services. The aim of the article is to develop a methodology for evaluating the efficiency of cloud technology implementation at enterprises and to verify it experimentally. The economic component of cloud computing implementation at enterprises (expenditures and revenues of both cloud technology owners and users) is considered. The efficiency of using cloud computing at enterprises is proved. It is found that organizations usually do not use the capacity of their personal data centers to the full extent. This leads to idle equipment and extra costs for maintenance and servicing of hardware, amortization, staff salaries, etc. The feasibility of enterprises transitioning to cloud computing in such situations has been demonstrated: it considerably reduces costs by removing the need for hardware and for staff to support the operation of information systems. The usability of the total-cost-of-ownership methodology for evaluating the effectiveness of such services for the enterprise has been proved. The proposed methodology compares the main costs of using personal data centers with the cost of using cloud computing. It is experimentally proven that in most cases the cost of maintaining a personal data center (PDC) is higher than the cost of cloud services. It is also shown that the efficiency of cloud technology operation depends on the internal structure and organization of computing processes inside the systems, as well as on external factors such as the size of client enterprises, their industries, the costs of organizing data centers, etc.
Cloud computing is an advanced technology with future prospects that is cost-effective for both enterprise users and provider organizations.
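The total-cost-of-ownership comparison the article describes can be sketched as a simple calculation. All function names and cost figures below are hypothetical illustrations, not the article's data:

```python
def tco_pdc(capex: float, years: int, opex_per_year: float) -> float:
    """TCO of a personal data center (PDC): upfront hardware spend plus
    yearly running costs (maintenance, servicing, staff salaries, power)."""
    return capex + years * opex_per_year

def tco_cloud(monthly_fee: float, years: int) -> float:
    """TCO of cloud services: subscription only, with no hardware purchase
    and no dedicated support staff."""
    return 12 * monthly_fee * years

# Hypothetical mid-size enterprise over a 5-year horizon.
pdc = tco_pdc(capex=120_000, years=5, opex_per_year=40_000)
cloud = tco_cloud(monthly_fee=3_500, years=5)
print(f"PDC: {pdc}, cloud: {cloud}, difference: {pdc - cloud}")
```

With these illustrative numbers the PDC costs more over the horizon, matching the article's experimental finding for most cases; the real comparison of course depends on enterprise size, industry, and data-center organization costs.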


2014 ◽  
Vol 2014 ◽  
pp. 1-27 ◽  
Author(s):  
Suleman Khan ◽  
Muhammad Shiraz ◽  
Ainuddin Wahid Abdul Wahab ◽  
Abdullah Gani ◽  
Qi Han ◽  
...  

Network forensics enables the investigation and identification of network attacks through retrieved digital content. The proliferation of smartphones and cost-effective universal data access through the cloud have made Mobile Cloud Computing (MCC) a natural target for network attacks. However, constraints on carrying out forensics in MCC stem from the autonomous cloud hosting companies and their policies restricting access to digital content on back-end cloud platforms. It follows that existing Network Forensic Frameworks (NFFs) have limited impact in the MCC paradigm. To this end, we qualitatively analyze the adaptability of existing NFFs when applied to MCC. Explicitly, the fundamental mechanisms of NFFs are highlighted and then analyzed using the most relevant parameters. A classification is proposed to help understand the anatomy of existing NFFs. Subsequently, a comparison explores the functional similarities and deviations among NFFs. The paper concludes by discussing research challenges for progressive network forensics in MCC.


2014 ◽  
Vol 543-547 ◽  
pp. 3100-3104
Author(s):  
Xin Huang ◽  
Yu Xing Peng ◽  
Peng Fei You

The massive datasets held in data-center networks are frequently accessed by cloud services, which creates new requirements and makes interconnection topology and data management an important issue in cloud computing. With cost-effectiveness in mind, this paper proposes a new interconnection network, MyHeawood, for cloud computing. MyHeawood is constructed recursively from small switches and servers with dual-port NICs. Data placement in MyHeawood uses a hashing algorithm based on a family of hash functions. MyHeawood adopts a three-replica strategy based on a master copy, with the replicas allocated to different sub-layers to improve data reliability.
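The idea of hash-based placement with a master copy and replicas in different sub-layers might be sketched as follows. The layer layout, the SHA-256 salting, and the function name are assumptions for illustration, not MyHeawood's actual algorithm:

```python
import hashlib

def place_replicas(key: str, servers_per_layer: list[list[str]]) -> list[str]:
    """Map a data block to three servers: a master copy plus two replicas,
    each drawn from a different sub-layer, so the failure of one layer
    loses at most one copy. `servers_per_layer` lists server IDs per layer.
    (Illustrative sketch, not the paper's hash family.)"""
    placements = []
    for layer, servers in enumerate(servers_per_layer[:3]):
        # Salt the key with the layer index so each layer hashes independently.
        digest = hashlib.sha256(f"{key}:{layer}".encode()).hexdigest()
        placements.append(servers[int(digest, 16) % len(servers)])
    return placements  # placements[0] is the master copy

layers = [["s0", "s1"], ["s2", "s3"], ["s4", "s5"]]
print(place_replicas("block-42", layers))
```

Because the placement is a pure function of the key, any node can locate all three copies without a central directory.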


2013 ◽  
Vol 3 (2) ◽  
pp. 21-34
Author(s):  
Sumedha Chauhan ◽  
Aparna Raman ◽  
N.P. Singh

Cloud computing as a disruptive technology has given us a chance to explore computing as a utility. The pay-as-you-go model provides a flexible way to optimize cost. For different needs, cloud computing offers different models and services to balance cost, time and resources. Faster communication is a need of every academic institute today, to facilitate a good learning environment in a shorter and more effective time frame. Email as a medium of communication gives pace and substance to academic needs, especially in business schools. This paper presents a comparative analysis of the costs (on-premises vs. cloud) of email implementation. Google Apps for Education has been considered as the cloud-based email service. Results show that the net present value (NPV) of the cost of on-premises infrastructure is higher than the NPV of the cost of the cloud-based email service. This suggests that cloud-based email is a cost-effective solution for Indian B-schools to adopt.
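An NPV comparison of the kind the paper describes reduces to a discounted-cash-flow calculation. The discount rate and cost figures below are illustrative, not the paper's data:

```python
def npv(cash_flows: list[float], rate: float) -> float:
    """Net present value of a series of yearly costs, with cash_flows[0]
    incurred now: NPV = sum of C_t / (1 + r)^t."""
    return sum(c / (1 + rate) ** t for t, c in enumerate(cash_flows))

# Hypothetical 3-year horizon at a 10% discount rate.
on_premises = npv([50_000, 10_000, 10_000], rate=0.10)  # upfront servers + upkeep
cloud_email = npv([15_000, 15_000, 15_000], rate=0.10)  # flat yearly subscription
print(f"on-premises NPV: {on_premises:.0f}, cloud NPV: {cloud_email:.0f}")
```

The on-premises option front-loads cost in hardware, so even though later years are cheap, its NPV can exceed that of a flat subscription, which is the shape of the paper's result.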


2014 ◽  
Author(s):  
Seyhan Yazar ◽  
George EC Gooden ◽  
David A Mackey ◽  
Alex Hewitt

A major bottleneck in biological discovery is now emerging at the computational level. Cloud computing offers a dynamic means whereby small and medium-sized laboratories can rapidly adjust their computational capacity. We benchmarked two established cloud computing services, Amazon Web Services Elastic MapReduce (EMR) on Amazon EC2 instances and Google Compute Engine (GCE), using publicly available genomic datasets (an E. coli CC102 strain and a Han Chinese male genome) and a standard bioinformatic pipeline on a Hadoop-based platform. Wall-clock time for complete assembly differed by 52.9% (95% CI: 27.5-78.2) for E. coli and 53.5% (95% CI: 34.4-72.6) for the human genome, with GCE being more efficient than EMR. The cost of running this experiment on EMR and GCE differed significantly, with EMR being 257.3% (95% CI: 211.5-303.1) and 173.9% (95% CI: 134.6-213.1) more expensive for the E. coli and human assemblies respectively. Thus, GCE was found to outperform EMR in terms of both cost and wall-clock time. Our findings confirm that cloud computing is an efficient and potentially cost-effective alternative for the analysis of large genomic datasets. In addition to releasing our cost-effectiveness comparison, we present ready-to-use scripts for establishing Hadoop instances with Ganglia monitoring on EC2 or GCE.


Author(s):  
Maxim Schnjakin ◽  
Christoph Meinel

Cloud computing as a service-on-demand architecture has grown in importance over the past few years. One driver of its growth is the ever-increasing amount of data, which is expected to outpace the growth of storage capacity. The usage of cloud technology enables organizations to manage their data with low operational expenses. However, the benefits of cloud computing come with challenges and open issues such as security, reliability, and the risk of becoming dependent on a provider for its service. In general, switching storage providers is associated with the high costs of adapting to new APIs and additional charges for inbound and outbound bandwidth and requests. In this chapter, the authors present a system that improves the availability, confidentiality, and reliability of data stored in the cloud. To achieve this objective, the authors encrypt users' data and use the RAID-technology principle to manage data distribution across cloud storage providers. Further, they discuss the security functionality and present a proof-of-concept experiment to evaluate the performance and cost effectiveness of the approach. The authors deploy the application using eight commercial cloud storage repositories in different countries. The approach allows users to avoid vendor lock-in and significantly reduces the cost of switching providers. They also observe that the implementation improved perceived availability and, in most cases, overall performance when compared with individual cloud providers. Moreover, the authors estimate the monetary costs to be competitive with the cost of using a single cloud provider.
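The encrypt-then-distribute idea can be sketched with XOR parity in the style of RAID: split the (already encrypted) data into shares, add a parity share, and store each share with a different provider so the loss of any one provider is recoverable. The four-data-plus-one-parity split and function names are assumptions for illustration; the chapter's system spans eight providers:

```python
import functools

def stripe(data: bytes, n: int) -> tuple[list[bytes], bytes]:
    """Split `data` into n equal-size shares (zero-padded) plus one XOR
    parity share; each share would go to a different storage provider."""
    size = -(-len(data) // n)  # ceiling division
    shares = [data[i * size:(i + 1) * size].ljust(size, b"\0") for i in range(n)]
    parity = bytes(functools.reduce(lambda a, b: a ^ b, col) for col in zip(*shares))
    return shares, parity

def recover(shares: list[bytes], parity: bytes, lost: int) -> bytes:
    """Rebuild the share at index `lost` by XOR-ing the parity share with
    all surviving shares."""
    cols = [parity] + [s for i, s in enumerate(shares) if i != lost]
    return bytes(functools.reduce(lambda a, b: a ^ b, col) for col in zip(*cols))

shares, parity = stripe(b"ciphertext of the user's data", 4)
print(recover(shares, parity, 2) == shares[2])
```

No single provider ever holds enough shares to reconstruct the data, which is what delivers the confidentiality and vendor-lock-in benefits the chapter claims.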


2015 ◽  
pp. 1999-2021
Author(s):  
Maxim Schnjakin ◽  
Christoph Meinel

Cloud computing as a service-on-demand architecture has grown in importance over the past few years. One driver of its growth is the ever-increasing amount of data, which is expected to outpace the growth of storage capacity. The usage of cloud technology enables organizations to manage their data with low operational expenses. However, the benefits of cloud computing come with challenges and open issues such as security, reliability, and the risk of becoming dependent on a provider for its service. In general, switching storage providers is associated with the high costs of adapting to new APIs and additional charges for inbound and outbound bandwidth and requests. In this chapter, the authors present a system that improves the availability, confidentiality, and reliability of data stored in the cloud. To achieve this objective, the authors encrypt users' data and use the RAID-technology principle to manage data distribution across cloud storage providers. Further, they discuss the security functionality and present a proof-of-concept experiment to evaluate the performance and cost effectiveness of the approach. The authors deploy the application using eight commercial cloud storage repositories in different countries. The approach allows users to avoid vendor lock-in and significantly reduces the cost of switching providers. They also observe that the implementation improved perceived availability and, in most cases, overall performance when compared with individual cloud providers. Moreover, the authors estimate the monetary costs to be competitive with the cost of using a single cloud provider.


2017 ◽  
Author(s):  
Jacob M. Luber ◽  
Braden T. Tierney ◽  
Evan M. Cofer ◽  
Chirag J. Patel ◽  
Aleksandar D. Kostic

Across biology we are seeing rapid developments in the scale of data production without a corresponding increase in data analysis capabilities. Here, we present Aether (http://aether.kosticlab.org), an intuitive, easy-to-use, cost-effective, and scalable framework that uses linear programming (LP) to optimally bid on and deploy combinations of underutilized cloud computing resources. Our approach simultaneously minimizes the cost of data analysis while maximizing its efficiency and speed. As a test, we used Aether to de novo assemble 1572 metagenomic samples, a task it completed in merely 13 hours with cost savings of approximately 80% relative to comparable methods.
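The abstract does not give Aether's LP formulation, but the core optimization — choose how many of each underutilized (spot) instance type to acquire so that total compute demand is met at minimum cost — can be sketched with a brute-force search over a toy integer version of the problem. The instance names, prices, and core counts below are made up; a real LP solver replaces the enumeration at scale:

```python
from itertools import product

# Hypothetical spot-market instance types: name -> (hourly price in $, CPU cores).
INSTANCES = {"small": (0.02, 2), "medium": (0.07, 8), "large": (0.12, 16)}

def cheapest_mix(cores_needed: int, max_per_type: int = 20):
    """Brute-force the integer cost-minimization that an LP solves:
    pick instance counts that meet the core demand at minimum hourly cost.
    Enumeration is fine for three types; LP scales to many."""
    best = None
    names = list(INSTANCES)
    for counts in product(range(max_per_type + 1), repeat=len(names)):
        cores = sum(c * INSTANCES[n][1] for c, n in zip(counts, names))
        if cores < cores_needed:
            continue  # constraint: demand must be covered
        cost = sum(c * INSTANCES[n][0] for c, n in zip(counts, names))
        if best is None or cost < best[0]:
            best = (cost, dict(zip(names, counts)))
    return best

print(cheapest_mix(40))
```

Note that the cheapest per-core type alone is not always optimal once demand must be matched by whole instances, which is exactly why this is posed as an optimization rather than a greedy choice.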


2020 ◽  
Vol 56 (6) ◽  
pp. 646-674 ◽  
Author(s):  
Gethin Llewelyn ◽  
Andrew Rees ◽  
Christian A Griffiths ◽  
Steffen G. Scholz

Injection moulding is a well-established replication process for the cost-effective manufacture of polymer-based components. The process has applications in fields such as medical, automotive and aerospace. To expand the use of polymers to meet growing consumer demands for increased functionality, advanced injection moulding processes have been developed that modify the polymer to create microcellular structures. Through the creation of microcellular materials, additional functionality can be gained through reductions in polymer component weight and processing energy. Microcellular injection moulding shows high potential for creating innovative green manufacturing platforms. This review article presents the significant developments that have been achieved in different aspects of microcellular injection moulding, including core-back, gas counter pressure, variable thermal tool moulding and other advanced technologies. The characteristics of microcellular injection moulded components created with both plasticising agents and nucleating agents are presented. In addition, the article highlights potential areas for research exploitation, in particular acoustic and thermal applications, nano-cellular injection moulded parts and the development of more accurate simulations.


Author(s):  
James F. Mancuso

IBM PC-compatible computers are widely used in microscopy for applications ranging from instrument control to image acquisition and analysis. The choice of IBM PC-based systems over competing computer platforms can be based on technical merit alone or on a number of factors relating to economics, availability of peripherals, management dictum, or simple personal preference. The IBM PC got a strong "head start" by first dominating clerical, document-processing and financial applications. The use of these computers spilled into the laboratory, where the DOS-based IBM PC replaced minicomputers. Compared to a minicomputer, the PC provided a more cost-effective platform for applications in numerical analysis, engineering and design, instrument control, image acquisition and image processing. In addition, the site-wide use of a common PC platform could reduce the cost of training and support services relative to cases where many different computer platforms were used. This could be especially true for microscopists, who must use computers in both the laboratory and the office.

